Science.gov

Sample records for visual search patterns

  1. Emotional Devaluation of Distracting Patterns and Faces: A Consequence of Attentional Inhibition during Visual Search?

    ERIC Educational Resources Information Center

    Raymond, Jane E.; Fenske, Mark J.; Westoby, Nikki

    2005-01-01

    Visual search has been studied extensively, yet little is known about how its constituent processes affect subsequent emotional evaluation of searched-for and searched-through items. In 3 experiments, the authors asked observers to locate a colored pattern or tinted face in an array of other patterns or faces. Shortly thereafter, either the target…

  2. Priming cases disturb visual search patterns in screening mammography

    NASA Astrophysics Data System (ADS)

    Lewis, Sarah J.; Reed, Warren M.; Tan, Alvin N. K.; Brennan, Patrick C.; Lee, Warwick; Mello-Thoms, Claudia

    2015-03-01

Rationale and Objectives: To investigate the effect of inserting obvious cancers into a screening set of mammograms on the visual search of radiologists. Previous research presents conflicting evidence as to the impact of priming in scenarios where prevalence is naturally low, such as in screening mammography. Materials and Methods: An observer performance and eye position analysis study was performed. Four expert breast radiologists were asked to interpret two sets of 40 screening mammograms. The Control Set contained 36 normal and 4 malignant cases (located at case # 9, 14, 25 and 37). The Primed Set contained the same 34 normal and 4 malignant cases (in the same location) plus 2 "primer" malignant cases replacing 2 normal cases (located at positions #20 and 34). Primer cases were defined as lower difficulty cases containing salient malignant features inserted before cases of greater difficulty. Results: Wilcoxon Signed Rank Test indicated no significant differences in sensitivity or specificity between the two sets (P > 0.05). The fixation count in the malignant cases (#25, 37) in the Primed Set after viewing the primer cases (#20, 34) decreased significantly (Z = -2.330, P = 0.020). False-negative errors were mostly due to sampling in the Primed Set (75%), in contrast to the Control Set (25%). Conclusion: The overall performance of radiologists is not affected by the inclusion of obvious cancer cases. However, changes in visual search behavior, as measured by eye-position recording, suggest visual disturbance by the inclusion of priming cases in screening mammography.

  3. Visual search patterns in semantic dementia show paradoxical facilitation of binding processes

    PubMed Central

    Viskontas, Indre V.; Boxer, Adam L.; Fesenko, John; Matlin, Alisa; Heuer, Hilary W.; Mirsky, Jacob; Miller, Bruce L.

    2011-01-01

    While patients with Alzheimer’s disease (AD) show deficits in attention, manifested by inefficient performance on visual search, new visual talents can emerge in patients with frontotemporal lobar degeneration (FTLD), suggesting that, at least in some of the patients, visual attention is spared, if not enhanced. To investigate the underlying mechanisms for visual talent in FTLD (behavioral variant FTD [bvFTD] and semantic dementia [SD]) patients, we measured performance on a visual search paradigm that includes both feature and conjunction search, while simultaneously monitoring saccadic eye movements. AD patients were impaired relative to healthy controls (NC) and FTLD patients on both feature and conjunction search. BvFTD patients showed less accurate performance only on the conjunction search task, but slower response times than NC on all three tasks. In contrast, SD patients were as accurate as controls and had faster response times when faced with the largest number of distracters in the conjunction search task. Measurement of saccades during visual search showed that AD patients explored more of the image, whereas SD patients explored less of the image before making a decision as to whether the target was present. Performance on the conjunction search task positively correlated with gray matter volume in the superior parietal lobe, precuneus, middle frontal gyrus and superior temporal gyrus. These data suggest that despite the presence of extensive temporal lobe degeneration, visual talent in SD may be facilitated by more efficient visual search under distracting conditions due to enhanced function in the dorsal frontoparietal attention network. PMID:21215762

  4. The visual search patterns and hazard responses of experienced and inexperienced motorcycle riders.

    PubMed

    Hosking, Simon G; Liu, Charles C; Bayly, Megan

    2010-01-01

    Hazard perception is a critical skill for road users. In this study, an open-loop motorcycle simulator was used to examine the effects of motorcycle riding and car driving experience on hazard perception and visual scanning patterns. Three groups of participants were tested: experienced motorcycle riders who were experienced drivers (EM-ED), inexperienced riders/experienced drivers (IM-ED), and inexperienced riders/inexperienced drivers (IM-ID). Participants were asked to search for hazards in simulated scenarios, and click a response button when a hazard was identified. The results revealed a significant monotonic decrease in hazard response times as experience increased from IM-ID to IM-ED to EM-ED. Compared to the IM-ID group, both the EM-ED and IM-ED groups exhibited more flexible visual scanning patterns that were sensitive to the presence of hazards. These results point to the potential benefit of training hazard perception and visual scanning in motorcycle riders, as has been successfully demonstrated in previous studies with car drivers. PMID:19887160

  5. Collaboration during visual search.

    PubMed

    Malcolmson, Kelly A; Reynolds, Michael G; Smilek, Daniel

    2007-08-01

    Two experiments examine how collaboration influences visual search performance. Working with a partner or on their own, participants reported whether a target was present or absent in briefly presented search displays. We compared the search performance of individuals working together (collaborative pairs) with the pooled responses of the individuals working alone (nominal pairs). Collaborative pairs were less likely than nominal pairs to correctly detect a target and they were less likely to make false alarms. Signal detection analyses revealed that collaborative pairs were more sensitive to the presence of the target and had a more conservative response bias than the nominal pairs. This pattern was observed even when the presence of another individual was matched across pairs. The results are discussed in the context of task-sharing, social loafing and current theories of visual search. PMID:17972737
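
    The signal detection analysis described here derives a sensitivity index (d′) and a response-bias criterion (c) from hit and false-alarm rates. A minimal sketch of that computation, using the Python standard library and hypothetical rates (the study's actual values are not given in the abstract):

```python
from statistics import NormalDist

def dprime_and_criterion(hit_rate, fa_rate):
    """Sensitivity d' and criterion c from hit and false-alarm rates:
    d' = z(H) - z(FA);  c = -(z(H) + z(FA)) / 2.
    A more positive c indicates a more conservative response bias."""
    z = NormalDist().inv_cdf
    zh, zf = z(hit_rate), z(fa_rate)
    return zh - zf, -(zh + zf) / 2

# Hypothetical rates: collaborative pairs hit less often but also false-alarm less.
d_collab, c_collab = dprime_and_criterion(0.85, 0.05)
d_nominal, c_nominal = dprime_and_criterion(0.90, 0.15)
```

    With these made-up rates the collaborative pair shows both a higher d′ and a more positive (more conservative) c than the nominal pair, matching the qualitative pattern the abstract reports.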

  6. Reconsidering Visual Search

    PubMed Central

    2015-01-01

The visual search paradigm has had an enormous impact in many fields. A theme running through this literature has been the distinction between preattentive and attentive processing, which I refer to as the two-stage assumption. Under this assumption, slopes of response time as a function of set size are used to determine whether attention is needed for a given task or not. Even though many findings question this two-stage assumption, it still has enormous influence, determining decisions on whether papers are published or research funded. The results described here show that the two-stage assumption leads to very different conclusions about the operation of attention for identical search tasks based only on changes in response (presence/absence versus Go/No-go responses). Slopes are therefore an ambiguous measure of attentional involvement. Overall, the results suggest that the two-stage model cannot explain all findings on visual search, and they highlight how response-time-by-set-size slopes should be used only with caution. PMID:27551357
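
    The "slope" at issue is the slope of a linear fit of mean response time against display set size; under the two-stage assumption, shallow slopes (roughly under 10 ms/item) are read as evidence of parallel, preattentive search. A minimal sketch of the computation, with hypothetical mean RTs:

```python
def search_slope(set_sizes, rts):
    """Least-squares slope of mean response time (ms) against set size.
    Under the two-stage assumption, near-zero slopes are read as
    'parallel' search; the paper argues this reading is ambiguous."""
    n = len(set_sizes)
    mx = sum(set_sizes) / n
    my = sum(rts) / n
    num = sum((x - mx) * (y - my) for x, y in zip(set_sizes, rts))
    den = sum((x - mx) ** 2 for x in set_sizes)
    return num / den

# Hypothetical mean RTs for set sizes 4, 8, and 16:
slope = search_slope([4, 8, 16], [520, 540, 580])  # ≈ 5 ms/item
```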

  7. Parallel Processing in Visual Search Asymmetry

    ERIC Educational Resources Information Center

    Dosher, Barbara Anne; Han, Songmei; Lu, Zhong-Lin

    2004-01-01

The difficulty of visual search may depend on the assignment of the same visual elements as targets and distractors (search asymmetry). Easy C-in-O searches and difficult O-in-C searches are often associated with parallel and serial search, respectively. Here, the time course of visual search was measured for both tasks with speed-accuracy methods. The…

  8. Searching for inefficiency in visual search.

    PubMed

    Christie, Gregory J; Livingstone, Ashley C; McDonald, John J

    2015-01-01

    The time required to find an object of interest in the visual field often increases as a function of the number of items present. This increase or inefficiency was originally interpreted as evidence for the serial allocation of attention to potential target items, but controversy has ensued for decades. We investigated this issue by recording ERPs from humans searching for a target in displays containing several differently colored items. Search inefficiency was ascribed not to serial search but to the time required to selectively process the target once found. Additionally, less time was required for the target to "pop out" from the rest of the display when the color of the target repeated across trials. These findings indicate that task relevance can cause otherwise inconspicuous items to pop out and highlight the need for direct neurophysiological measures when investigating the causes of search inefficiency. PMID:25203277

  9. Visual Search and Reading.

    ERIC Educational Resources Information Center

    Calfee, Robert C.; Jameson, Penny

The effect on reading speed of the number of target items being searched for and the number of target occurrences in the text was examined. The subjects, 24 college undergraduate volunteers, were presented with a list of target words, and then they read a passage for comprehension which contained occurrences of the target words (Experiment 1) or…

  10. Visual Search of Mooney Faces

    PubMed Central

    Goold, Jessica E.; Meng, Ming

    2016-01-01

Faces spontaneously capture attention. However, which special attributes of a face underlie this effect is unclear. To address this question, we investigate how gist information, specific visual properties and differing amounts of experience with faces affect the time required to detect a face. Three visual search experiments were conducted investigating the rapidness of human observers to detect Mooney face images. Mooney images are two-toned, ambiguous images. They were used in order to have stimuli that maintain gist information but limit low-level image properties. Results from the experiments show: (1) Although upright Mooney faces were searched inefficiently, they were detected more rapidly than inverted Mooney face targets, demonstrating the important role of gist information in guiding attention toward a face. (2) Several specific Mooney face identities were searched efficiently while others were not, suggesting the involvement of specific visual properties in face detection. (3) By providing participants with unambiguous gray-scale versions of the Mooney face targets prior to the visual search task, the targets were detected significantly more efficiently, suggesting that prior experience with Mooney faces improves the ability to extract gist information for rapid face detection. However, a week of training with Mooney face categorization did not lead to even more efficient visual search of Mooney face targets. In summary, these results reveal that specific local image properties cannot account for how faces capture attention. On the other hand, gist information alone cannot account for how faces capture attention either. Prior experience facilitates the effect of gist on visual search of faces, making faces a special object category for guiding attention. PMID:26903941

  11. Visual similarity effects in categorical search.

    PubMed

    Alexander, Robert G; Zelinsky, Gregory J

    2011-01-01

    We asked how visual similarity relationships affect search guidance to categorically defined targets (no visual preview). Experiment 1 used a web-based task to collect visual similarity rankings between two target categories, teddy bears and butterflies, and random-category objects, from which we created search displays in Experiment 2 having either high-similarity distractors, low-similarity distractors, or "mixed" displays with high-, medium-, and low-similarity distractors. Analysis of target-absent trials revealed faster manual responses and fewer fixated distractors on low-similarity displays compared to high-similarity displays. On mixed displays, first fixations were more frequent on high-similarity distractors (bear = 49%; butterfly = 58%) than on low-similarity distractors (bear = 9%; butterfly = 12%). Experiment 3 used the same high/low/mixed conditions, but now these conditions were created using similarity estimates from a computer vision model that ranked objects in terms of color, texture, and shape similarity. The same patterns were found, suggesting that categorical search can indeed be guided by purely visual similarity. Experiment 4 compared cases where the model and human rankings differed and when they agreed. We found that similarity effects were best predicted by cases where the two sets of rankings agreed, suggesting that both human visual similarity rankings and the computer vision model captured features important for guiding search to categorical targets. PMID:21757505
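
    The computer vision model in Experiment 3 ranked objects by color, texture, and shape similarity; those specific features are not reproduced here, but the underlying idea of ranking distractors by a purely visual similarity score can be sketched with a coarse color-histogram intersection. All colors, bin counts, and example objects below are illustrative assumptions, not the paper's actual model:

```python
def color_histogram(pixels, bins=4):
    """Coarse normalized RGB histogram: each channel quantized into `bins` levels."""
    hist = [0] * (bins ** 3)
    step = 256 // bins
    for r, g, b in pixels:
        hist[(r // step) * bins * bins + (g // step) * bins + (b // step)] += 1
    total = len(pixels)
    return [h / total for h in hist]

def similarity(h1, h2):
    """Histogram intersection in [0, 1]; 1 means identical color distributions."""
    return sum(min(a, b) for a, b in zip(h1, h2))

# Invented example: a brownish 'teddy bear' target versus two distractors.
target = color_histogram([(200, 150, 40)] * 10)
high_sim = color_histogram([(210, 160, 50)] * 10)   # similar brown
low_sim = color_histogram([(10, 10, 250)] * 10)     # saturated blue
ranked = sorted([("high", high_sim), ("low", low_sim)],
                key=lambda d: similarity(target, d[1]), reverse=True)
```

    With this toy score, the "high-similarity" distractor ranks above the "low-similarity" one, mirroring how the model-derived displays were constructed.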

  12. Characteristic sounds facilitate visual search.

    PubMed

    Iordanescu, Lucica; Guzman-Martinez, Emmanuel; Grabowecky, Marcia; Suzuki, Satoru

    2008-06-01

    In a natural environment, objects that we look for often make characteristic sounds. A hiding cat may meow, or the keys in the cluttered drawer may jingle when moved. Using a visual search paradigm, we demonstrated that characteristic sounds facilitated visual localization of objects, even when the sounds carried no location information. For example, finding a cat was faster when participants heard a meow sound. In contrast, sounds had no effect when participants searched for names rather than pictures of objects. For example, hearing "meow" did not facilitate localization of the word cat. These results suggest that characteristic sounds cross-modally enhance visual (rather than conceptual) processing of the corresponding objects. Our behavioral demonstration of object-based cross-modal enhancement complements the extensive literature on space-based cross-modal interactions. When looking for your keys next time, you might want to play jingling sounds. PMID:18567253

  13. Development of a Computerized Visual Search Test

    ERIC Educational Resources Information Center

    Reid, Denise; Babani, Harsha; Jon, Eugenia

    2009-01-01

    Visual attention and visual search are the features of visual perception, essential for attending and scanning one's environment while engaging in daily occupations. This study describes the development of a novel web-based test of visual search. The development information including the format of the test will be described. The test was designed…

  14. Statistical templates for visual search.

    PubMed

    Ackermann, John F; Landy, Michael S

    2014-01-01

    How do we find a target embedded in a scene? Within the framework of signal detection theory, this task is carried out by comparing each region of the scene with a "template," i.e., an internal representation of the search target. Here we ask what form this representation takes when the search target is a complex image with uncertain orientation. We examine three possible representations. The first is the matched filter. Such a representation cannot account for the ease with which humans can find a complex search target that is rotated relative to the template. A second representation attempts to deal with this by estimating the relative orientation of target and match and rotating the intensity-based template. No intensity-based template, however, can account for the ability to easily locate targets that are defined categorically and not in terms of a specific arrangement of pixels. Thus, we define a third template that represents the target in terms of image statistics rather than pixel intensities. Subjects performed a two-alternative, forced-choice search task in which they had to localize an image that matched a previously viewed target. Target images were texture patches. In one condition, match images were the same image as the target and distractors were a different image of the same textured material. In the second condition, the match image was of the same texture as the target (but different pixels) and the distractor was an image of a different texture. Match and distractor stimuli were randomly rotated relative to the target. We compared human performance to pixel-based, pixel-based with rotation, and statistic-based search models. The statistic-based search model was most successful at matching human performance. We conclude that humans use summary statistics to search for complex visual targets. PMID:24627458
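
    The contrast the abstract draws between a pixel-based matched filter and a statistic-based template can be illustrated on a toy patch: rotating the target degrades a raw pixel match, but leaves summary statistics unchanged. The code below is a deliberately simplified stand-in (mean and variance instead of the paper's texture statistics):

```python
def rotate90(img):
    """Rotate a square patch of pixel intensities 90 degrees."""
    return [list(row) for row in zip(*img[::-1])]

def pixel_match(a, b):
    """Matched filter: raw dot product of pixel intensities."""
    return sum(x * y for ra, rb in zip(a, b) for x, y in zip(ra, rb))

def stat_template(img):
    """Summary statistics (mean, variance): invariant to rotation."""
    px = [x for row in img for x in row]
    m = sum(px) / len(px)
    return m, sum((x - m) ** 2 for x in px) / len(px)

target = [[9, 1, 1],
          [1, 9, 1],
          [1, 1, 9]]
rotated = rotate90(target)
# The pixel template is hurt by rotation; the statistics are not.
assert pixel_match(target, rotated) < pixel_match(target, target)
assert stat_template(rotated) == stat_template(target)
```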

  15. Visualizing Dynamic Bitcoin Transaction Patterns

    PubMed Central

    McGinn, Dan; Birch, David; Akroyd, David; Molina-Solana, Miguel; Guo, Yike; Knottenbelt, William J.

    2016-01-01

    Abstract This work presents a systemic top-down visualization of Bitcoin transaction activity to explore dynamically generated patterns of algorithmic behavior. Bitcoin dominates the cryptocurrency markets and presents researchers with a rich source of real-time transactional data. The pseudonymous yet public nature of the data presents opportunities for the discovery of human and algorithmic behavioral patterns of interest to many parties such as financial regulators, protocol designers, and security analysts. However, retaining visual fidelity to the underlying data to retain a fuller understanding of activity within the network remains challenging, particularly in real time. We expose an effective force-directed graph visualization employed in our large-scale data observation facility to accelerate this data exploration and derive useful insight among domain experts and the general public alike. The high-fidelity visualizations demonstrated in this article allowed for collaborative discovery of unexpected high frequency transaction patterns, including automated laundering operations, and the evolution of multiple distinct algorithmic denial of service attacks on the Bitcoin network. PMID:27441715

  16. Visualizing Dynamic Bitcoin Transaction Patterns.

    PubMed

    McGinn, Dan; Birch, David; Akroyd, David; Molina-Solana, Miguel; Guo, Yike; Knottenbelt, William J

    2016-06-01

    This work presents a systemic top-down visualization of Bitcoin transaction activity to explore dynamically generated patterns of algorithmic behavior. Bitcoin dominates the cryptocurrency markets and presents researchers with a rich source of real-time transactional data. The pseudonymous yet public nature of the data presents opportunities for the discovery of human and algorithmic behavioral patterns of interest to many parties such as financial regulators, protocol designers, and security analysts. However, retaining visual fidelity to the underlying data to retain a fuller understanding of activity within the network remains challenging, particularly in real time. We expose an effective force-directed graph visualization employed in our large-scale data observation facility to accelerate this data exploration and derive useful insight among domain experts and the general public alike. The high-fidelity visualizations demonstrated in this article allowed for collaborative discovery of unexpected high frequency transaction patterns, including automated laundering operations, and the evolution of multiple distinct algorithmic denial of service attacks on the Bitcoin network. PMID:27441715
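
    The force-directed visualization the authors employ is not specified beyond the name; a minimal Fruchterman-Reingold-style sketch (all node pairs repel, connected pairs attract, with a cooling schedule) shows the basic mechanics. Node names and constants are illustrative:

```python
import math, random

def force_layout(nodes, edges, width=1.0, iters=50, seed=1):
    """Minimal force-directed layout sketch: repulsion k^2/d between all
    pairs, attraction d^2/k along edges, displacement capped by a cooling
    temperature. Not the authors' implementation."""
    rng = random.Random(seed)
    pos = {n: [rng.random(), rng.random()] for n in nodes}
    k = width / math.sqrt(len(nodes))          # ideal edge length
    for it in range(iters):
        disp = {n: [0.0, 0.0] for n in nodes}
        for i, a in enumerate(nodes):          # repulsion between all pairs
            for b in nodes[i + 1:]:
                dx = pos[a][0] - pos[b][0]
                dy = pos[a][1] - pos[b][1]
                d = math.hypot(dx, dy) or 1e-9
                f = k * k / d
                disp[a][0] += dx / d * f; disp[a][1] += dy / d * f
                disp[b][0] -= dx / d * f; disp[b][1] -= dy / d * f
        for a, b in edges:                     # attraction along edges
            dx = pos[a][0] - pos[b][0]
            dy = pos[a][1] - pos[b][1]
            d = math.hypot(dx, dy) or 1e-9
            f = d * d / k
            disp[a][0] -= dx / d * f; disp[a][1] -= dy / d * f
            disp[b][0] += dx / d * f; disp[b][1] += dy / d * f
        t = 0.1 * (1 - it / iters)             # cooling schedule
        for n in nodes:
            d = math.hypot(*disp[n]) or 1e-9
            step = min(d, t)
            pos[n][0] += disp[n][0] / d * step
            pos[n][1] += disp[n][1] / d * step
    return pos

# Toy transaction graph: two clusters joined by one edge.
pos = force_layout(list("abcd"), [("a", "b"), ("c", "d"), ("b", "c")])
```

    Real-time, Bitcoin-scale graphs require spatial approximations (e.g. Barnes-Hut) to avoid the O(n²) repulsion pass shown here.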

  17. Collinearity Impairs Local Element Visual Search

    ERIC Educational Resources Information Center

    Jingling, Li; Tseng, Chia-Huei

    2013-01-01

    In visual searches, stimuli following the law of good continuity attract attention to the global structure and receive attentional priority. Also, targets that have unique features are of high feature contrast and capture attention in visual search. We report on a salient global structure combined with a high orientation contrast to the…

  18. Visual Search for Faces with Emotional Expressions

    ERIC Educational Resources Information Center

    Frischen, Alexandra; Eastwood, John D.; Smilek, Daniel

    2008-01-01

    The goal of this review is to critically examine contradictory findings in the study of visual search for emotionally expressive faces. Several key issues are addressed: Can emotional faces be processed preattentively and guide attention? What properties of these faces influence search efficiency? Is search moderated by the emotional state of the…

  19. Cumulative Intertrial Inhibition in Repeated Visual Search

    ERIC Educational Resources Information Center

    Takeda, Yuji

    2007-01-01

    In the present study the author examined visual search when the items remain visible across trials but the location of the target varies. Reaction times for inefficient search cumulatively increased with increasing numbers of repeated search trials, suggesting that inhibition for distractors carried over successive trials. This intertrial…

  20. Searching social networks for subgraph patterns

    NASA Astrophysics Data System (ADS)

    Ogaard, Kirk; Kase, Sue; Roy, Heather; Nagi, Rakesh; Sambhoos, Kedar; Sudit, Moises

    2013-06-01

    Software tools for Social Network Analysis (SNA) are being developed which support various types of analysis of social networks extracted from social media websites (e.g., Twitter). Once extracted and stored in a database such social networks are amenable to analysis by SNA software. This data analysis often involves searching for occurrences of various subgraph patterns (i.e., graphical representations of entities and relationships). The authors have developed the Graph Matching Toolkit (GMT) which provides an intuitive Graphical User Interface (GUI) for a heuristic graph matching algorithm called the Truncated Search Tree (TruST) algorithm. GMT is a visual interface for graph matching algorithms processing large social networks. GMT enables an analyst to draw a subgraph pattern by using a mouse to select categories and labels for nodes and links from drop-down menus. GMT then executes the TruST algorithm to find the top five occurrences of the subgraph pattern within the social network stored in the database. GMT was tested using a simulated counter-insurgency dataset consisting of cellular phone communications within a populated area of operations in Iraq. The results indicated GMT (when executing the TruST graph matching algorithm) is a time-efficient approach to searching large social networks. GMT's visual interface to a graph matching algorithm enables intelligence analysts to quickly analyze and summarize the large amounts of data necessary to produce actionable intelligence.
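
    The TruST algorithm itself is heuristic and not reproduced here, but the core task, finding occurrences of a labeled subgraph pattern in a larger graph, can be sketched with a brute-force backtracking matcher. The graph representation, labels, and toy data below are invented for illustration:

```python
def find_subgraph(graph, pattern):
    """Brute-force labeled subgraph search (a naive stand-in for the TruST
    heuristic). `graph`/`pattern` are (nodes, node->label, directed edge set);
    returns one mapping of pattern nodes to graph nodes, or None."""
    g_nodes, g_labels, g_edges = graph
    p_nodes, p_labels, p_edges = pattern

    def edges_ok(mapping):
        return all((mapping[a], mapping[b]) in g_edges
                   for a, b in p_edges if a in mapping and b in mapping)

    def extend(mapping, remaining):
        if not remaining:
            return dict(mapping)
        p = remaining[0]
        for g in g_nodes:
            if g not in mapping.values() and g_labels[g] == p_labels[p]:
                mapping[p] = g
                if edges_ok(mapping):
                    result = extend(mapping, remaining[1:])
                    if result:
                        return result
                del mapping[p]
        return None

    return extend({}, list(p_nodes))

# Toy network: two people and a cell tower; pattern = a person calling a person.
graph = (["u1", "u2", "u3"],
         {"u1": "person", "u2": "person", "u3": "tower"},
         {("u1", "u2"), ("u2", "u3")})
pattern = (["a", "b"], {"a": "person", "b": "person"}, {("a", "b")})
match = find_subgraph(graph, pattern)  # {'a': 'u1', 'b': 'u2'}
```

    TruST trades this exhaustive search for a truncated search tree, which is what makes matching tractable on large social networks.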

  21. Frontal–Occipital Connectivity During Visual Search

    PubMed Central

    Pantazatos, Spiro P.; Yanagihara, Ted K.; Zhang, Xian; Meitzler, Thomas

    2012-01-01

    Abstract Although expectation- and attention-related interactions between ventral and medial prefrontal cortex and stimulus category-selective visual regions have been identified during visual detection and discrimination, it is not known if similar neural mechanisms apply to other tasks such as visual search. The current work tested the hypothesis that high-level frontal regions, previously implicated in expectation and visual imagery of object categories, interact with visual regions associated with object recognition during visual search. Using functional magnetic resonance imaging, subjects searched for a specific object that varied in size and location within a complex natural scene. A model-free, spatial-independent component analysis isolated multiple task-related components, one of which included visual cortex, as well as a cluster within ventromedial prefrontal cortex (vmPFC), consistent with the engagement of both top-down and bottom-up processes. Analyses of psychophysiological interactions showed increased functional connectivity between vmPFC and object-sensitive lateral occipital cortex (LOC), and results from dynamic causal modeling and Bayesian Model Selection suggested bidirectional connections between vmPFC and LOC that were positively modulated by the task. Using image-guided diffusion-tensor imaging, functionally seeded, probabilistic white-matter tracts between vmPFC and LOC, which presumably underlie this effective interconnectivity, were also observed. These connectivity findings extend previous models of visual search processes to include specific frontal–occipital neuronal interactions during a natural and complex search task. PMID:22708993

  22. Words, shape, visual search and visual working memory in 3-year-old children

    PubMed Central

    Vales, Catarina; Smith, Linda B.

    2014-01-01

    Do words cue children’s visual attention, and if so, what are the relevant mechanisms? Across four experiments, 3-year-old children (N = 163) were tested in visual search tasks in which targets were cued with only a visual preview versus a visual preview and a spoken name. The experiments were designed to determine whether labels facilitated search times and to examine one route through which labels could have their effect: By influencing the visual working memory representation of the target. The targets and distractors were pictures of instances of basic-level known categories and the labels were the common name for the target category. We predicted that the label would enhance the visual working memory representation of the target object, guiding attention to objects that better matched the target representation. Experiments 1 and 2 used conjunctive search tasks, and Experiment 3 varied shape discriminability between targets and distractors. Experiment 4 compared the effects of labels to repeated presentations of the visual target, which should also influence the working memory representation of the target. The overall pattern fits contemporary theories of how the contents of visual working memory interact with visual search and attention, and shows that even in very young children heard words affect the processing of visual information. PMID:24720802

  23. Words, shape, visual search and visual working memory in 3-year-old children.

    PubMed

    Vales, Catarina; Smith, Linda B

    2015-01-01

    Do words cue children's visual attention, and if so, what are the relevant mechanisms? Across four experiments, 3-year-old children (N = 163) were tested in visual search tasks in which targets were cued with only a visual preview versus a visual preview and a spoken name. The experiments were designed to determine whether labels facilitated search times and to examine one route through which labels could have their effect: By influencing the visual working memory representation of the target. The targets and distractors were pictures of instances of basic-level known categories and the labels were the common name for the target category. We predicted that the label would enhance the visual working memory representation of the target object, guiding attention to objects that better matched the target representation. Experiments 1 and 2 used conjunctive search tasks, and Experiment 3 varied shape discriminability between targets and distractors. Experiment 4 compared the effects of labels to repeated presentations of the visual target, which should also influence the working memory representation of the target. The overall pattern fits contemporary theories of how the contents of visual working memory interact with visual search and attention, and shows that even in very young children heard words affect the processing of visual information. PMID:24720802

  24. Temporal Stability of Visual Search-Driven Biometrics

    SciTech Connect

    Yoon, Hong-Jun; Carmichael, Tandy; Tourassi, Georgia

    2015-01-01

Previously, we have shown the potential of using an individual's visual search pattern as a possible biometric. That study focused on viewing images displaying dot-patterns with different spatial relationships to determine which pattern can be more effective in establishing the identity of an individual. In this follow-up study we investigated the temporal stability of this biometric. We performed an experiment with 16 individuals asked to search for a predetermined feature of a random-dot pattern as we tracked their eye movements. Each participant completed four testing sessions consisting of two dot patterns repeated twice. One dot pattern displayed concentric circles shifted to the left or right side of the screen overlaid with visual noise, and participants were asked which side the circles were centered on. The second dot-pattern displayed a number of circles (between 0 and 4) scattered on the screen overlaid with visual noise, and participants were asked how many circles they could identify. Each session contained 5 untracked tutorial questions and 50 tracked test questions (200 total tracked questions per participant). To create each participant's "fingerprint", we constructed a Hidden Markov Model (HMM) from the gaze data representing the underlying visual search and cognitive process. The accuracy of the derived HMM models was evaluated using cross-validation for various time-dependent train-test conditions. Subject identification accuracy ranged from 17.6% to 41.8% for all conditions, which is significantly higher than random guessing (1/16 = 6.25%). The results suggest that visual search pattern is a promising, fairly stable personalized fingerprint of perceptual organization.
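
    The abstract does not give the HMM's structure, so the sketch below substitutes a loud simplification: each subject is fingerprinted by a first-order Markov transition matrix over quantized gaze regions, and a new scanpath is attributed to the enrolled subject whose model gives it the highest log-likelihood. All sequences, region counts, and subject labels are invented:

```python
import math

def transition_model(seq, n=4, alpha=1.0):
    """Laplace-smoothed first-order transition matrix over n gaze regions.
    A simplification of the paper's HMM: states are observed directly."""
    m = [[alpha] * n for _ in range(n)]
    for a, b in zip(seq, seq[1:]):
        m[a][b] += 1
    return [[c / sum(row) for c in row] for row in m]

def log_likelihood(seq, model):
    """Log-probability of a scanpath's transitions under a subject's model."""
    return sum(math.log(model[a][b]) for a, b in zip(seq, seq[1:]))

def identify(seq, enrolled):
    """Pick the enrolled subject whose model best explains the new scanpath."""
    return max(enrolled, key=lambda s: log_likelihood(seq, enrolled[s]))

# Invented subjects: s1 alternates left/right regions, s2 alternates top/bottom.
enrolled = {
    "s1": transition_model([0, 1, 0, 1, 0, 1, 0, 1]),
    "s2": transition_model([2, 3, 2, 3, 2, 3, 2, 3]),
}
who = identify([0, 1, 0, 1, 0], enrolled)  # 's1'
```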

  25. Temporal stability of visual search-driven biometrics

    NASA Astrophysics Data System (ADS)

    Yoon, Hong-Jun; Carmichael, Tandy R.; Tourassi, Georgia

    2015-03-01

    Previously, we have shown the potential of using an individual's visual search pattern as a possible biometric. That study focused on viewing images displaying dot-patterns with different spatial relationships to determine which pattern can be more effective in establishing the identity of an individual. In this follow-up study we investigated the temporal stability of this biometric. We performed an experiment with 16 individuals asked to search for a predetermined feature of a random-dot pattern as we tracked their eye movements. Each participant completed four testing sessions consisting of two dot patterns repeated twice. One dot pattern displayed concentric circles shifted to the left or right side of the screen overlaid with visual noise, and participants were asked which side the circles were centered on. The second dot-pattern displayed a number of circles (between 0 and 4) scattered on the screen overlaid with visual noise, and participants were asked how many circles they could identify. Each session contained 5 untracked tutorial questions and 50 tracked test questions (200 total tracked questions per participant). To create each participant's "fingerprint", we constructed a Hidden Markov Model (HMM) from the gaze data representing the underlying visual search and cognitive process. The accuracy of the derived HMM models was evaluated using cross-validation for various time-dependent train-test conditions. Subject identification accuracy ranged from 17.6% to 41.8% for all conditions, which is significantly higher than random guessing (1/16 = 6.25%). The results suggest that visual search pattern is a promising, temporally stable personalized fingerprint of perceptual organization.

  6. Visual search engine for product images

    NASA Astrophysics Data System (ADS)

    Lin, Xiaofan; Gokturk, Burak; Sumengen, Baris; Vu, Diem

    2008-01-01

Nowadays there are many product comparison web sites, but most of them rely on text information alone. This paper introduces a novel visual search engine for product images, which provides a brand-new way of visually locating products through Content-Based Image Retrieval (CBIR) technology. We discuss the unique technical challenges, solutions, and experimental results in the design and implementation of this system.

  7. Superior Visual Search in Adults with Autism

    ERIC Educational Resources Information Center

    O'Riordan, Michelle

    2004-01-01

    Recent studies have suggested that children with autism perform better than matched controls on visual search tasks and that this stems from a superior visual discrimination ability. This study assessed whether these findings generalize from children to adults with autism. Experiments 1 and 2 showed that, like children, adults with autism were…

  8. Perceptual Encoding Efficiency in Visual Search

    ERIC Educational Resources Information Center

    Rauschenberger, Robert; Yantis, Steven

    2006-01-01

    The authors present 10 experiments that challenge some central assumptions of the dominant theories of visual search. Their results reveal that the complexity (or redundancy) of nontarget items is a crucial but overlooked determinant of search efficiency. The authors offer a new theoretical outline that emphasizes the importance of nontarget…

  9. The Search for Optimal Visual Stimuli

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Ellis, Stephen R. (Technical Monitor)

    1997-01-01

In 1983, Watson, Barlow and Robson published a brief report in which they explored the relative visibility of targets that varied in size, shape, spatial frequency, speed, and duration (referred to subsequently here as WBR). A novel aspect of that paper was that visibility was quantified in terms of threshold contrast energy, rather than contrast. As they noted, this provides a more direct measure of the efficiency with which various patterns are detected, and may be more edifying as to the underlying detection machinery. For example, under certain simple assumptions, the waveform of the most efficiently detected signal is an estimate of the receptive field of the visual system's most efficient detector. Thus one goal of their experiment was to search for the stimulus that the 'eye sees best'. Parenthetically, the search for optimal stimuli may be seen as the most general and sophisticated variant of the traditional 'subthreshold summation' experiment, in which one measures the effect upon visibility of small probes combined with a base stimulus.

  10. Graphical Representations of Electronic Search Patterns.

    ERIC Educational Resources Information Center

    Lin, Xia; And Others

    1991-01-01

    Discussion of search behavior in electronic environments focuses on the development of GRIP (Graphic Representor of Interaction Patterns), a graphing tool based on HyperCard that produces graphic representations of search patterns. Search state spaces are explained, and forms of data available from electronic searches are described. (34…

  11. Pattern Search Algorithms for Bound Constrained Minimization

    NASA Technical Reports Server (NTRS)

    Lewis, Robert Michael; Torczon, Virginia

    1996-01-01

    We present a convergence theory for pattern search methods for solving bound constrained nonlinear programs. The analysis relies on the abstract structure of pattern search methods and an understanding of how the pattern interacts with the bound constraints. This analysis makes it possible to develop pattern search methods for bound constrained problems while only slightly restricting the flexibility present in pattern search methods for unconstrained problems. We prove global convergence despite the fact that pattern search methods do not have explicit information concerning the gradient and its projection onto the feasible region and consequently are unable to enforce explicitly a notion of sufficient feasible decrease.
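The abstract concerns convergence theory rather than implementation, but the mechanics it presumes can be illustrated with a generic, textbook-style compass search under box constraints. This is not the authors' algorithm; in particular, clipping trial points to the bounds is a crude stand-in for a pattern that properly conforms to the feasible region.

```python
def pattern_search(f, x0, lower, upper, step=0.5, tol=1e-6, max_iter=10_000):
    """Minimize f over box constraints lower <= x <= upper with a
    compass search: poll +/- step along each coordinate, move to any
    improving feasible point, and halve the step when no poll improves."""
    x = [min(max(v, lo), hi) for v, lo, hi in zip(x0, lower, upper)]
    fx = f(x)
    for _ in range(max_iter):
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                y = list(x)
                y[i] = min(max(y[i] + d, lower[i]), upper[i])  # project onto bounds
                fy = f(y)
                if fy < fx:
                    x, fx, improved = y, fy, True
                    break
        if not improved:
            step *= 0.5  # contract the pattern
            if step < tol:
                break
    return x, fx
```

Note that the method never evaluates a gradient; as the abstract emphasizes, progress and the stopping test both rest on the step-length parameter alone.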

  12. Features in visual search combine linearly

    PubMed Central

    Pramod, R. T.; Arun, S. P.

    2014-01-01

Single features such as line orientation and length are known to guide visual search, but relatively little is known about how multiple features combine in search. To address this question, we investigated how search for targets differing in multiple features (intensity, length, orientation) from the distracters is related to searches for targets differing in each of the individual features. We tested race models (based on reaction times) and co-activation models (based on reciprocal of reaction times) for their ability to predict multiple feature searches. Multiple feature searches were best accounted for by a co-activation model in which feature information combined linearly (r = 0.95). This result agrees with the classic finding that these features are separable, i.e., subjective dissimilarity ratings sum linearly. We then replicated the classical finding that the length and width of a rectangle are integral features; in other words, they combine nonlinearly in visual search. However, to our surprise, upon including aspect ratio as an additional feature, length and width combined linearly and this model outperformed all other models. Thus, length and width of a rectangle became separable when considered together with aspect ratio. This finding predicts that searches involving shapes with identical aspect ratio should be more difficult than searches where shapes differ in aspect ratio. We confirmed this prediction on a variety of shapes. We conclude that features in visual search co-activate linearly and demonstrate for the first time that aspect ratio is a novel feature that guides visual search. PMID:24715328
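The linear co-activation result, that reciprocals of reaction times for single-feature searches combine linearly to predict the multi-feature search, can be sketched with a toy least-squares fit. The two-feature restriction and the data below are invented for illustration and are not taken from the study.

```python
def fit_linear_coactivation(d1, d2, d_combined):
    """Fit weights w1, w2 minimizing ||w1*d1 + w2*d2 - d_combined||^2,
    where each list holds reciprocal reaction times (search rates)
    across conditions. Solves the 2x2 normal equations directly."""
    s11 = sum(a * a for a in d1)
    s22 = sum(b * b for b in d2)
    s12 = sum(a * b for a, b in zip(d1, d2))
    s1y = sum(a * y for a, y in zip(d1, d_combined))
    s2y = sum(b * y for b, y in zip(d2, d_combined))
    det = s11 * s22 - s12 * s12
    return ((s22 * s1y - s12 * s2y) / det,
            (s11 * s2y - s12 * s1y) / det)
```

A race model would instead combine the raw reaction times; fitting on reciprocals is what makes this a co-activation test in the paper's sense.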

  13. Unsupervised Learning for Visual Pattern Analysis

    NASA Astrophysics Data System (ADS)

    Zheng, Nanning; Xue, Jianru

    This chapter presents an overview of topics and major concepts in unsupervised learning for visual pattern analysis. Cluster analysis and dimensionality are two important topics in unsupervised learning. Clustering relates to the grouping of similar objects in visual perception, while dimensionality reduction is essential for the compact representation of visual patterns. In this chapter, we focus on clustering techniques, offering first a theoretical basis, then a look at some applications in visual pattern analysis. With respect to the former, we introduce both concepts and algorithms. With respect to the latter, we discuss visual perceptual grouping. In particular, the problem of image segmentation is discussed in terms of contour and region grouping. Finally, we present a brief introduction to learning visual pattern representations, which serves as a prelude to the following chapters.

  14. Visual reinforcement shapes eye movements in visual search.

    PubMed

    Paeye, Céline; Schütz, Alexander C; Gegenfurtner, Karl R

    2016-08-01

    We use eye movements to gain information about our visual environment; this information can indirectly be used to affect the environment. Whereas eye movements are affected by explicit rewards such as points or money, it is not clear whether the information gained by finding a hidden target has a similar reward value. Here we tested whether finding a visual target can reinforce eye movements in visual search performed in a noise background, which conforms to natural scene statistics and contains a large number of possible target locations. First we tested whether presenting the target more often in one specific quadrant would modify eye movement search behavior. Surprisingly, participants did not learn to search for the target more often in high probability areas. Presumably, participants could not learn the reward structure of the environment. In two subsequent experiments we used a gaze-contingent display to gain full control over the reinforcement schedule. The target was presented more often after saccades into a specific quadrant or a specific direction. The proportions of saccades meeting the reinforcement criteria increased considerably, and participants matched their search behavior to the relative reinforcement rates of targets. Reinforcement learning seems to serve as the mechanism to optimize search behavior with respect to the statistics of the task. PMID:27559719

  15. Visual search under scotopic lighting conditions.

    PubMed

    Paulun, Vivian C; Schütz, Alexander C; Michel, Melchi M; Geisler, Wilson S; Gegenfurtner, Karl R

    2015-08-01

    When we search for visual targets in a cluttered background we systematically move our eyes around to bring different regions of the scene into foveal view. We explored how visual search behavior changes when the fovea is not functional, as is the case in scotopic vision. Scotopic contrast sensitivity is significantly lower overall, with a functional scotoma in the fovea. We found that in scotopic search, for a medium- and a low-spatial-frequency target, individuals made longer lasting fixations that were not broadly distributed across the entire search display but tended to peak in the upper center, especially for the medium-frequency target. The distributions of fixation locations are qualitatively similar to those of an ideal searcher that has human scotopic detectability across the visual field, and interestingly, these predicted distributions are different from those predicted by an ideal searcher with human photopic detectability. We conclude that although there are some qualitative differences between human and ideal search behavior, humans make principled adjustments in their search behavior as ambient light level decreases. PMID:25988753

  16. Online Search Patterns: NLM CATLINE Database.

    ERIC Educational Resources Information Center

    Tolle, John E.; Hah, Sehchang

    1985-01-01

    Presents analysis of online search patterns within user searching sessions of National Library of Medicine ELHILL system and examines user search patterns on the CATLINE database. Data previously analyzed on MEDLINE database for same period is used to compare the performance parameters of different databases within the same information system.…

  17. Visual Templates in Pattern Generalization Activity

    ERIC Educational Resources Information Center

    Rivera, F. D.

    2010-01-01

    In this research article, I present evidence of the existence of visual templates in pattern generalization activity. Such templates initially emerged from a 3-week design-driven classroom teaching experiment on pattern generalization involving linear figural patterns and were assessed for existence in a clinical interview that was conducted four…

  18. Dynamic Prototypicality Effects in Visual Search

    ERIC Educational Resources Information Center

    Kayaert, Greet; Op de Beeck, Hans P.; Wagemans, Johan

    2011-01-01

    In recent studies, researchers have discovered a larger neural activation for stimuli that are more extreme exemplars of their stimulus class, compared with stimuli that are more prototypical. This has been shown for faces as well as for familiar and novel shape classes. We used a visual search task to look for a behavioral correlate of these…

  19. Homo economicus in visual search.

    PubMed

    Navalpakkam, Vidhya; Koch, Christof; Perona, Pietro

    2009-01-01

    How do reward outcomes affect early visual performance? Previous studies found a suboptimal influence, but they ignored the non-linearity in how subjects perceived the reward outcomes. In contrast, we find that when the non-linearity is accounted for, humans behave optimally and maximize expected reward. Our subjects were asked to detect the presence of a familiar target object in a cluttered scene. They were rewarded according to their performance. We systematically varied the target frequency and the reward/penalty policy for detecting/missing the targets. We find that 1) decreasing the target frequency will decrease the detection rates, in accordance with the literature. 2) Contrary to previous studies, increasing the target detection rewards will compensate for target rarity and restore detection performance. 3) A quantitative model based on reward maximization accurately predicts human detection behavior in all target frequency and reward conditions; thus, reward schemes can be designed to obtain desired detection rates for rare targets. 4) Subjects quickly learn the optimal decision strategy; we propose a neurally plausible model that exhibits the same properties. Potential applications include designing reward schemes to improve detection of life-critical, rare targets (e.g., cancers in medical images). PMID:19271901
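The reward-maximization account corresponds to the standard signal-detection result that an ideal observer sets a likelihood-ratio criterion from the target prior and the payoff matrix. The sketch below is generic SDT, not the authors' quantitative model.

```python
def optimal_criterion(p_target, v_hit, v_miss, v_cr, v_fa):
    """Likelihood-ratio criterion beta that maximizes expected reward in
    a yes/no detection task: respond 'target present' whenever
    P(evidence | target) / P(evidence | noise) exceeds beta.
    v_hit, v_miss, v_cr, v_fa are the payoffs for hits, misses,
    correct rejections, and false alarms."""
    return ((v_cr - v_fa) * (1 - p_target)) / ((v_hit - v_miss) * p_target)
```

With unit payoffs, a rare target (p = 0.1) pushes the criterion from 1 up to 9, i.e. conservative responding and fewer detections; raising the hit payoff to 9 restores the criterion to 1, mirroring the finding that detection rewards can compensate for target rarity.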

  20. Selective scanpath repetition during memory-guided visual search

    PubMed Central

    Wynn, Jordana S.; Bone, Michael B.; Dragan, Michelle C.; Hoffman, Kari L.; Buchsbaum, Bradley R.; Ryan, Jennifer D.

    2016-01-01

    ABSTRACT Visual search efficiency improves with repetition of a search display, yet the mechanisms behind these processing gains remain unclear. According to Scanpath Theory, memory retrieval is mediated by repetition of the pattern of eye movements or “scanpath” elicited during stimulus encoding. Using this framework, we tested the prediction that scanpath recapitulation reflects relational memory guidance during repeated search events. Younger and older subjects were instructed to find changing targets within flickering naturalistic scenes. Search efficiency (search time, number of fixations, fixation duration) and scanpath similarity (repetition) were compared across age groups for novel (V1) and repeated (V2) search events. Younger adults outperformed older adults on all efficiency measures at both V1 and V2, while the search time benefit for repeated viewing (V1–V2) did not differ by age. Fixation-binned scanpath similarity analyses revealed repetition of initial and final (but not middle) V1 fixations at V2, with older adults repeating more initial V1 fixations than young adults. In young adults only, early scanpath similarity correlated negatively with search time at test, indicating increased efficiency, whereas the similarity of V2 fixations to middle V1 fixations predicted poor search performance. We conclude that scanpath compression mediates increased search efficiency by selectively recapitulating encoding fixations that provide goal-relevant input. Extending Scanpath Theory, results suggest that scanpath repetition varies as a function of time and memory integrity. PMID:27570471

  1. Do Multielement Visual Tracking and Visual Search Draw Continuously on the Same Visual Attention Resources?

    ERIC Educational Resources Information Center

    Alvarez, George A.; Horowitz, Todd S.; Arsenio, Helga C.; DiMase, Jennifer S.; Wolfe, Jeremy M.

    2005-01-01

    Multielement visual tracking and visual search are 2 tasks that are held to require visual-spatial attention. The authors used the attentional operating characteristic (AOC) method to determine whether both tasks draw continuously on the same attentional resource (i.e., whether the 2 tasks are mutually exclusive). The authors found that observers…

  2. Pattern Search Methods for Linearly Constrained Minimization

    NASA Technical Reports Server (NTRS)

    Lewis, Robert Michael; Torczon, Virginia

    1998-01-01

    We extend pattern search methods to linearly constrained minimization. We develop a general class of feasible point pattern search algorithms and prove global convergence to a Karush-Kuhn-Tucker point. As in the case of unconstrained minimization, pattern search methods for linearly constrained problems accomplish this without explicit recourse to the gradient or the directional derivative. Key to the analysis of the algorithms is the way in which the local search patterns conform to the geometry of the boundary of the feasible region.

  3. On the Local Convergence of Pattern Search

    NASA Technical Reports Server (NTRS)

    Dolan, Elizabeth D.; Lewis, Robert Michael; Torczon, Virginia; Bushnell, Dennis M. (Technical Monitor)

    2000-01-01

    We examine the local convergence properties of pattern search methods, complementing the previously established global convergence properties for this class of algorithms. We show that the step-length control parameter which appears in the definition of pattern search algorithms provides a reliable asymptotic measure of first-order stationarity. This gives an analytical justification for a traditional stopping criterion for pattern search methods. Using this measure of first-order stationarity, we analyze the behavior of pattern search in the neighborhood of an isolated local minimizer. We show that a recognizable subsequence converges r-linearly to the minimizer.

  4. Coarse guidance by numerosity in visual search.

    PubMed

    Reijnen, Ester; Wolfe, Jeremy M; Krummenacher, Joseph

    2013-01-01

    In five experiments, we examined whether the number of items can guide visual focal attention. Observers searched for the target area with the largest (or smallest) number of dots (squares in Experiment 4 and "checkerboards" in Experiment 5) among distractor areas with a smaller (or larger) number of dots. Results of Experiments 1 and 2 show that search efficiency is determined by target to distractor dot ratios. In searches where target items contained more dots than did distractor items, ratios over 1.5:1 yielded efficient search. Searches for targets where target items contained fewer dots than distractor items were harder. Here, ratios needed to be lower than 1:2 to yield efficient search. When the areas of the dots and of the squares containing them were fixed, as they were in Experiments 1 and 2, dot density and total dot area increased as dot number increased. Experiment 3 removed the density and area cues by allowing dot size and total dot area to vary. This produced a marked decline in search performance. Efficient search now required ratios of above 3:1 or below 1:3. By using more realistic and isoluminant stimuli, Experiments 4 and 5 show that guidance by numerosity is fragile. As is found with other features that guide focal attention (e.g., color, orientation, size), the numerosity differences that are able to guide attention by bottom-up signals are much coarser than the differences that can be detected in attended stimuli. PMID:23070885

  5. Investigation of Neural Strategies of Visual Search

    NASA Technical Reports Server (NTRS)

    Krauzlis, Richard J.

    2003-01-01

The goal of this project was to measure how neurons in the superior colliculus (SC) change their activity during a visual search task. Specifically, we proposed to measure how the activity of these neurons was altered by the discriminability of visual targets and to test how these changes might predict the changes in the subjects' performance. The primary rationale for this study was that understanding how the information encoded by these neurons constrains overall search performance would foster the development of better models of human performance. Work performed during the period supported by this grant has achieved these aims. First, we have recorded from neurons in the SC during a visual search task in which the difficulty of the task and the performance of the subject were systematically varied. The results from these single-neuron physiology experiments show that prior to eye movement onset, the difference in activity across the ensemble of neurons reaches a fixed threshold value, reflecting the operation of a winner-take-all mechanism. Second, we have developed a model of eye movement decisions based on the principle of winner-take-all. The model incorporates the idea that the overt saccade choice reflects only one of the multiple saccades prepared during visual discrimination, consistent with our physiological data. The value of the model is that, unlike previous models, it is able to account for both the latency and the percent correct of saccade choices.

  6. Persistence in eye movement during visual search

    NASA Astrophysics Data System (ADS)

    Amor, Tatiana A.; Reis, Saulo D. S.; Campos, Daniel; Herrmann, Hans J.; Andrade, José S.

    2016-02-01

As with any cognitive task, visual search involves a number of underlying processes that cannot be directly observed and measured. In this way, the movement of the eyes certainly represents the most explicit and closest connection we can get to the inner mechanisms governing this cognitive activity. Here we show that the process of eye movement during visual search, consisting of sequences of fixations intercalated by saccades, exhibits distinctive persistent behaviors. Initially, by focusing on saccadic directions and intersaccadic angles, we disclose that the probability distributions of these measures show a clear preference of participants towards a reading-like mechanism (geometrical persistence), whose features and potential advantages for searching/foraging are discussed. We then perform a Multifractal Detrended Fluctuation Analysis (MF-DFA) over the time series of jump magnitudes in the eye trajectory and find that it exhibits a typical multifractal behavior arising from the sequential combination of saccades and fixations. By inspecting the time series composed of only fixational movements, our results reveal instead a monofractal behavior with a Hurst exponent, which indicates the presence of long-range power-law positive correlations (statistical persistence). We expect that our methodological approach can be adopted as a way to understand persistence and strategy-planning during visual search.

  7. Persistence in eye movement during visual search

    PubMed Central

    Amor, Tatiana A.; Reis, Saulo D. S.; Campos, Daniel; Herrmann, Hans J.; Andrade, José S.

    2016-01-01

As with any cognitive task, visual search involves a number of underlying processes that cannot be directly observed and measured. In this way, the movement of the eyes certainly represents the most explicit and closest connection we can get to the inner mechanisms governing this cognitive activity. Here we show that the process of eye movement during visual search, consisting of sequences of fixations intercalated by saccades, exhibits distinctive persistent behaviors. Initially, by focusing on saccadic directions and intersaccadic angles, we disclose that the probability distributions of these measures show a clear preference of participants towards a reading-like mechanism (geometrical persistence), whose features and potential advantages for searching/foraging are discussed. We then perform a Multifractal Detrended Fluctuation Analysis (MF-DFA) over the time series of jump magnitudes in the eye trajectory and find that it exhibits a typical multifractal behavior arising from the sequential combination of saccades and fixations. By inspecting the time series composed of only fixational movements, our results reveal instead a monofractal behavior with a Hurst exponent, which indicates the presence of long-range power-law positive correlations (statistical persistence). We expect that our methodological approach can be adopted as a way to understand persistence and strategy-planning during visual search. PMID:26864680

  8. Persistence in eye movement during visual search.

    PubMed

    Amor, Tatiana A; Reis, Saulo D S; Campos, Daniel; Herrmann, Hans J; Andrade, José S

    2016-01-01

As with any cognitive task, visual search involves a number of underlying processes that cannot be directly observed and measured. In this way, the movement of the eyes certainly represents the most explicit and closest connection we can get to the inner mechanisms governing this cognitive activity. Here we show that the process of eye movement during visual search, consisting of sequences of fixations intercalated by saccades, exhibits distinctive persistent behaviors. Initially, by focusing on saccadic directions and intersaccadic angles, we disclose that the probability distributions of these measures show a clear preference of participants towards a reading-like mechanism (geometrical persistence), whose features and potential advantages for searching/foraging are discussed. We then perform a Multifractal Detrended Fluctuation Analysis (MF-DFA) over the time series of jump magnitudes in the eye trajectory and find that it exhibits a typical multifractal behavior arising from the sequential combination of saccades and fixations. By inspecting the time series composed of only fixational movements, our results reveal instead a monofractal behavior with a Hurst exponent, which indicates the presence of long-range power-law positive correlations (statistical persistence). We expect that our methodological approach can be adopted as a way to understand persistence and strategy-planning during visual search. PMID:26864680
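Records 6-8 above index the same study in three databases. Its monofractal analysis can be illustrated with a minimal detrended fluctuation analysis; this is a generic DFA-1 sketch in stdlib Python, not the authors' MF-DFA code. For uncorrelated noise the estimated exponent should come out near 0.5, with values above 0.5 indicating the persistent long-range correlations the paper reports.

```python
import math

def dfa_hurst(series, scales=(8, 16, 32, 64)):
    """Estimate the Hurst exponent by DFA-1: integrate the
    mean-subtracted series, compute RMS residuals of a linear fit in
    non-overlapping windows at each scale, and fit the log-log slope."""
    mean = sum(series) / len(series)
    profile, running = [], 0.0
    for v in series:
        running += v - mean
        profile.append(running)
    log_n, log_f = [], []
    for n in scales:
        sq_residuals = []
        for start in range(0, len(profile) - n + 1, n):
            window = profile[start:start + n]
            mx = (n - 1) / 2
            my = sum(window) / n
            cov = sum((x - mx) * (y - my) for x, y in enumerate(window))
            var = sum((x - mx) ** 2 for x in range(n))
            slope = cov / var  # least-squares line through the window
            sq_residuals.extend((y - (my + slope * (x - mx))) ** 2
                                for x, y in enumerate(window))
        log_n.append(math.log(n))
        log_f.append(math.log(math.sqrt(sum(sq_residuals) / len(sq_residuals))))
    # slope of log F(n) versus log n is the Hurst estimate
    mlx = sum(log_n) / len(log_n)
    mly = sum(log_f) / len(log_f)
    return (sum((a - mlx) * (b - mly) for a, b in zip(log_n, log_f))
            / sum((a - mlx) ** 2 for a in log_n))
```

The multifractal variant in the paper repeats this fluctuation computation for a range of moment orders q rather than the single RMS (q = 2) case shown here.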

  9. Similarity relations in visual search predict rapid visual categorization

    PubMed Central

    Mohan, Krithika; Arun, S. P.

    2012-01-01

How do we perform rapid visual categorization? It is widely thought that categorization involves evaluating the similarity of an object to other category items, but the underlying features and similarity relations remain unknown. Here, we hypothesized that categorization performance is based on perceived similarity relations between items within and outside the category. To this end, we measured the categorization performance of human subjects on three diverse visual categories (animals, vehicles, and tools) and across three hierarchical levels (superordinate, basic, and subordinate levels among animals). For the same subjects, we measured their perceived pair-wise similarities between objects using a visual search task. Regardless of category and hierarchical level, we found that the time taken to categorize an object could be predicted using its similarity to members within and outside its category. We were able to account for several classic categorization phenomena, such as (a) the longer times required to reject category membership; (b) the longer times to categorize atypical objects; and (c) differences in performance across tasks and across hierarchical levels. These categorization times were also accounted for by a model that extracts coarse structure from an image. The striking agreement observed between categorization and visual search suggests that these two disparate tasks depend on a shared coarse object representation. PMID:23092947

  10. Parallel Mechanisms for Visual Search in Zebrafish

    PubMed Central

    Proulx, Michael J.; Parker, Matthew O.; Tahir, Yasser; Brennan, Caroline H.

    2014-01-01

Parallel visual search mechanisms have been reported previously only in mammals and birds, and not in animals lacking an expanded telencephalon, such as bees. Here we report the first evidence for parallel visual search in fish using a choice task where the fish had to find a target amongst an increasing number of distractors. Following two-choice discrimination training, zebrafish were presented with the original stimulus within an increasing array of distractor stimuli. We found that zebrafish exhibit no significant change in accuracy and approach latency as the number of distractors increased, providing evidence of parallel processing. This evidence challenges theories of vertebrate neural architecture and the importance of an expanded telencephalon for the evolution of executive function. PMID:25353168

  11. Cardiac and Respiratory Responses During Visual Search in Nonretarded Children and Retarded Adolescents

    ERIC Educational Resources Information Center

    Porges, Stephen W.; Humphrey, Mary M.

    1977-01-01

    The relationship between physiological response patterns and mental competence was investigated by evaluating heart rate and respiratory responses during a sustained visual-search task in 29 nonretarded grade school children and 16 retarded adolescents. (Author)

  12. Configural learning in contextual cuing of visual search.

    PubMed

    Beesley, Tom; Vadillo, Miguel A; Pearson, Daniel; Shanks, David R

    2016-08-01

Two experiments were conducted to explore the role of configural representations in contextual cuing of visual search. Repeating patterns of distractors (contexts) were trained incidentally as predictive of the target location. Training participants with repeating contexts of consistent configurations led to stronger contextual cuing than when participants were trained with contexts of inconsistent configurations. Computational simulations with an elemental associative learning model of contextual cuing demonstrated that purely elemental representations could not account for the results. However, a configural model of associative learning was able to simulate the ordinal pattern of data. (PsycINFO Database Record) PMID:26913779

  13. A Visual Search Tool for Early Elementary Science Students.

    ERIC Educational Resources Information Center

    Revelle, Glenda; Druin, Allison; Platner, Michele; Bederson, Ben; Hourcade, Juan Pablo; Sherman, Lisa

    2002-01-01

    Reports on the development of a visual search interface called "SearchKids" to support children ages 5-10 years in their efforts to find animals in a hierarchical information structure. Investigates whether children can construct search queries to conduct complex searches if sufficiently supported both visually and conceptually. (Contains 27…

  14. Guided Text Search Using Adaptive Visual Analytics

    SciTech Connect

    Steed, Chad A; Symons, Christopher T; Senter, James K; DeNap, Frank A

    2012-10-01

This research demonstrates the promise of augmenting interactive visualizations with semi-supervised machine learning techniques to improve the discovery of significant associations and insights in the search and analysis of textual information. More specifically, we have developed a system called Gryffin that hosts a unique collection of techniques that facilitate individualized investigative search pertaining to an ever-changing set of analytical questions over an indexed collection of open-source documents related to critical national infrastructure. The Gryffin client hosts dynamic displays of the search results via focus+context record listings, temporal timelines, term-frequency views, and multiple coordinated views. Furthermore, as the analyst interacts with the display, the interactions are recorded and used to label the search records. These labeled records are then used to drive semi-supervised machine learning algorithms that re-rank the unlabeled search records such that potentially relevant records are moved to the top of the record listing. Gryffin is described in the context of the daily tasks encountered at the US Department of Homeland Security's Fusion Center, with whom we are collaborating in its development. The resulting system is capable of addressing the analysts' information overload that can be directly attributed to the deluge of information that must be addressed in the search and investigative analysis of textual information.
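The interaction-driven re-ranking idea can be conveyed with a deliberately naive stand-in: score unlabeled documents by cosine similarity to the centroid of documents the analyst's interactions have labeled relevant. The toy documents and bag-of-words scoring below are illustrative only and are not the actual Gryffin pipeline, which uses semi-supervised learning over indexed records.

```python
from collections import Counter
import math

def cosine(a, b):
    """Cosine similarity between two sparse term-count vectors."""
    num = sum(v * b.get(t, 0) for t, v in a.items())
    da = math.sqrt(sum(v * v for v in a.values()))
    db = math.sqrt(sum(v * v for v in b.values()))
    return num / (da * db) if da and db else 0.0

def rerank(unlabeled, relevant):
    """Re-rank unlabeled documents by similarity to the centroid of the
    documents implicitly labeled relevant by analyst interactions."""
    centroid = Counter()
    for doc in relevant:
        centroid.update(doc.split())
    return sorted(unlabeled,
                  key=lambda d: cosine(Counter(d.split()), centroid),
                  reverse=True)
```

Each new interaction would grow the relevant set and trigger a fresh re-ranking, which is the feedback loop the abstract describes.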

  15. Race Guides Attention in Visual Search.

    PubMed

    Otten, Marte

    2016-01-01

    It is known that faces are rapidly and even unconsciously categorized into social groups (black vs. white, male vs. female). Here, I test whether preferences for specific social groups guide attention, using a visual search paradigm. In Experiment 1 participants searched displays of neutral faces for an angry or frightened target face. Black target faces were detected more efficiently than white targets, indicating that black faces attracted more attention. Experiment 2 showed that attention differences between black and white faces were correlated with individual differences in automatic race preference. In Experiment 3, using happy target faces, the attentional preference for black over white faces was eliminated. Taken together, these results suggest that automatic preferences for social groups guide attention to individuals from negatively valenced groups, when people are searching for a negative emotion such as anger or fear. PMID:26900957

  17. An active visual search interface for Medline.

    PubMed

    Xuan, Weijian; Dai, Manhong; Mirel, Barbara; Wilson, Justin; Athey, Brian; Watson, Stanley J; Meng, Fan

    2007-01-01

    Searching the Medline database is almost a daily necessity for many biomedical researchers. However, available Medline search solutions are mainly designed for the quick retrieval of a small set of the most relevant documents. Because of this search model, they are not suitable for the large-scale exploration of literature and the underlying biomedical conceptual relationships, which are common tasks in the age of high-throughput experimental data analysis and cross-discipline research. We developed a new Medline exploration approach that incorporates interactive visualization together with powerful grouping, summary, sorting, and active external content retrieval functions. Our solution, PubViz, is based on the FLEX platform designed for interactive web applications, and its prototype is publicly available at: http://brainarray.mbni.med.umich.edu/Brainarray/DataMining/PubViz. PMID:17951838

  18. Adding a visualization feature to web search engines: it's time.

    PubMed

    Wong, Pak Chung

    2008-01-01

    It's widely recognized that all Web search engines today are almost identical in presentation layout and behavior. In fact, the same presentation approach has been applied to depicting search engine results pages (SERPs) since the first Web search engine launched in 1993. In this Visualization Viewpoints article, I propose to add a visualization feature to Web search engines and suggest that the new addition can improve search engines' performance and capabilities, which in turn lead to better Web search technology. PMID:19004680

  19. LoyalTracker: Visualizing Loyalty Dynamics in Search Engines.

    PubMed

    Shi, Conglei; Wu, Yingcai; Liu, Shixia; Zhou, Hong; Qu, Huamin

    2014-12-01

    The huge amount of user log data collected by search engine providers creates new opportunities to understand user loyalty and defection behavior at an unprecedented scale. However, this also poses a great challenge to analyze the behavior and glean insights from such complex, large-scale data. In this paper, we introduce LoyalTracker, a visual analytics system to track user loyalty and switching behavior towards multiple search engines from the vast amount of user log data. We propose a new interactive visualization technique (flow view) based on a flow metaphor, which conveys a proper visual summary of the dynamics of user loyalty of thousands of users over time. Two other visualization techniques, a density map and a word cloud, are integrated to enable analysts to gain further insights into the patterns identified by the flow view. Case studies and interviews with domain experts were conducted to demonstrate the usefulness of our technique in understanding user loyalty and switching behavior in search engines. PMID:26356887

  20. Fractal analysis of radiologists' visual scanning pattern in screening mammography

    NASA Astrophysics Data System (ADS)

    Alamudun, Folami T.; Yoon, Hong-Jun; Hudson, Kathy; Morin-Ducote, Garnetta; Tourassi, Georgia

    2015-03-01

    Several researchers have investigated radiologists' visual scanning patterns with respect to features such as total time examining a case, time to initially hit true lesions, number of hits, etc. The purpose of this study was to examine the complexity of radiologists' visual scanning patterns when viewing 4-view mammographic cases, as they typically do in clinical practice. Gaze data were collected from 10 readers (3 breast imaging experts and 7 radiology residents) while reviewing 100 screening mammograms (24 normal, 26 benign, 50 malignant). The radiologists' scanpaths across the 4 mammographic views were mapped to a single 2-D image plane. Then, fractal analysis was applied to the composite 4-view scanpaths. For each case, the complexity of each radiologist's scanpath was measured using the fractal dimension estimated with the box-counting method. The association between the fractal dimension of the radiologists' visual scanpath, case pathology, case density, and radiologist experience was evaluated using fixed-effects ANOVA. ANOVA showed that the complexity of the radiologists' visual search pattern in screening mammography depends on case-specific attributes (breast parenchyma density and case pathology) as well as on reader attributes, namely experience level. Visual scanning patterns are significantly different for benign and malignant cases than for normal cases. There is also substantial inter-observer variability which cannot be explained by experience level alone.
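    The box-counting estimate of scanpath complexity lends itself to a compact sketch. The following is an illustrative Python implementation, not the study's code; it assumes gaze points normalized to the unit square, and the function name and box sizes are arbitrary choices:

    ```python
    import numpy as np

    def box_counting_dimension(points, box_sizes):
        """Estimate the fractal (box-counting) dimension of a 2-D point set.

        points: (N, 2) array of gaze coordinates, assumed scaled to [0, 1).
        box_sizes: iterable of box edge lengths (fractions of the unit square).
        Returns the slope of log(count) versus log(1/size).
        """
        counts = []
        for size in box_sizes:
            # Assign each point to a grid cell of the given size and
            # count the distinct occupied cells.
            cells = np.floor(np.asarray(points) / size).astype(int)
            counts.append(len({tuple(c) for c in cells}))
        log_inv_size = np.log(1.0 / np.asarray(box_sizes))
        log_counts = np.log(counts)
        slope, _intercept = np.polyfit(log_inv_size, log_counts, 1)
        return slope
    ```

    A space-filling scanpath approaches dimension 2, while a scanpath confined to a line approaches 1; real scanpaths fall in between.
    
    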

  1. Visual search performance by paranoid and chronic undifferentiated schizophrenics.

    PubMed

    Portnoff, L A; Yesavage, J A; Acker, M B

    1981-10-01

    Disturbances in attention are among the most frequent cognitive abnormalities in schizophrenia. Recent research has suggested that some schizophrenics have difficulty with visual tracking, which is suggestive of attentional deficits. To investigate differential visual-search performance by schizophrenics, 15 chronic undifferentiated and 15 paranoid schizophrenics were compared with 15 normals on two tests measuring visual search in a systematic and an unsystematic stimulus mode. Chronic schizophrenics showed difficulty with both kinds of visual-search tasks. In contrast, paranoids had only a deficit in the systematic visual-search task. Their ability for visual search in an unsystematized stimulus array was equivalent to that of normals. Although replication and cross-validation are needed to confirm these findings, it appears that the two tests of visual search may provide a useful ancillary method for differential diagnosis between these two types of schizophrenia. PMID:7312527

  2. Transition between different search patterns in human online search behavior

    NASA Astrophysics Data System (ADS)

    Wang, Xiangwen; Pleimling, Michel

    2015-03-01

    We investigate human online search behavior by analyzing data sets from different search engines. Based on a comparison of results from several click-through data sets collected in different years, we observe a transition of the search pattern from a Lévy-flight-like behavior to a Brownian-motion-type behavior as the search engine algorithms improve. This result is consistent with findings in animal foraging processes. A more detailed analysis shows that human search patterns are more complex than simple Lévy flights or Brownian motions. Notable differences between the behaviors of different individuals can be observed in many quantities. This work is in part supported by the US National Science Foundation through Grant DMR-1205309.
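    The Lévy-versus-Brownian distinction comes down to the tail of the step-length distribution: a power law versus an exponential. One standard way to compare the two is by maximum likelihood; the sketch below is a simplification of such an analysis (the function name and the truncation point xmin are illustrative, and a real study would add likelihood-ratio or information-criterion corrections):

    ```python
    import numpy as np

    def classify_steps(steps, xmin=1.0):
        """Crudely decide whether step lengths look more Levy-flight-like
        (power-law tail) or Brownian (exponential tail) by comparing
        maximum-likelihood fits truncated at xmin.
        """
        x = np.asarray(steps, dtype=float)
        x = x[x >= xmin]
        n = len(x)
        # Power law p(x) = ((mu-1)/xmin) * (x/xmin)^(-mu); MLE exponent:
        log_ratio = np.sum(np.log(x / xmin))
        mu = 1.0 + n / log_ratio
        ll_pow = n * np.log((mu - 1.0) / xmin) - mu * log_ratio
        # Shifted exponential p(x) = lam * exp(-lam * (x - xmin)); MLE rate:
        lam = 1.0 / np.mean(x - xmin)
        ll_exp = n * np.log(lam) - lam * np.sum(x - xmin)
        return ("levy", mu) if ll_pow > ll_exp else ("brownian", lam)
    ```
    
    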

  3. Activation of phonological competitors in visual search.

    PubMed

    Görges, Frauke; Oppermann, Frank; Jescheniak, Jörg D; Schriefers, Herbert

    2013-06-01

    Recently, Meyer, Belke, Telling and Humphreys (2007) reported that competitor objects with homophonous names (e.g., boy) interfere with identifying a target object (e.g., buoy) in a visual search task, suggesting that an object name's phonology becomes automatically activated even in situations in which participants do not have the intention to speak. The present study explored the generality of this finding by testing a different phonological relation (rhyming object names, e.g., cat-hat) and by varying details of the experimental procedure. Experiment 1 followed the procedure by Meyer et al. Participants were familiarized with target and competitor objects and their names at the beginning of the experiment and the picture of the target object was presented prior to the search display on each trial. In Experiment 2, the picture of the target object presented prior to the search display was replaced by its name. In Experiment 3, participants were not familiarized with target and competitor objects and their names at the beginning of the experiment. A small interference effect from phonologically related competitors was obtained in Experiments 1 and 2 but not in Experiment 3, suggesting that the way the relevant objects are introduced to participants affects the chances of observing an effect from phonologically related competitors. Implications for the information flow in the conceptual-lexical system are discussed. PMID:23584102

  4. Recognition of Facially Expressed Emotions and Visual Search Strategies in Adults with Asperger Syndrome

    ERIC Educational Resources Information Center

    Falkmer, Marita; Bjallmark, Anna; Larsson, Matilda; Falkmer, Torbjorn

    2011-01-01

    Can the disadvantages persons with Asperger syndrome frequently experience with reading facially expressed emotions be attributed to a different visual perception, affecting their scanning patterns? Visual search strategies, particularly regarding the importance of information from the eye area, and the ability to recognise facially expressed…

  5. Pattern visual evoked potentials in hyperthyroidism.

    PubMed Central

    Mitchell, K W; Wood, C M; Howe, J W

    1988-01-01

    Pattern reversal visual evoked potentials (VEPs) have been elicited in 16 female hyperthyroid patients before and after treatment and compared with those from a similar group of age and sex matched control subjects. No effect on latency was seen, and although larger amplitude values were noted in the thyrotoxic group these too were not significant. We would conclude that hyperthyroidism per se has little effect on the pattern reversal VEP, and any observed effect on these potentials is probably due to other factors. PMID:3415945

  6. Signatures of chaos in animal search patterns

    PubMed Central

    Reynolds, Andy M; Bartumeus, Frederic; Kölzsch, Andrea; van de Koppel, Johan

    2016-01-01

    One key objective of the emerging discipline of movement ecology is to link animal movement patterns to underlying biological processes, including those operating at the neurobiological level. Nonetheless, little is known about the physiological basis of animal movement patterns, and the underlying search behaviour. Here we demonstrate the hallmarks of chaotic dynamics in the movement patterns of mud snails (Hydrobia ulvae) moving in controlled experimental conditions, observed in the temporal dynamics of turning behaviour. Chaotic temporal dynamics are known to occur in pacemaker neurons in molluscs, but there have been no studies reporting on whether chaotic properties are manifest in the movement patterns of molluscs. Our results suggest that complex search patterns, like the Lévy walks made by mud snails, can have their mechanistic origins in chaotic neuronal processes. This possibility calls for new research on the coupling between neurobiology and motor properties. PMID:27019951

  8. Visual search behaviour during laparoscopic cadaveric procedures

    NASA Astrophysics Data System (ADS)

    Dong, Leng; Chen, Yan; Gale, Alastair G.; Rees, Benjamin; Maxwell-Armstrong, Charles

    2014-03-01

    Laparoscopic surgery provides a very complex example of medical image interpretation. The task entails: visually examining a display that portrays the laparoscopic procedure from a varying viewpoint; eye-hand coordination; complex 3D interpretation of the 2D display imagery; efficient and safe usage of appropriate surgical tools, as well as other factors. Training in laparoscopic surgery typically entails practice using surgical simulators. Another approach is to use cadavers. Viewing previously recorded laparoscopic operations is also a viable additional approach, and to examine this a study was undertaken to determine what differences exist between where surgeons look during actual operations and where they look when simply viewing the same pre-recorded operations. It was hypothesised that there would be differences related to the different experimental conditions; however the relative nature of such differences was unknown. The visual search behaviour of two experienced surgeons was recorded as they performed three types of laparoscopic operations on a cadaver. The operations were also digitally recorded. Subsequently the surgeons viewed the recordings of their operations, again whilst their eye movements were monitored. Various eye movement parameters differed between when the two surgeons performed the operations and when they simply watched the recordings of those operations. It is argued that this reflects the different perceptual motor skills pertinent to the different situations. The relevance of this for surgical training is explored.

  9. Visual search and eye movements in novel and familiar contexts

    NASA Astrophysics Data System (ADS)

    McDermott, Kyle; Mulligan, Jeffrey B.; Bebis, George; Webster, Michael A.

    2006-02-01

    Adapting to the visual characteristics of a specific environment may facilitate detecting novel stimuli within that environment. We monitored eye movements while subjects searched for a color target on familiar or unfamiliar color backgrounds, in order to test for these performance changes and to explore whether they reflect changes in salience from adaptation vs. changes in search strategies or perceptual learning. The target was an ellipse of variable color presented at a random location on a dense background of ellipses. In one condition, the colors of the background varied along either the LvsM or SvsLM cardinal axes. Observers adapted by viewing a rapid succession of backgrounds drawn from one color axis, and then searched for a target on a background from the same or different color axis. Searches were monitored with a Cambridge Research Systems Video Eyetracker. Targets were located more quickly on the background axis that observers were pre-exposed to, confirming that this exposure can improve search efficiency for stimuli that differ from the background. However, eye movement patterns (e.g. fixation durations and saccade magnitudes) did not clearly differ across the two backgrounds, suggesting that how the novel and familiar backgrounds were sampled remained similar. In a second condition, we compared search on a nonselective color background drawn from a circle of hues at fixed contrast. Prior exposure to this background did not facilitate search compared to an achromatic adapting field, suggesting that subjects were not simply learning the specific colors defining the background distributions. Instead, results for both conditions are consistent with a selective adaptation effect that enhances the salience of novel stimuli by partially discounting the background.

  10. Eye Movements Reveal How Task Difficulty Moulds Visual Search

    ERIC Educational Resources Information Center

    Young, Angela H.; Hulleman, Johan

    2013-01-01

    In two experiments we investigated the relationship between eye movements and performance in visual search tasks of varying difficulty. Experiment 1 provided evidence that a single process is used for search among static and moving items. Moreover, we estimated the functional visual field (FVF) from the gaze coordinates and found that its size…

  11. Global Statistical Learning in a Visual Search Task

    ERIC Educational Resources Information Center

    Jones, John L.; Kaschak, Michael P.

    2012-01-01

    Locating a target in a visual search task is facilitated when the target location is repeated on successive trials. Global statistical properties also influence visual search, but have often been confounded with local regularities (i.e., target location repetition). In two experiments, target locations were not repeated for four successive trials,…

  12. The Time Course of Similarity Effects in Visual Search

    ERIC Educational Resources Information Center

    Guest, Duncan; Lamberts, Koen

    2011-01-01

    It is well established that visual search becomes harder when the similarity between target and distractors is increased and the similarity between distractors is decreased. However, in models of visual search, similarity is typically treated as a static, time-invariant property of the relation between objects. Data from other perceptual tasks…

  13. Spatial Constraints on Learning in Visual Search: Modeling Contextual Cuing

    ERIC Educational Resources Information Center

    Brady, Timothy F.; Chun, Marvin M.

    2007-01-01

    Predictive visual context facilitates visual search, a benefit termed contextual cuing (M. M. Chun & Y. Jiang, 1998). In the original task, search arrays were repeated across blocks such that the spatial configuration (context) of all of the distractors in a display predicted an embedded target location. The authors modeled existing results using…

  14. Words, Shape, Visual Search and Visual Working Memory in 3-Year-Old Children

    ERIC Educational Resources Information Center

    Vales, Catarina; Smith, Linda B.

    2015-01-01

    Do words cue children's visual attention, and if so, what are the relevant mechanisms? Across four experiments, 3-year-old children (N = 163) were tested in visual search tasks in which targets were cued with only a visual preview versus a visual preview and a spoken name. The experiments were designed to determine whether labels facilitated…

  15. Visual Search Deficits Are Independent of Magnocellular Deficits in Dyslexia

    ERIC Educational Resources Information Center

    Wright, Craig M.; Conlon, Elizabeth G.; Dyck, Murray

    2012-01-01

    The aim of this study was to investigate the theory that visual magnocellular deficits seen in groups with dyslexia are linked to reading via the mechanisms of visual attention. Visual attention was measured with a serial search task and magnocellular function with a coherent motion task. A large group of children with dyslexia (n = 70) had slower…

  16. Vocal Dynamic Visual Pattern for voice characterization

    NASA Astrophysics Data System (ADS)

    Dajer, M. E.; Andrade, F. A. S.; Montagnoli, A. N.; Pereira, J. C.; Tsuji, D. H.

    2011-12-01

    Voice assessment requires simple and painless exams. Modern technologies provide the necessary resources for voice signal processing. Techniques based on nonlinear dynamics seem to assess the complexity of voice more accurately than other methods. The vocal dynamic visual pattern (VDVP) is based on nonlinear methods and provides qualitative and quantitative information. Here we characterize healthy and Reinke's edema voices by means of perturbation measures and VDVP analysis. VDVP and jitter show different results for the two groups, while amplitude perturbation shows no difference. We suggest that VDVP analysis improves and complements the evaluation methods available to clinicians.

  17. Usage Patterns of an Online Search System.

    ERIC Educational Resources Information Center

    Cooper, Michael D.

    1983-01-01

    Examines usage patterns of ELHILL retrieval program of National Library of Medicine's MEDLARS system. Based on sample of 6,759 searches, the study analyzes frequency of various commands, classifies messages issued by system, and investigates searcher error rates. Suggestions for redesigning program and query language are noted. Seven references…

  18. Online search patterns: NLM CATLINE database.

    PubMed

    Tolle, J E; Hah, S

    1985-03-01

    In this article the authors present their analysis of the online search patterns within user searching sessions of the National Library of Medicine ELHILL system and examine the user search patterns on the CATLINE database. In addition to the CATLINE analysis, a comparison is made using data previously analyzed on the MEDLINE database for the same time period, thus offering an opportunity to compare the performance parameters of different databases within the same information system. Data collection covered eight weeks and included 441,282 transactions and over 11,067 user sessions, which accounted for 1680 hours of system usage. The descriptive analysis contained in this report can assist system design activities, while the predictive power of the transaction log analysis methodology may assist the development of real-time aids. PMID:10300015

  19. Competing Distractors Facilitate Visual Search in Heterogeneous Displays

    PubMed Central

    Kong, Garry; Alais, David; Van der Burg, Erik

    2016-01-01

    In the present study, we examine how observers search among complex displays. Participants were asked to search for a big red horizontal line among 119 distractor lines of various sizes, orientations and colours, leading to 36 different feature combinations. To understand how people search in such a heterogeneous display, we evolved the search display by using a genetic algorithm (Experiment 1). The best displays (i.e., displays corresponding to the fastest reaction times) were selected and combined to create new, evolved displays. Search times declined over generations. Results show that items sharing the same colour and orientation as the target disappeared over generations, implying they interfered with search, but items sharing the same colour and differing in orientation by 12.5° interfered only if they were also the same size. Furthermore, and inconsistent with most dominant visual search theories, we found that non-red horizontal distractors increased over generations, indicating that these distractors facilitated visual search while participants were searching for a big red horizontally oriented target. In Experiments 2 and 3, we replicated these results using conventional, factorial experiments. Interestingly, in Experiment 4, we found that this facilitation effect was only present when the displays were very heterogeneous. While current models of visual search are able to successfully describe search in homogeneous displays, our results challenge the ability of these models to describe visual search in heterogeneous environments. PMID:27508298
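    The display-evolution procedure can be sketched as a standard genetic algorithm over distractor feature assignments. In the sketch below the fitness function is a toy stand-in (fewer target-coloured distractors scores as "faster"); in the actual experiment fitness came from measured human reaction times, and all names and parameters here are illustrative:

    ```python
    import random

    def evolve_displays(fitness, n_items=20, n_features=36, pop_size=30,
                        generations=40, seed=0):
        """Minimal genetic algorithm over search displays: each display is
        a list of distractor feature indices; lower fitness = faster search.
        """
        rng = random.Random(seed)
        pop = [[rng.randrange(n_features) for _ in range(n_items)]
               for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness)          # best (fastest) displays first
            parents = pop[:pop_size // 2]  # truncation selection, elitist
            children = []
            while len(children) < pop_size - len(parents):
                a, b = rng.sample(parents, 2)
                cut = rng.randrange(1, n_items)
                child = a[:cut] + b[cut:]  # one-point crossover
                if rng.random() < 0.2:     # occasionally mutate one item
                    child[rng.randrange(n_items)] = rng.randrange(n_features)
                children.append(child)
            pop = parents + children
        return min(pop, key=fitness)

    def toy_rt(display):
        # Toy surrogate for reaction time: count of target-coloured
        # distractors (feature indices 0-5, an arbitrary convention here).
        return sum(1 for f in display if f < 6)
    ```

    Running `evolve_displays(toy_rt)` drives the count of interfering items toward zero over generations, mirroring the paper's observation that disruptive distractors disappear from evolved displays.
    
    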

  20. Online multiple kernel similarity learning for visual search.

    PubMed

    Xia, Hao; Hoi, Steven C H; Jin, Rong; Zhao, Peilin

    2014-03-01

    Recent years have witnessed a number of studies on distance metric learning to improve visual similarity search in content-based image retrieval (CBIR). Despite their successes, most existing methods on distance metric learning are limited in two aspects. First, they usually assume the target proximity function follows the family of Mahalanobis distances, which limits their capacity of measuring similarity of complex patterns in real applications. Second, they often cannot effectively handle the similarity measure of multimodal data that may originate from multiple resources. To overcome these limitations, this paper investigates an online kernel similarity learning framework for learning kernel-based proximity functions which goes beyond the conventional linear distance metric learning approaches. Based on the framework, we propose a novel online multiple kernel similarity (OMKS) learning method which learns a flexible nonlinear proximity function with multiple kernels to improve visual similarity search in CBIR. We evaluate the proposed technique for CBIR on a variety of image data sets in which encouraging results show that OMKS outperforms the state-of-the-art techniques significantly. PMID:24457509

  1. Visual search in a forced-choice paradigm

    NASA Technical Reports Server (NTRS)

    Holmgren, J. E.

    1974-01-01

    The processing of visual information was investigated in the context of two visual search tasks. The first was a forced-choice task in which one of two alternative letters appeared in a visual display of from one to five letters. The second task included trials on which neither of the two alternatives was present in the display. Search rates were estimated from the slopes of best linear fits to response latencies plotted as a function of the number of items in the visual display. These rates were found to be much slower than those estimated in yes-no search tasks. This result was interpreted as indicating that the processes underlying visual search in yes-no and forced-choice tasks are not the same.

  2. Searching for intellectual turning points: Progressive knowledge domain visualization

    PubMed Central

    Chen, Chaomei

    2004-01-01

    This article introduces a previously undescribed method for progressively visualizing the evolution of a knowledge domain's cocitation network. The method first derives a sequence of cocitation networks from a series of equal-length time interval slices. These time-registered networks are merged and visualized in a panoramic view in such a way that intellectually significant articles can be identified based on their visually salient features. The method is applied to a cocitation study of the superstring field in theoretical physics. The study focuses on the search for articles that triggered two superstring revolutions. Visually salient nodes in the panoramic view are identified, and the nature of their intellectual contributions is validated by leading scientists in the field. The analysis has demonstrated that a search for intellectual turning points can be narrowed down to visually salient nodes in the visualized network. The method provides a promising way to simplify otherwise cognitively demanding tasks to a search for landmarks, pivots, and hubs. PMID:14724295

  3. The Roles of Non-retinotopic Motions in Visual Search

    PubMed Central

    Nakayama, Ryohei; Motoyoshi, Isamu; Sato, Takao

    2016-01-01

    In visual search, a moving target among stationary distracters is detected more rapidly and more efficiently than a static target among moving distracters. Here we examined how this search asymmetry depends on motion signals from three distinct coordinate systems—retinal, relative, and spatiotopic (head/body-centered). Our search display consisted of a target element, distracter elements, and a fixation point tracked by observers. Each element was composed of a spatial carrier grating windowed by a Gaussian envelope, and the motions of carriers, windows, and fixation were manipulated independently and used in various combinations to decouple the respective effects of motion coordinate systems on visual search asymmetry. We found that retinal motion hardly contributes to reaction times and search slopes but that relative and spatiotopic motions contribute to them substantially. Results highlight the important roles of non-retinotopic motions for guiding observer attention in visual search. PMID:27313560

  4. The Serial Process in Visual Search

    ERIC Educational Resources Information Center

    Gilden, David L.; Thornton, Thomas L.; Marusich, Laura R.

    2010-01-01

    The conditions for serial search are described. A multiple target search methodology (Thornton & Gilden, 2007) is used to home in on the simplest target/distractor contrast that effectively mandates a serial scheduling of attentional resources. It is found that serial search is required when (a) targets and distractors are mirror twins, and (b)…

  5. There's Waldo! A Normalization Model of Visual Search Predicts Single-Trial Human Fixations in an Object Search Task.

    PubMed

    Miconi, Thomas; Groomes, Laura; Kreiman, Gabriel

    2016-07-01

    When searching for an object in a scene, how does the brain decide where to look next? Visual search theories suggest the existence of a global "priority map" that integrates bottom-up visual information with top-down, target-specific signals. We propose a mechanistic model of visual search that is consistent with recent neurophysiological evidence, can localize targets in cluttered images, and predicts single-trial behavior in a search task. This model posits that a high-level retinotopic area selective for shape features receives global, target-specific modulation and implements local normalization through divisive inhibition. The normalization step is critical to prevent highly salient bottom-up features from monopolizing attention. The resulting activity pattern constitutes a priority map that tracks the correlation between local input and target features. The maximum of this priority map is selected as the locus of attention. The visual input is then spatially enhanced around the selected location, allowing object-selective visual areas to determine whether the target is present at this location. This model can localize objects both in array images and when objects are pasted in natural scenes. The model can also predict single-trial human fixations, including those in error and target-absent trials, in a search task involving complex objects. PMID:26092221
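    The model's core computation, target-weighted feature activity divided by locally pooled activity, can be illustrated compactly. The sketch below uses assumed array shapes and raw feature maps for clarity; the actual model operates on shape-selective retinotopic responses:

    ```python
    import numpy as np

    def priority_map(feature_maps, target_weights, sigma=1e-6):
        """Sketch of a divisively normalized priority map.

        feature_maps: (F, H, W) array of bottom-up feature activations.
        target_weights: (F,) top-down gains for the current target.
        Each location's target-weighted response is divided by the total
        (unweighted) activity there, so a highly salient distractor
        cannot monopolize the map. Shapes and names are illustrative.
        """
        weighted = np.tensordot(target_weights, feature_maps, axes=1)  # (H, W)
        pooled = feature_maps.sum(axis=0) + sigma                      # (H, W)
        pmap = weighted / pooled
        # The locus of attention is the argmax of the priority map.
        locus = np.unravel_index(np.argmax(pmap), pmap.shape)
        return pmap, locus
    ```

    With normalization, a weakly active but target-matching location outcompetes a strongly active location dominated by non-target features.
    
    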

  6. Asynchronous parallel pattern search for nonlinear optimization

    SciTech Connect

    P. D. Hough; T. G. Kolda; V. J. Torczon

    2000-01-01

    Parallel pattern search (PPS) can be quite useful for engineering optimization problems characterized by a small number of variables (say 10--50) and by expensive objective function evaluations such as complex simulations that take from minutes to hours to run. However, PPS, which was originally designed for execution on homogeneous and tightly coupled parallel machines, is not well suited to the more heterogeneous, loosely coupled, and even fault-prone parallel systems available today. Specifically, PPS is hindered by synchronization penalties and cannot recover in the event of a failure. The authors introduce a new asynchronous and fault-tolerant parallel pattern search (APPS) method and demonstrate its effectiveness on both simple test problems and engineering optimization problems.
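    For readers unfamiliar with pattern search, a minimal serial compass-search sketch conveys the basic scheme the asynchronous method builds on: poll the coordinate directions, move to an improving point, and contract the step when no poll point improves. This is illustrative code, not the authors' implementation; the asynchronous variant evaluates poll points independently instead of waiting for all of them:

    ```python
    def pattern_search(f, x0, step=1.0, tol=1e-6, max_iter=1000):
        """Serial compass/pattern search for unconstrained minimization."""
        x = list(x0)
        fx = f(x)
        n = len(x)
        for _ in range(max_iter):
            improved = False
            for i in range(n):
                for s in (+step, -step):
                    y = list(x)
                    y[i] += s               # poll one coordinate direction
                    fy = f(y)
                    if fy < fx:             # accept the first improvement
                        x, fx, improved = y, fy, True
                        break
                if improved:
                    break
            if not improved:
                step *= 0.5                 # contract the pattern
                if step < tol:
                    break
        return x, fx
    ```

    No derivatives are required, which is why pattern search suits expensive black-box simulations.
    
    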

  7. Visual pattern recognition in Drosophila is invariant for retinal position.

    PubMed

    Tang, Shiming; Wolf, Reinhard; Xu, Shuping; Heisenberg, Martin

    2004-08-13

    Vision relies on constancy mechanisms. Yet, these are little understood, because they are difficult to investigate in freely moving organisms. One such mechanism, translation invariance, enables organisms to recognize visual patterns independent of the region of their visual field where they had originally seen them. Tethered flies (Drosophila melanogaster) in a flight simulator can recognize visual patterns. Because their eyes are fixed in space and patterns can be displayed in defined parts of their visual field, they can be tested for translation invariance. Here, we show that flies recognize patterns at retinal positions where the patterns had not been presented before. PMID:15310908

  8. Global Image Dissimilarity in Macaque Inferotemporal Cortex Predicts Human Visual Search Efficiency

    PubMed Central

    Sripati, Arun P.; Olson, Carl R.

    2010-01-01

    Finding a target in a visual scene can be easy or difficult depending on the nature of the distractors. Research in humans has suggested that search is more difficult the more similar the target and distractors are to each other. However, it has not yielded an objective definition of similarity. We hypothesized that visual search performance depends on similarity as determined by the degree to which two images elicit overlapping patterns of neuronal activity in visual cortex. To test this idea, we recorded from neurons in monkey inferotemporal cortex (IT) and assessed visual search performance in humans using pairs of images formed from the same local features in different global arrangements. The ability of IT neurons to discriminate between two images was strongly predictive of the ability of humans to discriminate between them during visual search, accounting overall for 90% of the variance in human performance. A simple physical measure of global similarity – the degree of overlap between the coarse footprints of a pair of images – largely explains both the neuronal and the behavioral results. To explain the relation between population activity and search behavior, we propose a model in which the efficiency of global oddball search depends on contrast-enhancing lateral interactions in high-order visual cortex. PMID:20107054

  9. A neural network for visual pattern recognition

    SciTech Connect

    Fukushima, K.

    1988-03-01

    A modeling approach, which is a synthetic approach using neural network models, continues to gain importance. In the modeling approach, the authors study how to interconnect neurons to synthesize a brain model, which is a network with the same functions and abilities as the brain. The relationship between modeling neural networks and neurophysiology resembles that between theoretical physics and experimental physics. Modeling takes a synthetic approach, while neurophysiology or psychology takes an analytical approach. Modeling neural networks is useful in explaining the brain and also in engineering applications. It brings the results of neurophysiological and psychological research to engineering applications in the most direct way possible. This article discusses a neural network model thus obtained, a model with selective attention in visual pattern recognition.

  10. Do People Take Stimulus Correlations into Account in Visual Search?

    PubMed Central

    Bhardwaj, Manisha; van den Berg, Ronald

    2016-01-01

    In laboratory visual search experiments, distractors are often statistically independent of each other. However, stimuli in more naturalistic settings are often correlated and rarely independent. Here, we examine whether human observers take stimulus correlations into account in orientation target detection. We find that they do, although probably not optimally. In particular, it seems that low distractor correlations are overestimated. Our results might contribute to bridging the gap between artificial and natural visual search tasks. PMID:26963498

  11. 'Where' and 'what' in visual search.

    PubMed

    Atkinson, J; Braddick, O J

    1989-01-01

    A line segment target can be detected among distractors of a different orientation by a fast 'preattentive' process. One view is that this depends on detection of a 'feature gradient', which enables subjects to locate where the target is without necessarily identifying what it is. An alternative view is that a target can be identified as distinctive in a particular 'feature map' without subjects knowing where it is in that map. Experiments are reported in which briefly exposed arrays of line segments were followed by a pattern mask, and the threshold stimulus-mask interval determined for three tasks: 'what'--subjects reported whether the target was vertical or horizontal among oblique distractors; 'coarse where'--subjects reported whether the target was in the upper or lower half of the array; 'fine where'--subjects reported whether or not the target was in a set of four particular array positions. The threshold interval was significantly lower for the 'coarse where' than for the 'what' task, indicating that, even though localization in this task depends on the target's orientation difference, this localization is possible without absolute identification of target orientation. However, for the 'fine where' task, intervals as long as or longer than those for the 'what' task were required. It appears either that different localization processes work at different levels of resolution, or that a single localization process, independent of identification, can increase its resolution at the expense of processing speed. These possibilities are discussed in terms of distinct neural representations of the visual field and fixed or variable localization processes acting upon them. PMID:2771603

  12. Visual Search by Children with and without ADHD

    ERIC Educational Resources Information Center

    Mullane, Jennifer C.; Klein, Raymond M.

    2008-01-01

    Objective: To summarize the literature that has employed visual search tasks to assess automatic and effortful selective visual attention in children with and without ADHD. Method: Seven studies with a combined sample of 180 children with ADHD (M age = 10.9) and 193 normally developing children (M age = 10.8) are located. Results: Using a…

  13. Conjunctive Visual Search in Individuals with and without Mental Retardation

    ERIC Educational Resources Information Center

    Carlin, Michael; Chrysler, Christina; Sullivan, Kate

    2007-01-01

    A comprehensive understanding of the basic visual and cognitive abilities of individuals with mental retardation is critical for understanding the basis of mental retardation and for the design of remediation programs. We assessed visual search abilities in individuals with mild mental retardation and in MA- and CA-matched comparison groups. Our…

  14. Why Is Visual Search Superior in Autism Spectrum Disorder?

    ERIC Educational Resources Information Center

    Joseph, Robert M.; Keehn, Brandon; Connolly, Christine; Wolfe, Jeremy M.; Horowitz, Todd S.

    2009-01-01

    This study investigated the possibility that enhanced memory for rejected distractor locations underlies the superior visual search skills exhibited by individuals with autism spectrum disorder (ASD). We compared the performance of 21 children with ASD and 21 age- and IQ-matched typically developing (TD) children in a standard static search task…

  15. Changing Perspective: Zooming in and out during Visual Search

    ERIC Educational Resources Information Center

    Solman, Grayden J. F.; Cheyne, J. Allan; Smilek, Daniel

    2013-01-01

    Laboratory studies of visual search are generally conducted in contexts with a static observer vantage point, constrained by a fixation cross or a headrest. In contrast, in many naturalistic search settings, observers freely adjust their vantage point by physically moving through space. In two experiments, we evaluate behavior during free vantage…

  16. Pip and Pop: Nonspatial Auditory Signals Improve Spatial Visual Search

    ERIC Educational Resources Information Center

    Van der Burg, Erik; Olivers, Christian N. L.; Bronkhorst, Adelbert W.; Theeuwes, Jan

    2008-01-01

    Searching for an object within a cluttered, continuously changing environment can be a very time-consuming process. The authors show that a simple auditory pip drastically decreases search times for a synchronized visual object that is normally very difficult to find. This effect occurs even though the pip contains no information on the location…

  17. Visual Search Asymmetry with Uncertain Targets

    ERIC Educational Resources Information Center

    Saiki, Jun; Koike, Takahiko; Takahashi, Kohske; Inoue, Tomoko

    2005-01-01

    The underlying mechanism of search asymmetry is still unknown. Many computational models postulate top-down selection of target-defining features as a crucial factor. This feature selection account implies, and other theories implicitly assume, that predefined target identity is necessary for search asymmetry. The authors tested the validity of…

  18. Individual Differences and Metacognitive Knowledge of Visual Search Strategy

    PubMed Central

    Proulx, Michael J.

    2011-01-01

    A crucial ability for an organism is to orient toward important objects and to ignore temporarily irrelevant objects. Attention provides the perceptual selectivity necessary to filter an overwhelming input of sensory information to allow for efficient object detection. Although much research has examined visual search and the ‘template’ of attentional set that allows for target detection, the behavior of individual subjects often reveals the limits of experimental control of attention. Few studies have examined important aspects such as individual differences and metacognitive strategies. The present study analyzes the data from two visual search experiments for a conjunctively defined target (Proulx, 2007). The data revealed attentional capture blindness, individual differences in search strategies, and a significant rate of metacognitive errors for the assessment of the strategies employed. These results highlight a challenge for visual attention studies to account for individual differences in search behavior and distractibility, and participants that do not (or are unable to) follow instructions. PMID:22066030

  19. Visual Search in a Multi-Element Asynchronous Dynamic (MAD) World

    ERIC Educational Resources Information Center

    Kunar, Melina A.; Watson, Derrick G.

    2011-01-01

    In visual search tasks participants search for a target among distractors in strictly controlled displays. We show that visual search principles observed in these tasks do not necessarily apply in more ecologically valid search conditions, using dynamic and complex displays. A multi-element asynchronous dynamic (MAD) visual search was developed in…

  20. The impact of expert visual guidance on trainee visual search strategy, visual attention and motor skills

    PubMed Central

    Leff, Daniel R.; James, David R. C.; Orihuela-Espina, Felipe; Kwok, Ka-Wai; Sun, Loi Wah; Mylonas, George; Athanasiou, Thanos; Darzi, Ara W.; Yang, Guang-Zhong

    2015-01-01

    Minimally invasive and robotic surgery changes the capacity for surgical mentors to guide their trainees with the control customary to open surgery. This neuroergonomic study aims to assess a “Collaborative Gaze Channel” (CGC), which detects trainer gaze-behavior and displays the point of regard to the trainee. A randomized crossover study was conducted in which twenty subjects performed a simulated robotic surgical task necessitating collaboration either with verbal (control condition) or visual guidance with CGC (study condition). Trainee occipito-parietal (O-P) cortical function was assessed with optical topography (OT) and gaze-behavior was evaluated using video-oculography. Performance during gaze-assistance was significantly superior [biopsy number: (mean ± SD): control = 5.6 ± 1.8 vs. CGC = 6.6 ± 2.0; p < 0.05] and was associated with significantly lower O-P cortical activity [ΔHbO2 mMol × cm [median (IQR)] control = 2.5 (12.0) vs. CGC 0.63 (11.2), p < 0.001]. A random effect model (REM) confirmed the association between guidance mode and O-P excitation. Network cost and global efficiency were not significantly influenced by guidance mode. A gaze channel enhances performance, modulates visual search, and alleviates the burden in brain centers subserving visual attention and does not induce changes in the trainee’s O-P functional network observable with the current OT technique. The results imply that through visual guidance, attentional resources may be liberated, potentially improving the capability of trainees to attend to other safety critical events during the procedure. PMID:26528160

  1. The impact of expert visual guidance on trainee visual search strategy, visual attention and motor skills.

    PubMed

    Leff, Daniel R; James, David R C; Orihuela-Espina, Felipe; Kwok, Ka-Wai; Sun, Loi Wah; Mylonas, George; Athanasiou, Thanos; Darzi, Ara W; Yang, Guang-Zhong

    2015-01-01

    Minimally invasive and robotic surgery changes the capacity for surgical mentors to guide their trainees with the control customary to open surgery. This neuroergonomic study aims to assess a "Collaborative Gaze Channel" (CGC), which detects trainer gaze-behavior and displays the point of regard to the trainee. A randomized crossover study was conducted in which twenty subjects performed a simulated robotic surgical task necessitating collaboration either with verbal (control condition) or visual guidance with CGC (study condition). Trainee occipito-parietal (O-P) cortical function was assessed with optical topography (OT) and gaze-behavior was evaluated using video-oculography. Performance during gaze-assistance was significantly superior [biopsy number: (mean ± SD): control = 5.6 ± 1.8 vs. CGC = 6.6 ± 2.0; p < 0.05] and was associated with significantly lower O-P cortical activity [ΔHbO2 mMol × cm [median (IQR)] control = 2.5 (12.0) vs. CGC 0.63 (11.2), p < 0.001]. A random effect model (REM) confirmed the association between guidance mode and O-P excitation. Network cost and global efficiency were not significantly influenced by guidance mode. A gaze channel enhances performance, modulates visual search, and alleviates the burden in brain centers subserving visual attention and does not induce changes in the trainee's O-P functional network observable with the current OT technique. The results imply that through visual guidance, attentional resources may be liberated, potentially improving the capability of trainees to attend to other safety critical events during the procedure. PMID:26528160

  2. Parallel and Serial Processes in Visual Search

    ERIC Educational Resources Information Center

    Thornton, Thomas L.; Gilden, David L.

    2007-01-01

    A long-standing issue in the study of how people acquire visual information centers around the scheduling and deployment of attentional resources: Is the process serial, or is it parallel? A substantial empirical effort has been dedicated to resolving this issue. However, the results remain largely inconclusive because the methodologies that have…

  3. Visual Search and the Collapse of Categorization

    ERIC Educational Resources Information Center

    Smith, J. David; Redford, Joshua S.; Gent, Lauren C.; Washburn, David A.

    2005-01-01

    Categorization researchers typically present single objects to be categorized. But real-world categorization often involves object recognition within complex scenes. It is unknown how the processes of categorization stand up to visual complexity or why they fail facing it. The authors filled this research gap by blending the categorization and…

  4. Design and Implementation of Cancellation Tasks for Visual Search Strategies and Visual Attention in School Children

    ERIC Educational Resources Information Center

    Wang, Tsui-Ying; Huang, Ho-Chuan; Huang, Hsiu-Shuang

    2006-01-01

    We propose a computer-assisted cancellation test system (CACTS) to understand the visual attention performance and visual search strategies in school children. The main aim of this paper is to present our design and development of the CACTS and demonstrate some ways in which computer techniques can allow the educator not only to obtain more…

  5. Visual search and attention to faces during early infancy.

    PubMed

    Frank, Michael C; Amso, Dima; Johnson, Scott P

    2014-02-01

    Newborn babies look preferentially at faces and face-like displays, yet over the course of their first year much changes about both the way infants process visual stimuli and how they allocate their attention to the social world. Despite this initial preference for faces in restricted contexts, the amount that infants look at faces increases considerably during the first year. Is this development related to changes in attentional orienting abilities? We explored this possibility by showing 3-, 6-, and 9-month-olds engaging animated and live-action videos of social stimuli and also measuring their visual search performance with both moving and static search displays. Replicating previous findings, looking at faces increased with age; in addition, the amount of looking at faces was strongly related to the youngest infants' performance in visual search. These results suggest that infants' attentional abilities may be an important factor in facilitating their social attention early in development. PMID:24211654

  6. Size Scaling in Visual Pattern Recognition

    ERIC Educational Resources Information Center

    Larsen, Axel; Bundesen, Claus

    1978-01-01

    Human visual recognition on the basis of shape but regardless of size was investigated by reaction time methods. Results suggested two processes of size scaling: mental-image transformation and perceptual-scale transformation. Image transformation accounted for matching performance based on visual short-term memory, whereas scale transformation…

  7. Audio-visual stimulation improves oculomotor patterns in patients with hemianopia.

    PubMed

    Passamonti, Claudia; Bertini, Caterina; Làdavas, Elisabetta

    2009-01-01

    Patients with visual field disorders often exhibit impairments in visual exploration and a typical defective oculomotor scanning behaviour. Recent evidence [Bolognini, N., Rasi, F., Coccia, M., & Làdavas, E. (2005b). Visual search improvement in hemianopic patients after audio-visual stimulation. Brain, 128, 2830-2842] suggests that systematic audio-visual stimulation of the blind hemifield can improve accuracy and search times in visual exploration, probably due to the stimulation of Superior Colliculus (SC), an important multisensory structure involved in both the initiation and execution of saccades. The aim of the present study is to verify this hypothesis by studying the effects of multisensory training on oculomotor scanning behaviour. Oculomotor responses during a visual search task and a reading task were studied before and after visual (control) or audio-visual (experimental) training, in a group of 12 patients with chronic visual field defects and 12 controls subjects. Eye movements were recorded using an infra-red technique which measured a range of spatial and temporal variables. Prior to treatment, patients' performance was significantly different from that of controls in relation to fixations and saccade parameters; after Audio-Visual Training, all patients reported an improvement in ocular exploration characterized by fewer fixations and refixations, quicker and larger saccades, and reduced scanpath length. Overall, these improvements led to a reduction of total exploration time. Similarly, reading parameters were significantly affected by the training, with respect to specific impairments observed in both left- and right-hemianopia readers. Our findings provide evidence that Audio-Visual Training, by stimulating the SC, may induce a more organized pattern of visual exploration due to an implementation of efficient oculomotor strategies. Interestingly, the improvement was found to be stable at a 1 year follow-up control session, indicating a long

  8. Group-level differences in visual search asymmetry.

    PubMed

    Cramer, Emily S; Dusko, Michelle J; Rensink, Ronald A

    2016-08-01

    East Asians and Westerners differ in various aspects of perception and cognition. For example, visual memory for East Asians is believed to be more influenced by the contextual aspects of a scene than is the case for Westerners (Masuda & Nisbett in Journal of Personality and Social Psychology, 81, 922-934, 2001). There are also differences in visual search: For Westerners, search is faster for a long line among short ones than for a short line among long ones, whereas this difference does not appear to hold for East Asians (Ueda et al., 2016). However, it is unclear how these group-level differences originate. To investigate the extent to which they depend upon environment, we tested visual search and visual memory in East Asian immigrants who had lived in Canada for different amounts of time. Recent immigrants were found to exhibit no search asymmetry, unlike Westerners who had spent their lives in Canada. However, immigrants who had lived in Canada for more than 2 years showed performance comparable to that of Westerners. These differences could not be explained by the general analytic/holistic processing distinction believed to differentiate Westerners and East Asians, since all observers showed a strong holistic tendency for visual recognition. The results instead support the suggestion that exposure to a new environment can significantly affect the particular processes used to perceive a given stimulus. PMID:27270735

  9. Learned face-voice pairings facilitate visual search

    PubMed Central

    Zweig, L. Jacob; Suzuki, Satoru; Grabowecky, Marcia

    2014-01-01

    Voices provide a rich source of information that is important for identifying individuals and for social interaction. During search for a face in a crowd, voices often accompany visual information and they facilitate localization of the sought individual. However, it is unclear whether this facilitation occurs primarily because the voice cues the location of the face or because it also increases the salience of the associated face. Here we demonstrate that a voice that provides no location information nonetheless facilitates visual search for an associated face. We trained novel face/voice associations and verified learning using a two-alternative forced-choice task in which participants had to correctly match a presented voice to the associated face. Following training, participants searched for a previously learned target face among other faces while hearing one of the following sounds (localized at the center of the display): a congruent-learned voice, an incongruent but familiar voice, an unlearned and unfamiliar voice, or a time-reversed voice. Only the congruent-learned voice speeded visual search for the associated face. This result suggests that voices facilitate visual detection of associated faces, potentially by increasing their visual salience, and that the underlying crossmodal associations can be established through brief training. PMID:25023955

  10. Learned face-voice pairings facilitate visual search.

    PubMed

    Zweig, L Jacob; Suzuki, Satoru; Grabowecky, Marcia

    2015-04-01

    Voices provide a rich source of information that is important for identifying individuals and for social interaction. During search for a face in a crowd, voices often accompany visual information, and they facilitate localization of the sought-after individual. However, it is unclear whether this facilitation occurs primarily because the voice cues the location of the face or because it also increases the salience of the associated face. Here we demonstrate that a voice that provides no location information nonetheless facilitates visual search for an associated face. We trained novel face-voice associations and verified learning using a two-alternative forced choice task in which participants had to correctly match a presented voice to the associated face. Following training, participants searched for a previously learned target face among other faces while hearing one of the following sounds (localized at the center of the display): a congruent learned voice, an incongruent but familiar voice, an unlearned and unfamiliar voice, or a time-reversed voice. Only the congruent learned voice speeded visual search for the associated face. This result suggests that voices facilitate the visual detection of associated faces, potentially by increasing their visual salience, and that the underlying crossmodal associations can be established through brief training. PMID:25023955

  11. Losing the trees for the forest in dynamic visual search.

    PubMed

    Jardine, Nicole L; Moore, Cathleen M

    2016-05-01

    Representing temporally continuous objects across change (e.g., in position) requires integration of newly sampled visual information with existing object representations. We asked what consequences representational updating has for visual search. In this dynamic visual search task, bars rotated around their central axis. Observers searched for a single episodic target state (oblique bar among vertical and horizontal bars). Search was efficient when the target display was presented as an isolated static display. Performance declined to near chance, however, when the same display was a single state of a dynamically changing scene (Experiment 1), as though temporal selection of the target display from the stream of stimulation failed entirely (Experiment 3). The deficit is attributable neither to masking (Experiment 2), nor to a lack of temporal marker for the target display (Experiment 4). The deficit was partially reduced by visually marking the target display with unique feature information (Experiment 5). We suggest that representational updating causes a loss of access to instantaneous state information in search. Similar to spatially crowded displays that are perceived as textures (Parkes, Lund, Angelucci, Solomon, & Morgan, 2001), we propose a temporal version of the trees (instantaneous orientation information) being lost for the forest (rotating bars). PMID:26689307

  12. Visual exploratory search of relationship graphs on smartphones.

    PubMed

    Ouyang, Jianquan; Zheng, Hao; Kong, Fanbin; Liu, Tianming

    2013-01-01

    This paper presents a novel framework for Visual Exploratory Search of Relationship Graphs on Smartphones (VESRGS) that is composed of three major components: inference and representation of semantic relationship graphs on the Web via meta-search, visual exploratory search of relationship graphs through both querying and browsing strategies, and human-computer interactions via the multi-touch interface and mobile Internet on smartphones. In comparison with traditional lookup search methodologies, the proposed VESRGS system is characterized with the following perceived advantages. 1) It infers rich semantic relationships between the querying keywords and other related concepts from large-scale meta-search results from Google, Yahoo! and Bing search engines, and represents semantic relationships via graphs; 2) the exploratory search approach empowers users to naturally and effectively explore, adventure and discover knowledge in a rich information world of interlinked relationship graphs in a personalized fashion; 3) it effectively takes the advantages of smartphones' user-friendly interfaces and ubiquitous Internet connection and portability. Our extensive experimental results have demonstrated that the VESRGS framework can significantly improve the users' capability of seeking the most relevant relationship information to their own specific needs. We envision that the VESRGS framework can be a starting point for future exploration of novel, effective search strategies in the mobile Internet era. PMID:24223936

  13. Visual Exploratory Search of Relationship Graphs on Smartphones

    PubMed Central

    Ouyang, Jianquan; Zheng, Hao; Kong, Fanbin; Liu, Tianming

    2013-01-01

    This paper presents a novel framework for Visual Exploratory Search of Relationship Graphs on Smartphones (VESRGS) that is composed of three major components: inference and representation of semantic relationship graphs on the Web via meta-search, visual exploratory search of relationship graphs through both querying and browsing strategies, and human-computer interactions via the multi-touch interface and mobile Internet on smartphones. In comparison with traditional lookup search methodologies, the proposed VESRGS system is characterized with the following perceived advantages. 1) It infers rich semantic relationships between the querying keywords and other related concepts from large-scale meta-search results from Google, Yahoo! and Bing search engines, and represents semantic relationships via graphs; 2) the exploratory search approach empowers users to naturally and effectively explore, adventure and discover knowledge in a rich information world of interlinked relationship graphs in a personalized fashion; 3) it effectively takes the advantages of smartphones’ user-friendly interfaces and ubiquitous Internet connection and portability. Our extensive experimental results have demonstrated that the VESRGS framework can significantly improve the users’ capability of seeking the most relevant relationship information to their own specific needs. We envision that the VESRGS framework can be a starting point for future exploration of novel, effective search strategies in the mobile Internet era. PMID:24223936

  14. Rapid Resumption of Interrupted Search Is Independent of Age-Related Improvements in Visual Search

    ERIC Educational Resources Information Center

    Lleras, Alejandro; Porporino, Mafalda; Burack, Jacob A.; Enns, James T.

    2011-01-01

    In this study, 7-19-year-olds performed an interrupted visual search task in two experiments. Our question was whether the tendency to respond within 500 ms after a second glimpse of a display (the "rapid resumption" effect ["Psychological Science", 16 (2005) 684-688]) would increase with age in the same way as overall search efficiency. The…

  15. Measuring Search Efficiency in Complex Visual Search Tasks: Global and Local Clutter

    ERIC Educational Resources Information Center

    Beck, Melissa R.; Lohrenz, Maura C.; Trafton, J. Gregory

    2010-01-01

    Set size and crowding affect search efficiency by limiting attention for recognition and attention against competition; however, these factors can be difficult to quantify in complex search tasks. The current experiments use a quantitative measure of the amount and variability of visual information (i.e., clutter) in highly complex stimuli (i.e.,…

  16. The effect of a visual indicator on rate of visual search Evidence for processing control

    NASA Technical Reports Server (NTRS)

    Holmgren, J. E.

    1974-01-01

    Search rates were estimated from response latencies in a visual search task of the type used by Atkinson et al. (1969), in which a subject searches a small set of letters to determine the presence or absence of a predesignated target. Half of the visual displays contained a marker above one of the letters. The marked letter was the only one that had to be checked to determine whether or not the display contained the target. The presence of a marker in a display significantly increased the estimated rate of search, but the data clearly indicated that subjects did not restrict processing to the marked item. Letters in the vicinity of the marker were also processed. These results were interpreted as showing that subjects are able to exercise some degree of control over the search process in this type of task.

  17. Visual search for arbitrary objects in real scenes

    PubMed Central

    Alvarez, George A.; Rosenholtz, Ruth; Kuzmova, Yoana I.; Sherman, Ashley M.

    2011-01-01

    How efficient is visual search in real scenes? In searches for targets among arrays of randomly placed distractors, efficiency is often indexed by the slope of the reaction time (RT) × Set Size function. However, it may be impossible to define set size for real scenes. As an approximation, we hand-labeled 100 indoor scenes and used the number of labeled regions as a surrogate for set size. In Experiment 1, observers searched for named objects (a chair, bowl, etc.). With set size defined as the number of labeled regions, search was very efficient (~5 ms/item). When we controlled for a possible guessing strategy in Experiment 2, slopes increased somewhat (~15 ms/item), but they were much shallower than search for a random object among other distinctive objects outside of a scene setting (Exp. 3: ~40 ms/item). In Experiments 4–6, observers searched repeatedly through the same scene for different objects. Increased familiarity with scenes had modest effects on RTs, while repetition of target items had large effects (>500 ms). We propose that visual search in scenes is efficient because scene-specific forms of attentional guidance can eliminate most regions from the “functional set size” of items that could possibly be the target. PMID:21671156
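The efficiency index used above — the slope of the RT × Set Size function, in ms/item — is just the least-squares slope of mean reaction time against display size. A minimal sketch with illustrative (not the paper's) numbers:

```python
# Sketch: estimating search efficiency as the slope (ms/item) of the
# RT x Set Size function. The RT values below are illustrative only.

def search_slope(set_sizes, rts_ms):
    """Least-squares slope of reaction time (ms) against set size."""
    n = len(set_sizes)
    mean_x = sum(set_sizes) / n
    mean_y = sum(rts_ms) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(set_sizes, rts_ms))
    var = sum((x - mean_x) ** 2 for x in set_sizes)
    return cov / var

# Hypothetical mean RTs for displays with 4, 8, and 16 labeled regions.
slope = search_slope([4, 8, 16], [620, 640, 680])
print(round(slope, 1))  # -> 5.0 ms/item, i.e. a highly efficient search
```

A shallow slope (a few ms/item) indicates efficient, guided search; steep slopes (tens of ms/item) indicate effortful item-by-item inspection.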

  18. Comparing target detection errors in visual search and manually-assisted search.

    PubMed

    Solman, Grayden J F; Hickey, Kersondra; Smilek, Daniel

    2014-05-01

    Subjects searched for low- or high-prevalence targets among static nonoverlapping items or items piled in heaps that could be moved using a computer mouse. We replicated the classical prevalence effect both in visual search and when unpacking items from heaps, with more target misses under low prevalence. Moreover, we replicated our previous finding that while unpacking, people often move the target item without noticing (the unpacking error) and determined that these errors also increase under low prevalence. On the basis of a comparison of item movements during the manually-assisted search and eye movements during static visual search, we suggest that low prevalence leads to broadly reduced diligence during search but that the locus of this reduced diligence depends on the nature of the task. In particular, while misses during visual search often arise from a failure to inspect all of the items, misses during manually-assisted search more often result from a failure to adequately inspect individual items. Indeed, during manually-assisted search, over 90 % of target misses occurred despite subjects having moved the target item during search. PMID:24554230

  19. Attention Capacity and Task Difficulty in Visual Search

    ERIC Educational Resources Information Center

    Huang, Liqiang; Pashler, Harold

    2005-01-01

    When a visual search task is very difficult (as when a small feature difference defines the target), even detection of a unique element may be substantially slowed by increases in display set size. This has been attributed to the influence of attentional capacity limits. We examined the influence of attentional capacity limits on three kinds of…

  20. Enhancing Visual Search Abilities of People with Intellectual Disabilities

    ERIC Educational Resources Information Center

    Li-Tsang, Cecilia W. P.; Wong, Jackson K. K.

    2009-01-01

    This study aimed to evaluate the effects of cueing in a visual search paradigm for people with and without intellectual disabilities (ID). A total of 36 subjects (18 persons with ID and 18 persons with normal intelligence) were recruited using a convenience sampling method. A series of experiments were conducted to compare guided cue strategies using…

  1. Visual Empirical Region of Influence (VERI) Pattern Recognition Algorithms

    Energy Science and Technology Software Center (ESTSC)

    2002-05-01

    We developed new pattern recognition (PR) algorithms based on a human visual perception model. We named these algorithms Visual Empirical Region of Influence (VERI) algorithms. To compare the new algorithm's effectiveness against other PR algorithms, we benchmarked their clustering capabilities with a standard set of two-dimensional data that is well known in the PR community. The VERI algorithm succeeded in clustering all the data correctly. No existing algorithm had previously clustered all the patterns in the data set successfully. The commands to execute VERI algorithms are quite difficult to master when executed from a DOS command line. The algorithm requires several parameters to operate correctly. From our own experiences we realized that if we wanted to provide a new data analysis tool to the PR community, we would have to make the tool powerful, yet easy and intuitive to use. That was our motivation for developing graphical user interfaces (GUIs) to the VERI algorithms. We developed GUIs to control the VERI algorithm in a single-pass mode and in an optimization mode. We also developed a visualization technique that allows users to graphically animate and visually inspect multi-dimensional data after it has been classified by the VERI algorithms. The visualization package is integrated into the single-pass interface. Both the single-pass interface and the optimization interface are part of the PR software package we have developed and make available to other users. The single-pass mode finds PR results only for the sets of features in the data set that are manually requested by the user. The optimization mode uses a brute-force method of searching through the combinations of features in a data set for features that produce…

  2. Accurate expectancies diminish perceptual distraction during visual search

    PubMed Central

    Sy, Jocelyn L.; Guerin, Scott A.; Stegman, Anna; Giesbrecht, Barry

    2014-01-01

    The load theory of visual attention proposes that efficient selective perceptual processing of task-relevant information during search is determined automatically by the perceptual demands of the display. If the perceptual demands required to process task-relevant information are not enough to consume all available capacity, then the remaining capacity automatically and exhaustively “spills-over” to task-irrelevant information. The spill-over of perceptual processing capacity increases the likelihood that task-irrelevant information will impair performance. In two visual search experiments, we tested the automaticity of the allocation of perceptual processing resources by measuring the extent to which the processing of task-irrelevant distracting stimuli was modulated by both perceptual load and top-down expectations using behavior, functional magnetic resonance imaging, and electrophysiology. Expectations were generated using a trial-by-trial cue that provided information about the likely load of the upcoming visual search task. When the cues were valid, behavioral interference was eliminated and the influence of load on frontoparietal and visual cortical responses was attenuated relative to when the cues were invalid. In conditions in which task-irrelevant information interfered with performance and modulated visual activity, individual differences in mean blood oxygenation level dependent responses measured from the left intraparietal sulcus were negatively correlated with individual differences in the severity of distraction. These results are consistent with the interpretation that a top-down biasing mechanism interacts with perceptual load to support filtering of task-irrelevant information. PMID:24904374

  3. Bumblebee visual search for multiple learned target types.

    PubMed

    Nityananda, Vivek; Pattrick, Jonathan G

    2013-11-15

    Visual search is well studied in human psychology, but we know comparatively little about similar capacities in non-human animals. It is sometimes assumed that animal visual search is restricted to a single target at a time. In bees, for example, this limitation has been evoked to explain flower constancy, the tendency of bees to specialise on a single flower type. Few studies, however, have investigated bee visual search for multiple target types after extended learning and controlling for prior visual experience. We trained colour-naive bumblebees (Bombus terrestris) extensively in separate discrimination tasks to recognise two rewarding colours in interspersed block training sessions. We then tested them with the two colours simultaneously in the presence of distracting colours to examine whether and how quickly they were able to switch between the target colours. We found that bees switched between visual targets quickly and often. The median time taken to switch between targets was shorter than known estimates of how long traces last in bees' working memory, suggesting that their capacity to recall more than one learned target was not restricted by working memory limitations. Following our results, we propose a model of memory and learning that integrates our findings with those of previous studies investigating flower constancy. PMID:23948481

  4. Is pop-out visual search attentive or preattentive? Yes!

    PubMed

    Lagroix, Hayley E P; Di Lollo, Vincent; Spalek, Thomas M

    2015-04-01

    Is the efficiency of "pop-out" visual search impaired when attention is preempted by another task? This question has been raised in earlier experiments but has not received a satisfactory answer. To constrain the availability of attention, those experiments employed an attentional blink (AB) paradigm in which report of the second of 2 targets (T2) is impaired when it is presented shortly after the first (T1). In those experiments, T2 was a pop-out search display that remained on view until response. The main finding was that search efficiency, as indexed by the slope of the search function, was not impaired during the period of the AB. With such long displays, however, the search could be postponed until T1 had been processed, thus allowing the task to be performed with full attention. That pitfall was avoided in the present Experiment 1 by presenting the search array either until response (thus allowing a postponement strategy) or very briefly (making that strategy ineffectual). Level of performance was impaired during the period of the AB, but search efficiency was unimpaired even when the display was brief. Experiment 2 showed that visual search is indeed postponed during the period of the AB, when the array remains on view until response. These findings reveal the action of at least 2 separable mechanisms, indexed by level and efficiency of pop-out search, which are affected in different ways by the availability of attention. The Guided Search 4.0 model can account for the results in both level and efficiency. PMID:25706768

  5. Attention during visual search: The benefit of bilingualism

    PubMed Central

    Friesen, Deanna C; Latman, Vered; Calvo, Alejandra; Bialystok, Ellen

    2015-01-01

    Aims and Objectives/Purpose/Research Questions: Following reports showing bilingual advantages in executive control (EC) performance, the current study investigated the role of selective attention as a foundational skill that might underlie these advantages. Design/Methodology/Approach: Bilingual and monolingual young adults performed a visual search task by determining whether a target shape was present amid distractor shapes. Task difficulty was manipulated by search type (feature or conjunction) and by the number and discriminability of the distractors. In feature searches, the target (e.g., green triangle) differed on a single dimension (e.g., color) from the distractors (e.g., yellow triangles); in conjunction searches, two types of distractors (e.g., pink circles and turquoise squares) each differed from the target (e.g., turquoise circle) on a single but different dimension (e.g., color or shape). Data and Analysis: Reaction time and accuracy data from 109 young adults (53 monolinguals and 56 bilinguals) were analyzed using a repeated-measures analysis of variance. Group membership, search type, number and discriminability of distractors were the independent variables. Findings/Conclusions: Participants identified the target more quickly in the feature searches, when the target was highly discriminable from the distractors and when there were fewer distractors. Importantly, although monolinguals and bilinguals performed equivalently on the feature searches, bilinguals were significantly faster than monolinguals in identifying the target in the more difficult conjunction search, providing evidence for better control of visual attention in bilinguals. Originality: Unlike previous studies on bilingual visual attention, the current study found a bilingual attention advantage in a paradigm that did not include a Stroop-like manipulation to set up false expectations. Significance/Implications: Thus, our findings indicate that the need to resolve explicit conflict or…

  6. Irrelevant objects of expertise compete with faces during visual search

    PubMed Central

    McGugin, Rankin W.; McKeeff, Thomas J.; Tong, Frank; Gauthier, Isabel

    2010-01-01

    Prior work suggests that non-face objects of expertise can interfere with the perception of faces when the two categories are alternately presented, suggesting competition for shared perceptual resources. Here we ask whether task-irrelevant distractors from a category of expertise compete when faces are presented in a standard visual search task. Participants searched for a target (face or sofa) in an array containing both relevant and irrelevant distractors. The number of distractors from the target category (face or sofa) remained constant, while the number of distractors from the irrelevant category (cars) varied. Search slopes, calculated as a function of the number of irrelevant cars, were correlated with car expertise. The effect was not due to car distractors grabbing attention because they did not compete with sofa targets. Objects of expertise interfere with face perception even when they are task irrelevant, visually distinct and separated in space from faces. PMID:21264705

  7. Entrainment of Human Alpha Oscillations Selectively Enhances Visual Conjunction Search

    PubMed Central

    Müller, Notger G.; Vellage, Anne-Katrin; Heinze, Hans-Jochen; Zaehle, Tino

    2015-01-01

    The functional role of the alpha-rhythm which dominates the human electroencephalogram (EEG) is unclear. It has been related to visual processing, attentional selection and object coherence, respectively. Here we tested the interaction of alpha oscillations of the human brain with visual search tasks that differed in their attentional demands (pre-attentive vs. attentive) and also in the necessity to establish object coherence (conjunction vs. single feature). Between pre- and post-assessment elderly subjects received 20 min/d of repetitive transcranial alternating current stimulation (tACS) over the occipital cortex adjusted to their individual alpha frequency over five consecutive days. Compared to sham the entrained alpha oscillations led to a selective, set size independent improvement in the conjunction search task performance but not in the easy or in the hard feature search task. These findings suggest that cortical alpha oscillations play a specific role in establishing object coherence through suppression of distracting objects. PMID:26606255

  9. The Mechanisms Underlying the ASD Advantage in Visual Search.

    PubMed

    Kaldy, Zsuzsa; Giserman, Ivy; Carter, Alice S; Blaser, Erik

    2016-05-01

    A number of studies have demonstrated that individuals with autism spectrum disorders (ASDs) are faster or more successful than typically developing control participants at various visual-attentional tasks (for reviews, see Dakin and Frith in Neuron 48:497-507, 2005; Simmons et al. in Vis Res 49:2705-2739, 2009). This "ASD advantage" was first identified in the domain of visual search by Plaisted et al. (J Child Psychol Psychiatry 39:777-783, 1998). Here we survey the findings of visual search studies from the past 15 years that contrasted the performance of individuals with and without ASD. Although there are some minor caveats, the overall consensus is that-across development and a broad range of symptom severity-individuals with ASD reliably outperform controls on visual search. The etiology of the ASD advantage has not been formally specified, but has been commonly attributed to 'enhanced perceptual discrimination', a superior ability to visually discriminate between targets and distractors in such tasks (e.g. O'Riordan in Cognition 77:81-96, 2000). As well, there is considerable evidence for impairments of the attentional network in ASD (for a review, see Keehn et al. in J Child Psychol Psychiatry 37:164-183, 2013). We discuss some recent results from our laboratory that support an attentional, rather than perceptual explanation for the ASD advantage in visual search. We speculate that this new conceptualization may offer a better understanding of some of the behavioral symptoms associated with ASD, such as over-focusing and restricted interests. PMID:24091470

  10. LASAGNA-Search: an integrated web tool for transcription factor binding site search and visualization.

    PubMed

    Lee, Chih; Huang, Chun-Hsi

    2013-03-01

    The release of ChIP-seq data from the ENCyclopedia Of DNA Elements (ENCODE) and Model Organism ENCyclopedia Of DNA Elements (modENCODE) projects has significantly increased the amount of transcription factor (TF) binding affinity information available to researchers. However, scientists still routinely use TF binding site (TFBS) search tools to scan unannotated sequences for TFBSs, particularly when searching for lesser-known TFs or TFs in organisms for which ChIP-seq data are unavailable. The sequence analysis often involves multiple steps such as TF model collection, promoter sequence retrieval, and visualization; thus, several different tools are required. We have developed a novel integrated web tool named LASAGNA-Search that allows users to perform TFBS searches without leaving the web site. LASAGNA-Search uses the LASAGNA (Length-Aware Site Alignment Guided by Nucleotide Association) algorithm for TFBS alignment. Important features of LASAGNA-Search include (i) acceptance of unaligned variable-length TFBSs, (ii) a collection of 1726 TF models, (iii) automatic promoter sequence retrieval, (iv) visualization in the UCSC Genome Browser, and (v) gene regulatory network inference and visualization based on binding specificities. LASAGNA-Search is freely available at http://biogrid.engr.uconn.edu/lasagna_search/. PMID:23599922
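The LASAGNA alignment algorithm itself is not reproduced here, but the generic core of any TFBS search tool is scoring a sliding window of sequence against a learned motif model. A minimal position-weight-matrix (PWM) scan, with a made-up 3-bp motif, might look like:

```python
# Minimal sketch of a PWM scan, the generic core of TFBS search tools.
# This is NOT the LASAGNA alignment algorithm; the motif and scores
# below are hypothetical illustration values.

PWM = {  # log-odds-like score per base per motif position (made up)
    'A': [1.0, -2.0, -2.0],
    'C': [-2.0, 1.5, -2.0],
    'G': [-2.0, -2.0, 1.2],
    'T': [-1.0, -2.0, -2.0],
}

def scan(seq, pwm, width=3):
    """Score every window of `width` bp; return (position, score) pairs."""
    hits = []
    for i in range(len(seq) - width + 1):
        score = sum(pwm[base][j] for j, base in enumerate(seq[i:i + width]))
        hits.append((i, score))
    return hits

# The best-scoring window is the candidate binding site.
best = max(scan("TTACGGA", PWM), key=lambda h: h[1])
print(best)  # position 2 ("ACG") scores highest for this motif
```

Real tools additionally scan the reverse complement, normalize scores against a background model, and apply significance thresholds.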

  11. In search of the emotional face: anger versus happiness superiority in visual search.

    PubMed

    Savage, Ruth A; Lipp, Ottmar V; Craig, Belinda M; Becker, Stefanie I; Horstmann, Gernot

    2013-08-01

    Previous research has provided inconsistent results regarding visual search for emotional faces, yielding evidence for either anger superiority (i.e., more efficient search for angry faces) or happiness superiority effects (i.e., more efficient search for happy faces), suggesting that these results do not reflect on emotional expression, but on emotion (un-)related low-level perceptual features. The present study investigated possible factors mediating anger/happiness superiority effects; specifically search strategy (fixed vs. variable target search; Experiment 1), stimulus choice (Nimstim database vs. Ekman & Friesen database; Experiments 1 and 2), and emotional intensity (Experiment 3 and 3a). Angry faces were found faster than happy faces regardless of search strategy using faces from the Nimstim database (Experiment 1). By contrast, a happiness superiority effect was evident in Experiment 2 when using faces from the Ekman and Friesen database. Experiment 3 employed angry, happy, and exuberant expressions (Nimstim database) and yielded anger and happiness superiority effects, respectively, highlighting the importance of the choice of stimulus materials. Ratings of the stimulus materials collected in Experiment 3a indicate that differences in perceived emotional intensity, pleasantness, or arousal do not account for differences in search efficiency. Across three studies, the current investigation indicates that prior reports of anger or happiness superiority effects in visual search are likely to reflect on low-level visual features associated with the stimulus materials used, rather than on emotion. PMID:23527503

  12. How do Interruptions Impact Nurses’ Visual Scanning Patterns When Using Barcode Medication Administration Systems?

    PubMed Central

    He, Ze; Marquard, Jenna L.; Henneman, Philip L.

    2014-01-01

    While barcode medication administration (BCMA) systems have the potential to reduce medication errors, they may introduce errors, side effects, and hazards into the medication administration process. Studies of BCMA systems should therefore consider the interrelated nature of health information technology (IT) use and sociotechnical systems. We aimed to understand how the introduction of interruptions into the BCMA process impacts nurses’ visual scanning patterns, a proxy for one component of cognitive processing. We used an eye tracker to record nurses’ visual scanning patterns while administering a medication using BCMA. Nurses either performed the BCMA process in a controlled setting with no interruptions (n=25) or in a real clinical setting with interruptions (n=21). By comparing the visual scanning patterns between the two groups, we found that nurses in the interruptive environment identified less task-related information in a given period of time, and engaged in more information searching than information processing. PMID:25954449

  13. Visual cluster analysis and pattern recognition methods

    DOEpatents

    Osbourn, Gordon Cecil; Martinez, Rubel Francisco

    2001-01-01

    A method of clustering using a novel template to define a region of influence. Using neighboring approximation methods, computation times can be significantly reduced. The template and method are applicable and improve pattern recognition techniques.
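The patented VERI template is not specified in this abstract; as a rough illustration of the region-of-influence idea, the classic Gabriel-graph rule links two points when no third point falls inside the circle whose diameter joins them, and connected components of the resulting graph (after pruning overlong links) form clusters. The cutoff and point coordinates below are illustrative:

```python
# Illustrative region-of-influence clustering using the Gabriel-graph rule
# as a stand-in template; the actual patented VERI template differs.
from itertools import combinations

def influence_edges(points, max_len):
    """Gabriel-style edges: no third point inside the diametral circle,
    and edge length capped at max_len so distant groups stay separate."""
    edges = []
    for (i, p), (j, q) in combinations(enumerate(points), 2):
        d2 = (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
        if d2 > max_len ** 2:
            continue
        cx, cy = (p[0] + q[0]) / 2, (p[1] + q[1]) / 2
        if all((x - cx) ** 2 + (y - cy) ** 2 >= d2 / 4
               for k, (x, y) in enumerate(points) if k != i and k != j):
            edges.append((i, j))
    return edges

def clusters(points, edges):
    """Connected components of the region-of-influence graph (union-find)."""
    parent = list(range(len(points)))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a
    for i, j in edges:
        parent[find(i)] = find(j)
    groups = {}
    for k in range(len(points)):
        groups.setdefault(find(k), []).append(k)
    return sorted(groups.values())

pts = [(0, 0), (1, 0), (0, 1), (10, 10), (11, 10)]
print(clusters(pts, influence_edges(pts, max_len=3.0)))  # -> [[0, 1, 2], [3, 4]]
```

The "neighboring approximation" the patent mentions would restrict the inner `all(...)` check to nearby points only, cutting the naive O(n³) cost.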

  14. Supporting the Process of Exploring and Interpreting Space–Time Multivariate Patterns: The Visual Inquiry Toolkit

    PubMed Central

    Chen, Jin; MacEachren, Alan M.; Guo, Diansheng

    2009-01-01

    While many data sets carry geographic and temporal references, our ability to analyze these datasets lags behind our ability to collect them because of the challenges posed by both data complexity and tool scalability issues. This study develops a visual analytics approach that leverages human expertise with visual, computational, and cartographic methods to support the application of visual analytics to relatively large spatio-temporal, multivariate data sets. We develop and apply a variety of methods for data clustering, pattern searching, information visualization, and synthesis. By combining both human and machine strengths, this approach has a better chance to discover novel, relevant, and potentially useful information that is difficult to detect by any of the methods used in isolation. We demonstrate the effectiveness of the approach by applying the Visual Inquiry Toolkit we developed to analyze a data set containing geographically referenced, time-varying and multivariate data for U.S. technology industries. PMID:19960096

  15. Searching for pulsars using image pattern recognition

    SciTech Connect

    Zhu, W. W.; Berndsen, A.; Madsen, E. C.; Tan, M.; Stairs, I. H.; Brazier, A.; Lazarus, P.; Lynch, R.; Scholz, P.; Stovall, K.; Cohen, S.; Dartez, L. P.; Lunsford, G.; Martinez, J. G.; Mata, A.; Ransom, S. M.; Banaszak, S.; Biwer, C. M.; Flanigan, J.; Rohr, M., E-mail: berndsen@phas.ubc.ca; and others

    2014-02-01

    In the modern era of big data, many fields of astronomy are generating huge volumes of data, the analysis of which can sometimes be the limiting factor in research. Fortunately, computer scientists have developed powerful data-mining techniques that can be applied to various fields. In this paper, we present a novel artificial intelligence (AI) program that identifies pulsars from recent surveys by using image pattern recognition with deep neural nets—the PICS (Pulsar Image-based Classification System) AI. The AI mimics human experts and distinguishes pulsars from noise and interference by looking for patterns from candidate plots. Different from other pulsar selection programs that search for expected patterns, the PICS AI is taught the salient features of different pulsars from a set of human-labeled candidates through machine learning. The training candidates are collected from the Pulsar Arecibo L-band Feed Array (PALFA) survey. The information from each pulsar candidate is synthesized in four diagnostic plots, which consist of image data with up to thousands of pixels. The AI takes these data from each candidate as its input and uses thousands of such candidates to train its ∼9000 neurons. The deep neural networks in this AI system grant it superior ability to recognize various types of pulsars as well as their harmonic signals. The trained AI's performance has been validated with a large set of candidates from a different pulsar survey, the Green Bank North Celestial Cap survey. In this completely independent test, the PICS ranked 264 out of 277 pulsar-related candidates, including all 56 previously known pulsars and 208 of their harmonics, in the top 961 (1%) of 90,008 test candidates, missing only 13 harmonics. The first non-pulsar candidate appears at rank 187, following 45 pulsars and 141 harmonics. In other words, 100% of the pulsars were ranked in the top 1% of all candidates, while 80% were ranked higher than any noise or interference. The

  16. Searching for Pulsars Using Image Pattern Recognition

    NASA Astrophysics Data System (ADS)

    Zhu, W. W.; Berndsen, A.; Madsen, E. C.; Tan, M.; Stairs, I. H.; Brazier, A.; Lazarus, P.; Lynch, R.; Scholz, P.; Stovall, K.; Ransom, S. M.; Banaszak, S.; Biwer, C. M.; Cohen, S.; Dartez, L. P.; Flanigan, J.; Lunsford, G.; Martinez, J. G.; Mata, A.; Rohr, M.; Walker, A.; Allen, B.; Bhat, N. D. R.; Bogdanov, S.; Camilo, F.; Chatterjee, S.; Cordes, J. M.; Crawford, F.; Deneva, J. S.; Desvignes, G.; Ferdman, R. D.; Freire, P. C. C.; Hessels, J. W. T.; Jenet, F. A.; Kaplan, D. L.; Kaspi, V. M.; Knispel, B.; Lee, K. J.; van Leeuwen, J.; Lyne, A. G.; McLaughlin, M. A.; Siemens, X.; Spitler, L. G.; Venkataraman, A.

    2014-02-01

    In the modern era of big data, many fields of astronomy are generating huge volumes of data, the analysis of which can sometimes be the limiting factor in research. Fortunately, computer scientists have developed powerful data-mining techniques that can be applied to various fields. In this paper, we present a novel artificial intelligence (AI) program that identifies pulsars from recent surveys by using image pattern recognition with deep neural nets—the PICS (Pulsar Image-based Classification System) AI. The AI mimics human experts and distinguishes pulsars from noise and interference by looking for patterns from candidate plots. Different from other pulsar selection programs that search for expected patterns, the PICS AI is taught the salient features of different pulsars from a set of human-labeled candidates through machine learning. The training candidates are collected from the Pulsar Arecibo L-band Feed Array (PALFA) survey. The information from each pulsar candidate is synthesized in four diagnostic plots, which consist of image data with up to thousands of pixels. The AI takes these data from each candidate as its input and uses thousands of such candidates to train its ~9000 neurons. The deep neural networks in this AI system grant it superior ability to recognize various types of pulsars as well as their harmonic signals. The trained AI's performance has been validated with a large set of candidates from a different pulsar survey, the Green Bank North Celestial Cap survey. In this completely independent test, the PICS ranked 264 out of 277 pulsar-related candidates, including all 56 previously known pulsars and 208 of their harmonics, in the top 961 (1%) of 90,008 test candidates, missing only 13 harmonics. The first non-pulsar candidate appears at rank 187, following 45 pulsars and 141 harmonics. In other words, 100% of the pulsars were ranked in the top 1% of all candidates, while 80% were ranked higher than any noise or interference. The

  17. Visual search and the N2pc in children.

    PubMed

    Couperus, Jane W; Quirk, Colin

    2015-04-01

    While there is growing understanding of visual selective attention in children, some aspects such as selection in the presence of distractors are not well understood. Adult studies suggest that when presented with a visual search task, an enhanced negativity is seen beginning around 200 ms (the N2pc) that reflects selection of a target item among distractors. However, it is not known if similar selective attention-related activity is seen in children during visual search. This study was designed to investigate the presence of the N2pc in children. Nineteen children (ages 9-12 years) and 21 adults (ages 18-22 years) completed a visual search task in which they were asked to attend to a fixation surrounded by both a target and a distractor stimulus. Three types of displays were analyzed at parietal electrodes P7 and P8; lateral target/lateral distractor, lateral target/midline distractor, and midline target/lateral distractor. Both adults and children showed a significant increased negativity contralateral compared to ipsilateral to the target (reflected in the N2pc) in both displays with a lateral target while no such effect was seen in displays with a midline target. This suggests that children also utilized additional resources to select a target item when distractors are present. These findings demonstrate that the N2pc can be used as a marker of attentional object selection in children. PMID:25678274
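The N2pc described above is typically quantified as the contralateral-minus-ipsilateral difference wave at posterior electrodes (P7/P8 here), averaged over a post-onset window. A minimal sketch with made-up amplitudes:

```python
# Sketch: computing an N2pc-style difference wave. The amplitudes (in
# microvolts) and time points below are illustrative, not study data.

def difference_wave(contra, ipsi):
    """Contralateral minus ipsilateral amplitude at each time point."""
    return [c - i for c, i in zip(contra, ipsi)]

def mean_amplitude(wave, times_ms, start=200, end=300):
    """Mean of the difference wave inside the analysis window (ms)."""
    window = [v for v, t in zip(wave, times_ms) if start <= t < end]
    return sum(window) / len(window)

times  = [0, 100, 200, 250, 300, 400]
contra = [0.1, 0.2, -1.0, -1.4, -0.3, 0.0]
ipsi   = [0.1, 0.3, -0.2, -0.4, -0.1, 0.1]
d = difference_wave(contra, ipsi)
print(round(mean_amplitude(d, times), 2))  # mean N2pc amplitude in microvolts
```

A reliably negative mean in the ~200-300 ms window, as both age groups showed for lateral targets, is taken as the signature of target selection among distractors.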

  18. Animation of orthogonal texture patterns for vector field visualization.

    PubMed

    Bachthaler, Sven; Weiskopf, Daniel

    2008-01-01

    This paper introduces orthogonal vector field visualization on 2D manifolds: a representation by lines that are perpendicular to the input vector field. Line patterns are generated by line integral convolution (LIC). This visualization is combined with animation based on motion along the vector field. This decoupling of the line direction from the direction of animation allows us to choose the spatial frequencies along the direction of motion independently from the length scales along the LIC line patterns. Vision research indicates that local motion detectors are tuned to certain spatial frequencies of textures, and the above decoupling enables us to generate spatial frequencies optimized for motion perception. Furthermore, we introduce a combined visualization that employs orthogonal LIC patterns together with conventional, tangential streamline LIC patterns in order to benefit from the advantages of these two visualization approaches. In addition, a filtering process is described to achieve a consistent and temporally coherent animation of orthogonal vector field visualization. Different filter kernels and filter methods are compared and discussed in terms of visualization quality and speed. We present respective visualization algorithms for 2D planar vector fields and tangential vector fields on curved surfaces, and demonstrate that those algorithms lend themselves to efficient and interactive GPU implementations. PMID:18467751
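    The decoupling described above starts from a simple operation: the orthogonal line pattern is traced along a field in which every input vector is rotated by 90 degrees. A minimal sketch of that rotation (illustrative only, not the paper's GPU implementation):

```python
def perpendicular(field):
    """Rotate each 2-D vector by 90 degrees: (u, v) -> (-v, u).

    LIC traced along this rotated field yields line patterns that are
    everywhere perpendicular to the input vector field.
    """
    return [(-v, u) for u, v in field]

field = [(1.0, 0.0), (0.0, 2.0), (3.0, 4.0)]
ortho = perpendicular(field)
# Each rotated vector is orthogonal to the original: all dot products are 0.
dots = [u1 * u2 + v1 * v2 for (u1, v1), (u2, v2) in zip(field, ortho)]
```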

  19. The Efficiency of a Visual Skills Training Program on Visual Search Performance

    PubMed Central

    Krzepota, Justyna; Zwierko, Teresa; Puchalska-Niedbał, Lidia; Markiewicz, Mikołaj; Florkiewicz, Beata; Lubiński, Wojciech

    2015-01-01

    In this study, we conducted an experiment analyzing the possibility of developing visual skills through specifically targeted visual search training. The aim of our study was to investigate whether, for how long, and to what extent a training program for visual functions could improve visual search. The study involved 24 healthy students from Szczecin University who were divided into two groups: experimental (12) and control (12). In addition to the regular sports and recreational activities of the curriculum, the subjects of the experimental group also participated in an 8-week visual function training program, 3 times a week for 45 min. The Signal Test of the Vienna Test System was performed four times: before entering the study, after the first 4 weeks of the experiment, immediately after its completion, and 4 weeks after the study terminated. The results of this experiment showed that the 8-week perceptual training program significantly differentiated visual detection time between groups. For changes in visual detection time, the first factor, Group, was significant as a main effect (F(1,22)=6.49, p<0.05), as was the second factor, Training (F(3,66)=5.06, p<0.01). The interaction between the two factors (Group vs. Training) was F(3,66)=6.82 (p<0.001). Similarly, for the number of correct reactions, there was a main effect of the Group factor (F(1,22)=23.40, p<0.001), a main effect of the Training factor (F(3,66)=11.60, p<0.001), and a significant interaction between factors (Group vs. Training) (F(3,66)=10.33, p<0.001). Our study suggests that an 8-week visual function training program can improve visual search performance. PMID:26240666

  20. Visual Object Pattern Separation Varies in Older Adults

    ERIC Educational Resources Information Center

    Holden, Heather M.; Toner, Chelsea; Pirogovsky, Eva; Kirwan, C. Brock; Gilbert, Paul E.

    2013-01-01

    Young and nondemented older adults completed a visual object continuous recognition memory task in which some stimuli (lures) were similar but not identical to previously presented objects. The lures were hypothesized to result in increased interference and increased pattern separation demand. To examine variability in object pattern separation…

  1. Visual working memory simultaneously guides facilitation and inhibition during visual search.

    PubMed

    Dube, Blaire; Basciano, April; Emrich, Stephen M; Al-Aidroos, Naseem

    2016-07-01

    During visual search, visual working memory (VWM) supports the guidance of attention in two ways: It stores the identity of the search target, facilitating the selection of matching stimuli in the search array, and it maintains a record of the distractors processed during search so that they can be inhibited. In two experiments, we investigated whether the full contents of VWM can be used to support both of these abilities simultaneously. In Experiment 1, participants completed a preview search task in which (a) a subset of search distractors appeared before the remainder of the search items, affording participants the opportunity to inhibit them, and (b) the search target varied from trial to trial, requiring the search target template to be maintained in VWM. We observed the established signature of VWM-based inhibition-reduced ability to ignore previewed distractors when the number of distractors exceeds VWM's capacity-suggesting that VWM can serve this role while also representing the target template. In Experiment 2, we replicated Experiment 1, but added to the search displays a singleton distractor that sometimes matched the color (a task-irrelevant feature) of the search target, to evaluate capture. We again observed the signature of VWM-based preview inhibition along with attentional capture by (and, thus, facilitation of) singletons matching the target template. These findings indicate that more than one VWM representation can bias attention at a time, and that these representations can separately affect selection through either facilitation or inhibition, placing constraints on existing models of the VWM-based guidance of attention. PMID:27055458

  2. Sequential pattern data mining and visualization

    DOEpatents

    Wong, Pak Chung; Jurrus, Elizabeth R.; Cowley, Wendy E.; Foote, Harlan P.; Thomas, James J.

    2011-12-06

    One or more processors (22) are operated to extract a number of different event identifiers therefrom. These processors (22) are further operable to determine a number of display locations, each representative of one of the different identifiers and a corresponding time. The display locations are grouped into sets, each corresponding to a different one of several event sequences (330a, 330b, 330c, 330d, 330e). An output is generated corresponding to a visualization (320) of the event sequences (330a, 330b, 330c, 330d, 330e).

  3. Sequential pattern data mining and visualization

    DOEpatents

    Wong, Pak Chung; Jurrus, Elizabeth R.; Cowley, Wendy E.; Foote, Harlan P.; Thomas, James J.

    2009-05-26

    One or more processors (22) are operated to extract a number of different event identifiers therefrom. These processors (22) are further operable to determine a number of display locations, each representative of one of the different identifiers and a corresponding time. The display locations are grouped into sets, each corresponding to a different one of several event sequences (330a, 330b, 330c, 330d, 330e). An output is generated corresponding to a visualization (320) of the event sequences (330a, 330b, 330c, 330d, 330e).

  4. Impact of patient photos on visual search during radiograph interpretation

    NASA Astrophysics Data System (ADS)

    Krupinski, Elizabeth A.; Applegate, Kimberly; DeSimone, Ariadne; Chung, Alex; Tridandanpani, Srini

    2016-03-01

    Evidence shows that including patient photographs during interpretation may increase detection of mislabeled medical imaging studies. This study examined how the inclusion of photos impacts visual search. Ten radiologists viewed 21 chest radiographs with and without a photo of the patient while their visual search was recorded. Their task was to note tube/line placement. Eye-tracking data revealed that the presence of the photo reduced the number of fixations and total dwell time on the chest image as a result of periodically looking at the photo. Average preference for having photos was 6.10 on a 0-10 scale, and the neck and chest were the preferred photo areas.

  5. Perspective: n-type oxide thermoelectrics via visual search strategies

    NASA Astrophysics Data System (ADS)

    Xing, Guangzong; Sun, Jifeng; Ong, Khuong P.; Fan, Xiaofeng; Zheng, Weitao; Singh, David J.

    2016-05-01

    We discuss and present search strategies for finding new thermoelectric compositions based on first principles electronic structure and transport calculations. We illustrate them by application to a search for potential n-type oxide thermoelectric materials. This includes a screen based on visualization of electronic energy isosurfaces. We report compounds that show potential as thermoelectric materials along with detailed properties, including SrTiO3, which is a known thermoelectric, and appropriately doped KNbO3 and rutile TiO2.

  6. Information-Limited Parallel Processing in Difficult Heterogeneous Covert Visual Search

    ERIC Educational Resources Information Center

    Dosher, Barbara Anne; Han, Songmei; Lu, Zhong-Lin

    2010-01-01

    Difficult visual search is often attributed to time-limited serial attention operations, although neural computations in the early visual system are parallel. Using probabilistic search models (Dosher, Han, & Lu, 2004) and a full time-course analysis of the dynamics of covert visual search, we distinguish unlimited capacity parallel versus serial…

  7. Memory for Where, but Not What, Is Used during Visual Search

    ERIC Educational Resources Information Center

    Beck, Melissa R.; Peterson, Matthew S.; Vomela, Miroslava

    2006-01-01

    Although the role of memory in visual search is debatable, most researchers agree with a limited-capacity model of memory in visual search. The authors demonstrate the role of memory by replicating previous findings showing that visual search is biased away from old items (previously examined items) and toward new items (nonexamined items).…

  8. Similarity preserving snippet-based visualization of web search results.

    PubMed

    Gomez-Nieto, Erick; San Roman, Frizzi; Pagliosa, Paulo; Casaca, Wallace; Helou, Elias S; de Oliveira, Maria Cristina F; Nonato, Luis Gustavo

    2014-03-01

    Internet users are very familiar with the results of a search query displayed as a ranked list of snippets. Each textual snippet shows a content summary of the referred document (or webpage) and a link to it. This display has many advantages: for example, it affords easy navigation and is straightforward to interpret. Nonetheless, any user of search engines has likely experienced some disappointment with this metaphor. Indeed, it has limitations in particular situations, as it fails to provide an overview of the document collection retrieved. Moreover, depending on the nature of the query--for example, it may be too general, ambiguous, or ill expressed--the desired information may be poorly ranked, or the results may span varied topics. Several search tasks would be easier if users were shown an overview of the returned documents, organized so as to reflect how related they are, content-wise. We propose a visualization technique to display the results of web queries aimed at overcoming such limitations. It combines the neighborhood-preservation capability of multidimensional projections with the familiar snippet-based representation by employing a multidimensional projection to derive two-dimensional layouts of the query search results that preserve text similarity relations, or neighborhoods. Similarity is computed by applying the cosine similarity over a "bag-of-words" vector representation of the collection built from the snippets. If the snippets are displayed directly according to the derived layout, they will overlap considerably, producing a poor visualization. We overcome this problem by defining an energy functional that considers both the overlap among snippets and the preservation of the neighborhood structure as given in the projected layout. Minimizing this energy functional provides a neighborhood-preserving two-dimensional arrangement of the textual snippets with minimum overlap. The resulting visualization conveys both a global
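    The cosine-over-bag-of-words step described in this abstract is standard and easy to sketch; the helper below and its toy snippets are invented for illustration and are not the authors' code.

```python
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

snippets = [
    "visual search of web results",
    "web search results visualization",
    "fractal analysis of scan paths",
]
# Bag-of-words vectors: one term-count Counter per snippet.
vectors = [Counter(s.split()) for s in snippets]
sim = cosine(vectors[0], vectors[1])  # high: snippets share several terms
```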

  9. Functional Connectivity Patterns of Visual Cortex Reflect its Anatomical Organization.

    PubMed

    Genç, Erhan; Schölvinck, Marieke Louise; Bergmann, Johanna; Singer, Wolf; Kohler, Axel

    2016-09-01

    The brain is continuously active, even without external input or task demands. This so-called resting-state activity exhibits a highly specific spatio-temporal organization. However, how exactly these activity patterns map onto the anatomical and functional architecture of the brain is still unclear. We addressed this question in the human visual cortex. We determined the representation of the visual field in visual cortical areas of 44 subjects using fMRI and examined resting-state correlations between these areas along the visual hierarchy, their dorsal and ventral segments, and between subregions representing foveal versus peripheral parts of the visual field. We found that retinotopically corresponding regions, particularly those representing peripheral visual fields, exhibit strong correlations. V1 displayed strong internal correlations between its dorsal and ventral segments and the highest correlation with LGN compared with other visual areas. In contrast, V2 and V3 showed weaker correlations with LGN and stronger between-area correlations, as well as with V4 and hMT+. Interhemispheric correlations between homologous areas were especially strong. These correlation patterns were robust over time and only marginally altered under task conditions. These results indicate that resting-state fMRI activity closely reflects the anatomical organization of the visual cortex both with respect to retinotopy and hierarchy. PMID:26271111

  10. Reading and Visual Search: A Developmental Study in Normal Children

    PubMed Central

    Seassau, Magali; Bucci, Maria-Pia

    2013-01-01

    Studies dealing with developmental aspects of binocular eye movement behaviour during reading are scarce. In this study we have explored binocular strategies during reading and during visual search tasks in a large population of normal young readers. Binocular eye movements were recorded using an infrared video-oculography system in sixty-nine children (aged 6 to 15) and in a group of 10 adults (aged 24 to 39). The main findings are (i) in both tasks the number of progressive saccades (to the right) and regressive saccades (to the left) decreases with age; (ii) the amplitude of progressive saccades increases with age in the reading task only; (iii) in both tasks, the duration of fixations as well as the total duration of the task decreases with age; (iv) in both tasks, the amplitude of disconjugacy recorded during and after the saccades decreases with age; (v) children are significantly more accurate in reading than in visual search after 10 years of age. Data reported here confirms and expands previous studies on children's reading. The new finding is that younger children show poorer coordination than adults, both while reading and while performing a visual search task. Both reading skills and binocular saccades coordination improve with age and children reach a similar level to adults after the age of 10. This finding is most likely related to the fact that learning mechanisms responsible for saccade yoking develop during childhood until adolescence. PMID:23894627

  11. Intertrial Temporal Contextual Cuing: Association across Successive Visual Search Trials Guides Spatial Attention

    ERIC Educational Resources Information Center

    Ono, Fuminori; Jiang, Yuhong; Kawahara, Jun-ichiro

    2005-01-01

    Contextual cuing refers to the facilitation of performance in visual search due to the repetition of the same displays. Whereas previous studies have focused on contextual cuing within single-search trials, this study tested whether 1 trial facilitates visual search of the next trial. Participants searched for a T among Ls. In the training phase,…

  12. Top-down guidance in visual search for facial expressions.

    PubMed

    Hahn, Sowon; Gronlund, Scott D

    2007-02-01

    Using a visual search paradigm, we investigated how a top-down goal modified attentional bias for threatening facial expressions. In two experiments, participants searched for a facial expression either based on stimulus characteristics or a top-down goal. In Experiment 1 participants searched for a discrepant facial expression in a homogenous crowd of faces. Consistent with previous research, we obtained a shallower response time (RT) slope when the target face was angry than when it was happy. In Experiment 2, participants searched for a specific type of facial expression (allowing a top-down goal). When the display included a target, we found a shallower RT slope for the angry than for the happy face search. However, when an angry or happy face was present in the display in opposition to the task goal, we obtained equivalent RT slopes, suggesting that the mere presence of an angry face in opposition to the task goal did not support the well-known angry face superiority effect. Furthermore, RT distribution analyses supported the special status of an angry face only when it was combined with the top-down goal. On the basis of these results, we suggest that a threatening facial expression may guide attention as a high-priority stimulus in the absence of a specific goal; however, in the presence of a specific goal, the efficiency of facial expression search is dependent on the combined influence of a top-down goal and the stimulus characteristics. PMID:17546747

  13. Image pattern recognition supporting interactive analysis and graphical visualization

    NASA Technical Reports Server (NTRS)

    Coggins, James M.

    1992-01-01

    Image Pattern Recognition attempts to infer properties of the world from image data. Such capabilities are crucial for making measurements from satellite or telescope images related to Earth and space science problems. Such measurements can be the required product itself, or the measurements can be used as input to a computer graphics system for visualization purposes. At present, the field of image pattern recognition lacks a unified scientific structure for developing and evaluating image pattern recognition applications. The overall goal of this project is to begin developing such a structure. This report summarizes results of a 3-year research effort in image pattern recognition addressing the following three principal aims: (1) to create a software foundation for the research and identify image pattern recognition problems in Earth and space science; (2) to develop image measurement operations based on Artificial Visual Systems; and (3) to develop multiscale image descriptions for use in interactive image analysis.

  14. Automatic guidance of attention during real-world visual search

    PubMed Central

    Seidl-Rathkopf, Katharina N.; Turk-Browne, Nicholas B.; Kastner, Sabine

    2015-01-01

    Looking for objects in cluttered natural environments is a frequent task in everyday life. This process can be difficult, as the features, locations, and times of appearance of relevant objects are often not known in advance. A mechanism by which attention is automatically biased toward information that is potentially relevant may thus be helpful. Here we tested for such a mechanism across five experiments by engaging participants in real-world visual search and then assessing attentional capture for information that was related to the search set but was otherwise irrelevant. Isolated objects captured attention while preparing to search for objects from the same category embedded in a scene, as revealed by lower detection performance (Experiment 1A). This capture effect was driven by a central processing bottleneck rather than the withdrawal of spatial attention (Experiment 1B), occurred automatically even in a secondary task (Experiment 2A), and reflected enhancement of matching information rather than suppression of non-matching information (Experiment 2B). Finally, attentional capture extended to objects that were semantically associated with the target category (Experiment 3). We conclude that attention is efficiently drawn towards a wide range of information that may be relevant for an upcoming real-world visual search. This mechanism may be adaptive, allowing us to find information useful for our behavioral goals in the face of uncertainty. PMID:25898897

  15. Point-of-gaze analysis reveals visual search strategies

    NASA Astrophysics Data System (ADS)

    Rajashekar, Umesh; Cormack, Lawrence K.; Bovik, Alan C.

    2004-06-01

    Seemingly complex tasks like visual search can be analyzed using a cognition-free, bottom-up framework. We sought to reveal strategies used by observers in visual search tasks using accurate eye tracking and image analysis at point of gaze. Observers were instructed to search for simple geometric targets embedded in 1/f noise. By analyzing the stimulus at the point of gaze using the classification image (CI) paradigm, we discovered CI templates that indeed resembled the target. No such structure emerged for a random-searcher. We demonstrate, qualitatively and quantitatively, that these CI templates are useful in predicting stimulus regions that draw human fixations in search tasks. Filtering a 1/f noise stimulus with a CI results in a 'fixation prediction map'. A qualitative evaluation of the prediction was obtained by overlaying k-means clusters of observers' fixations on the prediction map. The fixations clustered around the local maxima in the prediction map. To obtain a quantitative comparison, we computed the Kullback-Leibler distance between the recorded fixations and the prediction. Using random-searcher CIs in Monte Carlo simulations, a distribution of this distance was obtained. The z-scores for the human CIs and the original target were -9.70 and -9.37 respectively indicating that even in noisy stimuli, observers deploy their fixations efficiently to likely targets rather than casting them randomly hoping to fortuitously find the target.
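    The quantitative comparison above uses the Kullback-Leibler distance between recorded fixations and the prediction map. A minimal sketch of that comparison, with invented toy data (not the study's stimuli or maps):

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) for two discrete distributions given as count/weight lists."""
    zp, zq = sum(p), sum(q)
    return sum((pi / zp) * math.log((pi / zp + eps) / (qi / zq + eps))
               for pi, qi in zip(p, q) if pi > 0)

# Toy 4-bin example: observed fixation counts per image region versus a
# model "fixation prediction map" over the same regions.
fixations = [12, 3, 1, 4]
prediction = [0.55, 0.15, 0.05, 0.25]

d_model = kl_divergence(fixations, prediction)
d_uniform = kl_divergence(fixations, [0.25] * 4)
# A better-matched prediction map yields a smaller divergence.
```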

  16. MotionFlow: Visual Abstraction and Aggregation of Sequential Patterns in Human Motion Tracking Data.

    PubMed

    Jang, Sujin; Elmqvist, Niklas; Ramani, Karthik

    2016-01-01

    Pattern analysis of human motions, which is useful in many research areas, requires understanding and comparison of different styles of motion patterns. However, working with human motion tracking data to support such analysis poses great challenges. In this paper, we propose MotionFlow, a visual analytics system that provides an effective overview of various motion patterns based on an interactive flow visualization. This visualization formulates a motion sequence as transitions between static poses, and aggregates these sequences into a tree diagram to construct a set of motion patterns. The system also allows the users to directly reflect the context of data and their perception of pose similarities in generating representative pose states. We provide local and global controls over the partition-based clustering process. To support the users in organizing unstructured motion data into pattern groups, we designed a set of interactions that enables searching for similar motion sequences from the data, detailed exploration of data subsets, and creating and modifying the group of motion patterns. To evaluate the usability of MotionFlow, we conducted a user study with six researchers with expertise in gesture-based interaction design. They used MotionFlow to explore and organize unstructured motion tracking data. Results show that the researchers were able to easily learn how to use MotionFlow, and the system effectively supported their pattern analysis activities, including leveraging their perception and domain knowledge. PMID:26529685
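    The aggregation step described above (formulating motion sequences as pose transitions and merging them into a tree so that shared prefixes become shared branches) behaves like a prefix tree. A hedged sketch with invented pose names, not MotionFlow's actual API:

```python
def build_pattern_tree(sequences):
    """Merge pose sequences into a nested-dict prefix tree."""
    root = {}
    for seq in sequences:
        node = root
        for pose in seq:
            node = node.setdefault(pose, {})
    return root

sequences = [
    ["stand", "crouch", "jump"],
    ["stand", "crouch", "roll"],
    ["stand", "wave"],
]
tree = build_pattern_tree(sequences)
# "stand" becomes a single branch shared by all three sequences.
```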

  17. Visual search strategies and decision making in baseball batting.

    PubMed

    Takeuchi, Takayuki; Inomata, Kimihiro

    2009-06-01

    The goal was to examine the differences in visual search strategies between expert and nonexpert baseball batters during the preparatory phase of a pitcher's pitching and accuracy and timing of swing judgments during the ball's trajectory. 14 members of a college team (Expert group), and graduate and college students (Nonexpert group), were asked to observe 10 pitches thrown by a pitcher and respond by pushing a button attached to a bat when they thought the bat should be swung to meet the ball (swing judgment). Their eye movements, accuracy, and the timing of the swing judgment were measured. The Expert group shifted their point of observation from the proximal part of the body such as the head, chest, or trunk of the pitcher to the pitching arm and the release point before the pitcher released a ball, while the gaze point of the Nonexpert group visually focused on the head and the face. The accuracy in swing judgments of the Expert group was significantly higher, and the timing of their swing judgments was significantly earlier. Expert baseball batters used visual search strategies to gaze at specific cues (the pitching arm of the pitcher) and were more accurate and relatively quicker at decision making than Nonexpert batters. PMID:19725330

  18. Perceptual similarity of visual patterns predicts dynamic neural activation patterns measured with MEG.

    PubMed

    Wardle, Susan G; Kriegeskorte, Nikolaus; Grootswagers, Tijl; Khaligh-Razavi, Seyed-Mahdi; Carlson, Thomas A

    2016-05-15

    Perceptual similarity is a cognitive judgment that represents the end-stage of a complex cascade of hierarchical processing throughout visual cortex. Previous studies have shown a correspondence between the similarity of coarse-scale fMRI activation patterns and the perceived similarity of visual stimuli, suggesting that visual objects that appear similar also share similar underlying patterns of neural activation. Here we explore the temporal relationship between the human brain's time-varying representation of visual patterns and behavioral judgments of perceptual similarity. The visual stimuli were abstract patterns constructed from identical perceptual units (oriented Gabor patches) so that each pattern had a unique global form or perceptual 'Gestalt'. The visual stimuli were decodable from evoked neural activation patterns measured with magnetoencephalography (MEG), however, stimuli differed in the similarity of their neural representation as estimated by differences in decodability. Early after stimulus onset (from 50ms), a model based on retinotopic organization predicted the representational similarity of the visual stimuli. Following the peak correlation between the retinotopic model and neural data at 80ms, the neural representations quickly evolved so that retinotopy no longer provided a sufficient account of the brain's time-varying representation of the stimuli. Overall the strongest predictor of the brain's representation was a model based on human judgments of perceptual similarity, which reached the limits of the maximum correlation with the neural data defined by the 'noise ceiling'. Our results show that large-scale brain activation patterns contain a neural signature for the perceptual Gestalt of composite visual features, and demonstrate a strong correspondence between perception and complex patterns of brain activity. PMID:26899210

  19. Differences between fovea and parafovea in visual search processes.

    PubMed

    Fiorentini, A

    1989-01-01

    Visual objects that differ from the surroundings for some simple feature, e.g. colour or line orientation, or for some shape parameters ("textons", Julesz, 1986) are believed to be detected in parallel from different locations in the visual field without requiring a serial search process. Tachistoscopic presentations of textures were used to compare the time course of search processes in the fovea and parafovea. Detection of targets differing in a simple feature (line orientation or line crossings) from the surrounding elements was found to have a time course typical of parallel processing for coarse textures extending into the parafovea. For fine textures confined to the fovea, the time course was suggestive of a serial search process even for these textons. These findings are consistent with the hypothesis that parallel processing of lines or crossings is subserved by a coarse network of detectors with relatively large receptive fields and low resolution. For the counting of coloured spots in a background of a different colour, the parafovea has the same time requirements as the fovea. PMID:2617862

  20. Recognizing patterns of visual field loss using unsupervised machine learning

    NASA Astrophysics Data System (ADS)

    Yousefi, Siamak; Goldbaum, Michael H.; Zangwill, Linda M.; Medeiros, Felipe A.; Bowd, Christopher

    2014-03-01

    Glaucoma is a potentially blinding optic neuropathy that results in a decrease in visual sensitivity. Visual field abnormalities (decreased visual sensitivity on psychophysical tests) are the primary means of glaucoma diagnosis. One form of visual field testing is Frequency Doubling Technology (FDT), which tests sensitivity at 52 points within the visual field. Like other psychophysical tests used in clinical practice, FDT results yield specific patterns of defect indicative of the disease. We used a Gaussian Mixture Model with Expectation Maximization (GEM), where EM is used to estimate the model parameters, to automatically separate FDT data into clusters of normal and abnormal eyes. Principal component analysis (PCA) was used to decompose each cluster into different axes (patterns). FDT measurements were obtained from 1,190 eyes with normal FDT results and 786 eyes with abnormal (i.e., glaucomatous) FDT results, recruited from a university-based, longitudinal, multi-center, clinical study on glaucoma. The GEM input was the 52-point FDT threshold sensitivities for all eyes. The optimal GEM model separated the FDT fields into 3 clusters. Cluster 1 contained 94% normal fields (94% specificity), and clusters 2 and 3 combined contained 77% abnormal fields (77% sensitivity). For clusters 1, 2, and 3 the optimal number of PCA-identified axes was 2, 2, and 5, respectively. GEM with PCA successfully separated FDT fields from healthy and glaucoma eyes and identified familiar glaucomatous patterns of loss.
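    The GEM approach pairs a Gaussian mixture model with EM estimation. As a minimal illustration of that estimation principle (1-D synthetic data, two components; not the study's 52-dimensional FDT model):

```python
import math, random

def gauss_pdf(x, mu, var):
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def em_fit(data, iters=50):
    """Fit a 2-component 1-D Gaussian mixture by expectation maximization."""
    mu = [min(data), max(data)]  # crude initialization
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        resp = []
        for x in data:
            w = [pi[k] * gauss_pdf(x, mu[k], var[k]) for k in range(2)]
            s = sum(w)
            resp.append([wk / s for wk in w])
        # M-step: re-estimate mixing weights, means, and variances
        for k in range(2):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2
                         for r, x in zip(resp, data)) / nk + 1e-6
    return mu, var, pi

random.seed(0)
data = [random.gauss(0, 1) for _ in range(200)] + \
       [random.gauss(6, 1) for _ in range(200)]
mu, var, pi = em_fit(data)  # means recovered near 0 and 6
```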

  1. Fractal Analysis of Radiologists Visual Scanning Pattern in Screening Mammography

    SciTech Connect

    Alamudun, Folami T; Yoon, Hong-Jun; Hudson, Kathy; Morin-Ducote, Garnetta; Tourassi, Georgia

    2015-01-01

    Several investigators have examined radiologists' visual scanning patterns with respect to features such as total time examining a case, time to initially hit true lesions, number of hits, etc. The purpose of this study was to examine the complexity of radiologists' visual scanning patterns when viewing 4-view mammographic cases, as they typically do in clinical practice. Gaze data were collected from 10 readers (3 breast imaging experts and 7 radiology residents) while reviewing 100 screening mammograms (24 normal, 26 benign, 50 malignant). The radiologists' scanpaths across the 4 mammographic views were mapped to a single 2-D image plane. Then, fractal analysis was applied to the derived scanpaths using the box-counting method. For each case, the complexity of each radiologist's scanpath was estimated using fractal dimension. The association between gaze complexity, case pathology, breast density, and radiologist experience was evaluated using a 3-factor fixed-effects ANOVA. The ANOVA showed that case pathology, breast density, and experience level are all independent predictors of visual scanning pattern complexity. Visual scanning patterns differ significantly for benign and malignant cases compared with normal cases, as well as with changes in breast parenchyma density.
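    The box-counting method estimates fractal dimension as the slope of log(occupied boxes) against log(1/box size). A small sketch on an invented scanpath, not the study's gaze data:

```python
import math

def box_count(points, size):
    """Number of size x size boxes occupied by at least one point."""
    return len({(int(x // size), int(y // size)) for x, y in points})

def fractal_dimension(points, sizes=(1, 2, 4, 8, 16)):
    """Least-squares slope of log N(s) against log(1/s) over several box sizes."""
    xs = [math.log(1.0 / s) for s in sizes]
    ys = [math.log(box_count(points, s)) for s in sizes]
    n = len(sizes)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
           sum((x - mx) ** 2 for x in xs)

# A straight-line "scanpath" should have dimension close to 1.
line = [(i, i) for i in range(256)]
dim = fractal_dimension(line)
```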

  2. Visual Object Pattern Separation Deficits in Nondemented Older Adults

    ERIC Educational Resources Information Center

    Toner, Chelsea K.; Pirogovsky, Eva; Kirwan, C. Brock; Gilbert, Paul E.

    2009-01-01

    Young and nondemented older adults were tested on a continuous recognition memory task requiring visual pattern separation. During the task, some objects were repeated across trials and some objects, referred to as lures, were presented that were similar to previously presented objects. The lures resulted in increased interference and an increased…

  3. Discovering Visual Scanning Patterns in a Computerized Cancellation Test

    ERIC Educational Resources Information Center

    Huang, Ho-Chuan; Wang, Tsui-Ying

    2013-01-01

    The purpose of this study was to develop an attention sequential mining mechanism for investigating the sequential patterns of children's visual scanning process in a computerized cancellation test. Participants had to locate and cancel the target amongst other non-targets in a structured form, and a random form with Chinese stimuli. Twenty-three…

  4. Visual tracking method based on cuckoo search algorithm

    NASA Astrophysics Data System (ADS)

    Gao, Ming-Liang; Yin, Li-Ju; Zou, Guo-Feng; Li, Hai-Tao; Liu, Wei

    2015-07-01

    Cuckoo search (CS) is a new meta-heuristic optimization algorithm based on the obligate brood-parasitic behavior of some cuckoo species in combination with the Lévy flight behavior of some birds and fruit flies. It has been found to be efficient in solving global optimization problems. An application of CS to the visual tracking problem is presented. The relationship between optimization and visual tracking is comparatively studied, and the sensitivity and adjustment of the CS parameters in the tracking system are experimentally studied. To demonstrate the tracking ability of a CS-based tracker, a comparative study of the tracking accuracy and speed of the CS-based tracker against six state-of-the-art trackers (particle filter, mean-shift, PSO, ensemble tracker, fragments tracker, and compressive tracker) is presented. Comparative results show that the CS-based tracker outperforms the other trackers.
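
As a rough illustration of the optimizer itself (not of the tracker), here is a minimal cuckoo-search sketch in the style of Yang and Deb's original formulation, minimizing a toy quadratic cost. In the tracker described above, the cost function would instead score candidate target states against the current frame. The step scale, nest count, and abandonment fraction below are conventional illustrative values.

```python
# Minimal cuckoo-search sketch: Levy-flight walks plus nest abandonment.
import numpy as np
from math import gamma, sin, pi

def levy_step(rng, dim, beta=1.5):
    """Draw a Levy-flight step via Mantegna's algorithm."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0, sigma, dim)
    v = rng.normal(0, 1, dim)
    return u / np.abs(v) ** (1 / beta)

def cuckoo_search(cost, dim=2, n_nests=15, pa=0.25, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    nests = rng.uniform(-5, 5, (n_nests, dim))
    fitness = np.array([cost(x) for x in nests])
    for _ in range(iters):
        best = nests[fitness.argmin()]
        # Global walk: Levy flights biased toward the current best nest.
        for i in range(n_nests):
            trial = nests[i] + 0.01 * levy_step(rng, dim) * (nests[i] - best)
            f = cost(trial)
            if f < fitness[i]:  # greedy acceptance
                nests[i], fitness[i] = trial, f
        # Abandon a fraction pa of the worst nests (brood parasitism).
        worst = fitness.argsort()[-int(pa * n_nests):]
        nests[worst] = rng.uniform(-5, 5, (len(worst), dim))
        fitness[worst] = [cost(x) for x in nests[worst]]
    return nests[fitness.argmin()], fitness.min()

best_x, best_f = cuckoo_search(lambda x: np.sum(x ** 2))
print(best_x, best_f)
```

The mix of heavy-tailed global jumps and greedy local acceptance is what makes CS competitive with PSO-style swarms on multimodal costs.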

  5. The influence of cast shadows on visual search.

    PubMed

    Rensink, Ronald A; Cavanagh, Patrick

    2004-01-01

    We show that cast shadows can have a significant influence on the speed of visual search. In particular, we find that search based on the shape of a region is affected when the region is darker than the background and corresponds to a shadow formed by lighting from above. Results support the proposal that an early-level system rapidly identifies regions as shadows and then discounts them, making their shapes more difficult to access. Several constraints used by this system are mapped out, including constraints on the luminance and texture of the shadow region, and on the nature of the item casting the shadow. Among other things, this system is found to distinguish between line elements (items containing only edges) and surface elements (items containing visible surfaces), with only the latter deemed capable of casting a shadow. PMID:15693675

  6. Response Selection in Visual Search: The Influence of Response Compatibility of Nontargets

    ERIC Educational Resources Information Center

    Starreveld, Peter A.; Theeuwes, Jan; Mortier, Karen

    2004-01-01

    The authors used visual search tasks in which components of the classic flanker task (B. A. Eriksen & C. W. Eriksen, 1974) were introduced. In several experiments the authors obtained evidence of parallel search for a target among distractor elements. Therefore, 2-stage models of visual search predict no effect of the identity of those…

  7. "Hot" Facilitation of "Cool" Processing: Emotional Distraction Can Enhance Priming of Visual Search

    ERIC Educational Resources Information Center

    Kristjansson, Arni; Oladottir, Berglind; Most, Steven B.

    2013-01-01

    Emotional stimuli often capture attention and disrupt effortful cognitive processing. However, cognitive processes vary in the degree to which they require effort. We investigated the impact of emotional pictures on visual search and on automatic priming of search. Observers performed visual search after task-irrelevant neutral or emotionally…

  8. Early visual cortical responses produced by checkerboard pattern stimulation.

    PubMed

    Shigihara, Yoshihito; Hoshi, Hideyuki; Zeki, Semir

    2016-07-01

    Visual evoked potentials have traditionally been triggered with flash or reversing-checkerboard stimuli and recorded with electroencephalographic techniques, largely but not exclusively in clinical or clinically related settings. They have been crucial in determining the healthy functioning or otherwise of the visual pathways up to and including the cerebral cortex. They have typically given early response latencies of 100 ms, the source of which has been attributed to V1, with the prestriate cortex being secondarily activated somewhat later. On the other hand, magnetoencephalographic studies using stimuli better tailored to the physiology of individual, specialized visual areas have given early latencies of <50 ms, with the sources localized in both striate (V1) and prestriate cortex. In this study, we used the reversing checkerboard pattern as a stimulus and recorded cortical visual evoked magnetic fields with magnetoencephalography, to establish whether very early responses can be source-estimated in both striate and prestriate cortex, since such a demonstration would enhance considerably the power of this classical approach in clinical investigations. Our results show that cortical responses evoked by checkerboard patterns can be detected before 50 ms post-stimulus onset and that their sources can be estimated in both striate and prestriate cortex, suggesting a strong parallel input from the sub-cortex to both striate and prestriate divisions of the visual cortex. PMID:27083528

  9. Retinal waves coordinate patterned activity throughout the developing visual system

    PubMed Central

    Ackman, James B.; Burbridge, Timothy J.; Crair, Michael C.

    2014-01-01

    Summary The morphologic and functional development of the vertebrate nervous system is initially governed by genetic factors and subsequently refined by neuronal activity. However, fundamental features of the nervous system emerge before sensory experience is possible. Thus, activity-dependent development occurring before the onset of experience must be driven by spontaneous activity, but the origin and nature of this activity in vivo remain largely untested. Here we use optical methods to demonstrate in live neonatal mice that waves of spontaneous retinal activity are present and propagate throughout the entire visual system before eye opening. This patterned activity encompassed the visual field, relied on cholinergic neurotransmission, preferentially initiated in the binocular retina, and exhibited spatiotemporal correlations between the two hemispheres. Retinal waves were the primary source of activity in the midbrain and primary visual cortex, but only modulated ongoing activity in secondary visual areas. Thus, spontaneous retinal activity is transmitted through the entire visual system and carries patterned information capable of guiding the activity-dependent development of complex intra- and inter-hemispheric circuits before the onset of vision. PMID:23060192

  10. Perceptual animacy: visual search for chasing objects among distractors.

    PubMed

    Meyerhoff, Hauke S; Schwan, Stephan; Huff, Markus

    2014-04-01

    Anthropomorphic interactions such as chasing are an important cue to perceptual animacy. A recent study showed that the detection of interacting (e.g., chasing) stimuli follows the regularities of a serial visual search. In the present set of experiments, we explore several variants of the chasing detection paradigm in order to investigate how human observers recognize chasing objects among distractors although there are no distinctive visual features attached to individual objects. Our results indicate that even a spatially separated presentation of potentially chasing pairs of objects requires attention at least for object selection (Experiment 1). In the chasing detection framework, a chase among nonchases is easier to find than a nonchase among chases, suggesting that cues indicating the presence of a chase prevail during chasing detection (Experiment 2). Spatial proximity is one of these cues toward the presence of a chase because decreasing the distance between chasing objects leads to shorter detection latencies (Experiment 3). Finally, our results indicate that single objects provide the basis of chasing detection rather than pairs of objects. Participants would rather search for one object that is approaching any other object in the display than for a pair of objects involved in a chase (Experiments 4 and 5). Taken together, these results suggest that participants recognize a chase by detecting one object that is approaching any of the other objects in the display. PMID:24294872

  11. Electroencephalogram assessment of mental fatigue in visual search.

    PubMed

    Fan, Xiaoli; Zhou, Qianxiang; Liu, Zhongqi; Xie, Fang

    2015-01-01

    Mental fatigue is considered a contributing factor in numerous road accidents and various medical conditions, and efficiency and performance can be impaired during fatigue. Hence, determining how to evaluate mental fatigue is very important. In the present study, ten subjects performed a long-term visual search task with the electroencephalogram recorded, and self-assessment and reaction time (RT) were combined to verify that mental fatigue had been induced and were also used as confirmatory tests for the proposed measures. The changes in relative energy in four wavebands (δ, θ, α, and β), four ratio formulas [(α+θ)/β, α/β, (α+θ)/(α+β), and θ/β], and Shannon's entropy (SE) were compared and analyzed between the beginning and end of the task. The results showed a significant increase in alpha activity in the frontal, central, posterior temporal, parietal, and occipital lobes, and a dip in beta activity in the pre-frontal, inferior frontal, posterior temporal, and occipital lobes. The ratio formulas clearly increased in all of these brain regions except the temporal region, where only α/β changed markedly after the 60-min visual search task. SE significantly increased in the posterior temporal, parietal, and occipital lobes. These results demonstrate potential indicators for mental fatigue detection and evaluation, which can be applied in the future development of countermeasures to fatigue. PMID:26405908
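
The spectral measures above (relative waveband energy, the four ratio formulas, and Shannon's entropy of the spectrum) can be sketched with SciPy's Welch estimator. The signal below is synthetic and alpha-dominated; the band edges are conventional clinical values and are an assumption, not taken from the study.

```python
# Sketch of the fatigue indices on a synthetic, alpha-heavy EEG trace.
import numpy as np
from scipy.signal import welch

fs = 256
t = np.arange(0, 30, 1 / fs)
rng = np.random.default_rng(1)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.normal(size=t.size)  # 10 Hz + noise

freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)

def band_power(lo, hi):
    mask = (freqs >= lo) & (freqs < hi)
    return psd[mask].sum()

delta, theta = band_power(0.5, 4), band_power(4, 8)
alpha, beta = band_power(8, 13), band_power(13, 30)
total = delta + theta + alpha + beta
rel = {b: p / total for b, p in
       [("delta", delta), ("theta", theta), ("alpha", alpha), ("beta", beta)]}

ratios = {
    "(a+t)/b": (alpha + theta) / beta,
    "a/b": alpha / beta,
    "(a+t)/(a+b)": (alpha + theta) / (alpha + beta),
    "t/b": theta / beta,
}

p = psd / psd.sum()  # normalized spectrum as a probability distribution
shannon_entropy = -np.sum(p * np.log2(p + 1e-12))
print(rel, ratios, shannon_entropy)
```

Rising alpha and falling beta, as reported above, push all four ratios upward, which is why they serve as compact fatigue indices.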

  12. Enhanced Visual Search in Infancy Predicts Emerging Autism Symptoms.

    PubMed

    Gliga, Teodora; Bedford, Rachael; Charman, Tony; Johnson, Mark H

    2015-06-29

    In addition to core symptoms, i.e., social interaction and communication difficulties and restricted and repetitive behaviors, autism is also characterized by aspects of superior perception. One well-replicated finding is that of superior performance in visual search tasks, in which participants have to indicate the presence of an odd-one-out element among a number of foils. Whether these aspects of superior perception contribute to the emergence of core autism symptoms remains debated. Perceptual and social interaction atypicalities could reflect co-expressed but biologically independent pathologies, as suggested by a "fractionable" phenotype model of autism. A developmental test of this hypothesis is now made possible by longitudinal cohorts of infants at high risk, such as those of younger siblings of children with autism spectrum disorder (ASD). Around 20% of younger siblings are diagnosed with autism themselves, and up to another 30% manifest elevated levels of autism symptoms. We used eye tracking to measure spontaneous orienting to letter targets (O, S, V, and +) presented among distractors (the letter X; Figure 1). At 9 and 15 months, emerging autism symptoms were assessed using the Autism Observation Scale for Infants (AOSI), and at 2 years of age, they were assessed using the Autism Diagnostic Observation Schedule (ADOS). Enhanced visual search performance at 9 months predicted a higher level of autism symptoms at 15 months and at 2 years. Infant perceptual atypicalities are thus intrinsically linked to the emerging autism phenotype. PMID:26073135

  13. Pattern-visual evoked potentials in thinner abusers.

    PubMed

    Poblano, A; Lope Huerta, M; Martínez, J M; Falcón, H D

    1996-01-01

    Organic solvents cause injury to the lipids of neuronal and glial membranes. A well-known characteristic of workers exposed to thinner is optic neuropathy. We looked for neurophysiologic signs of visual damage in patients identified as thinner abusers. Pattern-reversal visual evoked potentials were recorded in 34 thinner-abusing patients and 30 controls. P-100 wave latency was found to be longer in abusers than in control subjects. The results show the possibility of central alterations in thinner abusers despite the absence of clinical symptoms. PMID:8987190

  14. Searching for the right word: Hybrid visual and memory search for words

    PubMed Central

    Boettcher, Sage E. P.; Wolfe, Jeremy M.

    2016-01-01

    In “Hybrid Search” (Wolfe, 2012), observers search through visual space for any of multiple targets held in memory. With photorealistic objects as stimuli, response times (RTs) increase linearly with the visual set size and logarithmically with the memory set size, even when over 100 items are committed to memory. It is well established that pictures of objects are particularly easy to memorize (Brady, Konkle, Alvarez, & Oliva, 2008). Would hybrid search performance be similar if the targets were words or phrases, where word order can be important and where the processes of memorization might be different? In Experiment One, observers memorized 2, 4, 8, or 16 words in 4 different blocks. After passing a memory test confirming memorization of the list, observers searched for these words in visual displays containing 2 to 16 words. Replicating Wolfe (2012), RTs increased linearly with the visual set size and logarithmically with the length of the word list. The word lists of Experiment One were random. In Experiment Two, words were drawn from phrases that observers reported knowing by heart (e.g., “London Bridge is falling down”). Observers were asked to provide four phrases, ranging in length from 2 words to a phrase of no less than 20 words (range 21–86). Words longer than 2 characters from the phrase constituted the target list. Distractor words were matched for length and frequency. Even with these strongly ordered lists, results again replicated the curvilinear function of memory set size seen in hybrid search. One might expect serial position effects, perhaps reducing RTs for the first (primacy) and/or last (recency) members of a list (Atkinson & Shiffrin, 1968; Murdock, 1962). Surprisingly, we found no reliable effects of word order. Thus, in “London Bridge is falling down”, “London” and “down” are found no faster than “falling”. PMID:25788035
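
The RT pattern reported here (linear in visual set size V, logarithmic in memory set size M) corresponds to a model of the form RT = a + b·V + c·log2(M), which can be fit by ordinary least squares. The data below are synthetic with illustrative coefficients; they are not the paper's estimates.

```python
# Fit RT = a + b*V + c*log2(M) to synthetic hybrid-search data by OLS.
import numpy as np

rng = np.random.default_rng(2)
V = np.array([2, 4, 8, 16] * 4)         # visual set sizes (items on screen)
M = np.repeat([2, 4, 8, 16], 4)         # memorized list lengths
# Synthetic mean RTs in ms: illustrative intercept/slopes plus noise.
rt = 500 + 30 * V + 120 * np.log2(M) + rng.normal(0, 10, V.size)

X = np.column_stack([np.ones_like(V, dtype=float), V, np.log2(M)])
(a, b, c), *_ = np.linalg.lstsq(X, rt, rcond=None)
print(round(a), round(b, 1), round(c, 1))
```

The logarithmic memory term is the signature result: doubling the memorized list adds a constant c to RT, rather than multiplying the search cost.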

  15. Object-based auditory facilitation of visual search for pictures and words with frequent and rare targets.

    PubMed

    Iordanescu, Lucica; Grabowecky, Marcia; Suzuki, Satoru

    2011-06-01

    Auditory and visual processes demonstrably enhance each other based on spatial and temporal coincidence. Our recent results on visual search have shown that auditory signals also enhance visual salience of specific objects based on multimodal experience. For example, we tend to see an object (e.g., a cat) and simultaneously hear its characteristic sound (e.g., "meow"), to name an object when we see it, and to vocalize a word when we read it, but we do not tend to see a word (e.g., cat) and simultaneously hear the characteristic sound (e.g., "meow") of the named object. If auditory-visual enhancements occur based on this pattern of experiential associations, playing a characteristic sound (e.g., "meow") should facilitate visual search for the corresponding object (e.g., an image of a cat), hearing a name should facilitate visual search for both the corresponding object and corresponding word, but playing a characteristic sound should not facilitate visual search for the name of the corresponding object. Our present and prior results together confirmed these experiential association predictions. We also recently showed that the underlying object-based auditory-visual interactions occur rapidly (within 220ms) and guide initial saccades towards target objects. If object-based auditory-visual enhancements are automatic and persistent, an interesting application would be to use characteristic sounds to facilitate visual search when targets are rare, such as during baggage screening. Our participants searched for a gun among other objects when a gun was presented on only 10% of the trials. The search time was speeded when a gun sound was played on every trial (primarily on gun-absent trials); importantly, playing gun sounds facilitated both gun-present and gun-absent responses, suggesting that object-based auditory-visual enhancements persistently increase the detectability of guns rather than simply biasing gun-present responses. Thus, object-based auditory-visual

  16. Visual-search observers for SPECT simulations with clinical backgrounds

    NASA Astrophysics Data System (ADS)

    Gifford, Howard C.

    2016-03-01

    The purpose of this work was to test the ability of visual-search (VS) model observers to predict the lesion-detection performance of human observers with hybrid SPECT images. These images consist of clinical backgrounds with simulated abnormalities. The application of existing scanning model observers to hybrid images is complicated by the need for extensive statistical information, whereas VS models based on separate search and analysis processes may operate with reduced knowledge. A localization ROC (LROC) study involved the detection and localization of solitary pulmonary nodules in Tc-99m lung images. The study was aimed at optimizing the number of iterations and the postfiltering of four rescaled block-iterative reconstruction strategies. These strategies implemented different combinations of attenuation correction, scatter correction, and detector resolution correction. For a VS observer in this study, the search and analysis processes were guided by a single set of base morphological features derived from knowledge of the lesion profile. One base set used difference-of-Gaussian channels while a second base set implemented spatial derivatives in combination with the Burgess eye filter. A feature-adaptive VS observer selected features of interest for a given image set on the basis of training-set performance. A comparison of the feature-adaptive observer results against previously acquired human-observer data is presented.

  17. Patterns in the sky: Natural visualization of aircraft flow fields

    NASA Technical Reports Server (NTRS)

    Campbell, James F.; Chambers, Joseph R.

    1994-01-01

    The objective of the current publication is to present the collection of flight photographs to illustrate the types of flow patterns that were visualized and to present qualitative correlations with computational and wind tunnel results. Initially in section 2, the condensation process is discussed, including a review of relative humidity, vapor pressure, and factors which determine the presence of visible condensate. Next, outputs from computer code calculations are postprocessed by using water-vapor relationships to determine if computed values of relative humidity in the local flow field correlate with the qualitative features of the in-flight condensation patterns. The photographs are then presented in section 3 by flow type and subsequently in section 4 by aircraft type to demonstrate the variety of condensed flow fields that was visualized for a wide range of aircraft and flight maneuvers.

  18. Memory under pressure: secondary-task effects on contextual cueing of visual search.

    PubMed

    Annac, Efsun; Manginelli, Angela A; Pollmann, Stefan; Shi, Zhuanghua; Müller, Hermann J; Geyer, Thomas

    2013-01-01

    Repeated display configurations improve visual search. Recently, the question has arisen whether this contextual cueing effect (Chun & Jiang, 1998) is itself mediated by attention, both in terms of selectivity and processing resources deployed. While it is accepted that selective attention modulates contextual cueing (Jiang & Leung, 2005), there is an ongoing debate whether the cueing effect is affected by a secondary working memory (WM) task, specifically at which stage WM influences the cueing effect: the acquisition of configural associations (e.g., Travis, Mattingley, & Dux, 2013) versus the expression of learned associations (e.g., Manginelli, Langer, Klose, & Pollmann, 2013). The present study re-investigated this issue. Observers performed a visual search in combination with a spatial WM task. The latter was applied on either early or late search trials--so as to examine whether WM load hampers the acquisition of or retrieval from contextual memory. Additionally, the WM and search tasks were performed either temporally in parallel or in succession--so as to permit the effects of spatial WM load to be dissociated from those of executive load. The secondary WM task was found to affect cueing in late, but not early, experimental trials--though only when the search and WM tasks were performed in parallel. This pattern suggests that contextual cueing involves a spatial WM resource, with spatial WM providing a workspace linking the current search array with configural long-term memory; as a result, occupying this workspace by a secondary WM task hampers the expression of learned configural associations. PMID:24190911

  19. Relationships among balance, visual search, and lacrosse-shot accuracy.

    PubMed

    Marsh, Darrin W; Richard, Leon A; Verre, Arlene B; Myers, Jay

    2010-06-01

    The purpose of this study was to examine variables that may contribute to shot accuracy in women's college lacrosse. A convenience sample of 15 healthy women's National Collegiate Athletic Association Division III college lacrosse players aged 18-23 (mean ± SD, 20.27 ± 1.67) participated in the study. Four experimental variables were examined: balance, visual search, hand-grip strength, and shoulder joint position sense. Balance was measured by the Biodex Stability System (BSS), and visual search was measured by the Trail-Making Test Part A (TMTA) and Trail-Making Test Part B (TMTB). Hand-grip strength was measured by a standard hand dynamometer, and shoulder joint position sense was measured using a modified inclinometer. All measures were taken in an indoor setting. These experimental variables were then compared with lacrosse-shot error, which was measured indoors using a high-speed video camera recorder and a specialized L-shaped apparatus. A Stalker radar gun measured lacrosse-shot velocity. The mean lacrosse-shot error was 15.17 cm, with a mean lacrosse-shot velocity of 17.14 m·s⁻¹ (38.35 mph). Lower scores on the BSS level 8 eyes-open (BSS L8 E/O) test and TMTB were positively related to less lacrosse-shot error (r=0.760, p=0.011 and r=0.519, p=0.048, respectively). Relations were not significant between lacrosse-shot error and grip strength (r=0.191, p=0.496), BSS level 8 eyes closed (BSS L8 E/C) (r=0.501, p=0.102), BSS level 4 eyes open (BSS L4 E/O) (r=0.313, p=0.378), BSS level 4 eyes closed (BSS L4 E/C) (r=-0.029, p=0.936), shoulder joint position sense (r=-0.509, p=0.055), and TMTA (r=0.375, p=0.168). The results reveal that greater levels of shot accuracy may be related to greater levels of visual search and balance ability in women's college lacrosse athletes. PMID:20508452

  20. Task Specificity and the Influence of Memory on Visual Search: Comment on Vo and Wolfe (2012)

    ERIC Educational Resources Information Center

    Hollingworth, Andrew

    2012-01-01

    Recent results from Vo and Wolfe (2012b) suggest that the application of memory to visual search may be task specific: Previous experience searching for an object facilitated later search for that object, but object information acquired during a different task did not appear to transfer to search. The latter inference depended on evidence that a…

  1. Characterization of Visual Scanning Patterns in Air Traffic Control

    PubMed Central

    McClung, Sarah N.; Kang, Ziho

    2016-01-01

    Characterization of air traffic controllers' (ATCs') visual scanning strategies is a challenging issue due to the dynamic movement of multiple aircraft and increasing complexity of scanpaths (order of eye fixations and saccades) over time. Additionally, terminologies and methods are lacking to accurately characterize the eye tracking data into simplified visual scanning strategies linguistically expressed by ATCs. As an intermediate step to automate the characterization classification process, we (1) defined and developed new concepts to systematically filter complex visual scanpaths into simpler and more manageable forms and (2) developed procedures to map visual scanpaths with linguistic inputs to reduce the human judgement bias during interrater agreement. The developed concepts and procedures were applied to investigating the visual scanpaths of expert ATCs using scenarios with different aircraft congestion levels. Furthermore, oculomotor trends were analyzed to identify the influence of aircraft congestion on scan time and number of comparisons among aircraft. The findings show that (1) the scanpaths filtered at the highest intensity led to more consistent mapping with the ATCs' linguistic inputs, (2) the pattern classification occurrences differed between scenarios, and (3) increasing aircraft congestion caused increased scan times and aircraft pairwise comparisons. The results provide a foundation for better characterizing complex scanpaths in a dynamic task and automating the analysis process. PMID:27239190

  3. Association and dissociation between detection and discrimination of objects of expertise: Evidence from visual search.

    PubMed

    Golan, Tal; Bentin, Shlomo; DeGutis, Joseph M; Robertson, Lynn C; Harel, Assaf

    2014-02-01

    Expertise in face recognition is characterized by high proficiency in distinguishing between individual faces. However, faces also enjoy an advantage at the early stage of basic-level detection, as demonstrated by efficient visual search for faces among nonface objects. In the present study, we asked (1) whether the face advantage in detection is a unique signature of face expertise, or whether it generalizes to other objects of expertise, and (2) whether expertise in face detection is intrinsically linked to expertise in face individuation. We compared how groups with varying degrees of object and face expertise (typical adults, developmental prosopagnosics [DP], and car experts) search for objects within and outside their domains of expertise (faces, cars, airplanes, and butterflies) among a variable set of object distractors. Across all three groups, search efficiency (indexed by reaction time slopes) was higher for faces and airplanes than for cars and butterflies. Notably, the search slope for car targets was considerably shallower in the car experts than in nonexperts. Although the mean face slope was slightly steeper among the DPs than in the other two groups, most of the DPs' search slopes were well within the normative range. This pattern of results suggests that expertise in object detection is indeed associated with expertise at the subordinate level, that it is not specific to faces, and that the two types of expertise are distinct facilities. We discuss the potential role of experience in bridging between low-level discriminative features and high-level naturalistic categories. PMID:24338355

  4. Pupil diameter reflects uncertainty in attentional selection during visual search

    PubMed Central

    Geng, Joy J.; Blumenfeld, Zachary; Tyson, Terence L.; Minzenberg, Michael J.

    2015-01-01

    Pupil diameter has long been used as a metric of cognitive processing. However, recent advances suggest that the cognitive sources of change in pupil size may reflect LC-NE function and the calculation of unexpected uncertainty in decision processes (Aston-Jones and Cohen, 2005; Yu and Dayan, 2005). In the current experiments, we explored the role of uncertainty in attentional selection on task-evoked changes in pupil diameter during visual search. We found that task-evoked changes in pupil diameter were related to uncertainty during attentional selection as measured by reaction time (RT) and performance accuracy (Experiments 1-2). Control analyses demonstrated that the results are unlikely to be due to error monitoring or response uncertainty. Our results suggest that pupil diameter can be used as an implicit metric of uncertainty in ongoing attentional selection requiring effortful control processes. PMID:26300759

  5. Enhanced Visual Search in Infancy Predicts Emerging Autism Symptoms

    PubMed Central

    Gliga, Teodora; Bedford, Rachael; Charman, Tony; Johnson, Mark H.; Baron-Cohen, Simon; Bolton, Patrick; Cheung, Celeste; Davies, Kim; Liew, Michelle; Fernandes, Janice; Gammer, Issy; Maris, Helen; Salomone, Erica; Pasco, Greg; Pickles, Andrew; Ribeiro, Helena; Tucker, Leslie

    2015-01-01

    Summary In addition to core symptoms, i.e., social interaction and communication difficulties and restricted and repetitive behaviors, autism is also characterized by aspects of superior perception [1]. One well-replicated finding is that of superior performance in visual search tasks, in which participants have to indicate the presence of an odd-one-out element among a number of foils [2–5]. Whether these aspects of superior perception contribute to the emergence of core autism symptoms remains debated [4, 6]. Perceptual and social interaction atypicalities could reflect co-expressed but biologically independent pathologies, as suggested by a “fractionable” phenotype model of autism [7]. A developmental test of this hypothesis is now made possible by longitudinal cohorts of infants at high risk, such as those of younger siblings of children with autism spectrum disorder (ASD). Around 20% of younger siblings are diagnosed with autism themselves [8], and up to another 30% manifest elevated levels of autism symptoms [9]. We used eye tracking to measure spontaneous orienting to letter targets (O, S, V, and +) presented among distractors (the letter X; Figure 1). At 9 and 15 months, emerging autism symptoms were assessed using the Autism Observation Scale for Infants (AOSI; [10]), and at 2 years of age, they were assessed using the Autism Diagnostic Observation Schedule (ADOS; [11]). Enhanced visual search performance at 9 months predicted a higher level of autism symptoms at 15 months and at 2 years. Infant perceptual atypicalities are thus intrinsically linked to the emerging autism phenotype. PMID:26073135

  6. Visual Interactions Conform to Pattern Decorrelation in Multiple Cortical Areas

    PubMed Central

    Sharifian, Fariba; Nurminen, Lauri; Vanni, Simo

    2013-01-01

Neural responses to visual stimuli are strongest in the classical receptive field, but they are also modulated by stimuli in a much wider region. In the primary visual cortex, physiological data and models suggest that such contextual modulation is mediated by recurrent interactions between cortical areas. Outside the primary visual cortex, imaging data have shown qualitatively similar interactions. However, whether the mechanisms underlying these effects are similar in different areas has remained unclear. Here, we found that the blood oxygenation level dependent (BOLD) signal spreads over considerable cortical distances in the primary visual cortex, well beyond the classical receptive field. This indicates that the synaptic activity induced by a given stimulus occurs in a surprisingly extensive network. Correspondingly, we found suppressive and facilitative interactions far from the maximum retinotopic response. Next, we characterized the relationship between contextual modulation and the correlation between two spatial activation patterns. Regardless of the functional area or retinotopic eccentricity, higher correlation between the center and surround response patterns was associated with stronger suppressive interaction. In individual voxels, suppressive interaction was predominant when the center and surround stimuli produced BOLD signals with the same sign. Facilitative interaction dominated in the voxels with opposite BOLD signal signs. Our data were consistent with a recently published cortical decorrelation model, and were validated against alternative models, separately in different eccentricities and functional areas. Our study provides evidence that spatial interactions among neural populations involve decorrelation of macroscopic neural activation patterns, and suggests that the basic design of the cerebral cortex houses a robust decorrelation mechanism for afferent synaptic input. PMID:23874491

  7. Spatial and temporal dynamics of visual search tasks distinguish subtypes of unilateral spatial neglect: Comparison of two cases with viewer-centered and stimulus-centered neglect.

    PubMed

    Mizuno, Katsuhiro; Kato, Kenji; Tsuji, Tetsuya; Shindo, Keiichiro; Kobayashi, Yukiko; Liu, Meigen

    2016-08-01

We developed a computerised test to evaluate unilateral spatial neglect (USN) using a touchscreen display, and estimated the spatial and temporal patterns of visual search in USN patients. Results from a viewer-centered USN patient and a stimulus-centered USN patient were compared. Two right-brain-damaged patients with USN, a patient without USN, and 16 healthy subjects performed a simple cancellation test, the circle test, a visuomotor search test, and a visual search test. According to the results of the circle test, one USN patient had stimulus-centered neglect and one had viewer-centered neglect. The spatial and temporal patterns of these two USN patients were compared. The spatial and temporal patterns of cancellation differed between the stimulus-centered USN patient and the viewer-centered USN patient. The viewer-centered USN patient completed the simple cancellation task, but paused when transferring from the right side to the left side of the display. Unexpectedly, this patient did not exhibit rightward attention bias on the visuomotor and visual search tests, but the stimulus-centered USN patient did. The computer-based assessment system provided information on the dynamic visual search strategy of patients with USN. The spatial and temporal patterns of cancellation and visual search differed across the two patients with different subtypes of neglect. PMID:26059555

  8. Role of computer-assisted visual search in mammographic interpretation

    NASA Astrophysics Data System (ADS)

    Nodine, Calvin F.; Kundel, Harold L.; Mello-Thoms, Claudia; Weinstein, Susan P.

    2001-06-01

We used eye-position data to develop Computer-Assisted Visual Search (CAVS) as an aid to mammographic interpretation. CAVS feeds back regions of interest that receive prolonged visual dwell (greater than or equal to 1000 ms) by highlighting them on the mammogram. These regions are then reevaluated for possible missed breast cancers. Six radiology residents and fellows interpreted a test set of 40 mammograms twice, once with CAVS feedback (FB) and once without, in a crossover, repeated-measures design. Eye position was monitored. LROC performance (area) was compared with and without CAVS FB. Detection and localization of malignant lesions improved 12% with CAVS FB, but this improvement was not significant. The test set contained subtle malignant lesions. 65% (176/272) of true lesions were fixated. Of those fixated, 49% (87/176) received prolonged attention resulting in CAVS FB, and 54% (47/87) of FBs resulted in true positives. Test-set difficulty and the readers' limited experience may have contributed to the relatively low overall performance, and may also have limited the effectiveness of CAVS FB, which could only help localize potential lesions that the reader fixated and dwelled on.
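The dwell-based feedback rule in this abstract (highlight any region with cumulative dwell of at least 1000 ms) can be sketched in a few lines. The fixation format, the grid-based definition of a "region", and all names below are illustrative assumptions, not the authors' implementation.

```python
# Sketch of a CAVS-style dwell-feedback rule: accumulate fixation durations
# per region and flag regions whose total dwell reaches the 1000 ms criterion
# stated in the abstract. Data format and helper names are hypothetical.

DWELL_THRESHOLD_MS = 1000  # prolonged-dwell criterion from the abstract

def regions_to_feedback(fixations, region_size=100):
    """fixations: iterable of (x, y, duration_ms) in image coordinates.
    Returns the set of grid cells whose cumulative dwell >= threshold."""
    dwell = {}
    for x, y, dur in fixations:
        key = (int(x // region_size), int(y // region_size))
        dwell[key] = dwell.get(key, 0) + dur
    return {k for k, total in dwell.items() if total >= DWELL_THRESHOLD_MS}

fix = [(120, 340, 400), (130, 350, 700), (600, 200, 250)]
print(regions_to_feedback(fix))  # only the first cluster of fixations qualifies
```

Binning fixations into fixed grid cells is a simplification; the original system presumably grouped eye positions around candidate lesion locations rather than on a uniform grid.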

  9. Expectations developed over multiple timescales facilitate visual search performance

    PubMed Central

    Gekas, Nikos; Seitz, Aaron R.; Seriès, Peggy

    2015-01-01

Our perception of the world is strongly influenced by our expectations, and a question of key importance is how the visual system develops and updates its expectations through interaction with the environment. We used a visual search task to investigate how expectations of different timescales (from the last few trials to hours to long-term statistics of natural scenes) interact to alter perception. We presented human observers with low-contrast white dots at 12 possible locations equally spaced on a circle, and we asked them to simultaneously identify the presence and location of the dots while manipulating their expectations by presenting stimuli at some locations more frequently than others. Our findings suggest that there are strong acuity differences between absolute target locations (e.g., horizontal vs. vertical) and preexisting long-term biases influencing observers' detection and localization performance, respectively. On top of these, subjects quickly learned the stimulus distribution, which improved their detection performance but increased false alarms at the most frequently presented stimulus locations. Recent exposure to a stimulus resulted in significantly improved detection performance and significantly more false alarms, but only at locations where a stimulus was more likely to be presented. Our results can be modeled and understood within a Bayesian framework as a near-optimal integration of sensory evidence with rapidly learned statistical priors, skewed toward the very recent history of trials, and may help in understanding the timescale over which expectations develop at the neural level. PMID:26200891
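The Bayesian account sketched in this abstract (detection combines sensory evidence with a location prior skewed toward recent trials) can be illustrated with a toy model. The update rule, learning rate, and flat likelihood below are invented for illustration and are not the authors' fitted model.

```python
# Toy Bayesian integration over 12 circle locations: a prior learned from
# recent stimulus history is combined with per-location sensory evidence.
# All numerical choices here are hypothetical.

N_LOCATIONS = 12

def update_prior(prior, observed_location, learning_rate=0.2):
    """Skew the prior toward the most recent stimulus location."""
    posterior = [(1 - learning_rate) * p for p in prior]
    posterior[observed_location] += learning_rate
    return posterior

def posterior_over_locations(prior, likelihood):
    """Combine prior and per-location likelihood, then normalize."""
    unnorm = [p * l for p, l in zip(prior, likelihood)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

prior = [1.0 / N_LOCATIONS] * N_LOCATIONS
for _ in range(5):                 # repeated exposure at location 3 ...
    prior = update_prior(prior, 3)
likelihood = [1.0] * N_LOCATIONS   # flat evidence: the prior alone drives the posterior
post = posterior_over_locations(prior, likelihood)
print(post[3] > post[0])           # frequent location dominates the posterior
```

With flat evidence the skewed prior alone raises the posterior at the frequent location, which mirrors the behavioral pattern in the abstract: better detection there, but also more false alarms.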

  10. GEON Developments for Searching, Accessing, and Visualizing Distributed Data

    NASA Astrophysics Data System (ADS)

    Meertens, C.; Seber, D.; Baru, C.; Wright, M.

    2005-12-01

    The NSF-funded GEON (Geosciences Network) Information Technology Research project is developing data sharing frameworks, a registry for distributed databases, concept-based search mechanisms, advanced visualization software, and grid-computing resources for earth science and education applications. The goal of this project is to enable new interdisciplinary research in the geosciences, while extending the access to data and complex modeling tools from the hands of a few researchers to a much broader set of scientific and educational users. To facilitate this, the GEON team of IT scientists, geoscientists, and educators and their collaborators are creating a capable Cyberinfrastructure that is based on grid/web services operating in a distributed environment. We are using a best practices approach that is designed to provide useful and usable capabilities and tools. With the realization of new large scale projects such as EarthScope that involve the collection, analysis, and modeling of vast quantities of diverse data, it is increasingly important to be able to effectively handle, model, and integrate a wide range of multi-dimensional, multi-parameter, and time dependent data in a timely fashion. GEON has been developing a process where the user can discover, access, retrieve and visualize data that is hosted either at GEON or at distributed servers. Whenever possible, GEON is using established protocols and formats for data and metadata exchange that are based on community efforts such as OPeNDAP, the Open GIS Consortium, Grid Computing, and digital libraries. This approach is essential to help overcome the challenges of dealing with heterogeneous distributed data and increases the possibility of data interoperability. We give an overview of resources that are now available to access and visualize a variety of geological and geophysical data, derived products and models including GPS data, GPS-derived velocity vectors and strain rates, earthquakes, three

  11. CiteRivers: Visual Analytics of Citation Patterns.

    PubMed

    Heimerl, Florian; Han, Qi; Koch, Steffen; Ertl, Thomas

    2016-01-01

The exploration and analysis of scientific literature collections is an important task for effective knowledge management. Past interest in such document sets has spurred the development of numerous visualization approaches for their interactive analysis. They focus either on the textual content of publications or on document metadata, including authors and citations. Previously presented approaches for citation analysis aim primarily at the visualization and exploration of the structure of citation networks. We extend the state of the art by presenting an approach for the interactive visual analysis of the contents of scientific documents, and combine it with a new and flexible technique to analyze their citations. This technique facilitates user-steered aggregation of citations, which are linked to the content of the citing publications using a highly interactive visualization approach. By enriching the approach with additional interactive views of other important aspects of the data, we support the exploration of the dataset over time and enable users to analyze citation patterns, spot trends, and track long-term developments. We demonstrate the strengths of our approach through a use case and discuss it based on expert user feedback. PMID:26529699

  12. Toddlers with Autism Spectrum Disorder Are More Successful at Visual Search than Typically Developing Toddlers

    ERIC Educational Resources Information Center

    Kaldy, Zsuzsa; Kraper, Catherine; Carter, Alice S.; Blaser, Erik

    2011-01-01

    Plaisted, O'Riordan and colleagues (Plaisted, O'Riordan & Baron-Cohen, 1998; O'Riordan, 2004) showed that school-age children and adults with Autism Spectrum Disorder (ASD) are faster at finding targets in certain types of visual search tasks than typical controls. Currently though, there is very little known about the visual search skills of very…

  13. Preemption Effects in Visual Search: Evidence for Low-Level Grouping.

    ERIC Educational Resources Information Center

    Rensink, Ronald A.; Enns, James T.

    1995-01-01

Eight experiments, each with 10 observers per condition, show that visual search for Mueller-Lyer stimuli is based on complete configurations rather than component segments, with preemption by low-level groups. Results support the view that rapid visual search can only access higher-level, more ecologically relevant structures. (SLD)

  14. Is There a Limit to the Superiority of Individuals with ASD in Visual Search?

    ERIC Educational Resources Information Center

    Hessels, Roy S.; Hooge, Ignace T. C.; Snijders, Tineke M.; Kemner, Chantal

    2014-01-01

    Superiority in visual search for individuals diagnosed with autism spectrum disorder (ASD) is a well-reported finding. We administered two visual search tasks to individuals with ASD and matched controls. One showed no difference between the groups, and one did show the expected superior performance for individuals with ASD. These results offer an…

  15. Overlapping multivoxel patterns for two levels of visual expectation

    PubMed Central

    de Gardelle, Vincent; Stokes, Mark; Johnen, Vanessa M.; Wyart, Valentin; Summerfield, Christopher

    2013-01-01

    According to predictive accounts of perception, visual cortical regions encode sensory expectations about the external world, and the violation of those expectations by inputs (surprise). Here, using multi-voxel pattern analysis (MVPA) of functional magnetic resonance imaging (fMRI) data, we asked whether expectations and surprise activate the same pattern of voxels, in face-sensitive regions of the extra-striate visual cortex (the fusiform face area or FFA). Participants viewed pairs of repeating or alternating faces, with high or low probability of repetitions. As in previous studies, we found that repetition suppression (the attenuated BOLD response to repeated stimuli) in the FFA was more pronounced for probable repetitions, consistent with it reflecting reduced surprise to anticipated inputs. Secondly, we observed that repetition suppression and repetition enhancement responses were both consistent across scanner runs, suggesting that both have functional significance, with repetition enhancement possibly indicating the build up of sensory expectation. Critically, we also report that multi-voxels patterns associated with probability and repetition effects were significantly correlated within the left FFA. We argue that repetition enhancement responses and repetition probability effects can be seen as two types of expectation signals, occurring simultaneously, although at different processing levels (lower vs. higher), and different time scales (immediate vs. long term). PMID:23630488

  16. Visual Search Revived: The Slopes Are Not That Slippery: A Reply to Kristjansson (2015)

    PubMed Central

    2016-01-01

Kristjansson (2015) suggests that standard research methods in the study of visual search should be “reconsidered.” He reiterates a useful warning against treating reaction time × set size functions as simple metrics that can be used to label search tasks as “serial” or “parallel.” However, I argue that he goes too far with a broad attack on the use of slopes in the study of visual search. Used wisely, slopes do provide us with insight into the mechanisms of visual search. PMID:27433330

  17. The effect of search condition and advertising type on visual attention to Internet advertising.

    PubMed

    Kim, Gho; Lee, Jang-Han

    2011-05-01

    This research was conducted to examine the level of consumers' visual attention to Internet advertising. It was predicted that consumers' search type would influence visual attention to advertising. Specifically, it was predicted that more attention to advertising would be attracted in the exploratory search condition than in the goal-directed search condition. It was also predicted that there would be a difference in visual attention depending on the advertisement type (advertising type: text vs. pictorial advertising). An eye tracker was used for measurement. Results revealed that search condition and advertising type influenced advertising effectiveness. PMID:20973730

  18. The role of object categories in hybrid visual and memory search

    PubMed Central

    Cunningham, Corbin A.; Wolfe, Jeremy M.

    2014-01-01

In hybrid search, observers (Os) search for any of several possible targets in a visual display containing distracting items and, perhaps, a target. Wolfe (2012) found that response times (RT) in such tasks increased linearly with the number of items in the display. However, RT increased linearly with the log of the number of items in the memory set. In earlier work, all items in the memory set were unique instances (e.g., this apple in this pose). Typical real-world tasks involve more broadly defined sets of stimuli (e.g., any “apple” or, perhaps, “fruit”). The present experiments show how sets or categories of targets are handled in joint visual and memory search. In Experiment 1, searching for a digit among letters was not like searching for targets from a 10-item memory set, though searching for targets from an N-item memory set of arbitrary alphanumeric characters was like searching for targets from an N-item memory set of arbitrary objects. In Experiment 2, Os searched for any instance of N sets or categories held in memory. This hybrid search was harder than search for specific objects. However, memory search remained logarithmic. Experiment 3 illustrates the interaction of visual guidance and memory search when a subset of visual stimuli are drawn from a target category. Furthermore, we outline a conceptual model, supported by our results, defining the core components that would be necessary to support such categorical hybrid searches. PMID:24661054
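The RT pattern this abstract reports (linear in visual set size, logarithmic in memory set size) can be summarized by a simple descriptive model. The coefficients below are arbitrary placeholders, not fitted values from the paper.

```python
# Descriptive hybrid-search RT model: RT = a + b*V + c*log2(M), where V is the
# number of items in the display and M the number of targets held in memory.
# Intercept and slopes are hypothetical illustration values.
import math

def predicted_rt(visual_set_size, memory_set_size,
                 intercept=400.0, visual_slope=30.0, memory_slope=50.0):
    """Predicted reaction time in ms."""
    return (intercept + visual_slope * visual_set_size
            + memory_slope * math.log2(memory_set_size))

# Doubling the memory set adds a constant increment (the log step), while
# doubling the visual display adds an increment proportional to display size.
print(predicted_rt(8, 16) - predicted_rt(8, 8))   # memory-doubling cost
print(predicted_rt(16, 8) - predicted_rt(8, 8))   # display-doubling cost
```

The asymmetry is the point: memory search costs grow ever more slowly as the memory set expands, whereas each added display item costs the same fixed amount.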

  19. Roughness determination by direct visual observation of the speckle pattern

    NASA Astrophysics Data System (ADS)

    Rebollo, M. A.; Landau, M. R.; Hogert, E. N.; Gaggioli, N. G.; Muramatsu, M.

    1995-12-01

    There are mechanical and optical methods of measuring the roughness of surfaces. Mechanical methods are of a destructive type, while optical methods, although they are non-destructive, involve relatively complex systems and calculations. In this work a simple method is introduced, which allows one—through the direct observation of the speckle pattern—to make a visual correlation, comparing the first pattern with others obtained when the beam incidence angle varies. With this method it is possible to obtain results with acceptable accuracy for many industrial uses.

  20. Visual-auditory integration for visual search: a behavioral study in barn owls.

    PubMed

    Hazan, Yael; Kra, Yonatan; Yarin, Inna; Wagner, Hermann; Gutfreund, Yoram

    2015-01-01

    Barn owls are nocturnal predators that rely on both vision and hearing for survival. The optic tectum of barn owls, a midbrain structure involved in selective attention, has been used as a model for studying visual-auditory integration at the neuronal level. However, behavioral data on visual-auditory integration in barn owls are lacking. The goal of this study was to examine if the integration of visual and auditory signals contributes to the process of guiding attention toward salient stimuli. We attached miniature wireless video cameras on barn owls' heads (OwlCam) to track their target of gaze. We first provide evidence that the area centralis (a retinal area with a maximal density of photoreceptors) is used as a functional fovea in barn owls. Thus, by mapping the projection of the area centralis on the OwlCam's video frame, it is possible to extract the target of gaze. For the experiment, owls were positioned on a high perch and four food items were scattered in a large arena on the floor. In addition, a hidden loudspeaker was positioned in the arena. The positions of the food items and speaker were changed every session. Video sequences from the OwlCam were saved for offline analysis while the owls spontaneously scanned the room and the food items with abrupt gaze shifts (head saccades). From time to time during the experiment, a brief sound was emitted from the speaker. The fixation points immediately following the sounds were extracted and the distances between the gaze position and the nearest items and loudspeaker were measured. The head saccades were rarely toward the location of the sound source but to salient visual features in the room, such as the door knob or the food items. However, among the food items, the one closest to the loudspeaker had the highest probability of attracting a gaze shift. This result supports the notion that auditory signals are integrated with visual information for the selection of the next visual search target. PMID

  1. Visualizing Neuronal Network Connectivity with Connectivity Pattern Tables

    PubMed Central

    Nordlie, Eilen; Plesser, Hans Ekkehard

    2009-01-01

    Complex ideas are best conveyed through well-designed illustrations. Up to now, computational neuroscientists have mostly relied on box-and-arrow diagrams of even complex neuronal networks, often using ad hoc notations with conflicting use of symbols from paper to paper. This significantly impedes the communication of ideas in neuronal network modeling. We present here Connectivity Pattern Tables (CPTs) as a clutter-free visualization of connectivity in large neuronal networks containing two-dimensional populations of neurons. CPTs can be generated automatically from the same script code used to create the actual network in the NEST simulator. Through aggregation, CPTs can be viewed at different levels, providing either full detail or summary information. We also provide the open source ConnPlotter tool as a means to create connectivity pattern tables. PMID:20140265

  2. Visual Working Memory Supports the Inhibition of Previously Processed Information: Evidence from Preview Search

    ERIC Educational Resources Information Center

    Al-Aidroos, Naseem; Emrich, Stephen M.; Ferber, Susanne; Pratt, Jay

    2012-01-01

    In four experiments we assessed whether visual working memory (VWM) maintains a record of previously processed visual information, allowing old information to be inhibited, and new information to be prioritized. Specifically, we evaluated whether VWM contributes to the inhibition (i.e., visual marking) of previewed distractors in a preview search.…

  3. Transformation of an uncertain video search pipeline to a sketch-based visual analytics loop.

    PubMed

    Legg, Philip A; Chung, David H S; Parry, Matthew L; Bown, Rhodri; Jones, Mark W; Griffiths, Iwan W; Chen, Min

    2013-12-01

Traditional sketch-based image or video search systems rely on machine learning concepts as their core technology. However, in many applications, machine learning alone is impractical since videos may not be semantically annotated sufficiently, there may be a lack of suitable training data, and the search requirements of the user may frequently change for different tasks. In this work, we develop a visual analytics system that overcomes the shortcomings of the traditional approach. We make use of a sketch-based interface to enable users to specify search requirements in a flexible manner without depending on semantic annotation. We employ active machine learning to train different analytical models for different types of search requirements. We use visualization to facilitate knowledge discovery at the different stages of visual analytics. This includes visualizing the parameter space of the trained model, visualizing the search space to support interactive browsing, visualizing candidate search results to support rapid interaction for active learning while minimizing the watching of videos, and visualizing aggregated information about the search results. We demonstrate the system for searching spatiotemporal attributes in sports video to identify key instances of team and player performance. PMID:24051777

  4. Bicycle accidents and drivers' visual search at left and right turns.

    PubMed

    Summala, H; Pasanen, E; Räsänen, M; Sievänen, J

    1996-03-01

The accident database of the City of Helsinki shows that when drivers cross a cycle path as they enter a non-signalized intersection, the clearly dominant type of car-cycle crash is one in which a cyclist comes from the right and the driver is turning right, in marked contrast to the cases with drivers turning left (Pasanen 1992; City of Helsinki, Traffic Planning Department, Report L4). This study first tested the explanation that drivers turning right simply focus their attention on the cars coming from the left (those coming from the right posing no threat to them) and fail to see the cyclist from the right early enough. Drivers' scanning behavior was studied at two T-intersections. Two well-hidden video cameras were used, one to measure the head movements of the approaching drivers and the other to measure speed and distance from the cycle crossing. The results supported the hypothesis: the drivers turning right scanned the right leg of the T-intersection less frequently and later than those turning left. Thus, it appears that drivers develop a visual scanning strategy which concentrates on detecting the more frequent and major dangers but ignores, and may even mask, visual information on less frequent dangers. The second part of the study evaluated different countermeasures, including speed humps, in terms of drivers' visual search behavior. The results suggested that speed-reducing countermeasures changed drivers' visual search patterns in favor of the cyclists coming from the right, presumably at least in part because drivers were simply given more time to focus on each direction. PMID:8703272

  5. Polygon cluster pattern recognition based on new visual distance

    NASA Astrophysics Data System (ADS)

    Shuai, Yun; Shuai, Haiyan; Ni, Lin

    2007-06-01

The pattern recognition of polygon clusters is one of the most studied problems in spatial data mining. This paper investigates the problem on the basis of spatial cognition principles and the Gestalt principles of visual recognition, combined with spatial clustering methods, and makes two contributions. First, it substantially refines the concept of "visual distance": the definition takes into account not only Euclidean distance, orientation difference, and dimension discrepancy, but also, crucially, the similarity of object shapes. The visual distance is calculated with a model built on the Delaunay triangulation. Second, the study adopts spatial clustering analysis based on a minimum spanning tree (MST). The pruning algorithm introduces an automatic data-layering mechanism and a simulated annealing optimization. The study also suggests a broader direction for GIS research: GIS is an interdisciplinary field whose research methods should be open and diverse, and mature techniques from related disciplines can be adopted, provided they are adapted to the principles of GIS as a science of spatial cognition.
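The MST-based clustering step this abstract describes can be sketched minimally: build a minimum spanning tree over pairwise distances, then cut the longest edges to form clusters. Plain Euclidean distance stands in for the paper's shape-aware visual distance, and the annealing-based pruning is not reproduced; all names are illustrative.

```python
# Minimal MST clustering sketch: Prim's algorithm for the tree, then remove
# the k-1 longest edges so the remaining forest has k connected components.
import math

def mst_edges(points):
    """Prim's algorithm; returns a list of (dist, i, j) MST edges."""
    n = len(points)
    in_tree = {0}
    edges = []
    while len(in_tree) < n:
        best = None
        for i in in_tree:
            for j in range(n):
                if j in in_tree:
                    continue
                d = math.dist(points[i], points[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        edges.append(best)
        in_tree.add(best[2])
    return edges

def cluster(points, k):
    """Keep the n-k shortest MST edges; return a cluster label per point."""
    keep = sorted(mst_edges(points))[:len(points) - k]
    parent = list(range(len(points)))
    def find(i):                       # union-find with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for _, i, j in keep:
        parent[find(i)] = find(j)
    return [find(i) for i in range(len(points))]

pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11)]
labels = cluster(pts, 2)
print(labels)  # two tight groups separated by the long, cut edge
```

Cutting the longest MST edges is the standard single-linkage view of this family of methods; the paper's contribution lies in replacing the distance metric and the pruning rule, not the tree construction itself.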

  6. Pattern Visual Evoked Potentials in Dyslexic versus Normal Children

    PubMed Central

    Heravian, Javad; Sobhani-Rad, Davood; Lari, Samaneh; Khoshsima, Mohamadjavad; Azimi, Abbas; Ostadimoghaddam, Hadi; Yekta, Abbasali; Hoseini-Yazdi, Seyed Hosein

    2015-01-01

Purpose: The presence of neurophysiological abnormalities in dyslexia has been a contentious issue. This study was performed to evaluate the role of sensory visual deficits in the pathogenesis of dyslexia. Methods: Pattern visual evoked potentials (PVEP) were recorded in 72 children: 36 children with dyslexia and 36 children without dyslexia (controls) who were matched for age, sex and intelligence. Two check sizes of 15 and 60 min of arc were used with temporal frequencies of 1.5 Hz for transient and 6 Hz for steady-state methods. Results: Mean latency and amplitude values for the 15 min arc and 60 min arc check sizes using steady-state and transient methods showed no significant difference between the two study groups (P values: 0.139/0.481/0.356/0.062). Furthermore, no significant difference was observed between the two methods of PVEP in dyslexic and normal children using 60 min arc with high contrast (P values: 0.116, 0.402, 0.343 and 0.106). Conclusion: PVEP is a sensitive and valid method for detecting visual deficits in children with dyslexia. However, no significant difference was found between dyslexic and normal children using high-contrast stimuli. PMID:26730313

  7. High or Low Target Prevalence Increases the Dual-Target Cost in Visual Search

    ERIC Educational Resources Information Center

    Menneer, Tamaryn; Donnelly, Nick; Godwin, Hayward J.; Cave, Kyle R.

    2010-01-01

    Previous studies have demonstrated a dual-target cost in visual search. In the current study, the relationship between search for one and search for two targets was investigated to examine the effects of target prevalence and practice. Color-shape conjunction stimuli were used with response time, accuracy and signal detection measures. Performance…

  8. Visual Search Is Postponed during the Attentional Blink until the System Is Suitably Reconfigured

    ERIC Educational Resources Information Center

    Ghorashi, S. M. Shahab; Smilek, Daniel; Di Lollo, Vincent

    2007-01-01

    J. S. Joseph, M. M. Chun, and K. Nakayama (1997) found that pop-out visual search was impaired as a function of intertarget lag in an attentional blink (AB) paradigm in which the 1st target was a letter and the 2nd target was a search display. In 4 experiments, the present authors tested the implication that search efficiency should be similarly…

  9. Searching for Signs, Symbols, and Icons: Effects of Time of Day, Visual Complexity, and Grouping

    ERIC Educational Resources Information Center

    McDougall, Sine; Tyrer, Victoria; Folkard, Simon

    2006-01-01

    Searching for icons, symbols, or signs is an integral part of tasks involving computer or radar displays, head-up displays in aircraft, or attending to road traffic signs. Icons therefore need to be designed to optimize search times, taking into account the factors likely to slow down visual search. Three factors likely to adversely affect visual…

  10. Animating streamlines with repeated asymmetric patterns for steady flow visualization

    NASA Astrophysics Data System (ADS)

    Yeh, Chih-Kuo; Liu, Zhanping; Lee, Tong-Yee

    2012-01-01

    Animation provides intuitive cueing for revealing essential spatial-temporal features of data in scientific visualization. This paper explores the design of Repeated Asymmetric Patterns (RAPs) in animating evenly-spaced color-mapped streamlines for dense accurate visualization of complex steady flows. We present a smooth cyclic variable-speed RAP animation model that performs velocity (magnitude) integral luminance transition on streamlines. This model is extended with inter-streamline synchronization in luminance varying along the tangential direction to emulate orthogonal advancing waves from a geometry-based flow representation, and then with evenly-spaced hue differing in the orthogonal direction to construct tangential flow streaks. To weave these two mutually dual sets of patterns, we propose an energy-decreasing strategy that adopts an iterative yet efficient procedure for determining the luminance phase and hue of each streamline in HSL color space. We also employ adaptive luminance interleaving in the direction perpendicular to the flow to increase the contrast between streamlines.

  11. Dynamic Analysis and Pattern Visualization of Forest Fires

    PubMed Central

    Lopes, António M.; Tenreiro Machado, J. A.

    2014-01-01

This paper analyses forest fires from the perspective of dynamical systems. Forest fires exhibit complex correlations in size, space and time, revealing features often present in complex systems, such as the absence of a characteristic length-scale, or the emergence of long-range correlations and persistent memory. This study addresses a public domain forest fire catalogue containing information on events in Portugal during the period from 1980 up to 2012. The data are analysed on an annual basis, modelling the occurrences as sequences of Dirac impulses with amplitude proportional to the burnt area. First, we use mutual information to correlate annual patterns, employing visualization trees, generated by hierarchical clustering algorithms, to compare and to extract relationships among the data. Second, we adopt the Multidimensional Scaling (MDS) visualization tool. MDS generates maps where each object corresponds to a point, and objects perceived to be similar to each other are placed on the map forming clusters. The results are analysed in order to extract relationships among the data and to identify forest fire patterns. PMID:25137393

  12. Electrophysiological measurement of information flow during visual search

    PubMed Central

    Cosman, Joshua D.; Arita, Jason T.; Ianni, Julianna D.; Woodman, Geoffrey F.

    2016-01-01

    The temporal relationship between different stages of cognitive processing has long been debated. This debate is ongoing, primarily because it is often difficult to measure the time course of multiple cognitive processes simultaneously. We employed a manipulation that allowed us to isolate ERP components related to perceptual processing, working memory, and response preparation, and then examined the temporal relationship between these components while observers performed a visual search task. We found that when response speed and accuracy were equally stressed, our index of perceptual processing ended before both the transfer of information into working memory and response preparation began. However, when we stressed speed over accuracy, response preparation began before the completion of perceptual processing or the transfer of information into working memory on trials with the fastest reaction times. These findings show that individuals can control the flow of information transmission between stages, either waiting for perceptual processing to be completed before preparing a response or configuring these stages to overlap in time. PMID:26669285

  13. Exploiting visual search theory to infer social interactions

    NASA Astrophysics Data System (ADS)

    Rota, Paolo; Dang-Nguyen, Duc-Tien; Conci, Nicola; Sebe, Nicu

    2013-03-01

    In this paper we propose a new method to infer human social interactions using typical techniques adopted in the literature for visual search and information retrieval. The main piece of information we use to discriminate among different types of interactions is provided by proxemics cues acquired by a tracker, and used to distinguish between intentional and casual interactions. The proxemics information is acquired through the analysis of two different metrics: on the one hand we observe the current distance between subjects, and on the other hand we measure the O-space synergy between subjects. The obtained values are taken at every time step over a temporal sliding window, and processed in the Discrete Fourier Transform (DFT) domain. The features are eventually merged into a unique array and clustered using the K-means algorithm. The clusters are reorganized using a second, larger temporal window into a Bag-of-Words framework, so as to build the feature vector that feeds the SVM classifier.
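The feature pipeline above (sliding window over the proxemic signal → DFT magnitudes → K-means → bag-of-words histogram) might look roughly like this sketch. Window sizes, the cluster count, and all helper names are illustrative assumptions, and the final SVM stage is omitted.

```python
import numpy as np

def dft_features(series, win=16, hop=8):
    # magnitude spectrum of each sliding window of the proxemic signal
    wins = [series[i:i + win] for i in range(0, len(series) - win + 1, hop)]
    return np.abs(np.fft.rfft(np.array(wins), axis=1))

def kmeans(x, k, iters=50, seed=0):
    # minimal K-means for clustering the windowed spectra
    rng = np.random.default_rng(seed)
    c = x[rng.choice(len(x), k, replace=False)]
    for _ in range(iters):
        lab = np.argmin(((x[:, None] - c[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(lab == j):
                c[j] = x[lab == j].mean(0)
    return lab, c

def bag_of_words(labels, k):
    # normalized histogram of cluster labels over a larger temporal window
    return np.bincount(labels, minlength=k) / len(labels)
```

The bag-of-words histogram is the fixed-length vector that a classifier (an SVM in the paper) would consume.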

  14. Immaturity of the Oculomotor Saccade and Vergence Interaction in Dyslexic Children: Evidence from a Reading and Visual Search Study

    PubMed Central

    Bucci, Maria Pia; Nassibi, Naziha; Gerard, Christophe-Loic; Bui-Quoc, Emmanuel; Seassau, Magali

    2012-01-01

    Studies comparing binocular eye movements during reading and visual search in dyslexic children are, to our knowledge, nonexistent. In the present study we examined ocular motor characteristics in dyslexic children versus two groups of non-dyslexic children matched on chronological age or reading age. Binocular eye movements were recorded by an infrared system (mobileEBT®, e(ye)BRAIN) in twelve dyslexic children (mean age 11 years old) and groups of chronological age-matched (N = 9) and reading age-matched (N = 10) non-dyslexic children. Two visual tasks were used: text reading and visual search. Independently of the task, the ocular motor behavior of dyslexic children was similar to that reported in reading age-matched non-dyslexic children: more and longer fixations as well as poor binocular coordination during and after the saccades. In contrast, chronological age-matched non-dyslexic children showed fewer and shorter fixations in the reading task than in the visual search task; furthermore, their saccades were well yoked in both tasks. The atypical eye movement patterns observed in dyslexic children suggest a deficiency in visual attentional processing as well as an immaturity of the interaction between the ocular motor saccade and vergence systems. PMID:22438934

  15. Evolutionary pattern search algorithms for unconstrained and linearly constrained optimization

    SciTech Connect

    HART,WILLIAM E.

    2000-06-01

    The authors describe a convergence theory for evolutionary pattern search algorithms (EPSAs) on a broad class of unconstrained and linearly constrained problems. EPSAs adaptively modify the step size of the mutation operator in response to the success of previous optimization steps. The design of EPSAs is inspired by recent analyses of pattern search methods. The analysis significantly extends the previous convergence theory for EPSAs: it applies to a broader class of EPSAs, and it applies to problems that are nonsmooth, have unbounded objective functions, and are linearly constrained. Further, the authors describe a modest change to the algorithmic framework of EPSAs for which a non-probabilistic convergence theory applies. These analyses are also noteworthy because they are considerably simpler than previous analyses of EPSAs.
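The step-size rule that EPSAs share with pattern search (expand the mutation step after a success, contract it after failures) can be illustrated with a minimal (1+1)-style toy; this sketch is not the algorithm analyzed in the paper, and the function and parameter names are assumptions.

```python
import numpy as np

def epsa_min(f, x0, step=1.0, tol=1e-6, max_iter=10000, seed=0):
    """Toy sketch: mutate along +/- coordinate directions in random order,
    expand the step after a success, contract it after a failed sweep."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, float)
    fx = f(x)
    n = len(x)
    dirs = np.vstack([np.eye(n), -np.eye(n)])     # coordinate pattern
    for _ in range(max_iter):
        if step < tol:
            break
        for i in rng.permutation(len(dirs)):      # mutation-like random order
            y = x + step * dirs[i]
            fy = f(y)
            if fy < fx:
                x, fx = y, fy
                step *= 2.0                       # expand after success
                break
        else:
            step *= 0.5                           # contract after failed sweep
    return x, fx
```

The contraction-on-failure rule is what the convergence theory leans on: when every pattern direction fails, the current point is approximately stationary at the current step scale.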

  16. Visual pattern recognition network: its training algorithm and its optoelectronic architecture

    NASA Astrophysics Data System (ADS)

    Wang, Ning; Liu, Liren

    1996-07-01

    A visual pattern recognition network and its training algorithm are proposed. The network is composed of a one-layer morphology network and a two-layer modified Hamming net. This visual network can implement invariant pattern recognition with respect to image translation and size projection. After supervised learning takes place, the visual network extracts image features and classifies patterns much the same as living beings do. Moreover, we set up its optoelectronic architecture for real-time pattern recognition.

  17. Prediction of shot success for basketball free throws: visual search strategy.

    PubMed

    Uchida, Yusuke; Mizuguchi, Nobuaki; Honda, Masaaki; Kanosue, Kazuyuki

    2014-01-01

    In ball games, players have to pay close attention to visual information in order to predict the movements of both the opponents and the ball, and the information acquired must enable the observing player to select an appropriate subsequent action. Previous studies have indicated that players primarily utilise cues concerning the ball and opponents' body motion. The present study evaluated the effects of changes in video replay speed on the spatial visual search strategy and the ability to predict free throw success. We compared eye movements made while observing a basketball free throw by novices and experienced basketball players. Correct response rates were close to chance (50%) at all video speeds for the novices. The correct response rate of experienced players was significantly above chance (and significantly above that of the novices) at the normal speed, but was not different from chance at either the slow or the fast speed. Experienced players gazed more at the lower part of the shooter's body when viewing the normal-speed video than the novices did. The players likely detected critical visual information to predict shot success by properly moving their gaze according to the shooter's movements. This pattern did not change when the video speed was decreased, but changed when it was increased. These findings suggest that temporal information is important for predicting action outcomes and that such predictions are sensitive to video speed. PMID:24319995

  18. Similarity and heterogeneity effects in visual search are mediated by "segmentability".

    PubMed

    Utochkin, Igor S; Yurevich, Maria A

    2016-07-01

    The heterogeneity of our visual environment typically reduces the speed with which a singleton target can be found. Visual search theories explain this phenomenon via nontarget similarities and dissimilarities that affect grouping, perceptual noise, and so forth. In this study, we show that increasing the heterogeneity of a display can facilitate rather than inhibit visual search for size and orientation singletons when heterogeneous features smoothly fill the transition between highly distinguishable nontargets. We suggest that this smooth transition reduces the "segmentability" of dissimilar items into otherwise separate subsets, causing the visual system to treat them as a near-homogenous set standing apart from a singleton. PMID:26784002

  19. Active sensing in the categorization of visual patterns

    PubMed Central

    Yang, Scott Cheng-Hsin; Lengyel, Máté; Wolpert, Daniel M

    2016-01-01

    Interpreting visual scenes typically requires us to accumulate information from multiple locations in a scene. Using a novel gaze-contingent paradigm in a visual categorization task, we show that participants' scan paths follow an active sensing strategy that incorporates information already acquired about the scene and knowledge of the statistical structure of patterns. Intriguingly, categorization performance was markedly improved when locations were revealed to participants by an optimal Bayesian active sensor algorithm. By using a combination of a Bayesian ideal observer and the active sensor algorithm, we estimate that a major portion of this apparent suboptimality of fixation locations arises from prior biases, perceptual noise and inaccuracies in eye movements, and the central process of selecting fixation locations is around 70% efficient in our task. Our results suggest that participants select eye movements with the goal of maximizing information about abstract categories that require the integration of information from multiple locations. DOI: http://dx.doi.org/10.7554/eLife.12215.001 PMID:26880546

  20. Threat modulation of visual search efficiency in PTSD: A comparison of distinct stimulus categories.

    PubMed

    Olatunji, Bunmi O; Armstrong, Thomas; Bilsky, Sarah A; Zhao, Mimi

    2015-10-30

    Although an attentional bias for threat has been implicated in posttraumatic stress disorder (PTSD), the cues that best facilitate this bias are unclear. Some studies utilize images and others utilize facial expressions that communicate threat, but the comparability of these two types of stimuli in PTSD is unclear. The present study contrasted the effects of images and expressions with the same valence on visual search among veterans with PTSD and controls. Overall, PTSD patients had slower visual search speed than controls. Images caused greater disruption in visual search than expressions, and emotional content modulated this effect, with larger differences between images and expressions arising for more negatively valenced stimuli. However, this effect was not observed at the maximum number of items in the search array. Differences in visual search speed between images and expressions varied significantly between PTSD patients and controls only for anger, and only at the moderate level of task difficulty. Specifically, visual search speed did not differ significantly between PTSD patients and controls when participants viewed angry expressions, but PTSD patients displayed significantly slower visual search than controls when exposed to anger images. The implications of these findings for better understanding emotion-modulated attention in PTSD are discussed. PMID:26254798

  1. Dementia alters standing postural adaptation during a visual search task in older adult men

    PubMed Central

    Jor'dan, Azizah J.; McCarten, J. Riley; Rottunda, Susan; Stoffregen, Thomas A.; Manor, Brad; Wade, Michael G.

    2015-01-01

    This study investigated the effects of dementia on standing postural adaptation during performance of a visual search task. We recruited 16 older adults with dementia and 15 without dementia. Postural sway was assessed by recording medial-lateral (ML) and anterior-posterior (AP) center-of-pressure when standing with and without a visual search task; i.e., counting target letter frequency within a block of displayed randomized letters. ML sway variability was significantly higher in those with dementia during visual search as compared to those without dementia, and as compared to both groups during the control condition. AP sway variability was significantly greater in those with dementia as compared to those without dementia, irrespective of task condition. In the ML direction, the absolute and percent change in sway variability between the control condition and visual search (i.e., postural adaptation) was greater in those with dementia as compared to those without. In contrast, postural adaptation to visual search was similar between groups in the AP direction. As compared to those without dementia, those with dementia identified fewer letters on the visual task. In the non-dementia group only, greater increases in postural adaptation in both the ML and AP directions correlated with lower performance on the visual task. The observed relationship between postural adaptation during the visual search task and visual search task performance—in the non-dementia group only—suggests a critical link between perception and action. Dementia reduces the capacity to perform a visual-based task while standing and thus appears to disrupt this perception-action synergy. PMID:25770830
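As a rough illustration of the measures involved: sway variability is commonly operationalized as the standard deviation of center-of-pressure displacement in each direction, and adaptation as the change from the control condition to the task. The helper names below are assumptions, and the study's exact metrics may differ.

```python
import numpy as np

def sway_variability(cop_ml, cop_ap):
    """SD of center-of-pressure displacement in the ML and AP directions;
    one common operationalization of sway variability."""
    ml = np.asarray(cop_ml, float)
    ap = np.asarray(cop_ap, float)
    return float(np.std(ml)), float(np.std(ap))

def adaptation(var_task, var_control):
    """Absolute and percent change from the control condition to the task."""
    abs_change = var_task - var_control
    return abs_change, 100.0 * abs_change / var_control
```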

  2. Visual Iconic Patterns of Instant Messaging: Steps Towards Understanding Visual Conversations

    NASA Astrophysics Data System (ADS)

    Bays, Hillary

    An Instant Messaging (IM) conversation is a dynamic communication register made up of text, images, animation and sound played out on a screen, with potentially several parallel conversations and activities all within a physical environment. This article first examines how best to capture this unique gestalt using in situ recording techniques (video, screen capture, XML logs) which highlight the micro-phenomenal level of the exchange and the macro-social level of the interaction. Of particular interest are smileys, first as cultural artifacts in CMC in general, then as linguistic markers. A brief taxonomy of these markers is proposed in an attempt to clarify the frequency and patterns of their use. Then, focus is placed on their importance as perceptual cues which facilitate communication, while also serving as emotive and emphatic functional markers. We try to demonstrate that the use of smileys and animation is not arbitrary, but rather an organized and structured interactional practice. Finally, we discuss how the study of visual markers in IM could inform the study of other visual conversation codes, such as sign languages, which also have co-produced, physical behavior, suggesting the possibility of a visual phonology.

  3. Computer vision enhances mobile eye-tracking to expose expert cognition in natural-scene visual-search tasks

    NASA Astrophysics Data System (ADS)

    Keane, Tommy P.; Cahill, Nathan D.; Tarduno, John A.; Jacobs, Robert A.; Pelz, Jeff B.

    2014-02-01

    Mobile eye-tracking provides a rare opportunity to record and elucidate cognition in action. In our research, we are searching for patterns in, and distinctions between, the visual-search performance of experts and novices in the geosciences. Traveling to regions formed by various geological processes as part of an introductory field studies course in geology, we record the prima facie gaze patterns of experts and novices when they are asked to determine the modes of geological activity that have formed the scene-view presented to them. Recording eye video and scene video in natural settings generates complex imagery that requires advanced applications of computer vision research to generate registrations and mappings between the views of separate observers. By developing such mappings, we can place many observers into a single mathematical space where we can spatio-temporally analyze inter- and intra-subject fixations, saccades, and head motions. While working towards perfecting these mappings, we developed an updated experiment setup that allowed us to statistically analyze intra-subject eye-movement events without the need for a common domain. Through such analyses we are finding statistical differences between novices and experts in these visual-search tasks. In the course of this research we have developed a unified, open-source software framework for the processing, visualization, and interaction of mobile eye-tracking and high-resolution panoramic imagery.

  4. Visual search in scenes involves selective and non-selective pathways

    PubMed Central

    Wolfe, Jeremy M; Vo, Melissa L-H; Evans, Karla K; Greene, Michelle R

    2010-01-01

    How do we find objects in scenes? For decades, visual search models have been built on experiments in which observers search for targets, presented among distractor items, isolated and randomly arranged on blank backgrounds. Are these models relevant to search in continuous scenes? This paper argues that the mechanisms that govern artificial, laboratory search tasks do play a role in visual search in scenes. However, scene-based information is used to guide search in ways that had no place in earlier models. Search in scenes may be best explained by a dual-path model: A “selective” path in which candidate objects must be individually selected for recognition and a “non-selective” path in which information can be extracted from global / statistical information. PMID:21227734

  5. Strategies of the honeybee Apis mellifera during visual search for vertical targets presented at various heights: a role for spatial attention?

    PubMed Central

    Morawetz, Linde; Chittka, Lars; Spaethe, Johannes

    2014-01-01

    When honeybees are presented with a colour discrimination task, they tend to choose swiftly and accurately when objects are presented in the ventral part of their frontal visual field. In contrast, poor performance is observed when objects appear in the dorsal part. Here we investigate if this asymmetry is caused by fixed search patterns or if bees can use alternative search mechanisms such as spatial attention, which allows flexible focusing on different areas of the visual field. We asked individual honeybees to choose an orange rewarded target among blue distractors. Target and distractors were presented in the ventral visual field, the dorsal field or both. Bees presented with targets in the ventral visual field consistently had the highest search efficiency, with rapid decisions, high accuracy and direct flight paths. In contrast, search performance for dorsally located targets was inaccurate and slow at the beginning of the test phase, but bees increased their search performance significantly after a few learning trials: they found the target faster, made fewer errors and flew in a straight line towards the target. However, bees needed thrice as long to improve the search for a dorsally located target when the target’s position changed randomly between the ventral and the dorsal visual field. We propose that honeybees form expectations of the location of the target’s appearance and adapt their search strategy accordingly. Different possible mechanisms of this behavioural adaptation are discussed. PMID:25254109

  6. Pattern visual evoked potentials in the assessment of objective visual acuity in amblyopic children.

    PubMed

    Gundogan, Fatih C; Mutlu, Fatih M; Altinsoy, H Ibrahim; Tas, Ahmet; Oz, Oguzhan; Sobaci, Gungor

    2010-08-01

    The aim of this study was to determine the value of pattern visual evoked potentials (PVEP) recorded at five consecutive check sizes for the assessment of visual acuity (VA) in children. We planned to include 100 children with unilateral amblyopia (study group) and 90 healthy children with best-corrected visual acuity (BCVA) of 1.0 (control group). PVEP responses to five consecutive check sizes (2 degrees, 1 degree, 30', 15', and 7'), assumed to correspond to VAs of 0.1, 0.2, 0.4, 0.7 and 1.0 Snellen lines, were recorded in both groups. Eighty-five children in the study group (85.0%) and 74 children in the control group (82.2%) cooperated well with PVEP testing and were included. Normal values for latency, amplitude, and normalized interocular amplitude/latency difference at each check size were defined in the control group. PVEP-estimated VA (PVEP-VA) in the amblyopic eye was defined as the normal PVEP response to the smallest check size associated with a normal interocular difference from the non-amblyopic eye, and was considered predictive if it was within +/-1 Snellen line (1 decimal line) of the BCVA in that eye. Mean age was 9.7 +/- 1.9 and 9.9 +/- 2.2 years in the study and control groups, respectively. LogMAR (logarithm of the minimum angle of resolution) Snellen acuity correlated well with logMAR PVEP-VA (r = 0.525, P < 0.001) in the study group. The discrepancy between BCVA and PVEP-VA was within +/-1 Snellen line in 57.6% of the eyes. PVEP recorded at five consecutive check sizes may predict objective VA in amblyopic children. PMID:20376691
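The check-size-to-acuity mapping stated in the abstract can be encoded directly. The dictionary keys, helper names, and the logMAR conversion convention (logMAR = -log10 of decimal acuity) below are illustrative assumptions, not the study's code.

```python
import math

# Mapping assumed from the abstract: check sizes 2 deg, 1 deg, 30', 15', 7'
# correspond to decimal acuities 0.1, 0.2, 0.4, 0.7, 1.0.
CHECK_TO_DECIMAL = {"2deg": 0.1, "1deg": 0.2, "30min": 0.4,
                    "15min": 0.7, "7min": 1.0}

def pvep_va(smallest_normal_check):
    """PVEP-estimated VA: decimal acuity of the smallest check size with a
    normal response, plus its logMAR value."""
    dec = CHECK_TO_DECIMAL[smallest_normal_check]
    return dec, -math.log10(dec)

def within_one_line(logmar_a, logmar_b):
    # treating one line as 0.1 logMAR, a common (assumed) convention
    return abs(logmar_a - logmar_b) <= 0.1 + 1e-9
```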

  7. Generalized Pattern Search Algorithm for Peptide Structure Prediction

    PubMed Central

    Nicosia, Giuseppe; Stracquadanio, Giovanni

    2008-01-01

    Finding the near-native structure of a protein is one of the most important open problems in structural biology and biological physics. The problem becomes dramatically more difficult when a given protein has no regular secondary structure or it does not show a fold similar to structures already known. This situation occurs frequently when we need to predict the tertiary structure of small molecules, called peptides. In this research work, we propose a new ab initio algorithm, the generalized pattern search algorithm, based on the well-known class of Search-and-Poll algorithms. We performed an extensive set of simulations over a well-known set of 44 peptides to investigate the robustness and reliability of the proposed algorithm, and we compared the peptide conformation with a state-of-the-art algorithm for peptide structure prediction known as PEPstr. In particular, we tested the algorithm on the instances proposed by the originators of PEPstr, to validate the proposed algorithm; the experimental results confirm that the generalized pattern search algorithm outperforms PEPstr by 21.17% in terms of average root mean-square deviation, RMSD Cα. PMID:18487293
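The evaluation metric above, RMSD over Cα atoms, is conventionally computed after optimal superposition of the two conformations with the Kabsch algorithm. A generic sketch of that metric (not the paper's code):

```python
import numpy as np

def rmsd_ca(P, Q):
    """Calpha RMSD after optimal superposition (Kabsch algorithm).

    P, Q: (n, 3) matched Calpha coordinates of the two conformations.
    """
    P = P - P.mean(0)                            # center both structures
    Q = Q - Q.mean(0)
    U, S, Vt = np.linalg.svd(P.T @ Q)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # avoid improper rotation
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T      # optimal rotation P -> Q
    return float(np.sqrt(((P @ R.T - Q) ** 2).sum() / len(P)))
```

Two identical structures that differ only by a rigid motion give an RMSD of (numerically) zero, which is the sanity check below.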

  8. Plans, Patterns, and Move Categories Guiding a Highly Selective Search

    NASA Astrophysics Data System (ADS)

    Trippen, Gerhard

    In this paper we present our ideas for an Arimaa-playing program (also called a bot) that uses plans and pattern matching to guide a highly selective search. We restrict move generation to moves in certain move categories to reduce the number of moves considered by the bot significantly. Arimaa is a modern board game that can be played with a standard Chess set. However, the rules of the game are not at all like those of Chess. Furthermore, Arimaa was designed to be as simple and intuitive as possible for humans, yet challenging for computers. While all established Arimaa bots use alpha-beta search with a variety of pruning techniques and other heuristics ending in an extensive positional leaf node evaluation, our new bot, Rat, starts with a positional evaluation of the current position. Based on features found in the current position - supported by pattern matching using a directed position graph - our bot Rat decides which of a given set of plans to follow. The plan then dictates what types of moves can be chosen. This is another major difference from bots that generate "all" possible moves for a particular position. Rat is only allowed to generate moves that belong to certain categories. Leaf nodes are evaluated only by a straightforward material evaluation to help avoid moves that lose material. This highly selective search looks, on average, at only 5 moves out of 5,000 to over 40,000 possible moves in a middle game position.

  9. Individual differences in visual search: relationship to autistic traits, discrimination thresholds, and speed of processing.

    PubMed

    Brock, Jon; Xu, Jing Y; Brooks, Kevin R

    2011-01-01

    Enhanced visual search is widely reported in autism. Here we note a similar advantage for university students self-reporting higher levels of autism-like traits. Contrary to prevailing theories of autism, performance was not associated with perceptual-discrimination thresholds for the same stimuli, but was associated with inspection-time threshold--a measure of speed of perceptual processing. Enhanced visual search in autism may, therefore, at least partially be explained by faster speed of processing. PMID:21936301

  10. Locally-adaptive and memetic evolutionary pattern search algorithms.

    PubMed

    Hart, William E

    2003-01-01

    Recent convergence analyses of evolutionary pattern search algorithms (EPSAs) have shown that these methods have a weak stationary point convergence theory for a broad class of unconstrained and linearly constrained problems. This paper describes how the convergence theory for EPSAs can be adapted to allow each individual in a population to have its own mutation step length (similar to the design of evolutionary programming and evolution strategies algorithms). These are called locally-adaptive EPSAs (LA-EPSAs) since each individual's mutation step length is independently adapted in different local neighborhoods. The paper also describes a variety of standard formulations of evolutionary algorithms that can be used for LA-EPSAs. Further, it is shown how this convergence theory can be applied to memetic EPSAs, which use local search to refine points within each iteration. PMID:12804096

  11. Development of a flow visualization apparatus. [to study convection flow patterns

    NASA Technical Reports Server (NTRS)

    Spradley, L. W.

    1975-01-01

    The use of an optical flow visualization device for studying convection flow patterns was investigated. The investigation considered use of a shadowgraph, schlieren and other means for visualizing the flow. A laboratory model was set up to provide data on the proper optics and photography procedures to best visualize the flow. A preliminary design of a flow visualization system is provided as a result of the study. Recommendations are given for a flight test program utilizing the flow visualization apparatus.

  12. Generalized pattern search algorithms with adaptive precision function evaluations

    SciTech Connect

    Polak, Elijah; Wetter, Michael

    2003-05-14

    In the literature on generalized pattern search algorithms, convergence to a stationary point of a once continuously differentiable cost function is established under the assumption that the cost function can be evaluated exactly. However, there is a large class of engineering problems where the numerical evaluation of the cost function involves the solution of systems of differential algebraic equations. Since the termination criteria of the numerical solvers often depend on the design parameters, computer code for solving these systems usually defines a numerical approximation to the cost function that is discontinuous with respect to the design parameters. Standard generalized pattern search algorithms have been applied heuristically to such problems, but no convergence properties have been stated. In this paper we extend a class of generalized pattern search algorithms to a form that uses adaptive precision approximations to the cost function. These numerical approximations need not define a continuous function. Our algorithms can be used for solving linearly constrained problems with cost functions that are at least locally Lipschitz continuous. Assuming that the cost function is smooth, we prove that our algorithms converge to a stationary point. Under the weaker assumption that the cost function is only locally Lipschitz continuous, we show that our algorithms converge to points at which the Clarke generalized directional derivatives are nonnegative in predefined directions. An important feature of our adaptive precision scheme is the use of coarse approximations in the early iterations, with the approximation precision controlled by a test. Such an approach leads to substantial time savings in minimizing computationally expensive functions.
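The core idea, polling a pattern with a cost approximation whose precision tightens as the mesh shrinks, can be caricatured in a few lines. The sufficient-decrease test and the precision schedule below are simplified stand-ins for the paper's test-controlled scheme, and all names are assumptions.

```python
import numpy as np

def gps_adaptive(f_eps, x0, step=1.0, eps0=1e-1, tol=1e-5, max_iter=5000):
    """Sketch: f_eps(x, eps) returns an approximation of the cost accurate
    to ~eps. Coarse eps is used early; precision tightens only as the
    search stalls, saving expensive accurate evaluations."""
    x = np.asarray(x0, float)
    n, eps = len(x), eps0
    dirs = np.vstack([np.eye(n), -np.eye(n)])    # positive spanning set
    for _ in range(max_iter):
        if step <= tol:
            break
        fx = f_eps(x, eps)
        for d in dirs:
            # accept only decrease that exceeds the approximation error
            if f_eps(x + step * d, eps) < fx - eps:
                x = x + step * d
                break
        else:
            step *= 0.5                  # failed poll: refine the mesh ...
            eps = min(eps, step ** 2)    # ... and tighten precision with it
    return x
```

With an exact cost function the sketch reduces to ordinary pattern search with a sufficient-decrease condition.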

  13. Person perception informs understanding of cognition during visual search.

    PubMed

    Brennan, Allison A; Watson, Marcus R; Kingstone, Alan; Enns, James T

    2011-08-01

    Does person perception--the impressions we form from watching others--hold clues to the mental states of people engaged in cognitive tasks? We investigated this with a two-phase method: In Phase 1, participants searched on a computer screen (Experiment 1) or in an office (Experiment 2); in Phase 2, other participants rated the searchers' video-recorded behavior. The results showed that blind raters are sensitive to individual differences in search proficiency and search strategy, as well as to environmental factors affecting search difficulty. Also, different behaviors were linked to search success in each setting: Eye movement frequency predicted successful search on a computer screen; head movement frequency predicted search success in an office. In both settings, an active search strategy and positive emotional expressions were linked to search success. These data indicate that person perception informs cognition beyond the scope of performance measures, offering the potential for new measurements of cognition that are both rich and unobtrusive. PMID:21626239

  14. The Role of Target-Distractor Relationships in Guiding Attention and the Eyes in Visual Search

    ERIC Educational Resources Information Center

    Becker, Stefanie I.

    2010-01-01

    Current models of visual search assume that visual attention can be guided by tuning attention toward specific feature values (e.g., particular size, color) or by inhibiting the features of the irrelevant nontargets. The present study demonstrates that attention and eye movements can also be guided by a relational specification of how the target…

  15. Learning by Selection: Visual Search and Object Perception in Young Infants

    ERIC Educational Resources Information Center

    Amso, Dima; Johnson, Scott P.

    2006-01-01

    The authors examined how visual selection mechanisms may relate to developing cognitive functions in infancy. Twenty-two 3-month-old infants were tested in 2 tasks on the same day: perceptual completion and visual search. In the perceptual completion task, infants were habituated to a partly occluded moving rod and subsequently presented with …

  16. Detection of Emotional Faces: Salient Physical Features Guide Effective Visual Search

    ERIC Educational Resources Information Center

    Calvo, Manuel G.; Nummenmaa, Lauri

    2008-01-01

    In this study, the authors investigated how salient visual features capture attention and facilitate detection of emotional facial expressions. In a visual search task, a target emotional face (happy, disgusted, fearful, angry, sad, or surprised) was presented in an array of neutral faces. Faster detection of happy and, to a lesser extent,…

  17. Hand Movement Deviations in a Visual Search Task with Cross Modal Cuing

    ERIC Educational Resources Information Center

    Aslan, Asli; Aslan, Hurol

    2007-01-01

    The purpose of this study is to demonstrate the cross-modal effects of an auditory organization on a visual search task and to investigate the influence of the level of detail in instructions describing or hinting at the associations between auditory stimuli and the possible locations of a visual target. In addition to measuring the participants'…

  18. The Effects of Presentation Method and Information Density on Visual Search Ability and Working Memory Load

    ERIC Educational Resources Information Center

    Chang, Ting-Wen; Kinshuk; Chen, Nian-Shing; Yu, Pao-Ta

    2012-01-01

    This study investigates the effects of successive and simultaneous information presentation methods on learner's visual search ability and working memory load for different information densities. Since the processing of information in the brain depends on the capacity of visual short-term memory (VSTM), the limited information processing capacity…

  19. The preview benefit in single-feature and conjunction search: Constraints of visual marking.

    PubMed

    Meinhardt, Günter; Persike, Malte

    2015-01-01

    Previewing distracters enhances the efficiency of visual search. Watson and Humphreys (1997) proposed that the preview benefit rests on visual marking, a mechanism which actively encodes distracter locations at preview and inhibits them afterwards at search. As Watson and Humphreys did, we used a letter-color search task to study constraints of visual marking in conjunction search and near-efficient single-feature search with single-colored and homogeneous distracter letters. Search performance was measured for fixed target and distracter features (block design) and for randomly changed features across trials (random design). In single-feature search there was a full preview benefit for both block and random designs. In conjunction search a full preview benefit was obtained only for the block design; randomly changing target and distracter features disrupted the preview benefit. However, the preview benefit was restored when the distracters were organized in spatially coherent blocks. These findings imply that the temporal segregation of old and new items is sufficient for visual marking in near-efficient single-feature search, while in conjunction search it is not. We propose a supplanting grouping principle for the preview benefit: When the new items add a new color, conjunction search is initialized and attentional resources are withdrawn from the marking mechanism. Visual marking can be restored by a second grouping principle that joins with temporal asynchrony. This principle can be either spatial or feature based. In the case of the latter, repetition priming is necessary to establish joint grouping by color and temporal asynchrony. PMID:26382004

  20. Long-Term Memory Search across the Visual Brain

    PubMed Central

    Fedurco, Milan

    2012-01-01

    Signal transmission from the human retina to visual cortex and connectivity of visual brain areas are relatively well understood. How specific visual perceptions transform into corresponding long-term memories remains unknown. Here, I will review recent Blood Oxygenation Level-Dependent functional Magnetic Resonance Imaging (BOLD fMRI) in humans together with molecular biology studies (animal models) aiming to understand how the retinal image gets transformed into so-called visual (retinotopic) maps. The broken object paradigm has been chosen in order to illustrate the complexity of multisensory perception of simple objects subject to visual (rather than semantic) memory encoding. The author explores how amygdala projections to the visual cortex affect memory formation and proposes the choice of experimental techniques needed to explain our massive visual memory capacity. Maintenance of the visual long-term memories is suggested to require recycling of GluR2-containing α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid receptors (AMPAR) and β2-adrenoreceptors at the postsynaptic membrane, which critically depends on the catalytic activity of the N-ethylmaleimide-sensitive factor (NSF) and protein kinase PKMζ. PMID:22900206

  1. Serial and Parallel Attentive Visual Searches: Evidence from Cumulative Distribution Functions of Response Times

    ERIC Educational Resources Information Center

    Sung, Kyongje

    2008-01-01

    Participants searched a visual display for a target among distractors. Each of 3 experiments tested a condition proposed to require attention and for which certain models propose a serial search. Serial versus parallel processing was tested by examining effects on response time means and cumulative distribution functions. In 2 conditions, the…

  2. Cortical Dynamics of Contextually Cued Attentive Visual Learning and Search: Spatial and Object Evidence Accumulation

    ERIC Educational Resources Information Center

    Huang, Tsung-Ren; Grossberg, Stephen

    2010-01-01

    How do humans use target-predictive contextual information to facilitate visual search? How are consistently paired scenic objects and positions learned and used to more efficiently guide search in familiar scenes? For example, humans can learn that a certain combination of objects may define a context for a kitchen and trigger a more efficient…

  3. Contextual Cueing in Multiconjunction Visual Search Is Dependent on Color- and Configuration-Based Intertrial Contingencies

    ERIC Educational Resources Information Center

    Geyer, Thomas; Shi, Zhuanghua; Muller, Hermann J.

    2010-01-01

    Three experiments examined memory-based guidance of visual search using a modified version of the contextual-cueing paradigm (Jiang & Chun, 2001). The target, if present, was a conjunction of color and orientation, with target (and distractor) features randomly varying across trials (multiconjunction search). Under these conditions, reaction times…

  4. Brief Report: Eye Movements during Visual Search Tasks Indicate Enhanced Stimulus Discriminability in Subjects with PDD

    ERIC Educational Resources Information Center

    Kemner, Chantal; van Ewijk, Lizet; van Engeland, Herman; Hooge, Ignace

    2008-01-01

    Subjects with PDD excel on certain visuo-spatial tasks, among them visual search tasks, and this has been attributed to enhanced perceptual discrimination. However, an alternative explanation is that subjects with PDD show a different, more effective search strategy. The present study aimed to test both hypotheses by measuring eye movements…

  5. Central and Peripheral Vision Loss Differentially Affects Contextual Cueing in Visual Search

    ERIC Educational Resources Information Center

    Geringswald, Franziska; Pollmann, Stefan

    2015-01-01

    Visual search for targets in repeated displays is more efficient than search for the same targets in random distractor layouts. Previous work has shown that this contextual cueing is severely impaired under central vision loss. Here, we investigated whether central vision loss, simulated with gaze-contingent displays, prevents the incidental…

  6. Visual Search and Line Bisection in Hemianopia: Computational Modelling of Cortical Compensatory Mechanisms and Comparison with Hemineglect

    PubMed Central

    Lanyon, Linda J.; Barton, Jason J. S.

    2013-01-01

    Hemianopia patients have lost vision from the contralateral hemifield, but make behavioural adjustments to compensate for this field loss. As a result, their visual performance and behaviour contrast with those of hemineglect patients who fail to attend to objects contralateral to their lesion. These conditions differ in their ocular fixations and perceptual judgments. During visual search, hemianopic patients make more fixations in contralesional space while hemineglect patients make fewer. During line bisection, hemianopic patients fixate the contralesional line segment more and make a small contralesional bisection error, while hemineglect patients make few contralesional fixations and a larger ipsilesional bisection error. Hence, there is an attentional failure for contralesional space in hemineglect but a compensatory adaptation to attend more to the blind side in hemianopia. A challenge for models of visual attentional processes is to show how compensation is achieved in hemianopia, and why such processes are hindered or inaccessible in hemineglect. We used a neurophysiology-derived computational model to examine possible cortical compensatory processes in simulated hemianopia from a V1 lesion and compared results with those obtained with the same processes under conditions of simulated hemineglect from a parietal lesion. A spatial compensatory bias to increase attention contralesionally replicated hemianopic scanning patterns during visual search but not during line bisection. To reproduce the latter required a second process, an extrastriate lateral connectivity facilitating form completion into the blind field: this allowed accurate placement of fixations on contralesional stimuli and reproduced fixation patterns and the contralesional bisection error of hemianopia. Neither of these two cortical compensatory processes was effective in ameliorating the ipsilesional bias in the hemineglect model. Our results replicate normal and pathological patterns of

  7. Performance of visual search tasks from various types of contour information.

    PubMed

    Itan, Liron; Yitzhaky, Yitzhak

    2013-03-01

    A recently proposed visual aid for patients with a restricted visual field (tunnel vision) combines a see-through head-mounted display and a simultaneous minified contour view of the wide-field image of the environment. Such a widening of the effective visual field is helpful for tasks, such as visual search, mobility, and orientation. The sufficiency of image contours for performing everyday visual tasks is of major importance for this application, as well as for other applications, and for basic understanding of human vision. This research aims to examine and compare the use of different types of automatically created contours, and contour representations, for practical everyday visual operations using commonly observed images. The visual operations include visual searching for items, such as cutlery, housewares, etc. Considering different recognition levels, identification of an object is distinguished from mere detection (when the object is not necessarily identified). Some nonconventional visual-based contour representations were developed for this purpose. Experiments were performed with normal-vision subjects by superposing contours of the wide field of the scene over a narrow field (see-through) background. From the results, it appears that about 85% success is obtained for searched-object identification when the best contour versions are employed. Pilot experiments with video simulations are reported at the end of the paper. PMID:23456115

  8. Visual height intolerance and acrophobia: clinical characteristics and comorbidity patterns.

    PubMed

    Kapfhammer, Hans-Peter; Huppert, Doreen; Grill, Eva; Fitz, Werner; Brandt, Thomas

    2015-08-01

    The purpose of this study was to estimate the general population lifetime and point prevalence of visual height intolerance (vHI) and acrophobia, to define their clinical characteristics, and to determine their anxious and depressive comorbidities. A case-control study was conducted within a German population-based cross-sectional telephone survey. A representative sample of 2,012 individuals aged 14 and above was selected. Defined neurological conditions (migraine, Menière's disease, motion sickness), symptom pattern, age of first manifestation, precipitating height stimuli, course of illness, psychosocial impairment, and comorbidity patterns (anxiety conditions, depressive disorders according to DSM-IV-TR) for vHI and acrophobia were assessed. The lifetime prevalence of vHI was 28.5% (women 32.4%, men 24.5%). Initial attacks occurred predominantly (36%) in the second decade. A rapid generalization to other height stimuli and a chronic course of illness with at least moderate impairment were observed. A total of 22.5% of individuals with vHI experienced the intensity of panic attacks. The lifetime prevalence of acrophobia was 6.4% (women 8.6%, men 4.1%), and point prevalence was 2.0% (women 2.8%; men 1.1%). VHI and even more acrophobia were associated with high rates of comorbid anxious and depressive conditions. Migraine was both a significant predictor of later acrophobia and a significant consequence of previous acrophobia. VHI affects nearly a third of the general population; in more than 20% of these persons, vHI occasionally develops into panic attacks and in 6.4%, it escalates to acrophobia. Symptoms and degree of social impairment form a continuum of mild to seriously distressing conditions in susceptible subjects. PMID:25262317

  9. Pigeons show efficient visual search by category: effects of typicality and practice.

    PubMed

    Ohkita, Midori; Jitsumori, Masako

    2012-11-01

    Three experiments investigated category search in pigeons, using an artificial category created by morphing of human faces. Four pigeons were trained to search for category members among nonmembers, with each target item consisting of an item-specific component and a common component diagnostic of the category. Experiment 1 found that search was more efficient with homogeneous than heterogeneous distractors. In Experiment 2, the pigeons successfully searched for target exemplars having novel item-specific components. Practice including these items enabled the pigeons to efficiently search for the highly familiar members. The efficient search transferred immediately to more typical novel exemplars in Experiment 3. With further practice, the pigeons eventually developed efficient search for individual less typical exemplars. Results are discussed in the context of visual search theories and automatic processing of individual exemplars. PMID:23022550

  10. Mouse Visual Neocortex Supports Multiple Stereotyped Patterns of Microcircuit Activity

    PubMed Central

    Sadovsky, Alexander J.

    2014-01-01

    Spiking correlations between neocortical neurons provide insight into the underlying synaptic connectivity that defines cortical microcircuitry. Here, using two-photon calcium fluorescence imaging, we observed the simultaneous dynamics of hundreds of neurons in slices of mouse primary visual cortex (V1). Consistent with a balance of excitation and inhibition, V1 dynamics were characterized by a linear scaling between firing rate and circuit size. Using lagged firing correlations between neurons, we generated functional wiring diagrams to evaluate the topological features of V1 microcircuitry. We found that circuit connectivity exhibited both cyclic graph motifs, indicating recurrent wiring, and acyclic graph motifs, indicating feedforward wiring. After overlaying the functional wiring diagrams onto the imaged field of view, we found properties consistent with Rentian scaling: wiring diagrams were topologically efficient because they minimized wiring with a modular architecture. Within single imaged fields of view, V1 contained multiple discrete circuits that were overlapping and highly interdigitated but were still distinct from one another. The majority of neurons that were shared between circuits displayed peri-event spiking activity whose timing was specific to the active circuit, whereas spike times for a smaller percentage of neurons were invariant to circuit identity. These data provide evidence that V1 microcircuitry exhibits balanced dynamics, is efficiently arranged in anatomical space, and is capable of supporting a diversity of multineuron spike firing patterns from overlapping sets of neurons. PMID:24899701

  11. Common Visual Pattern Discovery via Nonlinear Mean Shift Clustering.

    PubMed

    Wang, Linbo; Tang, Dong; Guo, Yanwen; Do, Minh N

    2015-12-01

    Discovering common visual patterns (CVPs) from two images is a challenging task due to geometric and photometric deformations as well as noise and clutter. The problem is generally boiled down to recovering correspondences of local invariant features, and is conventionally addressed by graph-based quadratic optimization approaches, which often suffer from high computational cost. In this paper, we propose an efficient approach by viewing the problem from a novel perspective. In particular, we consider each CVP as a common object in two images with a group of coherently deformed local regions. A geometric space with matrix Lie group structure is constructed by stacking up transformations estimated from initially appearance-matched local interest region pairs. This is followed by a mean shift clustering stage to group together those close transformations in the space. Joining regions associated with transformations of the same group together within each input image forms two large regions sharing similar geometric configuration, which naturally leads to a CVP. To account for the non-Euclidean nature of the matrix Lie group, mean shift vectors are derived in the corresponding Lie algebra vector space with a newly provided effective distance measure. Extensive experiments on single and multiple common object discovery tasks as well as near-duplicate image retrieval verify the robustness and efficiency of the proposed approach. PMID:26415176
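
    As a rough illustration of the clustering stage only, here is plain Euclidean mean shift with a Gaussian kernel. The paper itself operates on stacked transformations in a matrix Lie algebra with a purpose-built distance measure, which this sketch does not reproduce; the data and bandwidth below are invented.

```python
import numpy as np

def mean_shift(points, bandwidth=1.0, iters=50):
    """Shift every point toward the kernel-weighted mean of the data
    until points collect at density modes (Euclidean simplification of
    the Lie-algebra variant described in the abstract)."""
    shifted = points.copy()
    for _ in range(iters):
        for i in range(len(shifted)):
            d2 = np.sum((points - shifted[i]) ** 2, axis=1)
            w = np.exp(-d2 / (2 * bandwidth ** 2))   # Gaussian kernel
            shifted[i] = w @ points / w.sum()        # weighted mean
    return shifted

# Two synthetic clusters standing in for "close transformations"
rng = np.random.default_rng(0)
pts = np.vstack([rng.normal(0, 0.1, (20, 2)),
                 rng.normal(3, 0.1, (20, 2))])
modes = mean_shift(pts, bandwidth=0.5)
# Points from each cluster converge onto that cluster's density mode.
```

Grouping the points whose converged modes coincide would then correspond to joining coherently transformed regions into a candidate CVP.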

  12. Multisensory brand search: How the meaning of sounds guides consumers' visual attention.

    PubMed

    Knoeferle, Klemens M; Knoeferle, Pia; Velasco, Carlos; Spence, Charles

    2016-06-01

    Building on models of crossmodal attention, the present research proposes that brand search is inherently multisensory, in that the consumers' visual search for a specific brand can be facilitated by semantically related stimuli that are presented in another sensory modality. A series of 5 experiments demonstrates that the presentation of spatially nonpredictive auditory stimuli associated with products (e.g., usage sounds or product-related jingles) can crossmodally facilitate consumers' visual search for, and selection of, products. Eye-tracking data (Experiment 2) revealed that the crossmodal effect of auditory cues on visual search manifested itself not only in RTs, but also in the earliest stages of visual attentional processing, thus suggesting that the semantic information embedded within sounds can modulate the perceptual saliency of the target products' visual representations. Crossmodal facilitation was even observed for newly learnt associations between unfamiliar brands and sonic logos, implicating multisensory short-term learning in establishing audiovisual semantic associations. The facilitation effect was stronger when searching complex rather than simple visual displays, thus suggesting a modulatory role of perceptual load. PMID:27295466

  13. Parametric Modeling of Visual Search Efficiency in Real Scenes

    PubMed Central

    Zhang, Xing; Li, Qingquan; Zou, Qin; Fang, Zhixiang; Zhou, Baoding

    2015-01-01

    How should the efficiency of searching for real objects in real scenes be measured? Traditionally, when searching for artificial targets, e.g., letters or rectangles, among distractors, efficiency is measured by a reaction time (RT) × Set Size function. However, it is not clear whether the set size of real scenes is as effective a parameter for measuring search efficiency as the set size of artificial scenes. The present study investigated search efficiency in real scenes based on a combination of low-level features, e.g., visible size and target-flanker separation factors, and high-level features, e.g., category effect and target template. Visible size refers to the pixel number of visible parts of an object in a scene, whereas separation is defined as the sum of the flank distances from a target to the nearest distractors. During the experiment, observers searched for targets in various urban scenes, using pictures as the target templates. The results indicated that the effect of the set size in real scenes decreased according to the variances of other factors, e.g., visible size and separation. Increasing visible size and separation factors increased search efficiency. Based on these results, an RT × Visible Size × Separation function was proposed. These results suggest that the proposed function is a practicable predictor of search efficiency in real scenes. PMID:26030908
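
    The proposed RT × Visible Size × Separation function amounts to regressing reaction time on those two factors. The sketch below fits such a model by ordinary least squares on synthetic data; the coefficients and ranges are invented for illustration and are not the study's estimates.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
visible_size = rng.uniform(50, 500, n)   # visible pixels of the target
separation = rng.uniform(10, 100, n)     # summed flank distances

# Synthetic RTs: larger visible size and separation speed up search
# (illustrative coefficients only).
rt = 1500 - 1.2 * visible_size - 4.0 * separation + rng.normal(0, 30, n)

# Fit RT = b0 + b1 * visible_size + b2 * separation by least squares
X = np.column_stack([np.ones(n), visible_size, separation])
coef, *_ = np.linalg.lstsq(X, rt, rcond=None)
# coef recovers roughly [1500, -1.2, -4.0], i.e. negative slopes
# mean higher visible size and separation predict faster search.
```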

  14. Faceted Visualization of Three Dimensional Neuroanatomy By Combining Ontology with Faceted Search

    PubMed Central

    Veeraraghavan, Harini; Miller, James V.

    2013-01-01

    In this work, we present a faceted-search based approach for visualization of anatomy by combining a three-dimensional digital atlas with an anatomy ontology. Specifically, our approach provides a drill-down search interface that exposes the relevant pieces of information (obtained by searching the ontology) for a user query. Hence, the user can produce visualizations starting with minimally specified queries. Furthermore, by automatically translating the user queries into the controlled terminology, our approach eliminates the need for the user to use controlled terminology. We demonstrate the scalability of our approach using an abdominal atlas and the same ontology. We implemented our visualization tool on the open-source 3D Slicer software. We present results of our visualization approach by combining a modified Foundational Model of Anatomy (FMA) ontology with the Surgical Planning Laboratory (SPL) Brain 3D digital atlas, and geometric models specific to patients computed using the SPL brain tumor dataset. PMID:24006207

  15. Visual Search Performance in the Autism Spectrum II: The Radial Frequency Search Task with Additional Segmentation Cues

    ERIC Educational Resources Information Center

    Almeida, Renita A.; Dickinson, J. Edwin; Maybery, Murray T.; Badcock, Johanna C.; Badcock, David R.

    2010-01-01

    The Embedded Figures Test (EFT) requires detecting a shape within a complex background and individuals with autism or high Autism-spectrum Quotient (AQ) scores are faster and more accurate on this task than controls. This research aimed to uncover the visual processes producing this difference. Previously we developed a search task using radial…

  16. Effects of targets embedded within words in a visual search task

    PubMed Central

    Grabbe, Jeremy W.

    2014-01-01

    Visual search performance can be negatively affected when both targets and distracters share a dimension relevant to the task. This study examined whether visual search performance would be influenced by distracters that affect a dimension irrelevant to the task. In Experiment 1, within the letter string of a letter search task, target letters were embedded within a word. Experiment 2 compared targets embedded in words to targets embedded in nonwords. Experiment 3 compared targets embedded in words to a condition in which a word was present in a letter string, but the target letter, although in the letter string, was not embedded within the word. The results showed that visual search performance was negatively affected when a target appeared within a high-frequency word. These results suggest that the interaction and effectiveness of distracters are not merely dependent upon common features of the target and distracters, but can be affected by word frequency (a dimension not related to the task demands). PMID:24855497

  17. How Temporal and Spatial Aspects of Presenting Visualizations Affect Learning about Locomotion Patterns

    ERIC Educational Resources Information Center

    Imhof, Birgit; Scheiter, Katharina; Edelmann, Jorg; Gerjets, Peter

    2012-01-01

    Two studies investigated the effectiveness of dynamic and static visualizations for a perceptual learning task (locomotion pattern classification). In Study 1, seventy-five students viewed either dynamic, static-sequential, or static-simultaneous visualizations. For tasks of intermediate difficulty, dynamic visualizations led to better…

  18. Performance in a Visual Search Task Uniquely Predicts Reading Abilities in Third-Grade Hong Kong Chinese Children

    ERIC Educational Resources Information Center

    Liu, Duo; Chen, Xi; Chung, Kevin K. H.

    2015-01-01

    This study examined the relation between the performance in a visual search task and reading ability in 92 third-grade Hong Kong Chinese children. The visual search task, which is considered a measure of visual-spatial attention, accounted for unique variance in Chinese character reading after controlling for age, nonverbal intelligence,…

  19. Visual Search in ASD: Instructed Versus Spontaneous Local and Global Processing.

    PubMed

    Van der Hallen, Ruth; Evers, Kris; Boets, Bart; Steyaert, Jean; Noens, Ilse; Wagemans, Johan

    2016-09-01

    Visual search has been used extensively to investigate differences in mid-level visual processing between individuals with ASD and TD individuals. The current study employed two visual search paradigms with Gaborized stimuli to assess the impact of task distractors (Experiment 1) and task instruction (Experiment 2) on local-global visual processing in ASD versus TD children. Experiment 1 revealed both groups to be equally sensitive to the absence or presence of a distractor, regardless of the type of target or type of distractor. Experiment 2 revealed a differential effect of task instruction for ASD compared to TD, regardless of the type of target. Taken together, these results stress the importance of task factors in the study of local-global visual processing in ASD. PMID:27334873

  20. Markov Models of Search State Patterns in a Hypertext Information Retrieval System.

    ERIC Educational Resources Information Center

    Qiu, Liwen

    1993-01-01

    Describes research that was conducted to determine the search state patterns through which users retrieve information in hypertext systems. Use of the Markov model to describe users' search behavior is discussed, and search patterns of different user groups were studied by comparing transition probability matrices. (Contains 25 references.) (LRW)
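
    The transition-matrix comparison described above can be made concrete with a small sketch. The search states and probabilities below are hypothetical, chosen only to show how two user groups' Markov models might be contrasted; they are not Qiu's data.

```python
import numpy as np

# Hypothetical search states in a hypertext retrieval system:
# 0 = formulating a query, 1 = browsing links, 2 = viewing a document
states = ["query", "browse", "view"]

# Row-stochastic transition matrices for two user groups
# (illustrative numbers only; each row sums to 1).
novices = np.array([[0.2, 0.6, 0.2],
                    [0.1, 0.5, 0.4],
                    [0.3, 0.4, 0.3]])
experts = np.array([[0.1, 0.3, 0.6],
                    [0.1, 0.2, 0.7],
                    [0.5, 0.2, 0.3]])

def stationary(P):
    """Stationary distribution: the left eigenvector of P for
    eigenvalue 1, normalized to sum to 1."""
    vals, vecs = np.linalg.eig(P.T)
    v = np.real(vecs[:, np.argmax(np.real(vals))])
    return v / v.sum()

# Long-run share of time each group spends in each state
novice_mix = stationary(novices)
expert_mix = stationary(experts)
```

Comparing the matrices directly, or their stationary distributions, is one way to quantify how the groups' search behaviors differ.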

  1. Adaptation of video game UVW mapping to 3D visualization of gene expression patterns

    NASA Astrophysics Data System (ADS)

    Vize, Peter D.; Gerth, Victor E.

    2007-01-01

    Analysis of gene expression patterns within an organism plays a critical role in associating genes with biological processes in both health and disease. During embryonic development, the analysis and comparison of different gene expression patterns allows biologists to identify candidate genes that may regulate the formation of normal tissues and organs and to search for genes associated with congenital diseases. No two individual embryos, or organs, are exactly the same shape or size, so comparing spatial gene expression in one embryo to that in another is difficult. We will present our efforts in comparing gene expression data collected using both volumetric and projection approaches. Volumetric data is highly accurate but difficult to process and compare. Projection methods use UV mapping to align texture maps to standardized spatial frameworks. This approach is less accurate but is very rapid and requires very little processing. We have built a database of over 180 3D models depicting gene expression patterns mapped onto the surface of spline-based embryo models. Gene expression data in different models can easily be compared to determine common regions of activity. Visualization software, both Java- and OpenGL-based, optimized for viewing 3D gene expression data will also be demonstrated.

  2. "Hot" facilitation of "cool" processing: emotional distraction can enhance priming of visual search.

    PubMed

    Kristjánsson, Árni; Óladóttir, Berglind; Most, Steven B

    2013-02-01

    Emotional stimuli often capture attention and disrupt effortful cognitive processing. However, cognitive processes vary in the degree to which they require effort. We investigated the impact of emotional pictures on visual search and on automatic priming of search. Observers performed visual search after task-irrelevant neutral or emotionally evocative photographs. Search performance was generally impaired after emotional pictures, but improvement (measured both with inverse efficiency and sensitivity to briefly presented targets) as a function of incremental between-trial target-color repetition was strongest after emotional pictures. For observers showing the largest general effect of emotional pictures, this pattern reversed: after four or more trials containing the same search target, performance became better following emotional pictures than following neutral ones. This suggests that although emotional pictures disrupt effortful attention, this detriment can be overcome--to the point where performance is enhanced by emotional stimuli--when the task involves prepotent task priorities. PMID:22642218

  3. Playing shooter and driving videogames improves top-down guidance in visual search.

    PubMed

    Wu, Sijing; Spence, Ian

    2013-05-01

    Playing action videogames is known to improve visual spatial attention and related skills. Here, we showed that playing action videogames also improves classic visual search, as well as the ability to locate targets in a dual search that mimics certain aspects of an action videogame. In Experiment 1A, first-person shooter (FPS) videogame players were faster than nonplayers in both feature search and conjunction search, and in Experiment 1B, they were faster and more accurate in a peripheral search and identification task while simultaneously performing a central search. In Experiment 2, we showed that 10 h of play could improve the performance of nonplayers on each of these tasks. Three different genres of videogames were used for training: two action games and a 3-D puzzle game. Participants who played an action game (either an FPS or a driving game) achieved greater gains on all search tasks than did those who trained using the puzzle game. Feature searches were faster after playing an action videogame, suggesting that players developed a better target template to guide search in a top-down manner. The results of the dual search suggest that, in addition to enhancing the ability to divide attention, playing an action game improves the top-down guidance of attention to possible target locations. The results have practical implications for the development of training tools to improve perceptual and cognitive skills. PMID:23460295

  4. Direct visualization of a DNA glycosylase searching for damage.

    PubMed

    Chen, Liwei; Haushalter, Karl A; Lieber, Charles M; Verdine, Gregory L

    2002-03-01

    DNA glycosylases preserve the integrity of genetic information by recognizing damaged bases in the genome and catalyzing their excision. It is unknown how DNA glycosylases locate covalently modified bases hidden in the DNA helix amongst vast numbers of normal bases. Here we employ atomic-force microscopy (AFM) with carbon nanotube probes to image search intermediates of human 8-oxoguanine DNA glycosylase (hOGG1) scanning DNA. We show that hOGG1 interrogates DNA at undamaged sites by inducing drastic kinks. The sharp DNA bending angle of these non-lesion-specific search intermediates closely matches that observed in the specific complex of 8-oxoguanine-containing DNA bound to hOGG1. These findings indicate that hOGG1 actively distorts DNA while searching for damaged bases. PMID:11927259

  5. Distractor Dwelling, Skipping, and Revisiting Determine Target Absent Performance in Difficult Visual Search.

    PubMed

    Horstmann, Gernot; Herwig, Arvid; Becker, Stefanie I

    2016-01-01

    Some targets in visual search are more difficult to find than others. In particular, a target that is similar to the distractors is more difficult to find than a target that is dissimilar to the distractors. Efficiency differences between easy and difficult searches are manifest not only in target-present trials but also in target-absent trials. In fact, even physically identical displays are searched through with different efficiency depending on the searched-for target. Here, we monitored eye movements in search for a target similar to the distractors (difficult search) versus a target dissimilar to the distractors (easy search). We aimed to examine three hypotheses concerning the causes of differential search efficiencies in target-absent trials: (a) distractor dwelling (b) distractor skipping, and (c) distractor revisiting. Reaction times increased with target similarity which is consistent with existing theories and replicates earlier results. Eye movement data indicated guidance in target trials, even though search was very slow. Dwelling, skipping, and revisiting contributed to low search efficiency in difficult search, with dwelling being the strongest factor. It is argued that differences in dwell time account for a large amount of total search time differences. PMID:27574510

  6. Distractor Dwelling, Skipping, and Revisiting Determine Target Absent Performance in Difficult Visual Search

    PubMed Central

    Horstmann, Gernot; Herwig, Arvid; Becker, Stefanie I.

    2016-01-01

    Some targets in visual search are more difficult to find than others. In particular, a target that is similar to the distractors is more difficult to find than a target that is dissimilar to the distractors. Efficiency differences between easy and difficult searches are manifest not only in target-present trials but also in target-absent trials. In fact, even physically identical displays are searched through with different efficiency depending on the searched-for target. Here, we monitored eye movements in search for a target similar to the distractors (difficult search) versus a target dissimilar to the distractors (easy search). We aimed to examine three hypotheses concerning the causes of differential search efficiencies in target-absent trials: (a) distractor dwelling, (b) distractor skipping, and (c) distractor revisiting. Reaction times increased with target similarity, which is consistent with existing theories and replicates earlier results. Eye movement data indicated guidance in target-present trials, even though search was very slow. Dwelling, skipping, and revisiting contributed to low search efficiency in difficult search, with dwelling being the strongest factor. It is argued that differences in dwell time account for a large amount of the total search time differences. PMID:27574510

  7. On the application of evolutionary pattern search algorithms

    SciTech Connect

    Hart, W.E.

    1997-02-01

    This paper presents an experimental evaluation of evolutionary pattern search algorithms (EPSAs). Our evaluation indicates that EPSAs can achieve performance similar to that of evolutionary algorithms (EAs) on challenging global optimization problems. Additionally, we describe a stopping rule for EPSAs that reliably terminates them near a stationary point of the objective function. The ability of EPSAs to reliably terminate near stationary points offers a practical advantage over other EAs, which are typically stopped by heuristic stopping rules or simple bounds on the number of iterations. Our experiments also illustrate how the rate of the crossover operator can influence the tradeoff between the number of iterations before termination and the quality of the solution found by an EPSA.
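    The abstract gives no pseudocode; purely as an illustration of how a pattern search can be driven to terminate near a stationary point, here is a minimal sketch. The (1+1)-style update, the coordinate-axis pattern, the contraction factor, and the stopping tolerance are all illustrative assumptions, not Hart's actual algorithm.

    ```python
    import random

    def epsa_sketch(f, x0, step=1.0, tol=1e-6, max_iter=100000):
        """Minimal (1+1)-style evolutionary pattern search sketch.

        Mutations are restricted to a fixed pattern of coordinate
        directions scaled by `step`; the step length contracts whenever
        no pattern point improves, and the search stops once `step`
        falls below `tol`, i.e., near a stationary point.
        """
        x, fx = list(x0), f(x0)
        n = len(x0)
        evals = 0
        while step > tol and evals < max_iter:
            improved = False
            # The pattern: +/- moves of length `step` along each axis.
            dirs = [(i, s) for i in range(n) for s in (+step, -step)]
            random.shuffle(dirs)  # evolutionary flavour: random trial order
            for i, s in dirs:
                y = list(x)
                y[i] += s
                fy = f(y)
                evals += 1
                if fy < fx:
                    x, fx, improved = y, fy, True
                    break
            if not improved:
                step *= 0.5  # contract the pattern
        return x, fx
    ```

    On a smooth test function this stops only when no pattern move of the current step length improves the objective, which is the sense in which the stopping rule certifies proximity to a stationary point.
    
    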

  8. The Mechanisms Underlying the ASD Advantage in Visual Search

    ERIC Educational Resources Information Center

    Kaldy, Zsuzsa; Giserman, Ivy; Carter, Alice S.; Blaser, Erik

    2016-01-01

    A number of studies have demonstrated that individuals with autism spectrum disorders (ASDs) are faster or more successful than typically developing control participants at various visual-attentional tasks (for reviews, see Dakin and Frith in "Neuron" 48:497-507, 2005; Simmons et al. in "Vis Res" 49:2705-2739, 2009). This…

  9. Searching for Meaning: Visual Culture from an Anthropological Perspective

    ERIC Educational Resources Information Center

    Stokrocki, Mary

    2006-01-01

    In this article, the author discusses the importance of Viktor Lowenfeld's influence on her research, describes visual anthropology, gives examples of her research, and examines the implications of this type of research for teachers. The author regards Lowenfeld's (1952/1939) early work with children in Austria as a form of participant observation…

  10. Optimal scale-free searching strategies for the location of moving targets: New insights on visually cued mate location behaviour in insects

    NASA Astrophysics Data System (ADS)

    Reynolds, A. M.

    2006-12-01

    The most efficient random Lévy-flight (scale-free) searching strategies for the location of moving targets are identified. Brownian targets are best caught using ballistic (long straight-line) searches, and vice versa. Brownian searches and ballistic searches are close to being optimal for the capture of a Lévy-flyer whose flight-segment lengths are distributed according to an inverse-square law. The movement patterns of some foragers are characterised by such an inverse-square law and these are known to constitute an optimal searching strategy for the location of randomly and sparsely distributed stationary resources. It is suggested that visually cued mate location behaviour in butterflies and in some other insects can be understood within the context of optimal scale-free searching strategies for the location of moving targets.
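    The inverse-square law for flight-segment lengths can be sampled directly by inverse-transform sampling; the sketch below is illustrative only (the minimum flight length `l_min`, uniform headings, and treating the exponent `mu` as a free parameter are assumptions, not details from Reynolds's model).

    ```python
    import math, random

    def levy_flight(n_steps, l_min=1.0, mu=2.0, seed=0):
        """Sample a 2D Levy flight whose segment lengths follow a power
        law p(l) ~ l**-mu on [l_min, inf); mu = 2 gives the
        inverse-square law associated in the abstract with optimal
        search for sparse targets. Headings are drawn uniformly.
        """
        rng = random.Random(seed)
        x = y = 0.0
        path = [(x, y)]
        for _ in range(n_steps):
            u = rng.random()
            # Inverse CDF of p(l) = (mu-1) * l_min**(mu-1) * l**-mu
            l = l_min * (1.0 - u) ** (-1.0 / (mu - 1.0))
            theta = rng.uniform(0.0, 2.0 * math.pi)
            x += l * math.cos(theta)
            y += l * math.sin(theta)
            path.append((x, y))
        return path
    ```

    Larger `mu` yields Brownian-like motion dominated by short steps, while `mu` near 1 approaches the ballistic limit of long straight-line moves, which is the trade-off the abstract discusses.
    
    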

  11. Mapping the Color Space of Saccadic Selectivity in Visual Search

    ERIC Educational Resources Information Center

    Xu, Yun; Higgins, Emily C.; Xiao, Mei; Pomplun, Marc

    2007-01-01

    Color coding is used to guide attention in computer displays for such critical tasks as baggage screening or air traffic control. It has been shown that a display object attracts more attention if its color is more similar to the color for which one is searching. However, what does "similar" precisely mean? Can we predict the amount of attention…

  12. Searching the Visual Arts: An Analysis of Online Information Access.

    ERIC Educational Resources Information Center

    Brady, Darlene; Serban, William

    1981-01-01

    A search for stained glass bibliographic information using DIALINDEX identified 57 DIALOG files from a variety of subject categories and 646 citations as relevant. Files include applied science, biological sciences, chemistry, engineering, environment/pollution, people, business research, and public affairs. Eleven figures illustrate the search…

  13. Visualizing Document Classification: A Search Aid for the Digital Library.

    ERIC Educational Resources Information Center

    Lieu, Yew-Huey; Dantzig, Paul; Sachs, Martin; Corey, James T.; Hinnebusch, Mark T.; Damashek, Marc; Cohen, Jonathan

    2000-01-01

    Discusses access to digital libraries on the World Wide Web via Web browsers and describes the design of a language-independent document classification system to help users of the Florida Center for Library Automation analyze search query results. Highlights include similarity scores, clustering, graphical representation of document similarity,…

  14. Fruitful visual search: inhibition of return in a virtual foraging task.

    PubMed

    Thomas, Laura E; Ambinder, Michael S; Hsieh, Brendon; Levinthal, Brian; Crowell, James A; Irwin, David E; Kramer, Arthur F; Lleras, Alejandro; Simons, Daniel J; Wang, Ranxiao Frances

    2006-10-01

    Inhibition of return (IOR) has long been viewed as a foraging facilitator in visual search. We investigated the contribution of IOR in a task that approximates natural foraging more closely than typical visual search tasks. Participants in a fully immersive virtual reality environment manually searched an array of leaves for a hidden piece of fruit, using a wand to select and examine each leaf location. Search was slower than in typical IOR paradigms, taking seconds instead of a few hundred milliseconds. Participants also made a speeded response when they detected a flashing leaf that either was or was not in a previously searched location. Responses were slower when the flashing leaf was in a previously searched location than when it was in an unvisited location. These results generalize IOR to an approximation of a naturalistic visual search setting and support the hypothesis that IOR can facilitate foraging. The experiment also constitutes the first use of a fully immersive virtual reality display in the study of IOR. PMID:17328391

  15. Target features and target-distractor relation are both primed in visual search.

    PubMed

    Meeter, Martijn; Olivers, Christian N L

    2014-04-01

    Intertrial priming in visual search is the finding that repeating target and distractor features from one trial to the next speeds up search, relative to when these features change. Recently, Becker (2008) reported evidence that it is not so much the repetition of absolute feature values that causes priming, but repetition of the relation between target and distractors. For example, in search for a unique size, the size of the search elements may change from trial to trial, but this does not hurt performance as long as the target remains consistently larger (or smaller) than the distractors. Becker (2008) concluded that such findings are difficult to reconcile with existing theory. Here, we replicate the findings in the dimensions of size, color, and luminance and show that these effects are not due to the magnitude of feature changes or to search strategies, as may be induced by blocking versus mixing different types of intertrial changes experienced by observers. However, we show that repeating a feature from one trial to the next does convey a benefit above and beyond repeating the target-distractor relation. We argue that both effects can be readily accounted for within current models of visual search. Priming of relations results when one assumes the existence of cardinal feature channels, as do most models of visual search. Additional priming of specific values results when one assumes broadly distributed, overlapping feature channels. PMID:24415176

  16. Differential roles of the dorsal prefrontal and posterior parietal cortices in visual search: a TMS study.

    PubMed

    Yan, Yulong; Wei, Rizhen; Zhang, Qian; Jin, Zhenlan; Li, Ling

    2016-01-01

    Although previous studies have shown that fronto-parietal attentional networks play a crucial role in bottom-up and top-down processes, the relative contribution of the frontal and parietal cortices to these processes remains elusive. Here we used transcranial magnetic stimulation (TMS) to interfere with the activity of the right dorsal prefrontal cortex (DLPFC) or the right posterior parietal cortex (PPC) immediately prior to the onset of the visual search display. Participants searched for a target defined by color and orientation in a "pop-out" or "search" condition. Repetitive TMS was applied to either the right DLPFC or the right PPC on different days. Performance was evaluated at baseline (no TMS), during TMS, and after TMS (post-session). RTs were prolonged when TMS was applied over the DLPFC in the search condition, but not in the pop-out condition, relative to the baseline session. In comparison, TMS over the PPC prolonged RTs in the pop-out condition, and when the target appeared in the left visual field in the search condition. Taken together, these findings provide evidence for differential roles of the DLPFC and PPC in visual search, indicating that the DLPFC is specifically involved in the "search" condition, while the PPC is mainly involved in detecting "pop-out" targets. PMID:27452715

  17. Disturbance of visual search by stimulation of the posterior parietal cortex using transcranial magnetic stimulation

    NASA Astrophysics Data System (ADS)

    Iramina, Keiji; Ge, Sheng; Hyodo, Akira; Hayami, Takehito; Ueno, Shoogo

    2009-04-01

    In this study, we applied transcranial magnetic stimulation (TMS) to investigate the temporal aspect of the functional processing of visual attention. Although the right posterior parietal cortex (PPC) is known to play a role in certain visual search tasks, little is known about the temporal aspect of this area. Three visual search tasks of differing difficulty were carried out: the "easy feature task," the "hard feature task," and the "conjunction task." To investigate the temporal aspect of the PPC's involvement in visual search, we applied various stimulus onset asynchronies (SOAs) and measured the reaction time of the visual search. The magnetic stimulation was applied over the right or the left PPC with a figure-eight coil. The results show that reaction times in the hard feature task were longer than those in the easy feature task. At SOA = 150 ms, target-present reaction times increased significantly when TMS pulses were applied, compared with the no-TMS condition. We infer that the right PPC is involved in visual search at about 150 ms after visual stimulus presentation: magnetic stimulation of the right PPC disturbed the processing of visual search, whereas stimulation of the left PPC had no effect on it.

  18. Climate and colored walls: in search of visual comfort

    NASA Astrophysics Data System (ADS)

    Arrarte-Grau, Malvina

    2002-06-01

    The quality of natural light, the surrounding landscape, and the techniques of construction are important factors in the selection of architectural colors. Observation of exterior walls in differentiated climates allows the recognition of particularities in the use of color that satisfy the need for visual comfort. Separated by 2000 kilometers along the coast of Peru, Lima and Mancora, at 12° and 4° respectively, are well defined by their climatic characteristics: in Mancora, sunlight causes high reflection; in Lima, overcast sky and high humidity cause glare. The study of building color effects at these locations serves to illustrate that color values may be controlled in order to achieve visual comfort and contribute to color identity.

  19. Visual Servoing: A technology in search of an application

    SciTech Connect

    Feddema, J.T.

    1994-05-01

    Considerable research has been performed on Robotic Visual Servoing (RVS) over the past decade. Using real-time visual feedback, researchers have demonstrated that robotic systems can pick up moving parts, insert bolts, apply sealant, and guide vehicles. With the rapid improvements being made in computing and image processing hardware, one would expect that every robot manufacturer would have an RVS option by the end of the 1990s. So why aren't the Fanucs, ABBs, Adepts, and Motomans of the world investing heavily in RVS? I would suggest four reasons: cost, complexity, reliability, and lack of demand. Solutions to the first three are approaching the point where RVS could be commercially available; however, the lack of demand is keeping RVS from becoming a reality in the near future. A new set of applications is needed to focus near-term RVS development. These must be applications which currently do not have solutions. Once developed and working in one application area, the technology is more likely to quickly spread to other areas. DOE has several applications that are looking for technological solutions, such as agile weapons production, weapons disassembly, decontamination and dismantlement of nuclear facilities, and hazardous waste remediation. This paper will examine a few of these areas and suggest directions for application-driven visual servoing research.

  20. How You Use It Matters: Object Function Guides Attention During Visual Search in Scenes.

    PubMed

    Castelhano, Monica S; Witherspoon, Richelle L

    2016-05-01

    How does one know where to look for objects in scenes? Objects are seen in context daily, but also used for specific purposes. Here, we examined whether an object's function can guide attention during visual search in scenes. In Experiment 1, participants studied either the function (function group) or features (feature group) of a set of invented objects. In a subsequent search, the function group located studied objects faster than novel (unstudied) objects, whereas the feature group did not. In Experiment 2, invented objects were positioned in locations that were either congruent or incongruent with the objects' functions. Search for studied objects was faster for function-congruent locations and hampered for function-incongruent locations, relative to search for novel objects. These findings demonstrate that knowledge of object function can guide attention in scenes, and they have important implications for theories of visual cognition, cognitive neuroscience, and developmental and ecological psychology. PMID:27022016

  1. Quantifying the performance limits of human saccadic targeting during visual search

    NASA Technical Reports Server (NTRS)

    Eckstein, M. P.; Beutter, B. R.; Stone, L. S.

    2001-01-01

    Previous studies of saccadic targeting have examined how visually guided saccades to unambiguous targets are programmed and executed. These studies have found different degrees of guidance for saccades depending on the task and task difficulty. In this study, we use ideal-observer analysis to estimate the visual information used for the first saccade during a search for a target disk in noise. We quantitatively compare the performance of the first saccadic decision to that of the ideal observer (i.e., the absolute efficiency of the first saccade) and to that of the associated final perceptual decision at the end of the search (i.e., the relative efficiency of the first saccade). Our results show, first, that at all levels of salience tested, the first saccade is based on visual information from the stimulus display, and its highest absolute efficiency is approximately 20%. Second, the efficiency of the first saccade is lower than that of the final perceptual decision after active search (with eye movements) and has a minimum relative efficiency of 19% at the lowest level of saliency investigated. Third, we found that requiring observers to maintain central fixation (no saccades allowed) decreased the absolute efficiency of their perceptual decision by up to a factor of two, but that the magnitude of this effect depended on target salience. Our results demonstrate that ideal-observer analysis can be extended to measure the visual information mediating saccadic target-selection decisions during visual search, which enables direct comparison of saccadic and perceptual efficiencies.
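    In ideal-observer analysis, absolute efficiency is commonly defined as the squared ratio of human to ideal sensitivity (d'). As a hedged sketch of that computation, the snippet below assumes a two-alternative forced-choice (2AFC) conversion from proportion correct to d'; the actual task and conversion used in the study may differ.

    ```python
    import math
    from statistics import NormalDist

    def dprime_2afc(pc):
        """Convert 2AFC proportion correct to d' via the inverse
        standard-normal CDF (an assumption about the task design)."""
        return math.sqrt(2.0) * NormalDist().inv_cdf(pc)

    def absolute_efficiency(pc_human, pc_ideal):
        """Statistical (absolute) efficiency: the squared ratio of
        human to ideal sensitivity, as in ideal-observer analysis."""
        return (dprime_2afc(pc_human) / dprime_2afc(pc_ideal)) ** 2
    ```

    For example, a human at 70% correct compared against an ideal observer at 90% correct on the same stimuli yields an efficiency of roughly 17%, in the same ballpark as the ~20% first-saccade efficiency the abstract reports.
    
    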

  2. Eye Movements, Visual Search and Scene Memory, in an Immersive Virtual Environment

    PubMed Central

    Sullivan, Brian; Snyder, Kat; Ballard, Dana; Hayhoe, Mary

    2014-01-01

    Visual memory has been demonstrated to play a role in both visual search and attentional prioritization in natural scenes. However, it has been studied predominantly in experimental paradigms using multiple two-dimensional images. Natural experience, however, entails prolonged immersion in a limited number of three-dimensional environments. The goal of the present experiment was to recreate circumstances comparable to natural visual experience in order to evaluate the role of scene memory in guiding eye movements in a natural environment. Subjects performed a continuous visual-search task within an immersive virtual-reality environment over three days. We found that, similar to two-dimensional contexts, viewers rapidly learn the location of objects in the environment over time, and use spatial memory to guide search. Incidental fixations did not provide obvious benefit to subsequent search, suggesting that semantic contextual cues may often be just as efficient, or that many incidentally fixated items are not held in memory in the absence of a specific task. On the third day of experience in the environment, previous search items changed in color. These items were fixated upon with increased probability relative to control objects, suggesting that memory-guided prioritization (or surprise) may be a robust mechanism for attracting gaze to novel features of natural environments, in addition to task factors and simple spatial saliency. PMID:24759905

  3. Faster than the speed of rejection: Object identification processes during visual search for multiple targets

    PubMed Central

    Godwin, Hayward J.; Walenchok, Stephen C.; Houpt, Joseph W.; Hout, Michael C.; Goldinger, Stephen D.

    2015-01-01

    When engaged in a visual search for two targets, participants are slower and less accurate in their responses, relative to their performance when searching for singular targets. Previous work on this “dual-target cost” has primarily focused on the breakdown of attention guidance when looking for two items. Here, we investigated how object identification processes are affected by dual-target search. Our goal was to chart the speed at which distractors could be rejected, in order to assess whether dual-target search impairs object identification. To do so, we examined the capacity coefficient, which measures the speed at which decisions can be made, and provides a baseline of parallel performance against which to compare. We found that participants could search at or above this baseline, suggesting that dual-target search does not impair object identification abilities. We also found substantial differences in performance when participants were asked to search for simple versus complex images. Somewhat paradoxically, participants were able to reject complex images more rapidly than simple images. We suggest that this reflects the greater number of features that can be used to identify complex images, a finding that has important consequences for understanding object identification in visual search more generally. PMID:25938253
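    The capacity coefficient mentioned above is, in Townsend's formulation, a ratio of cumulative hazard functions estimated from reaction-time distributions, with C(t) = 1 as the unlimited-capacity parallel baseline. The following is a minimal empirical sketch; the OR-form of the coefficient and the variable names are assumptions for illustration, not the authors' exact analysis.

    ```python
    import math

    def cumulative_hazard(rts, t):
        """Empirical cumulative hazard H(t) = -log S(t) from a sample
        of correct reaction times, where S(t) is the survivor
        function (fraction of responses slower than t)."""
        survivors = sum(1 for rt in rts if rt > t) / len(rts)
        return -math.log(survivors) if survivors > 0 else float("inf")

    def capacity_or(rt_double, rt_single_a, rt_single_b, t):
        """OR-form capacity coefficient at time t:
            C(t) = H_double(t) / (H_a(t) + H_b(t)).
        C(t) = 1 is the parallel baseline the abstract refers to;
        values at or above 1 indicate no identification impairment."""
        denom = (cumulative_hazard(rt_single_a, t)
                 + cumulative_hazard(rt_single_b, t))
        return cumulative_hazard(rt_double, t) / denom
    ```

    In practice C(t) is evaluated across a range of t values rather than at a single point, and the RT samples here stand in for the single- and dual-target conditions of the experiment.
    
    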

  4. Memory and visual search in naturalistic 2D and 3D environments

    PubMed Central

    Li, Chia-Ling; Aivar, M. Pilar; Kit, Dmitry M.; Tong, Matthew H.; Hayhoe, Mary M.

    2016-01-01

    The role of memory in guiding attention allocation in daily behaviors is not well understood. In experiments with two-dimensional (2D) images, there is mixed evidence about the importance of memory. Because the stimulus context in laboratory experiments and daily behaviors differs extensively, we investigated the role of memory in visual search, in both two-dimensional (2D) and three-dimensional (3D) environments. A 3D immersive virtual apartment composed of two rooms was created, and a parallel 2D visual search experiment composed of snapshots from the 3D environment was developed. Eye movements were tracked in both experiments. Repeated searches for geometric objects were performed to assess the role of spatial memory. Subsequently, subjects searched for realistic context objects to test for incidental learning. Our results show that subjects learned the room-target associations in 3D but less so in 2D. Gaze was increasingly restricted to relevant regions of the room with experience in both settings. Search for local contextual objects, however, was not facilitated by early experience. Incidental fixations to context objects do not necessarily benefit search performance. Together, these results demonstrate that memory for global aspects of the environment guides search by restricting allocation of attention to likely regions, whereas task relevance determines what is learned from the active search experience. Behaviors in 2D and 3D environments are comparable, although there is greater use of memory in 3D. PMID:27299769

  5. Faster than the speed of rejection: Object identification processes during visual search for multiple targets.

    PubMed

    Godwin, Hayward J; Walenchok, Stephen C; Houpt, Joseph W; Hout, Michael C; Goldinger, Stephen D

    2015-08-01

    When engaged in a visual search for two targets, participants are slower and less accurate in their responses, relative to their performance when searching for singular targets. Previous work on this "dual-target cost" has primarily focused on the breakdown of attentional guidance when looking for two items. Here, we investigated how object identification processes are affected by dual-target search. Our goal was to chart the speed at which distractors could be rejected, to assess whether dual-target search impairs object identification. To do so, we examined the capacity coefficient, which measures the speed at which decisions can be made, and provides a baseline of parallel performance against which to compare. We found that participants could search at or above this baseline, suggesting that dual-target search does not impair object identification abilities. We also found substantial differences in performance when participants were asked to search for simple versus complex images. Somewhat paradoxically, participants were able to reject complex images more rapidly than simple images. We suggest that this reflects the greater number of features that can be used to identify complex images, a finding that has important consequences for understanding object identification in visual search more generally. PMID:25938253

  6. Memory and visual search in naturalistic 2D and 3D environments.

    PubMed

    Li, Chia-Ling; Aivar, M Pilar; Kit, Dmitry M; Tong, Matthew H; Hayhoe, Mary M

    2016-06-01

    The role of memory in guiding attention allocation in daily behaviors is not well understood. In experiments with two-dimensional (2D) images, there is mixed evidence about the importance of memory. Because the stimulus context in laboratory experiments and daily behaviors differs extensively, we investigated the role of memory in visual search, in both two-dimensional (2D) and three-dimensional (3D) environments. A 3D immersive virtual apartment composed of two rooms was created, and a parallel 2D visual search experiment composed of snapshots from the 3D environment was developed. Eye movements were tracked in both experiments. Repeated searches for geometric objects were performed to assess the role of spatial memory. Subsequently, subjects searched for realistic context objects to test for incidental learning. Our results show that subjects learned the room-target associations in 3D but less so in 2D. Gaze was increasingly restricted to relevant regions of the room with experience in both settings. Search for local contextual objects, however, was not facilitated by early experience. Incidental fixations to context objects do not necessarily benefit search performance. Together, these results demonstrate that memory for global aspects of the environment guides search by restricting allocation of attention to likely regions, whereas task relevance determines what is learned from the active search experience. Behaviors in 2D and 3D environments are comparable, although there is greater use of memory in 3D. PMID:27299769

  7. Examining perceptual and conceptual set biases in multiple-target visual search.

    PubMed

    Biggs, Adam T; Adamo, Stephen H; Dowd, Emma Wu; Mitroff, Stephen R

    2015-04-01

    Visual search is a common practice conducted countless times every day, and one important aspect of visual search is that multiple targets can appear in a single search array. For example, an X-ray image of airport luggage could contain both a water bottle and a gun. Searchers are more likely to miss additional targets after locating a first target in multiple-target searches, which presents a potential problem: If airport security officers were to find a water bottle, would they then be more likely to miss a gun? One hypothetical cause of multiple-target search errors is that searchers become biased to detect additional targets that are similar to a found target, and therefore become less likely to find additional targets that are dissimilar to the first target. This particular hypothesis has received theoretical, but little empirical, support. In the present study, we tested the bounds of this idea by utilizing "big data" obtained from the mobile application Airport Scanner. Multiple-target search errors were substantially reduced when the two targets were identical, suggesting that the first-found target did indeed create biases during subsequent search. Further analyses delineated the nature of the biases, revealing both a perceptual set bias (i.e., a bias to find additional targets with features similar to those of the first-found target) and a conceptual set bias (i.e., a bias to find additional targets with a conceptual relationship to the first-found target). These biases are discussed in terms of the implications for visual-search theories and applications for professional visual searchers. PMID:25678271

  8. The Visual Hemifield Asymmetry in the Spatial Blink during Singleton Search and Feature Search

    ERIC Educational Resources Information Center

    Burnham, Bryan R.; Rozell, Cassandra A.; Kasper, Alex; Bianco, Nicole E.; Delliturri, Antony

    2011-01-01

    The present study examined a visual field asymmetry in the contingent capture of attention that was previously observed by Du and Abrams (2010). In our first experiment, color singleton distractors that matched the color of a to-be-detected target produced a stronger capture of attention when they appeared in the left visual hemifield than in the…

  9. A Comparison of the Visual Attention Patterns of People with Aphasia and Adults without Neurological Conditions for Camera-Engaged and Task-Engaged Visual Scenes

    ERIC Educational Resources Information Center

    Thiessen, Amber; Beukelman, David; Hux, Karen; Longenecker, Maria

    2016-01-01

    Purpose: The purpose of the study was to compare the visual attention patterns of adults with aphasia and adults without neurological conditions when viewing visual scenes with 2 types of engagement. Method: Eye-tracking technology was used to measure the visual attention patterns of 10 adults with aphasia and 10 adults without neurological…

  10. Binocular saccade coordination in reading and visual search: a developmental study in typical reader and dyslexic children

    PubMed Central

    Seassau, Magali; Gérard, Christophe Loic; Bui-Quoc, Emmanuel; Bucci, Maria Pia

    2014-01-01

    Studies dealing with developmental aspects of binocular eye movement behavior during reading are scarce. In this study we explored binocular strategies during reading and visual search tasks in a large population of dyslexic and typical readers. Binocular eye movements were recorded using a video-oculography system in 43 dyslexic children (aged 8–13) and in a group of 42 age-matched typical readers. The main findings are: (i) in the reading task, ocular motor characteristics of dyslexic children are impaired in comparison to those reported in typical children; (ii) a developmental effect exists in reading in control children, whereas in dyslexic children the effect of development was observed only on fixation durations; and (iii) ocular motor behavior in the visual search tasks is similar for dyslexic children and typical readers, except for the disconjugacy during and after the saccade, on which dyslexic children are impaired in comparison to typical children. The data reported here confirm and expand previous studies on children's reading. Both reading skills and binocular saccade coordination improve with age in typical readers. The atypical eye movement patterns observed in dyslexic children suggest a deficiency in visual attentional processing as well as an impairment of the interaction between the ocular motor saccade and vergence systems. PMID:25400559

  11. The right hemisphere is dominant in organization of visual search-A study in stroke patients.

    PubMed

    Ten Brink, Antonia F; Matthijs Biesbroek, J; Kuijf, Hugo J; Van der Stigchel, Stefan; Oort, Quirien; Visser-Meily, Johanna M A; Nijboer, Tanja C W

    2016-05-01

    Cancellation tasks are widely used for the diagnosis of lateralized attentional deficits in stroke patients. A disorganized fashion of target cancellation has been hypothesized to reflect disturbed spatial exploration. In the current study we aimed to examine which lesion locations result in disorganized visual search during cancellation tasks, in order to determine which brain areas are involved in search organization. A computerized shape cancellation task was administered to 78 stroke patients. As an index of search organization, the number of intersections of paths between consecutively crossed targets was computed (i.e., the intersections rate). This measure is known to accurately depict disorganized visual search in a stroke population. Ischemic lesions were delineated on CT or MRI images. Assumption-free voxel-based lesion-symptom mapping and region-of-interest-based analyses were used to determine the grey and white matter anatomical correlates of the intersections rate as a continuous measure. The right lateral occipital cortex, superior parietal lobule, postcentral gyrus, superior temporal gyrus, middle temporal gyrus, supramarginal gyrus, inferior longitudinal fasciculus, first branch of the superior longitudinal fasciculus (SLF I), and the inferior fronto-occipital fasciculus were related to search organization. To conclude, a clear right-hemispheric dominance for search organization was revealed. Further, the correlates of disorganized search overlap with regions that have previously been associated with conjunctive search and spatial working memory. This suggests that disorganized visual search is caused by disturbed spatial processes, rather than by deficits in high-level executive function or planning, which would be expected to be more related to frontal regions. PMID:26876010
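    The intersections measure can be computed with a standard segment-crossing test over the polyline connecting consecutively cancelled targets. A sketch follows; note that the per-path normalization at the end is an assumption for illustration, since the abstract specifies only a count of path intersections.

    ```python
    def _ccw(a, b, c):
        """Signed area test: > 0 if a, b, c turn counter-clockwise."""
        return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

    def segments_intersect(p1, p2, p3, p4):
        """Proper (crossing) intersection test for segments p1p2, p3p4."""
        d1, d2 = _ccw(p3, p4, p1), _ccw(p3, p4, p2)
        d3, d4 = _ccw(p1, p2, p3), _ccw(p1, p2, p4)
        return (d1 * d2 < 0) and (d3 * d4 < 0)

    def intersections_rate(targets):
        """Count crossings among the paths between consecutively
        cancelled targets, normalized by the number of paths
        (normalization is an assumed convention)."""
        segs = list(zip(targets, targets[1:]))
        count = 0
        for i in range(len(segs)):
            for j in range(i + 2, len(segs)):  # skip shared-endpoint neighbours
                if segments_intersect(*segs[i], *segs[j]):
                    count += 1
        return count / len(segs) if segs else 0.0
    ```

    An organized sweep of the array yields a rate of zero, while a back-and-forth path that recrosses itself accumulates intersections, which is the contrast the task exploits.
    
    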

  12. How do magnitude and frequency of monetary reward guide visual search?

    PubMed

    Won, Bo-Yeong; Leber, Andrew B

    2016-07-01

    How does reward guide spatial attention during visual search? In the present study, we examine whether and how two types of reward information, magnitude and frequency, guide search behavior. Observers were asked to find a target among distractors in a search display to earn points. We manipulated multiple levels of value across the search display quadrants in two ways: for reward magnitude, targets appeared equally often in each quadrant, and the value of each quadrant was determined by the average points earned per target; for reward frequency, we varied how often the target appeared in each quadrant but held the average points earned per target constant across the quadrants. In Experiment 1, we found that observers were highly sensitive to the reward frequency information and prioritized their search accordingly, whereas we did not find much prioritization based on magnitude information. In Experiment 2, we found that magnitude information for a nonspatial feature (color) could bias search performance, showing that the relative insensitivity to magnitude information during visual search does not generalize across all types of information. In Experiment 3, we replicated the negligible use of spatial magnitude information even when we used limited-exposure displays to incentivize the expression of learning. In Experiment 4, we found participants used the spatial magnitude information during a modified choice task, but again not during search. Taken together, these findings suggest that the visual search apparatus does not equally exploit all potential sources of spatial value information; instead, it favors spatial reward frequency information over spatial reward magnitude information. PMID:27270595

  13. Contrasting vertical and horizontal representations of affect in emotional visual search.

    PubMed

    Damjanovic, Ljubica; Santiago, Julio

    2016-02-01

    Independent lines of evidence suggest that the representation of emotional evaluation recruits both vertical and horizontal spatial mappings. These two spatial mappings differ in their experiential origins and their productivity, and available data suggest that they differ in their saliency. Yet, no study has so far compared their relative strength in an attentional orienting reaction time task that affords the simultaneous manifestation of both types of mapping. Here, we investigated this question using a visual search task with emotional faces. We presented angry and happy face targets and neutral distracter faces in top, bottom, left, and right locations on the computer screen. Conceptual congruency effects were observed along the vertical dimension supporting the 'up = good' metaphor, but not along the horizontal dimension. This asymmetrical processing pattern was observed when faces were presented in a cropped (Experiment 1) and whole (Experiment 2) format. These findings suggest that the 'up = good' metaphor is more salient and readily activated than the 'right = good' metaphor, and that the former outcompetes the latter when the task context affords the simultaneous activation of both mappings. PMID:26106061

  14. Reward association facilitates distractor suppression in human visual search.

    PubMed

    Gong, Mengyuan; Yang, Feitong; Li, Sheng

    2016-04-01

    Although valuable objects are attractive in nature, people often encounter situations where they would prefer to avoid such distraction while focusing on the task goal. Contrary to the typical effect of attentional capture by a reward-associated item, we provide evidence for a facilitation effect derived from the active suppression of a high reward-associated stimulus when cuing its identity as distractor before the display of search arrays. Selection of the target is shown to be significantly faster when the distractors were in high reward-associated colour than those in low reward-associated or non-rewarded colours. This behavioural reward effect was associated with two neural signatures before the onset of the search display: the increased frontal theta oscillation and the strengthened top-down modulation from frontal to anterior temporal regions. The former suggests an enhanced working memory representation for the reward-associated stimulus and the increased need for cognitive control to override Pavlovian bias, whereas the latter indicates that the boost of inhibitory control is realized through a frontal top-down mechanism. These results suggest a mechanism in which the enhanced working memory representation of a reward-associated feature is integrated with task demands to modify attentional priority during active distractor suppression and benefit behavioural performance. PMID:26797805

  15. Searching for a major locus for male pattern baldness (MPB)

    SciTech Connect

    Anker, R.; Eisen, A.Z.; Donis-Keller, H.

    1994-09-01

    Male pattern baldness (MPB) is a common trait in post-pubertal males. Approximately 50% of adult males present some degree of MPB by age 50. According to the classification provided by Hamilton in 1951 and modified by Norwood in 1975, the trait itself is a continuum that ranges from mild (Type I) to severe (Type VII) cases. In addition, there is extensive variability for the age of onset. The role of androgens in allowing the expression of this trait in males has been well established. This phenotype is uncommonly expressed in females. The high prevalence of the trait, the distribution of MPB as a continuous trait, and several non-allelic mutations identified in the mouse capable of affecting hair pattern, suggest that MPB is genetically heterogeneous. In order to reduce the probability of multiple non-allelic MPB genes within a pedigree, we selected 9 families in which MPB appears to segregate exclusively through the paternal lineage as compared to bilineal pedigrees. There are 32 males expressing this phenotype and females are treated as phenotype unknown. In general, affected individuals expressed the trait before 30 years of age with a severity of at least Type III or IV. We assumed an autosomal dominant model, with a gene frequency of 1/20 for the affected allele, and 90% penetrance. Simulation studies using the SLINK program with these pedigrees showed that these families would be sufficient to detect linkage under the assumption of a single major locus. If heterogeneity is present, the current resource does not have sufficient power to detect linkage at a statistically significant level, although candidate regions of the genome could be identified for further studies with additional pedigrees. Using 53 highly informative microsatellite markers, and a subset of 7 families, we have screened 30% of the genome. This search included several regions where candidate genes for MPB are located.

  16. Assessing the benefits of stereoscopic displays to visual search: methodology and initial findings

    NASA Astrophysics Data System (ADS)

    Godwin, Hayward J.; Holliman, Nick S.; Menneer, Tamaryn; Liversedge, Simon P.; Cave, Kyle R.; Donnelly, Nicholas

    2015-03-01

    Visual search is a task that is carried out in a number of important security and health related scenarios (e.g., X-ray baggage screening, radiography). With recent and ongoing developments in the technology available to present images to observers in stereoscopic depth, there has been increasing interest in assessing whether depth information can be used in complex search tasks to improve search performance. Here we outline the methodology that we developed, along with both software and hardware information, in order to assess visual search performance in complex, overlapping stimuli that also contained depth information. In doing so, our goal is to foster further research along these lines in the future. We also provide an overview with initial results of the experiments that we have conducted involving participants searching stimuli that contain overlapping objects presented on different depth planes to one another. Thus far, we have found that depth information does improve the speed (but not accuracy) of search, but only when the stimuli are highly complex and contain a significant degree of overlap. Depth information may therefore aid real-world search tasks that involve the examination of complex, overlapping stimuli.

  17. The Mouse Model of Down Syndrome Ts65Dn Presents Visual Deficits as Assessed by Pattern Visual Evoked Potentials

    PubMed Central

    Scott-McKean, Jonah Jacob; Chang, Bo; Hurd, Ronald E.; Nusinowitz, Steven; Schmidt, Cecilia; Davisson, Muriel T.

    2010-01-01

    Purpose. The Ts65Dn mouse is the most complete widely available animal model of Down syndrome (DS). Quantitative information was generated about visual function in the Ts65Dn mouse by investigating their visual capabilities by means of electroretinography (ERG) and patterned visual evoked potentials (pVEPs). Methods. pVEPs were recorded directly from specific regions of the binocular visual cortex of anesthetized mice in response to horizontal sinusoidal gratings of different spatial frequency, contrast, and luminance generated by a specialized video card and presented on a 21-in. computer display suitably linearized by gamma correction. Results. ERG assessments indicated no significant deficit in retinal physiology in Ts65Dn mice compared with euploid control mice. The Ts65Dn mice were found to exhibit deficits in luminance threshold, spatial resolution, and contrast threshold, compared with the euploid control mice. The behavioral counterparts of these parameters are luminance sensitivity, visual acuity, and the inverse of contrast sensitivity, respectively. Conclusions. DS includes various phenotypes associated with the visual system, including deficits in visual acuity, accommodation, and contrast sensitivity. The present study provides electrophysiological evidence of visual deficits in Ts65Dn mice that are similar to those reported in persons with DS. These findings strengthen the role of the Ts65Dn mouse as a model for DS. Also, given the historical assumption of integrity of the visual system in most behavioral assessments of Ts65Dn mice, such as the hidden-platform component of the Morris water maze, the visual deficits described herein may represent a significant confounding factor in the interpretation of results from such experiments. PMID:20130276

  18. Display format and highlight validity effects on search performance using complex visual displays

    NASA Technical Reports Server (NTRS)

    Donner, Kimberly A.; Mckay, Tim; O'Brien, Kevin M.; Rudisill, Marianne

    1991-01-01

    Display format and highlight validity have been shown to affect visual display search performance; however, previous studies were conducted on small, artificial displays of alphanumeric stimuli. A study manipulating these variables was conducted using realistic, complex Space Shuttle information displays. A 2x2x3 within-subjects analysis of variance found that search times were faster for items in reformatted displays than in the current displays. A significant format by highlight validity interaction showed that there was little difference in response time between current and reformatted displays when highlighting was valid; however, under the no-highlight or invalid-highlight conditions, search times were faster with reformatted displays. The benefits of highlighting and reformatting displays for search, and the necessity of considering highlight validity and format characteristics in tandem when predicting search performance, are discussed.

  19. Visual Intelligence: Using the Deep Patterns of Visual Language to Build Cognitive Skills

    ERIC Educational Resources Information Center

    Sibbet, David

    2008-01-01

    Thirty years of work as a graphic facilitator listening visually to people in every kind of organization has convinced the author that visual intelligence is a key to navigating an information economy rich with multimedia. He also believes that theory and disciplines developed by practitioners in this new field hold special promise for educators…

  20. Low Target Prevalence Is a Stubborn Source of Errors in Visual Search Tasks

    ERIC Educational Resources Information Center

    Wolfe, Jeremy M.; Horowitz, Todd S.; Van Wert, Michael J.; Kenner, Naomi M.; Place, Skyler S.; Kibbi, Nour

    2007-01-01

    In visual search tasks, observers look for targets in displays containing distractors. Likelihood that targets will be missed varies with target prevalence, the frequency with which targets are presented across trials. Miss error rates are much higher at low target prevalence (1%-2%) than at high prevalence (50%). Unfortunately, low prevalence is…

  1. Visual Search for Object Orientation Can Be Modulated by Canonical Orientation

    ERIC Educational Resources Information Center

    Ballaz, Cecile; Boutsen, Luc; Peyrin, Carole; Humphreys, Glyn W.; Marendaz, Christian

    2005-01-01

    The authors studied the influence of canonical orientation on visual search for object orientation. Displays consisted of pictures of animals whose axis of elongation was either vertical or tilted in their canonical orientation. Target orientation could be either congruent or incongruent with the object's canonical orientation. In Experiment 1,…

  2. Visual Search Asymmetries within Color-Coded and Intensity-Coded Displays

    ERIC Educational Resources Information Center

    Yamani, Yusuke; McCarley, Jason S.

    2010-01-01

    Color and intensity coding provide perceptual cues to segregate categories of objects within a visual display, allowing operators to search more efficiently for needed information. Even within a perceptually distinct subset of display elements, however, it may often be useful to prioritize items representing urgent or task-critical information.…

  3. What Are the Shapes of Response Time Distributions in Visual Search?

    ERIC Educational Resources Information Center

    Palmer, Evan M.; Horowitz, Todd S.; Torralba, Antonio; Wolfe, Jeremy M.

    2011-01-01

    Many visual search experiments measure response time (RT) as their primary dependent variable. Analyses typically focus on mean (or median) RT. However, given enough data, the RT distribution can be a rich source of information. For this paper, we collected about 500 trials per cell per observer for both target-present and target-absent displays…

  4. How You Move Is What You See: Action Planning Biases Selection in Visual Search

    ERIC Educational Resources Information Center

    Wykowska, Agnieszka; Schubo, Anna; Hommel, Bernhard

    2009-01-01

    Three experiments investigated the impact of planning and preparing a manual grasping or pointing movement on feature detection in a visual search task. The authors hypothesized that action planning may prime perceptual dimensions that provide information for the open parameters of that action. Indeed, preparing for grasping facilitated detection…

  5. Visual Search and Emotion: How Children with Autism Spectrum Disorders Scan Emotional Scenes

    ERIC Educational Resources Information Center

    Maccari, Lisa; Pasini, Augusto; Caroli, Emanuela; Rosa, Caterina; Marotta, Andrea; Martella, Diana; Fuentes, Luis J.; Casagrande, Maria

    2014-01-01

    This study assessed visual search abilities, tested through the flicker task, in children diagnosed with autism spectrum disorders (ASDs). Twenty-two children diagnosed with ASD and 22 matched typically developing (TD) children were told to detect changes in objects of central interest or objects of marginal interest (MI) embedded in either…

  6. The Development of Visual Search in Infancy: Attention to Faces versus Salience

    ERIC Educational Resources Information Center

    Kwon, Mee-Kyoung; Setoodehnia, Mielle; Baek, Jongsoo; Luck, Steven J.; Oakes, Lisa M.

    2016-01-01

    Four experiments examined how faces compete with physically salient stimuli for the control of attention in 4-, 6-, and 8-month-old infants (N = 117 total). Three computational models were used to quantify physical salience. We presented infants with visual search arrays containing a face and familiar object(s), such as shoes and flowers. Six- and…

  7. Earthdata Search: Methods for Improving Data Discovery, Visualization, and Access

    NASA Astrophysics Data System (ADS)

    Quinn, P.; Pilone, D.; Crouch, M.; Siarto, J.; Sun, B.

    2015-12-01

    In a landscape of heterogeneous data from diverse sources and disciplines, producing useful tools poses a significant challenge. NASA's Earthdata Search application tackles this challenge, enabling discovery and inter-comparison of data across the wide array of scientific disciplines that use NASA Earth observation data. During this talk, we will give a brief overview of the application, and then share our approach for understanding and satisfying the needs of users from several disparate scientific communities. Our approach involves:
    - Gathering fine-grained metrics to understand user behavior
    - Using metrics to quantify user success
    - Combining metrics, feedback, and user research to understand user needs
    - Applying professional design toward addressing user needs
    - Using metrics and A/B testing to evaluate the viability of changes
    - Providing enhanced features for services to promote adoption
    - Encouraging good metadata quality and soliciting feedback for metadata issues
    - Open sourcing the application and its components to allow it to serve more users

  8. Widespread correlation patterns of fMRI signal across visual cortex reflect eccentricity organization

    PubMed Central

    Arcaro, Michael J; Honey, Christopher J; Mruczek, Ryan EB; Kastner, Sabine; Hasson, Uri

    2015-01-01

    The human visual system can be divided into over two-dozen distinct areas, each of which contains a topographic map of the visual field. A fundamental question in vision neuroscience is how the visual system integrates information from the environment across different areas. Using neuroimaging, we investigated the spatial pattern of correlated BOLD signal across eight visual areas on data collected during rest conditions and during naturalistic movie viewing. The correlation pattern between areas reflected the underlying receptive field organization with higher correlations between cortical sites containing overlapping representations of visual space. In addition, the correlation pattern reflected the underlying widespread eccentricity organization of visual cortex, in which the highest correlations were observed for cortical sites with iso-eccentricity representations including regions with non-overlapping representations of visual space. This eccentricity-based correlation pattern appears to be part of an intrinsic functional architecture that supports the integration of information across functionally specialized visual areas. DOI: http://dx.doi.org/10.7554/eLife.03952.001 PMID:25695154

  9. RF antenna-pattern visual aids for field use

    NASA Technical Reports Server (NTRS)

    Williams, J. H.

    1973-01-01

    A series of plots of the antenna pattern is made on polar-coordinate sheets depicting vertical planes. Separate sheets are plotted depicting the antenna pattern in the vertical plane at successive azimuth positions. After all polar plots are drawn, they are labeled according to their azimuthal positions. The transparencies are then stiffened with regular wire, cardboard, or molded plastic.

  10. Modeling cognitive effects on visual search for targets in cluttered backgrounds

    NASA Astrophysics Data System (ADS)

    Snorrason, Magnus; Ruda, Harald; Hoffman, James

    1998-07-01

    To understand how a human operator performs visual search in complex scenes, it is necessary to take into account top-down cognitive biases in addition to bottom-up visual saliency effects. We constructed a model to elucidate the relationship between saliency and cognitive effects in the domain of visual search for distant targets in photo-realistic images of cluttered scenes. In this domain, detecting targets is difficult and requires high visual acuity. Sufficient acuity is only available near the fixation point, i.e. in the fovea. Hence, the choice of fixation points is the most important determinant of whether targets get detected. We developed a model that predicts the 2D distribution of fixation probabilities directly from an image. Fixation probabilities were computed as a function of local contrast (saliency effect) and proximity to the horizon (cognitive effect: distant targets are more likely to be found close to the horizon). For validation, the model's predictions were compared to ensemble statistics of subjects' actual fixation locations, collected with an eye-tracker. The model's predictions correlated well with the observed data. Disabling the horizon-proximity functionality of the model significantly degraded prediction accuracy, demonstrating that cognitive effects must be accounted for when modeling visual search.
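
    A minimal sketch of how such a fixation-probability map might be assembled. The multiplicative combination and the Gaussian horizon bias are assumptions made for illustration; they are not the specific functional forms of the authors' model.

```python
import numpy as np

def fixation_map(contrast, horizon_row, sigma=20.0):
    """Combine bottom-up saliency (local contrast) with a top-down
    horizon-proximity bias into a normalized fixation-probability map.

    contrast:    2D array of local-contrast saliency values.
    horizon_row: row index of the horizon in the image.
    sigma:       spread (in rows) of the Gaussian horizon bias (assumed form).
    """
    rows = np.arange(contrast.shape[0])[:, None]
    horizon_bias = np.exp(-((rows - horizon_row) ** 2) / (2 * sigma ** 2))
    combined = contrast * horizon_bias  # multiplicative combination: one simple choice
    return combined / combined.sum()   # normalize so the map sums to 1
```

    With a uniform contrast map, all probability mass concentrates near the horizon row, mirroring the model's cognitive bias toward the horizon.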

  11. Use Patterns of Visual Cues in Computer-Mediated Communication

    ERIC Educational Resources Information Center

    Bolliger, Doris U.

    2009-01-01

    Communication in the virtual environment can be challenging for participants because it lacks physical presence and nonverbal elements. Participants may have difficulties expressing their intentions and emotions in a primarily text-based course. Therefore, the use of visual communication elements such as pictographic and typographic marks can be…

  12. Earthdata Search: Combining New Services and Technologies for Earth Science Data Discovery, Visualization, and Access

    NASA Astrophysics Data System (ADS)

    Quinn, P.; Pilone, D.

    2014-12-01

    A host of new services are revolutionizing discovery, visualization, and access of NASA's Earth science data holdings. At the same time, web browsers have become far more capable and open source libraries have grown to take advantage of these capabilities. Earthdata Search is a web application which combines modern browser features with the latest Earthdata services from NASA to produce a cutting-edge search and access client with features far beyond what was possible only a couple of years ago. Earthdata Search provides data discovery through the Common Metadata Repository (CMR), which provides a high-speed REST API for searching across hundreds of millions of data granules using temporal, spatial, and other constraints. It produces data visualizations by combining CMR data with Global Imagery Browse Services (GIBS) image tiles. Earthdata Search renders its visualizations using custom plugins built on Leaflet.js, a lightweight mobile-friendly open source web mapping library. The client further features an SVG-based interactive timeline view of search results. For data access, Earthdata Search provides easy temporal and spatial subsetting as well as format conversion by making use of OPeNDAP. While the client hopes to drive adoption of these services and standards, it provides fallback behavior for working with data that has not yet adopted them. This allows the client to remain on the cutting-edge of service offerings while still boasting a catalog containing thousands of data collections. In this session, we will walk through Earthdata Search and explain how it incorporates these new technologies and service offerings.
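
    As a concrete illustration of the kind of constrained granule query described above, the sketch below builds a CMR granule-search URL with temporal and spatial filters. The endpoint and parameter names follow the public CMR search API as I understand it; treat them as assumptions to verify against the CMR documentation, and the helper function name is illustrative.

```python
from urllib.parse import urlencode

# Public CMR granule-search endpoint (assumed; verify against CMR docs).
CMR_GRANULE_SEARCH = "https://cmr.earthdata.nasa.gov/search/granules.json"

def build_granule_query(collection_concept_id, temporal=None, bounding_box=None, page_size=20):
    """Build a CMR granule-search URL with optional temporal and spatial constraints."""
    params = {"collection_concept_id": collection_concept_id, "page_size": page_size}
    if temporal:
        params["temporal"] = temporal          # e.g. "2015-01-01T00:00:00Z,2015-02-01T00:00:00Z"
    if bounding_box:
        params["bounding_box"] = bounding_box  # "west,south,east,north" in degrees
    return CMR_GRANULE_SEARCH + "?" + urlencode(params)

url = build_granule_query("C123456-PROV",
                          temporal="2015-01-01T00:00:00Z,2015-02-01T00:00:00Z",
                          bounding_box="-10,-5,10,5")
```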

  13. The Importance of the Eye Area in Face Identification Abilities and Visual Search Strategies in Persons with Asperger Syndrome

    ERIC Educational Resources Information Center

    Falkmer, Marita; Larsson, Matilda; Bjallmark, Anna; Falkmer, Torbjorn

    2010-01-01

    Partly claimed to explain social difficulties observed in people with Asperger syndrome, face identification and visual search strategies become important. Previous research findings are, however, disparate. In order to explore face identification abilities and visual search strategies, with special focus on the importance of the eye area, 24…

  14. Electrophysiological evidence that top-down knowledge controls working memory processing for subsequent visual search.

    PubMed

    Kawashima, Tomoya; Matsumoto, Eriko

    2016-03-23

    Items in working memory guide visual attention toward memory-matching objects. Recent studies have shown that when searching for an object, this attentional guidance can be modulated by knowing the probability that the target will match an item in working memory. Here, we recorded the P3 and contralateral delay activity to investigate how top-down knowledge controls the processing of working memory items. Participants performed a memory task (recognition only) and a memory-or-search task (recognition or visual search) in which they were asked to maintain two colored oriented bars in working memory. For visual search, we manipulated the probability that the target had the same color as the memorized items (0, 50, or 100%). Participants knew the probabilities before the task. Target detection in the 100% match condition was faster than in the 50% match condition, indicating that participants used their knowledge of the probabilities. We found that the P3 amplitude in the 100% condition was larger than in the other conditions and that contralateral delay activity amplitude did not vary across conditions. These results suggest that more attention was allocated to the memory items when observers knew in advance that their color would likely match a target. This led to better search performance despite using qualitatively equal working memory representations. PMID:26872100

  15. Differential roles of the dorsal prefrontal and posterior parietal cortices in visual search: a TMS study

    PubMed Central

    Yan, Yulong; Wei, Rizhen; Zhang, Qian; Jin, Zhenlan; Li, Ling

    2016-01-01

    Although previous studies have shown that fronto-parietal attentional networks play a crucial role in bottom-up and top-down processes, the relative contribution of the frontal and parietal cortices to these processes remains elusive. Here we used transcranial magnetic stimulation (TMS) to interfere with the activity of the right dorsolateral prefrontal cortex (DLPFC) or the right posterior parietal cortex (PPC) immediately prior to the onset of the visual search display. Participants searched for a target defined by color and orientation in a “pop-out” or “search” condition. Repetitive TMS was applied to either the right DLPFC or the right PPC on different days. Performance was evaluated at baseline (no TMS), during TMS, and after TMS (Post-session). RTs were prolonged when TMS was applied over the DLPFC in the search condition, but not in the pop-out condition, relative to the baseline session. In comparison, TMS over the PPC prolonged RTs in the pop-out condition, and when the target appeared in the left visual field in the search condition. Taken together, these findings provide evidence for a differential role of the DLPFC and PPC in visual search, indicating that the DLPFC has a specific involvement in the “search” condition, while the PPC is mainly involved in detecting “pop-out” targets. PMID:27452715

  16. VisualRank: applying PageRank to large-scale image search.

    PubMed

    Jing, Yushi; Baluja, Shumeet

    2008-11-01

    Because of the relative ease in understanding and processing text, commercial image-search systems often rely on techniques that are largely indistinguishable from text-search. Recently, academic studies have demonstrated the effectiveness of employing image-based features to provide alternative or additional signals. However, it remains uncertain whether such techniques will generalize to a large number of popular web queries, and whether the potential improvement to search quality warrants the additional computational cost. In this work, we cast the image-ranking problem into the task of identifying "authority" nodes on an inferred visual similarity graph and propose VisualRank to analyze the visual link structures among images. The images found to be "authorities" are chosen as those that answer the image-queries well. To understand the performance of such an approach in a real system, we conducted a series of large-scale experiments based on the task of retrieving images for 2000 of the most popular product queries. Our experimental results show significant improvement, in terms of user satisfaction and relevancy, in comparison to the most recent Google Image Search results. Maintaining modest computational cost is vital to ensuring that this procedure can be used in practice; we describe the techniques required to make this system practical for large-scale deployment in commercial search engines. PMID:18787237
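
    The core idea, casting image ranking as finding authority nodes in a similarity graph, can be sketched as a standard PageRank power iteration. The similarity matrix below stands in for the image-feature matching scores the paper infers, and the function name is illustrative rather than the paper's implementation.

```python
import numpy as np

def visual_rank(similarity, damping=0.85, iters=100):
    """Power-iteration PageRank over a nonnegative visual-similarity matrix.

    similarity[i][j] >= 0 measures how alike images i and j look
    (e.g., from local-feature matching); the diagonal is ignored.
    Returns one score per image; high scores mark "authority" images.
    """
    S = np.asarray(similarity, dtype=float).copy()
    np.fill_diagonal(S, 0.0)
    n = S.shape[0]
    col_sums = S.sum(axis=0)
    col_sums[col_sums == 0] = 1.0  # avoid division by zero for isolated images
    P = S / col_sums               # column-stochastic transition matrix
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        r = (1 - damping) / n + damping * P @ r  # damped random-walk update
    return r / r.sum()
```

    In a small graph where image 0 is similar to both other images while they are dissimilar to each other, image 0 accumulates the highest score, i.e., it is the "authority" that best answers the query.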

  17. Cortical dynamics of contextually cued attentive visual learning and search: spatial and object evidence accumulation.

    PubMed

    Huang, Tsung-Ren; Grossberg, Stephen

    2010-10-01

    How do humans use target-predictive contextual information to facilitate visual search? How are consistently paired scenic objects and positions learned and used to more efficiently guide search in familiar scenes? For example, humans can learn that a certain combination of objects may define a context for a kitchen and trigger a more efficient search for a typical object, such as a sink, in that context. The ARTSCENE Search model is developed to illustrate the neural mechanisms of such memory-based context learning and guidance and to explain challenging behavioral data on positive-negative, spatial-object, and local-distant cueing effects during visual search, as well as related neuroanatomical, neurophysiological, and neuroimaging data. The model proposes how global scene layout at a first glance rapidly forms a hypothesis about the target location. This hypothesis is then incrementally refined as a scene is scanned with saccadic eye movements. The model simulates the interactive dynamics of object and spatial contextual cueing and attention in the cortical What and Where streams starting from early visual areas through medial temporal lobe to prefrontal cortex. After learning, model dorsolateral prefrontal cortex (area 46) primes possible target locations in posterior parietal cortex based on goal-modulated percepts of spatial scene gist that are represented in parahippocampal cortex. Model ventral prefrontal cortex (area 47/12) primes possible target identities in inferior temporal cortex based on the history of viewed objects represented in perirhinal cortex. PMID:21038974

  18. Is There a Weekly Pattern for Health Searches on Wikipedia and Is the Pattern Unique to Health Topics?

    PubMed Central

    Lau, Annie YS; Wynn, Rolf

    2015-01-01

    Background Online health information–seeking behaviors have been reported to be more common at the beginning of the workweek. This behavior pattern has been interpreted as a kind of “healthy new start” or “fresh start” due to regrets or attempts to compensate for unhealthy behavior or poor choices made during the weekend. However, the observations regarding the most common health information–seeking day were based only on the analyses of users’ behaviors with websites on health or on online health-related searches. We wanted to confirm if this pattern could be found in searches of Wikipedia on health-related topics and also if this search pattern was unique to health-related topics or if it could represent a more general pattern of online information searching—which could be of relevance even beyond the health sector. Objective The aim was to examine the degree to which the search pattern described previously was specific to health-related information seeking or whether similar patterns could be found in other types of information-seeking behavior. Methods We extracted the number of searches performed on Wikipedia in the Norwegian language for 911 days for the most common sexually transmitted diseases (chlamydia, gonorrhea, herpes, human immunodeficiency virus [HIV], and acquired immune deficiency syndrome [AIDS]), other health-related topics (influenza, diabetes, and menopause), and 2 nonhealth-related topics (footballer Lionel Messi and pop singer Justin Bieber). The search dates were classified according to the day of the week and ANOVA tests were used to compare the average number of hits per day of the week. Results The ANOVA tests showed that the sexually transmitted disease queries had their highest peaks on Tuesdays (P<.001) and the fewest searches on Saturdays. The other health topics also showed a weekly pattern, with the highest peaks early in the week and lower numbers on Saturdays (P<.001). Footballer Lionel Messi had the highest mean
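
    The weekday comparison described above boils down to grouping daily hit counts by day of the week and running a one-way ANOVA. A self-contained sketch of the F statistic follows; the counts are made up for illustration, not the study's Wikipedia data.

```python
from itertools import chain

def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA: ratio of between-group
    to within-group variance across the groups (here, weekdays)."""
    all_vals = list(chain.from_iterable(groups))
    grand_mean = sum(all_vals) / len(all_vals)
    k, n = len(groups), len(all_vals)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical daily search counts grouped Monday..Sunday:
weekday_hits = [
    [120, 130, 125],  # Mon
    [140, 150, 145],  # Tue (peak, as reported for the STD queries)
    [110, 115, 112],  # Wed
    [105, 108, 110],  # Thu
    [100, 98, 102],   # Fri
    [80, 85, 82],     # Sat (lowest)
    [90, 95, 92],     # Sun
]
f_stat = one_way_anova_f(weekday_hits)  # large F -> weekday means differ
```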

  19. Active visual search in non-stationary scenes: coping with temporal variability and uncertainty

    NASA Astrophysics Data System (ADS)

    Ušćumlić, Marija; Blankertz, Benjamin

    2016-02-01

    Objective. State-of-the-art experiments for studying neural processes underlying visual cognition often constrain sensory inputs (e.g., static images) and our behavior (e.g., fixed eye-gaze, long eye fixations), isolating or simplifying the interaction of neural processes. Motivated by the non-stationarity of our natural visual environment, we investigated the electroencephalography (EEG) correlates of visual recognition while participants overtly performed visual search in non-stationary scenes. We hypothesized that visual effects (such as those typically used in human-computer interfaces) may increase temporal uncertainty (with reference to fixation onset) of cognition-related EEG activity in an active search task and therefore require novel techniques for single-trial detection. Approach. We addressed fixation-related EEG activity in an active search task with respect to stimulus-appearance styles and dynamics. Alongside popping-up stimuli, our experimental study embraces two composite appearance styles based on fading-in, enlarging, and motion effects. Additionally, we explored whether the knowledge obtained in the pop-up experimental setting can be exploited to boost the EEG-based intention-decoding performance when facing transitional changes of visual content. Main results. The results confirmed our initial hypothesis that the dynamic of visual content can increase temporal uncertainty of the cognition-related EEG activity in active search with respect to fixation onset. This temporal uncertainty challenges the pivotal aim to keep the decoding performance constant irrespective of visual effects. Importantly, the proposed approach for EEG decoding based on knowledge transfer between the different experimental settings gave a promising performance. Significance. Our study demonstrates that the non-stationarity of visual scenes is an important factor in the evolution of cognitive processes, as well as in the dynamic of ocular behavior (i.e., dwell time and

  20. Decoding Visual Location From Neural Patterns in the Auditory Cortex of the Congenitally Deaf.

    PubMed

    Almeida, Jorge; He, Dongjun; Chen, Quanjing; Mahon, Bradford Z; Zhang, Fan; Gonçalves, Óscar F; Fang, Fang; Bi, Yanchao

    2015-11-01

    Sensory cortices of individuals who are congenitally deprived of a sense can exhibit considerable plasticity and be recruited to process information from the senses that remain intact. Here, we explored whether the auditory cortex of congenitally deaf individuals represents visual field location of a stimulus, a dimension that is represented in early visual areas. We used functional MRI to measure neural activity in auditory and visual cortices of congenitally deaf and hearing humans while they observed stimuli typically used for mapping visual field preferences in visual cortex. We found that the location of a visual stimulus can be successfully decoded from the patterns of neural activity in auditory cortex of congenitally deaf but not hearing individuals. This is particularly true for locations within the horizontal plane and within peripheral vision. These data show that the representations stored within neuroplastically changed auditory cortex can align with dimensions that are typically represented in visual cortex. PMID:26423461

  1. Face detection differs from categorization: evidence from visual search in natural scenes.

    PubMed

    Bindemann, Markus; Lewis, Michael B

    2013-12-01

    In this study, we examined whether the detection of frontal, ¾, and profile face views differs from their categorization as faces. In Experiment 1, we compared three tasks that required observers to determine the presence or absence of a face, but varied in the extents to which participants had to search for the faces in simple displays and in small or large scenes to make this decision. Performance was equivalent for all of the face views in simple displays and small scenes, but it was notably slower for profile views when this required the search for faces in extended scene displays. This search effect was confirmed in Experiment 2, in which we compared observers' eye movements with their response times to faces in visual scenes. These results demonstrate that the categorization of faces at fixation is dissociable from the detection of faces in space. Consequently, we suggest that face detection should be studied with extended visual displays, such as natural scenes. PMID:23645414

  2. Effect of pattern complexity on the visual span for Chinese and alphabet characters.

    PubMed

    Wang, Hui; He, Xuanzi; Legge, Gordon E

    2014-01-01

    The visual span for reading is the number of letters that can be recognized without moving the eyes and is hypothesized to impose a sensory limitation on reading speed. Factors affecting the size of the visual span have been studied using alphabet letters. There may be common constraints applying to recognition of other scripts. The aim of this study was to extend the concept of the visual span to Chinese characters and to examine the effect of the greater complexity of these characters. We measured visual spans for Chinese characters and alphabet letters in the central vision of bilingual subjects. Perimetric complexity was used as a metric to quantify the pattern complexity of binary character images. The visual span tests were conducted with four sets of stimuli differing in complexity—lowercase alphabet letters and three groups of Chinese characters. We found that the size of visual spans decreased with increasing complexity, ranging from 10.5 characters for alphabet letters to 4.5 characters for the most complex Chinese characters studied. A decomposition analysis revealed that crowding was the dominant factor limiting the size of the visual span, and the amount of crowding increased with complexity. Errors in the spatial arrangement of characters (mislocations) had a secondary effect. We conclude that pattern complexity has a major effect on the size of the visual span, mediated in large part by crowding. Measuring the visual span for Chinese characters is likely to have high relevance to understanding visual constraints on Chinese reading performance. PMID:24993020

  3. Effect of pattern complexity on the visual span for Chinese and alphabet characters

    PubMed Central

    Wang, Hui; He, Xuanzi; Legge, Gordon E.

    2014-01-01

    The visual span for reading is the number of letters that can be recognized without moving the eyes and is hypothesized to impose a sensory limitation on reading speed. Factors affecting the size of the visual span have been studied using alphabet letters. There may be common constraints applying to recognition of other scripts. The aim of this study was to extend the concept of the visual span to Chinese characters and to examine the effect of the greater complexity of these characters. We measured visual spans for Chinese characters and alphabet letters in the central vision of bilingual subjects. Perimetric complexity was used as a metric to quantify the pattern complexity of binary character images. The visual span tests were conducted with four sets of stimuli differing in complexity—lowercase alphabet letters and three groups of Chinese characters. We found that the size of visual spans decreased with increasing complexity, ranging from 10.5 characters for alphabet letters to 4.5 characters for the most complex Chinese characters studied. A decomposition analysis revealed that crowding was the dominant factor limiting the size of the visual span, and the amount of crowding increased with complexity. Errors in the spatial arrangement of characters (mislocations) had a secondary effect. We conclude that pattern complexity has a major effect on the size of the visual span, mediated in large part by crowding. Measuring the visual span for Chinese characters is likely to have high relevance to understanding visual constraints on Chinese reading performance. PMID:24993020

  4. Neural correlates of context-dependent feature conjunction learning in visual search tasks.

    PubMed

    Reavis, Eric A; Frank, Sebastian M; Greenlee, Mark W; Tse, Peter U

    2016-06-01

    Many perceptual learning experiments show that repeated exposure to a basic visual feature such as a specific orientation or spatial frequency can modify perception of that feature, and that those perceptual changes are associated with changes in neural tuning early in visual processing. Such perceptual learning effects thus exert a bottom-up influence on subsequent stimulus processing, independent of task-demands or endogenous influences (e.g., volitional attention). However, it is unclear whether such bottom-up changes in perception can occur as more complex stimuli such as conjunctions of visual features are learned. It is not known whether changes in the efficiency with which people learn to process feature conjunctions in a task (e.g., visual search) reflect true bottom-up perceptual learning versus top-down, task-related learning (e.g., learning better control of endogenous attention). Here we show that feature conjunction learning in visual search leads to bottom-up changes in stimulus processing. First, using fMRI, we demonstrate that conjunction learning in visual search has a distinct neural signature: an increase in target-evoked activity relative to distractor-evoked activity (i.e., a relative increase in target salience). Second, we demonstrate that after learning, this neural signature is still evident even when participants passively view learned stimuli while performing an unrelated, attention-demanding task. This suggests that conjunction learning results in altered bottom-up perceptual processing of the learned conjunction stimuli (i.e., a perceptual change independent of the task). We further show that the acquired change in target-evoked activity is contextually dependent on the presence of distractors, suggesting that search array Gestalts are learned. Hum Brain Mapp 37:2319-2330, 2016. © 2016 Wiley Periodicals, Inc. PMID:26970441

  5. A Globally Convergent Augmented Lagrangian Pattern Search Algorithm for Optimization with General Constraints and Simple Bounds

    NASA Technical Reports Server (NTRS)

    Lewis, Robert Michael; Torczon, Virginia

    1998-01-01

    We give a pattern search adaptation of an augmented Lagrangian method due to Conn, Gould, and Toint. The algorithm proceeds by successive bound constrained minimization of an augmented Lagrangian. In the pattern search adaptation we solve this subproblem approximately using a bound constrained pattern search method. The stopping criterion proposed by Conn, Gould, and Toint for the solution of this subproblem requires explicit knowledge of derivatives. Such information is presumed absent in pattern search methods; however, we show how we can replace this with a stopping criterion based on the pattern size in a way that preserves the convergence properties of the original algorithm. In this way we proceed by successive, inexact, bound constrained minimization without knowing exactly how inexact the minimization is. So far as we know, this is the first provably convergent direct search method for general nonlinear programming.
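The stopping criterion based on pattern size is easiest to see in a plain compass search, the simplest pattern search method: poll the coordinate directions, halve the step on failure, and stop when the step falls below a tolerance, with no derivatives required. The sketch below is a toy bound-constrained minimizer, not the Conn-Gould-Toint augmented Lagrangian scheme of the abstract; `f`, `x0`, `lower`, and `upper` are hypothetical inputs:

```python
def pattern_search(f, x0, lower, upper, step=0.5, tol=1e-6):
    """Minimize f over the box [lower, upper] by compass (pattern) search.

    Polls the 2n coordinate directions; when no poll point improves on the
    incumbent, the pattern size is halved. The search stops once the pattern
    size drops below `tol` -- a derivative-free stopping criterion.
    """
    x = list(x0)
    fx = f(x)
    n = len(x)
    while step >= tol:
        improved = False
        for i in range(n):
            for d in (step, -step):
                y = list(x)
                # Project the poll point back onto the bound constraints.
                y[i] = min(max(y[i] + d, lower[i]), upper[i])
                fy = f(y)
                if fy < fx:
                    x, fx, improved = y, fy, True
                    break
            if improved:
                break
        if not improved:
            step /= 2  # unsuccessful poll: refine the pattern
    return x, fx
```

In the augmented Lagrangian adaptation described above, a search of this kind solves each bound-constrained subproblem approximately, with the pattern size standing in for the derivative-based subproblem stopping test.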

  6. Differential effects of parietal and frontal inactivations on reaction times distributions in a visual search task

    PubMed Central

    Wardak, Claire; Ben Hamed, Suliann; Olivier, Etienne; Duhamel, Jean-René

    2012-01-01

    The posterior parietal cortex participates in numerous cognitive functions, from perceptual to attentional and decisional processes. However, the same functions have also been attributed to the frontal cortex. We previously conducted a series of reversible inactivations of the lateral intraparietal area (LIP) and of the frontal eye field (FEF) in the monkey, which showed impairments in covert visual search performance, characterized mainly by an increase in the mean reaction time (RT) necessary to detect a contralesional target. Only subtle differences were observed between the inactivation effects in both areas. In particular, the magnitude of the deficit was dependent on search task difficulty for LIP, but not for FEF. In the present study, we re-examine these data in order to try to dissociate the specific involvement of these two regions, by considering the entire RT distribution instead of mean RT. We use the LATER model to help us interpret the effects of the inactivations with regard to information accumulation rate and decision processes. We show that: (1) different search strategies can be used by monkeys to perform visual search, either by processing the visual scene in parallel, or by combining parallel and serial processes; (2) LIP and FEF inactivations have very different effects on the RT distributions in the two monkeys. Although our results are not conclusive with regard to the exact functional mechanisms affected by the inactivations, the effects we observe on RT distributions could be accounted for by an involvement of LIP in saliency representation or decision-making, and an involvement of FEF in attentional shifts and perception. Finally, we observe that the use of the LATER model is limited in the context of a visual search as it cannot fit all the behavioral strategies encountered. We propose that the diversity in search strategies observed in our monkeys also exists in individual human subjects and should be considered in future experiments.

  7. Differential effects of parietal and frontal inactivations on reaction times distributions in a visual search task.

    PubMed

    Wardak, Claire; Ben Hamed, Suliann; Olivier, Etienne; Duhamel, Jean-René

    2012-01-01

    The posterior parietal cortex participates in numerous cognitive functions, from perceptual to attentional and decisional processes. However, the same functions have also been attributed to the frontal cortex. We previously conducted a series of reversible inactivations of the lateral intraparietal area (LIP) and of the frontal eye field (FEF) in the monkey, which showed impairments in covert visual search performance, characterized mainly by an increase in the mean reaction time (RT) necessary to detect a contralesional target. Only subtle differences were observed between the inactivation effects in both areas. In particular, the magnitude of the deficit was dependent on search task difficulty for LIP, but not for FEF. In the present study, we re-examine these data in order to try to dissociate the specific involvement of these two regions, by considering the entire RT distribution instead of mean RT. We use the LATER model to help us interpret the effects of the inactivations with regard to information accumulation rate and decision processes. We show that: (1) different search strategies can be used by monkeys to perform visual search, either by processing the visual scene in parallel, or by combining parallel and serial processes; (2) LIP and FEF inactivations have very different effects on the RT distributions in the two monkeys. Although our results are not conclusive with regard to the exact functional mechanisms affected by the inactivations, the effects we observe on RT distributions could be accounted for by an involvement of LIP in saliency representation or decision-making, and an involvement of FEF in attentional shifts and perception. Finally, we observe that the use of the LATER model is limited in the context of a visual search as it cannot fit all the behavioral strategies encountered. We propose that the diversity in search strategies observed in our monkeys also exists in individual human subjects and should be considered in future experiments.

  8. Epistemic Beliefs, Online Search Strategies, and Behavioral Patterns While Exploring Socioscientific Issues

    NASA Astrophysics Data System (ADS)

    Hsu, Chung-Yuan; Tsai, Meng-Jung; Hou, Huei-Tse; Tsai, Chin-Chung

    2014-06-01

    Online information searching tasks are usually implemented in a technology-enhanced science curriculum or merged in an inquiry-based science curriculum. The purpose of this study was to examine the role students' different levels of scientific epistemic beliefs (SEBs) play in their online information searching strategies and behaviors. Based on the measurement of an SEB survey, 42 undergraduate and graduate students in Taiwan were recruited from a pool of 240 students and were divided into sophisticated and naïve SEB groups. The students' self-perceived online searching strategies were evaluated by the Online Information Searching Strategies Inventory, and their search behaviors were recorded by screen-capture videos. A sequential analysis was further used to analyze the students' searching behavioral patterns. The results showed that those students with more sophisticated SEBs tended to employ more advanced online searching strategies and to demonstrate a more metacognitive searching pattern.

  9. Visual cluster analysis and pattern recognition template and methods

    DOEpatents

    Osbourn, G.C.; Martinez, R.F.

    1999-05-04

    A method of clustering using a novel template to define a region of influence is disclosed. Using neighboring approximation methods, computation times can be significantly reduced. The template and method are applicable to and improve pattern recognition techniques. 30 figs.

  10. Visual cluster analysis and pattern recognition template and methods

    DOEpatents

    Osbourn, Gordon Cecil; Martinez, Rubel Francisco

    1999-01-01

    A method of clustering using a novel template to define a region of influence. Using neighboring approximation methods, computation times can be significantly reduced. The template and method are applicable to and improve pattern recognition techniques.

  11. Visual cluster analysis and pattern recognition template and methods

    SciTech Connect

    Osbourn, G.C.; Martinez, R.F.

    1993-12-31

    This invention comprises a method of clustering using a novel template to define a region of influence. Using neighboring approximation methods, computation times can be significantly reduced. The template and method are applicable to and improve pattern recognition techniques.

  12. Hypothesis Support Mechanism for Mid-Level Visual Pattern Recognition

    NASA Technical Reports Server (NTRS)

    Amador, Jose J (Inventor)

    2007-01-01

    A method of mid-level pattern recognition provides for a pose invariant Hough Transform by parametrizing pairs of points in a pattern with respect to at least two reference points, thereby providing a parameter table that is scale- or rotation-invariant. A corresponding inverse transform may be applied to test hypothesized matches in an image and a distance transform utilized to quantify the level of match.

  13. Visual lateralization of pattern discrimination in the bottlenose dolphin (Tursiops truncatus).

    PubMed

    von Fersen, L; Schall, U; Güntürkün, O

    2000-01-01

    The aim of the present study was to investigate whether bottlenose dolphins have cerebral asymmetries of visual processing. The monocular performance of the adult dolphin Goliath was tested using a large number of simultaneous multiple pattern discrimination tasks. The experiments revealed a clear right eye advantage in the acquisition and the retention of pattern discriminations as well as asymmetries in the interhemispheric transfer of visual information. As a result of a complete decussation at the optic nerve, this right eye superiority is probably related to a left hemisphere dominance in visual processing. PMID:10628742

  14. The downside of choice: Having a choice benefits enjoyment, but at a cost to efficiency and time in visual search.

    PubMed

    Kunar, Melina A; Ariyabandu, Surani; Jami, Zaffran

    2016-04-01

    The efficiency of how people search for an item in visual search has, traditionally, been thought to depend on bottom-up or top-down guidance cues. However, recent research has shown that the rate at which people visually search through a display is also affected by cognitive strategies. In this study, we investigated the role of choice in visual search, by asking whether giving people a choice alters both preference for a cognitively neutral task and search behavior. Two visual search conditions were examined: one in which participants were given a choice of visual search task (the choice condition), and one in which participants did not have a choice (the no-choice condition). The results showed that the participants in the choice condition rated the task as both more enjoyable and likeable than did the participants in the no-choice condition. However, despite their preferences, actual search performance was slower and less efficient in the choice condition than in the no-choice condition (Exp. 1). Experiment 2 showed that the difference in search performance between the choice and no-choice conditions disappeared when central executive processes became occupied with a task-switching task. These data concur with a choice-impaired hypothesis of search, in which having a choice leads to more motivated, active search involving executive processes. PMID:26892010

  15. Masked target transform volume clutter metric for human observer visual search modeling

    NASA Astrophysics Data System (ADS)

    Moore, Richard Kirk

    The Night Vision and Electronic Sensors Directorate (NVESD) develops an imaging system performance model to aid in the design and comparison of imaging systems for military use. It is intended to approximate visual task performance for a typical human observer with an imaging system of specified optical, electrical, physical, and environmental parameters. When modeling search performance, the model currently uses only target size and target-to-background contrast to describe a scene. The presence or absence of other non-target objects and textures in the scene also affects search performance, but NVESD's time-limited search model based on the targeting task performance metric (TTP/TLS) does not currently account for them explicitly. Non-target objects in a scene that impact search performance are referred to as clutter. A universally accepted mathematical definition of clutter does not yet exist. Researchers have proposed a number of clutter metrics based on very different methods, but none account for display geometry or the varying spatial frequency sensitivity of the human visual system. After a review of the NVESD search model, properties of the human visual system, and a literature review of clutter metrics, the new masked target transform volume clutter metric will be presented. Next the results of an experiment designed to show performance variation due to clutter alone will be presented. Then, the results of three separate perception experiments using real or realistic search imagery will be used to show that the new clutter metric better models human observer search performance than the current NVESD model or any of the reviewed clutter metrics.

  16. The Accuracy of Saccadic and Perceptual Decisions in Visual Search

    NASA Technical Reports Server (NTRS)

    Eckstein, Miguel P.; Stone, Leland S.; Beutter, B. B.; Stone, Leland S. (Technical Monitor)

    1997-01-01

    Saccadic eye movements during search for a target embedded in noise are suboptimally guided by information about target location. Our goal is to compare the spatial information used to guide the saccades with that used for the perceptual decision. Three observers were asked to determine the location of a bright disk (diameter = 21 min) in white noise (signal-to-noise ratio = 4.2) from among 10 possible locations evenly spaced at 5.9 deg eccentricity. In the first of four conditions, observers used natural eye movements. In the three remaining conditions, observers fixated a central cross at all times. The fixation conditions consisted of three different presentation times (100, 200, 300 msec), each followed by a mask. Eye-position data were collected, with a resolution of approximately 0.2 deg. In the natural viewing condition, we measured the accuracy with respect to the target and the latency of the first saccade. In the fixation conditions, we discarded trials in which observers broke fixation. Perceptual performance was computed for all conditions. Averaged across observers, the first saccade was correct (closest to the target location) for 56 +/- (SD) % of trials (chance = 10 %) and occurred after a latency of 313 +/- 56 msec. Perceptual performance averaged 53 +/- 4, 63 +/- 4, 65 +/- 2 % correct at 100, 200, and 300 msec, respectively. For the signal-to-noise ratio used, at the time of initiation of the first saccade, there is little difference between the amount of information about target location available to the perceptual and saccadic systems.

  17. Visualizing a High Recall Search Strategy Output for Undergraduates in an Exploration Stage of Researching a Term Paper.

    ERIC Educational Resources Information Center

    Cole, Charles; Mandelblatt, Bertie; Stevenson, John

    2002-01-01

    Discusses high recall search strategies for undergraduates and how to overcome information overload that results. Highlights include word-based versus visual-based schemes; five summarization and visualization schemes for presenting information retrieval citation output; and results of a study that recommend visualization schemes geared toward…

  18. Learning from data: recognizing glaucomatous defect patterns and detecting progression from visual field measurements.

    PubMed

    Yousefi, Siamak; Goldbaum, Michael H; Balasubramanian, Madhusudhanan; Medeiros, Felipe A; Zangwill, Linda M; Liebmann, Jeffrey M; Girkin, Christopher A; Weinreb, Robert N; Bowd, Christopher

    2014-07-01

    A hierarchical approach to learn from visual field data was adopted to identify glaucomatous visual field defect patterns and to detect glaucomatous progression. The analysis pipeline included three stages, namely, clustering, glaucoma boundary limit detection, and glaucoma progression detection testing. First, cross-sectional visual field tests collected from each subject were clustered using a mixture of Gaussians and model parameters were estimated using expectation maximization. The visual field clusters were further estimated to recognize glaucomatous visual field defect patterns by decomposing each cluster into several axes. The glaucoma visual field defect patterns along each axis then were identified. To derive a definition of progression, the longitudinal visual fields of stable glaucoma eyes on the abnormal cluster axes were projected and the slope was approximated using linear regression (LR) to determine the confidence limit of each axis. For glaucoma progression detection, the longitudinal visual fields of each eye on the abnormal cluster axes were projected and the slope was approximated by LR. Progression was assigned if the progression rate was greater than the boundary limit of the stable eyes; otherwise, stability was assumed. The proposed method was compared to a recently developed progression detection method and to clinically available glaucoma progression detection software. The clinical accuracy of the proposed pipeline was as good as or better than the currently available methods. PMID:24710816
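Once the longitudinal visual fields have been projected onto an abnormal cluster axis, the progression test described above reduces to comparing a least-squares slope against a boundary limit derived from stable eyes. A hedged sketch of that final step (the clustering, axis projection, and confidence-limit estimation of the actual pipeline are omitted; `stable_limit` is a hypothetical precomputed bound):

```python
def lr_slope(times, values):
    """Ordinary least-squares slope of `values` regressed on `times`."""
    n = len(times)
    mt = sum(times) / n
    mv = sum(values) / n
    num = sum((t - mt) * (v - mv) for t, v in zip(times, values))
    den = sum((t - mt) ** 2 for t in times)
    return num / den

def is_progressing(times, values, stable_limit):
    """Flag progression when the regression slope of the projected visual
    field series exceeds the boundary limit estimated from stable eyes
    (sign convention: larger slope = worsening)."""
    return lr_slope(times, values) > stable_limit
```

Eyes whose projected slope stays within the stable-eye limit are labeled stable; only slopes beyond that limit are called progression, which is how the pipeline controls false alarms from measurement noise.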

  19. Does focused endogenous attention prevent attentional capture in pop-out visual search?

    PubMed Central

    Seiss, Ellen; Kiss, Monika; Eimer, Martin

    2009-01-01

    To investigate whether salient visual singletons capture attention when they appear outside the current endogenous attentional focus, we measured the N2pc component as a marker of attentional capture in a visual search task where target or nontarget singletons were presented at locations previously cued as task-relevant, or in the uncued irrelevant hemifield. In two experiments, targets were either defined by colour, or by a combination of colour and shape. The N2pc was elicited both for attended singletons and for singletons on the uncued side, demonstrating that focused endogenous attention cannot prevent attentional capture by salient unattended visual events. However, N2pc amplitudes were larger for attended and unattended singletons that shared features with the current target, suggesting that top-down task sets modulate the capacity of visual singletons to capture attention both within and outside the current attentional focus. PMID:19473304

  20. Visualization of gunshot residue patterns on dark clothing.

    PubMed

    Atwater, Christina S; Durina, Marie E; Durina, John P; Blackledge, Robert D

    2006-09-01

    Determination of the muzzle-to-target distance is often a critical factor in criminal and civil investigations involving firearms. However, seeing and recording gunshot residue patterns can be difficult if the victim's clothing is dark and/or bloodstained. Trostle reported the use of infrared film for the detection of burn patterns. However, only after the film is developed are the results visible and multiple exposures at different settings may be needed. The Video Spectral Comparator 2000 (Foster & Freeman Ltd., Evesham, Worcestershire, U.K.) is an imaging instrument routinely used by forensic document examiners. Without use of specialized film could the VSC 2000 (at appropriate instrument settings) quickly, easily, and reliably provide instantaneous viewing, saving, and printing of gunshot residue patterns on dark and/or blood soaked clothing? At muzzle-to-target distances of 6, 12, and 18 in., test fires were made into five different types of dark clothing using eight different handguns of different calibers. Gunshot residues were detected for all eight calibers, and powder burn patterns were seen on dark clothing for all three target distances and calibers except 0.22 long rifle and 0.25 ACP. Bloodstains did not preclude the viewing of these patterns. PMID:17018087

  1. Visual search performance of patients with vision impairment: Effect of JPEG image enhancement

    PubMed Central

    Luo, Gang; Satgunam, PremNandhini; Peli, Eli

    2012-01-01

    Purpose To measure natural image search performance in patients with central vision impairment. To evaluate the performance effect for a JPEG based image enhancement technique using the visual search task. Method 150 JPEG images were presented on a touch screen monitor in either an enhanced or original version to 19 patients (visual acuity 0.4 to 1.2 logMAR, 6/15 to 6/90, 20/50 to 20/300) and 7 normally sighted controls (visual acuity −0.12 to 0.1 logMAR, 6/4.5 to 6/7.5, 20/15 to 20/25). Each image fell into one of three categories: faces, indoors, and collections. The enhancement was realized by moderately boosting a mid-range spatial frequency band in the discrete cosine transform (DCT) coefficients of the image luminance component. Participants pointed to an object in a picture that matched a given target displayed at the upper-left corner of the monitor. Search performance was quantified by the percentage of correct responses, the median search time of correct responses, and an “integrated performance” measure – the area under the curve of cumulative correct response rate over search time. Results Patients were able to perform the search tasks but their performance was substantially worse than the controls. Search performances for the 3 image categories were significantly different (p≤0.001) for all the participants, with searching for faces being the most difficult. When search time and correct response were analyzed separately, the effect of enhancement led to increase in one measure but decrease in another for many patients. Using the integrated performance, it was found that search performance declined with decrease in acuity (p=0.005). An improvement with enhancement was found mainly for the patients whose acuity ranged from 0.4 to 0.8 logMAR (6/15 to 6/38, 20/50 to 20/125). Enhancement conferred a small but significant improvement in integrated performance for indoor and collection images (p=0.025) in the patients. Conclusion Search performance
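The "integrated performance" measure defined above, the area under the cumulative correct-response-rate curve over search time, can be sketched as a staircase integral: the curve steps up by one trial's worth of rate at each correct-response time. This is an assumed construction consistent with the description in the abstract; the paper's exact formulation may differ.

```python
def integrated_performance(correct_times, n_trials, t_max):
    """Area under the cumulative correct-response-rate curve over [0, t_max].

    `correct_times`: search times of the correct responses; the cumulative
    rate steps up by 1/n_trials at each of them. A larger area means both
    more correct responses and faster ones, combining accuracy and speed.
    """
    ts = sorted(t for t in correct_times if t <= t_max)
    area = 0.0
    rate = 0.0   # cumulative correct-response rate so far
    prev = 0.0   # time of the previous step
    for t in ts:
        area += rate * (t - prev)  # flat segment before this correct response
        rate += 1.0 / n_trials
        prev = t
    area += rate * (t_max - prev)  # final flat segment out to t_max
    return area
```

A participant who answers every trial correctly and instantly approaches the maximum area of t_max; slower or less accurate search shrinks the area, which is why this single number can capture the speed/accuracy trade-offs the enhancement produced.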

  2. A modified mirror projection visual evoked potential stimulator for presenting patterns in different orientations.

    PubMed

    Taylor, P K; Wynn-Williams, G M

    1986-07-01

    Modifications to a standard mirror projection visual evoked potential stimulator are described to enable projection of patterns in varying orientations. The galvanometer-mirror assembly is mounted on an arm which can be rotated through 90 degrees. This enables patterns in any orientation to be deflected perpendicular to their axes. PMID:2424725

  3. Increased Vulnerability to Pattern-Related Visual Stress in Myalgic Encephalomyelitis.

    PubMed

    Wilson, Rachel L; Paterson, Kevin B; Hutchinson, Claire V

    2015-12-01

    The objective of this study was to determine vulnerability to pattern-related visual stress in Myalgic Encephalomyelitis/Chronic Fatigue Syndrome (ME/CFS). A total of 20 ME/CFS patients and 20 matched (age, gender) controls were recruited to the study. Pattern-related visual stress was determined using the Pattern Glare Test. Participants viewed three patterns, the spatial frequencies (SF) of which were 0.3 (low-SF), 2.3 (mid-SF), and 9.4 (high-SF) cycles per degree (c/deg). They reported the number of distortions they experienced when viewing each pattern. ME/CFS patients exhibited significantly higher pattern glare scores than controls for the mid-SF pattern. Mid-high SF differences were also significantly higher in patients than controls. These findings provide evidence of altered visual perception in ME/CFS. Pattern-related visual stress may represent an identifiable clinical feature of ME/CFS that will prove useful in its diagnosis. However, further research is required to establish if these symptoms reflect ME/CFS-related changes in the functioning of sensory neural pathways. PMID:26562880

  4. Incidental Learning Speeds Visual Search by Lowering Response Thresholds, Not by Improving Efficiency: Evidence from Eye Movements

    ERIC Educational Resources Information Center

    Hout, Michael C.; Goldinger, Stephen D.

    2012-01-01

    When observers search for a target object, they incidentally learn the identities and locations of "background" objects in the same display. This learning can facilitate search performance, eliciting faster reaction times for repeated displays. Despite these findings, visual search has been successfully modeled using architectures that maintain no…

  5. Distinct Visual Evoked Potential Morphological Patterns for Apparent Motion Processing in School-Aged Children

    PubMed Central

    Campbell, Julia; Sharma, Anu

    2016-01-01

    Measures of visual cortical development in children demonstrate high variability and inconsistency throughout the literature. This is partly due to the specificity of the visual system in processing certain features. It may then be advantageous to activate multiple cortical pathways in order to observe maturation of coinciding networks. Visual stimuli eliciting the percept of apparent motion and shape change are designed to simultaneously activate both dorsal and ventral visual streams. However, research has shown that such stimuli also elicit variable visual evoked potential (VEP) morphology in children. The aim of this study was to describe developmental changes in VEPs, including morphological patterns, and underlying visual cortical generators, elicited by apparent motion and shape change in school-aged children. Forty-one typically developing children underwent high-density EEG recordings in response to a continuously morphing, radially modulated, circle-star grating. VEPs were then compared across the age groups of 5–7, 8–10, and 11–15 years according to latency and amplitude. Current density reconstructions (CDR) were performed on VEP data in order to observe activated cortical regions. It was found that two distinct VEP morphological patterns occurred in each age group. However, there were no major developmental differences between the age groups according to each pattern. CDR further demonstrated consistent visual generators across age and pattern. These results describe two novel VEP morphological patterns in typically developing children, but with similar underlying cortical sources. The importance of these morphological patterns is discussed in terms of future studies and the investigation of a relationship to visual cognitive performance. PMID:27445738

  6. Distinct Visual Evoked Potential Morphological Patterns for Apparent Motion Processing in School-Aged Children.

    PubMed

    Campbell, Julia; Sharma, Anu

    2016-01-01

    Measures of visual cortical development in children demonstrate high variability and inconsistency throughout the literature. This is partly due to the specificity of the visual system in processing certain features. It may then be advantageous to activate multiple cortical pathways in order to observe maturation of coinciding networks. Visual stimuli eliciting the percept of apparent motion and shape change are designed to simultaneously activate both dorsal and ventral visual streams. However, research has shown that such stimuli also elicit variable visual evoked potential (VEP) morphology in children. The aim of this study was to describe developmental changes in VEPs, including morphological patterns, and underlying visual cortical generators, elicited by apparent motion and shape change in school-aged children. Forty-one typically developing children underwent high-density EEG recordings in response to a continuously morphing, radially modulated, circle-star grating. VEPs were then compared across the age groups of 5-7, 8-10, and 11-15 years according to latency and amplitude. Current density reconstructions (CDR) were performed on VEP data in order to observe activated cortical regions. It was found that two distinct VEP morphological patterns occurred in each age group. However, there were no major developmental differences between the age groups according to each pattern. CDR further demonstrated consistent visual generators across age and pattern. These results describe two novel VEP morphological patterns in typically developing children, but with similar underlying cortical sources. The importance of these morphological patterns is discussed in terms of future studies and the investigation of a relationship to visual cognitive performance. PMID:27445738

  7. Global statistical regularities modulate the speed of visual search in patients with focal attentional deficits

    PubMed Central

    Lanzoni, Lucilla; Melcher, David; Miceli, Gabriele; Corbett, Jennifer E.

    2014-01-01

    There is growing evidence that the statistical properties of ensembles of similar objects are processed in a qualitatively different manner than the characteristics of individual items. It has recently been proposed that these types of perceptual statistical representations are part of a strategy to complement focused attention in order to circumvent the visual system’s limited capacity to represent more than a few individual objects in detail. Previous studies have demonstrated that patients with attentional deficits are nonetheless sensitive to these sorts of statistical representations. Here, we examined how such global representations may function to aid patients in overcoming focal attentional limitations by manipulating the statistical regularity of a visual scene while patients performed a search task. Three patients previously diagnosed with visual neglect searched for a target Gabor tilted to the left or right of vertical in displays of horizontal distractor Gabors. Although the local sizes of the distractors changed on every trial, the mean size remained stable for several trials. Patients made faster correct responses to targets in neglected regions of the visual field when global statistics remained constant over several trials, similar to age-matched controls. Given neglect patients’ attentional deficits, these results suggest that stable perceptual representations of global statistics can establish a context to speed search without the need to represent individual elements in detail. PMID:24971066

  8. Direction of Auditory Pitch-Change Influences Visual Search for Slope From Graphs.

    PubMed

    Parrott, Stacey; Guzman-Martinez, Emmanuel; Orte, Laura; Grabowecky, Marcia; Huntington, Mark D; Suzuki, Satoru

    2015-01-01

    Linear trend (slope) is important information conveyed by graphs. We investigated how sounds influenced slope detection in a visual search paradigm. Four bar graphs or scatter plots were presented on each trial. Participants looked for a positive-slope or a negative-slope target (in blocked trials), and responded to targets in a go or no-go fashion. For example, in a positive-slope-target block, the target graph displayed a positive slope while other graphs displayed negative slopes (a go trial), or all graphs displayed negative slopes (a no-go trial). When an ascending or descending sound was presented concurrently, ascending sounds slowed detection of negative-slope targets whereas descending sounds slowed detection of positive-slope targets. The sounds had no effect when they immediately preceded the visual search displays, suggesting that the results were due to crossmodal interaction rather than priming. The sounds also had no effect when targets were words describing slopes, such as "positive," "negative," "increasing," or "decreasing," suggesting that the results were unlikely due to semantic-level interactions. Manipulations of spatiotemporal similarity between sounds and graphs had little effect. These results suggest that ascending and descending sounds influence visual search for slope based on a general association between the direction of auditory pitch-change and visual linear trend. PMID:26541054

  9. Task-dependent modulation of word processing mechanisms during modified visual search tasks.

    PubMed

    Dampure, Julien; Benraiss, Abdelrhani; Vibert, Nicolas

    2016-01-01

    During visual search for words, the impact of the visual and semantic features of words varies as a function of the search task. This event-related potential (ERP) study focused on the way these features of words are used to detect similarities between the distractor words that are glanced at and the target word, as well as to then reject the distractor words. The participants had to search for a target word that was either given literally or defined by a semantic clue among words presented sequentially. The distractor words included words that resembled the target and words that were semantically related to the target. The P2a component was the first component to be modulated by the visual and/or semantic similarity of distractors to the target word, and these modulations varied according to the task. The same held true for the later N300 and N400 components, which confirms that, depending on the task, distinct processing pathways were sensitized through attentional modulation. Hence, the process that matches what is perceived with the target acts during the first 200 ms after word presentation, and both early detection and late rejection processes of words depend on the search task and on the representation of the target stored in memory. PMID:26176489

  10. Toddlers with Autism Spectrum Disorder are more successful at visual search than typically developing toddlers

    PubMed Central

    Kaldy, Zsuzsa; Kraper, Catherine; Carter, Alice S.; Blaser, Erik

    2011-01-01

    Plaisted, O’Riordan and colleagues (Plaisted, O’Riordan & Baron-Cohen, 1998; O’Riordan, 2004) showed that school-age children and adults with Autism Spectrum Disorder (ASD) are faster at finding targets in certain types of visual search tasks than typical controls. Currently, though, very little is known about the visual search skills of very young children (1–3-year-olds) – both typically developing and with ASD. We used an eye-tracker to measure looking behavior, providing fine-grained measures of visual search in 2.5-year-old toddlers with and without ASD (this representing the age by which many children may first receive a diagnosis of ASD). Importantly, our paradigm required no verbal instructions or feedback, making the task appropriate for toddlers who are pre- or nonverbal. We found that toddlers with ASD were more successful at finding the target than typically developing, age-matched controls. Further, our paradigm allowed us to estimate the number of items scrutinized per trial, revealing that for large set size conjunctive search, toddlers with ASD scrutinized as many as twice the number of items as typically developing toddlers, in the same amount of time. PMID:21884314
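
    The estimate of "items scrutinized per trial" can be illustrated with a simple area-of-interest count over eye-tracker fixations. The input format and rectangular item bounds are assumptions for illustration, not the study's exact analysis:

```python
def items_scrutinized(fixations, item_boxes):
    """Count distinct display items that received at least one
    fixation in a trial. `fixations` are (x, y) gaze samples and
    `item_boxes` are (x0, y0, x1, y1) item bounds; a simplified
    area-of-interest scheme, not the study's exact analysis."""
    hit = set()
    for x, y in fixations:
        for i, (x0, y0, x1, y1) in enumerate(item_boxes):
            if x0 <= x <= x1 and y0 <= y <= y1:
                hit.add(i)
    return len(hit)

# Two items; three fixations land on items, one lands on the background.
boxes = [(0, 0, 10, 10), (20, 0, 30, 10)]
gaze = [(5, 5), (25, 5), (6, 6), (50, 50)]
# items_scrutinized(gaze, boxes) == 2
```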

  11. Evaluation of a prototype search and visualization system for exploring scientific communities.

    PubMed

    Bales, Michael E; Kaufman, David R; Johnson, Stephen B

    2009-01-01

    Searches of bibliographic databases generate lists of articles but do little to reveal connections between authors, institutions, and grants. As a result, search results cannot be fully leveraged. To address this problem we developed Sciologer, a prototype search and visualization system. Sciologer presents the results of any PubMed query as an interactive network diagram of the above elements. We conducted a cognitive evaluation with six neuroscience and six obesity researchers. Researchers used the system effectively. They used geographic, color, and shape metaphors to describe community structure and made accurate inferences pertaining to a) collaboration among research groups; b) prominence of individual researchers; and c) differentiation of expertise. The tool confirmed certain beliefs, disconfirmed others, and extended their understanding of their own discipline. The majority indicated the system offered information of value beyond a traditional PubMed search and that they would use the tool if available. PMID:20351816
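
    The kind of co-authorship structure a system like Sciologer visualizes can be sketched by reducing article records to a weighted co-author edge list. The input format here is a hypothetical simplification; the actual system draws on full PubMed metadata, including institutions and grants:

```python
from collections import Counter
from itertools import combinations

def coauthor_graph(articles):
    """Reduce article records to a weighted co-authorship edge list.
    Each record is a list of author names (a hypothetical input
    format for illustration)."""
    edges = Counter()
    for authors in articles:
        for a, b in combinations(sorted(set(authors)), 2):
            edges[(a, b)] += 1    # one shared article per increment
    return edges

articles = [["Smith J", "Lee K"],
            ["Smith J", "Lee K", "Patel R"]]
edges = coauthor_graph(articles)
# edges[("Lee K", "Smith J")] == 2  (two shared articles)
```

    An edge list like this is the usual input to a network-layout tool, which would then render the interactive diagram the abstract describes.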

  12. Visual search efficiency is greater for human faces compared to animal faces.

    PubMed

    Simpson, Elizabeth A; Husband, Haley L; Yee, Krysten; Fullerton, Alison; Jakobsen, Krisztina V

    2014-01-01

    The Animate Monitoring Hypothesis proposes that humans and animals were the most important categories of visual stimuli for ancestral humans to monitor, as they presented important challenges and opportunities for survival and reproduction; however, it remains unknown whether animal faces are located as efficiently as human faces. We tested this hypothesis by examining whether human, primate, and mammal faces elicit similar searches, or whether human faces are privileged. In the first three experiments, participants located a target (human, primate, or mammal face) among distractors (non-face objects). We found fixations on human faces were faster and more accurate than fixations on primate faces, even when controlling for search category specificity. A final experiment revealed that, even when task-irrelevant, human faces slowed searches for non-faces, suggesting some bottom-up processing may be responsible for the human face search efficiency advantage. PMID:24962122

  13. Visual Search Efficiency is Greater for Human Faces Compared to Animal Faces

    PubMed Central

    Simpson, Elizabeth A.; Mertins, Haley L.; Yee, Krysten; Fullerton, Alison; Jakobsen, Krisztina V.

    2015-01-01

    The Animate Monitoring Hypothesis proposes that humans and animals were the most important categories of visual stimuli for ancestral humans to monitor, as they presented important challenges and opportunities for survival and reproduction; however, it remains unknown whether animal faces are located as efficiently as human faces. We tested this hypothesis by examining whether human, primate, and mammal faces elicit similarly efficient searches, or whether human faces are privileged. In the first three experiments, participants located a target (human, primate, or mammal face) among distractors (non-face objects). We found fixations on human faces were faster and more accurate than primate faces, even when controlling for search category specificity. A final experiment revealed that, even when task-irrelevant, human faces slowed searches for non-faces, suggesting some bottom-up processing may be responsible for the human face search efficiency advantage. PMID:24962122

  14. Production and perception rules underlying visual patterns: effects of symmetry and hierarchy

    PubMed Central

    Westphal-Fitch, Gesche; Huber, Ludwig; Gómez, Juan Carlos; Fitch, W. Tecumseh

    2012-01-01

    Formal language theory has been extended to two-dimensional patterns, but little is known about two-dimensional pattern perception. We first examined spontaneous two-dimensional visual pattern production by humans, gathered using a novel touch screen approach. Both spontaneous creative production and subsequent aesthetic ratings show that humans prefer ordered, symmetrical patterns over random patterns. We then further explored pattern-parsing abilities in different human groups, and compared them with pigeons. We generated visual plane patterns based on rules varying in complexity. All human groups tested, including children and individuals diagnosed with autism spectrum disorder (ASD), were able to detect violations of all production rules tested. Our ASD participants detected pattern violations with the same speed and accuracy as matched controls. Children's ability to detect violations of a relatively complex rotational rule correlated with age, whereas their ability to detect violations of a simple translational rule did not. By contrast, even with extensive training, pigeons were unable to detect orientation-based structural violations, suggesting that, unlike humans, they did not learn the underlying structural rules. Visual two-dimensional patterns offer a promising new formally-grounded way to investigate pattern production and perception in general, widely applicable across species and age groups. PMID:22688636

  15. Production and perception rules underlying visual patterns: effects of symmetry and hierarchy.

    PubMed

    Westphal-Fitch, Gesche; Huber, Ludwig; Gómez, Juan Carlos; Fitch, W Tecumseh

    2012-07-19

    Formal language theory has been extended to two-dimensional patterns, but little is known about two-dimensional pattern perception. We first examined spontaneous two-dimensional visual pattern production by humans, gathered using a novel touch screen approach. Both spontaneous creative production and subsequent aesthetic ratings show that humans prefer ordered, symmetrical patterns over random patterns. We then further explored pattern-parsing abilities in different human groups, and compared them with pigeons. We generated visual plane patterns based on rules varying in complexity. All human groups tested, including children and individuals diagnosed with autism spectrum disorder (ASD), were able to detect violations of all production rules tested. Our ASD participants detected pattern violations with the same speed and accuracy as matched controls. Children's ability to detect violations of a relatively complex rotational rule correlated with age, whereas their ability to detect violations of a simple translational rule did not. By contrast, even with extensive training, pigeons were unable to detect orientation-based structural violations, suggesting that, unlike humans, they did not learn the underlying structural rules. Visual two-dimensional patterns offer a promising new formally-grounded way to investigate pattern production and perception in general, widely applicable across species and age groups. PMID:22688636

  16. Comparison of the reliability of multifocal visual evoked cortical potentials generated by pattern reversal and pattern pulse stimulation.

    PubMed

    Souza, G S; Schakelford, H B; Moura, A L A; Gomes, B D; Ventura, D F; Fitzgerald, M E C; Silveira, L C L

    2012-10-01

    This study compared the effectiveness of the multifocal visual evoked cortical potentials (mfVEP) elicited by pattern pulse stimulation with that of pattern reversal in producing reliable responses (signal-to-noise ratio >1.359). Participants were 14 healthy subjects. Visual stimulation was obtained using a 60-sector dartboard display consisting of 6 concentric rings presented in either pulse or reversal mode. Each sector, consisting of 16 checks at 99% Michelson contrast and 80 cd/m² mean luminance, was controlled by a binary m-sequence in the time domain. The signal-to-noise ratio was generally larger in the pattern reversal than in the pattern pulse mode. The number of reliable responses was similar in the central sectors for the two stimulation modes. At the periphery, pattern reversal showed a larger number of reliable responses. Pattern pulse stimuli performed similarly to pattern reversal stimuli in generating reliable waveforms in R1 and R2. The advantage of using both protocols to study mfVEP responses is their complementarity: in some patients, reliable waveforms in specific sectors may be obtained with only one of the two methods. The joint analysis of pattern reversal and pattern pulse stimuli increased the rate of reliability for central sectors by 7.14% in R1, 5.35% in R2, 4.76% in R3, 3.57% in R4, 2.97% in R5, and 1.78% in R6. From R1 to R4, the reliability of generating mfVEPs was above 70% when using both protocols. Thus, for very high reliability and a thorough examination of visual performance, it is recommended to use both stimulation protocols. PMID:22782556
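
    The reliability criterion (signal-to-noise ratio > 1.359) can be illustrated with an RMS-based SNR, a common mfVEP analysis in which signal and noise are measured in separate time windows of each sector's waveform. The window bounds below are assumptions, since the abstract does not give them:

```python
import numpy as np

def sector_snr(waveform, fs, signal_win=(0.045, 0.150), noise_win=(0.325, 0.430)):
    """RMS signal-to-noise ratio for one mfVEP sector waveform.
    The analysis windows (in seconds) are illustrative assumptions;
    the abstract does not report the windows used."""
    def rms(win):
        seg = waveform[int(win[0] * fs):int(win[1] * fs)]
        return np.sqrt(np.mean(seg ** 2))
    return rms(signal_win) / rms(noise_win)

def reliable(waveform, fs, criterion=1.359):
    """Apply the reliability criterion quoted in the abstract."""
    return sector_snr(waveform, fs) > criterion
```

    A sector would be counted as reliable under either stimulation mode whenever its response window clearly exceeds the noise floor by this ratio.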

  17. Time Curves: Folding Time to Visualize Patterns of Temporal Evolution in Data.

    PubMed

    Bach, Benjamin; Shi, Conglei; Heulot, Nicolas; Madhyastha, Tara; Grabowski, Tom; Dragicevic, Pierre

    2016-01-01

    We introduce time curves as a general approach for visualizing patterns of evolution in temporal data. Examples of such patterns include slow and regular progressions, large sudden changes, and reversals to previous states. These patterns can be of interest in a range of domains, such as collaborative document editing, dynamic network analysis, and video analysis. Time curves employ the metaphor of folding a timeline visualization into itself so as to bring similar time points close to each other. This metaphor can be applied to any dataset where a similarity metric between temporal snapshots can be defined, thus it is largely datatype-agnostic. We illustrate how time curves can visually reveal informative patterns in a range of different datasets. PMID:26529718
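
    The core construction behind time curves (embedding temporal snapshots by pairwise similarity, then connecting them in time order) can be sketched with classical multidimensional scaling. This is a minimal illustration of the idea under that assumption, not the authors' implementation:

```python
import numpy as np

def time_curve(dist):
    """Project temporal snapshots into 2-D from a pairwise distance
    matrix via classical multidimensional scaling; joining the rows
    of the result in time order draws the "time curve"."""
    n = dist.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    B = -0.5 * J @ (dist ** 2) @ J        # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)
    top = np.argsort(vals)[::-1][:2]      # two largest eigenvalues
    return vecs[:, top] * np.sqrt(np.maximum(vals[top], 0.0))
```

    Because only a distance matrix is required, the construction is datatype-agnostic, exactly as the abstract claims: any similarity metric between snapshots (document revisions, network states, video frames) yields a curve.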

  18. PATTERN REVERSAL VISUAL EVOKED POTENTIALS IN AWAKE RATS

    EPA Science Inventory

    A method for recording pattern reversal evoked potentials (PREPs) from awake restrained rats has been developed. The procedure of Onofrj et al. was modified to eliminate the need for anesthetic, thereby avoiding possible interactions of the anesthetic with other manipulations of ...

  19. [Discordant pattern, visual identification of myocardial viability with PET].

    PubMed

    Alexánderson, E; Ricalde, A; Zerón, J; Talayero, J A; Cruz, P; Adame, G; Mendoza, G; Meave, A

    2006-01-01

    PET (positron emission tomography), a non-invasive imaging method for studying cardiac perfusion and metabolism, has become the gold standard for detecting myocardial viability. The use of 18F-FDG as a tracer allows identification of exogenous glucose uptake by myocardial segments. By comparing viability results with perfusion results, the latter obtained with tracers such as 13N-ammonia, three patterns for evaluating myocardial viability arise: the transmural concordant pattern, the non-transmural concordant pattern, and the discordant pattern; the last exemplifies hibernating myocardium and indicates the presence of myocardial viability. Its detection is fundamental in the study of the ischemic patient, since it permits an exact diagnosis, a prognosis, and selection of the best treatment option. It also allows prediction of functional recovery of the affected region, as well as of the ejection fraction, after revascularization when this is deemed necessary. All these elements regarding viability are determinant in reducing adverse events and improving patients' prognosis. PMID:17315610

  20. The NLP Swish Pattern: An Innovative Visualizing Technique.

    ERIC Educational Resources Information Center

    Masters, Betsy J.; And Others

    1991-01-01

    Describes swish pattern, one of many innovative therapeutic interventions that developers of neurolinguistic programing (NLP) have contributed to counseling profession. Presents brief overview of NLP followed by an explanation of the basic theory and expected outcomes of the swish. Presents description of the intervention process and case studies…

  1. Color is processed less efficiently than orientation in change detection but more efficiently in visual search.

    PubMed

    Huang, Liqiang

    2015-05-01

    Basic visual features (e.g., color, orientation) are assumed to be processed in the same general way across different visual tasks. Here, a significant deviation from this assumption was predicted on the basis of the analysis of stimulus spatial structure, as characterized by the Boolean-map notion. If a task requires memorizing the orientations of a set of bars, then the map consisting of those bars can be readily used to hold the overall structure in memory and will thus be especially useful. If the task requires visual search for a target, then the map, which contains only an overall structure, will be of little use. Supporting these predictions, the present study demonstrated that in comparison to stimulus colors, bar orientations were processed more efficiently in change-detection tasks but less efficiently in visual search tasks (Cohen's d = 4.24). In addition to offering support for the role of the Boolean map in conscious access, the present work also casts doubt on the generality of visual feature processing. PMID:25834029

  2. Activity in V4 reflects the direction, but not the latency, of saccades during visual search.

    PubMed

    Gee, Angela L; Ipata, Anna E; Goldberg, Michael E

    2010-10-01

    We constantly make eye movements to bring objects of interest onto the fovea for more detailed processing. Activity in area V4, a prestriate visual area, is enhanced at the location corresponding to the target of an eye movement. However, the precise role of activity in V4 in relation to these saccades and the modulation of other cortical areas in the oculomotor system remains unknown. V4 could be a source of visual feature information used to select the eye movement, or alternatively, it could reflect the locus of spatial attention. To test these hypotheses, we trained monkeys on a visual search task in which they were free to move their eyes. We found that activity in area V4 reflected the direction of the upcoming saccade but did not predict the latency of the saccade, in contrast to activity in the lateral intraparietal area (LIP). We suggest that the signals in V4, unlike those in LIP, are not directly involved in the generation of the saccade itself but rather are more closely linked to visual perception and attention. Although V4 and LIP have different roles in spatial attention and preparing eye movements, they likely perform complementary processes during visual search. PMID:20610790

  3. Adding a Visualization Feature to Web Search Engines: It’s Time

    SciTech Connect

    Wong, Pak C.

    2008-11-11

    Since the first world wide web (WWW) search engine quietly entered our lives in 1994, the “information need” behind web searching has rapidly grown into a multi-billion dollar business that dominates the internet landscape, drives e-commerce traffic, propels the global economy, and affects the lives of the whole human race. Today’s search engines are faster, smarter, and more powerful than those released just a few years ago. With the vast investment pouring into research and development by leading web technology providers and the intense emotion behind corporate slogans such as “win the web” or “take back the web,” I can’t help but ask why we are still using the very same “text-only” interface that was used 13 years ago to browse our search engine results pages (SERPs). Why has the SERP interface technology lagged so far behind in the web evolution when the corresponding search technology has advanced so rapidly? In this article I explore some current SERP interface issues, suggest a simple but practical visual-based interface design approach, and argue why a visual approach can be a strong candidate for tomorrow’s SERP interface.

  4. Blaming the victims of your own mistakes: How visual search accuracy influences evaluation of stimuli.

    PubMed

    Chetverikov, Andrey; Jóhannesson, Ómar I; Kristjánsson, Árni

    2015-01-01

    Even without explicit positive or negative reinforcement, experiences may influence preferences. According to the affective-feedback-in-hypothesis-testing account, preferences are determined by the accuracy of hypotheses: correct hypotheses evoke positive affect, while incorrect ones evoke negative affect, facilitating changes of hypotheses. Applying this to visual search, we suggest that accurate search should lead to more positive ratings of targets than distractors, while for errors targets should be rated more negatively. We test this in two experiments using time-limited search for a conjunction of gender and tint of faces. Accurate search led to more positive ratings for targets as compared to distractors or targets following errors. Errors led to more negative ratings for targets than for distractors. Critically, eye tracking revealed that the longer the fixation dwell times in target regions, the higher the target ratings for correct responses, and the lower the ratings for errors. The longer observers look at targets, the more positive their ratings if they answer correctly, and the less positive following errors. The findings support the affective feedback account and provide the first demonstration of negative effects on liking ratings following errors in visual search. PMID:25319749

  5. Query-Adaptive Hash Code Ranking for Large-Scale Multi-View Visual Search.

    PubMed

    Liu, Xianglong; Huang, Lei; Deng, Cheng; Lang, Bo; Tao, Dacheng

    2016-10-01

    Hash-based nearest neighbor search has become attractive in many applications. However, the quantization in hashing usually degrades the discriminative power when using Hamming distance ranking. Besides, for large-scale visual search, existing hashing methods cannot directly support efficient search over data with multiple sources, even though the literature has shown that adaptively incorporating complementary information from diverse sources or views can significantly boost search performance. To address these problems, this paper proposes a novel and generic approach to building multiple hash tables with multiple views and generating fine-grained ranking results at bitwise and tablewise levels. For each hash table, a query-adaptive bitwise weighting is introduced to alleviate the quantization loss by simultaneously exploiting the quality of hash functions and their complement for nearest neighbor search. From the tablewise aspect, multiple hash tables are built for different data views as a joint index, over which a query-specific rank fusion is proposed to rerank all results from the bitwise ranking by diffusing in a graph. Comprehensive experiments on image search over three well-known benchmarks show that the proposed method achieves up to 17.11% and 20.28% performance gains on single and multiple table search over the state-of-the-art methods. PMID:27448359
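
    The bitwise-weighting idea (replacing plain Hamming ranking with query-adaptive per-bit weights) can be sketched as follows. The weights here are simply an input array; in the paper they are derived per query from hash-function quality:

```python
import numpy as np

def weighted_hamming_rank(query, codes, bit_weights):
    """Rank 0/1 hash codes against a query by weighted Hamming
    distance. `bit_weights` stands in for the paper's query-adaptive
    weighting; here it is simply an input array."""
    diffs = (codes != query).astype(float)   # bit disagreement matrix
    dists = diffs @ bit_weights              # weighted Hamming distances
    return np.argsort(dists)                 # indices, nearest first
```

    With uniform weights this reduces to plain Hamming ranking; non-uniform weights break the coarse ties of integer Hamming distances and can reorder results, which is the fine-grained bitwise ranking the abstract refers to.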

  6. In visual search, guidance by surface type is different than classic guidance

    PubMed Central

    Wolfe, Jeremy M; Reijnen, Ester; Van Wert, Michael J.; Kuzmova, Yoana

    2009-01-01

    Visual search for targets among distractors is more efficient if attention can be guided to targets by attributes like color. In real-world search, we guide attention using information about surfaces (e.g., paintings are on walls). We compare “classic” color guidance to surface guidance in “scenes” of cubes. When a target can lie on one of many surfaces, color guidance is effective but surface guidance is not (Exp. 1-3). Surface guidance works when cued surfaces are coplanar (Exp. 4) or few in number (Exp. 5). We speculate that surface guidance is slow and limited to very few surfaces at one time. PMID:19236891

  7. The Dynamics of Visual Experience, an EEG Study of Subjective Pattern Formation

    PubMed Central

    Elliott, Mark A.; Twomey, Deirdre; Glennon, Mark

    2012-01-01

    Background Since the origin of psychological science a number of studies have reported visual pattern formation in the absence of either physiological stimulation or direct visual-spatial references. Subjective patterns range from simple phosphenes to complex patterns but are highly specific and reported reliably across studies. Methodology/Principal Findings Using independent-component analysis (ICA) we report a reduction in amplitude variance consistent with subjective-pattern formation in ventral posterior areas of the electroencephalogram (EEG). The EEG exhibits significantly increased power at delta/theta and gamma-frequencies (point and circle patterns) or a series of high-frequency harmonics of a delta oscillation (spiral patterns). Conclusions/Significance Subjective-pattern formation may be described in a way entirely consistent with identical pattern formation in fluids or granular flows. In this manner, we propose subjective-pattern structure to be represented within a spatio-temporal lattice of harmonic oscillations which bind topographically organized visual-neuronal assemblies by virtue of low frequency modulation. PMID:22292053

  8. Effects of Individual Health Topic Familiarity on Activity Patterns During Health Information Searches

    PubMed Central

    Moriyama, Koichi; Fukui, Ken–ichi; Numao, Masayuki

    2015-01-01

    Background Non-medical professionals (consumers) are increasingly using the Internet to support their health information needs. However, the cognitive effort required to perform health information searches is affected by the consumer’s familiarity with health topics. Consumers may have different levels of familiarity with individual health topics. This variation in familiarity may cause misunderstandings because the information presented by search engines may not be understood correctly by the consumers. Objective As a first step toward the improvement of the health information search process, we aimed to examine the effects of health topic familiarity on health information search behaviors by identifying the common search activity patterns exhibited by groups of consumers with different levels of familiarity. Methods Each participant completed a health terminology familiarity questionnaire and health information search tasks. The responses to the familiarity questionnaire were used to grade the familiarity of participants with predefined health topics. The search task data were transcribed into a sequence of search activities using a coding scheme. A computational model was constructed from the sequence data using a Markov chain model to identify the common search patterns in each familiarity group. Results Forty participants were classified into L1 (not familiar), L2 (somewhat familiar), and L3 (familiar) groups based on their questionnaire responses. They had different levels of familiarity with four health topics. The video data obtained from all of the participants were transcribed into 4595 search activities (mean 28.7, SD 23.27 per session). The most frequent search activities and transitions in all the familiarity groups were related to evaluations of the relevancy of selected web pages in the retrieval results. However, the next most frequent transitions differed in each group and a chi-squared test confirmed this finding (P<.001). 
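
    The Markov-chain step described in the abstract can be sketched as follows: estimate first-order transition probabilities from coded sequences of search activities. The activity codes below are invented for illustration; the study used its own coding scheme.

    ```python
    import numpy as np

    # Hypothetical coded sessions (each a sequence of search activities).
    sessions = [
        ["query", "scan_results", "open_page", "evaluate", "evaluate", "query"],
        ["query", "open_page", "evaluate", "scan_results", "open_page", "evaluate"],
    ]

    def transition_matrix(sessions, states):
        """Estimate first-order Markov transition probabilities from sequences."""
        idx = {s: i for i, s in enumerate(states)}
        counts = np.zeros((len(states), len(states)))
        for seq in sessions:
            for a, b in zip(seq, seq[1:]):
                counts[idx[a], idx[b]] += 1
        row_sums = counts.sum(axis=1, keepdims=True)
        # Normalize each row; rows with no outgoing transitions stay zero.
        return np.divide(counts, row_sums, out=np.zeros_like(counts),
                         where=row_sums > 0)

    states = ["query", "scan_results", "open_page", "evaluate"]
    P = transition_matrix(sessions, states)
    ```

    Fitting one such matrix per familiarity group and comparing the most probable transitions is one way to surface the group differences the study reports.
    
    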

  9. Visualization and analysis of 3D gene expression patterns in zebrafish using web services

    NASA Astrophysics Data System (ADS)

    Potikanond, D.; Verbeek, F. J.

    2012-01-01

    The analysis of gene expression patterns plays an important role in developmental biology and molecular genetics. Visualizing both the quantitative and the spatio-temporal aspects of gene expression patterns, together with referenced anatomical structures of a model organism in 3D, can help identify how a group of genes is expressed at a certain location at a particular developmental stage of an organism. In this paper, we present an approach to providing an online visualization of gene expression data in zebrafish (Danio rerio) within 3D reconstruction models of zebrafish at different developmental stages. We developed web services that provide programmable access to the 3D reconstruction data and the spatio-temporal gene expression data maintained in our local repositories. To demonstrate this work, we developed a web application that uses these web services to retrieve data from our local information systems. The web application also retrieves relevant analyses of microarray gene expression data from an external community resource, the ArrayExpress Atlas. All the relevant gene expression pattern data are subsequently integrated with the reconstruction data of the zebrafish atlas using ontology-based mapping. The resulting visualization provides quantitative and spatial information on patterns of gene expression in a 3D graphical representation of the zebrafish atlas at a given developmental stage. To deliver the visualization to the user, we developed a Java-based 3D viewer client that can be integrated in a web interface, allowing the user to visualize the integrated information over the Internet.

  10. The evaluation of display symbology - A chronometric study of visual search. [on cathode ray tubes

    NASA Technical Reports Server (NTRS)

    Remington, R.; Williams, D.

    1984-01-01

    Three single-target visual search tasks were used to evaluate a set of CRT symbols for a helicopter traffic display. The search tasks were representative of the kinds of information extraction required in practice, and reaction time was used to measure the efficiency with which symbols could be located and identified. The results show that familiar numeric symbols were responded to more quickly than graphic symbols. The addition of modifier symbols such as a nearby flashing dot or surrounding square had a greater disruptive effect on the graphic symbols than the alphanumeric characters. The results suggest that a symbol set is like a list that must be learned. Factors that affect the time to respond to items in a list, such as familiarity and visual discriminability, and the division of list items into categories, also affect the time to identify symbols.

  11. Analysis and modeling of fixation point selection for visual search in cluttered backgrounds

    NASA Astrophysics Data System (ADS)

    Snorrason, Magnus; Hoffman, James; Ruda, Harald

    2000-07-01

    Hard-to-see targets are generally detected by human observers only once they have been fixated. Hence, understanding how the human visual system allocates fixation locations is necessary for predicting target detectability. Visual search experiments were conducted in which observers searched for military vehicles in cluttered terrain. Instantaneous eye position measurements were collected using an eye tracker. The resulting data were partitioned into fixations and saccades and analyzed for correlation with various image properties. The fixation data were used to validate our model for predicting fixation locations. This model generates a saliency map from bottom-up image features, such as local contrast. To account for top-down scene-understanding effects, a separate cognitive bias map is generated. The combination of these two maps provides a fixation probability map, from which sequences of fixation points were generated.
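
    A minimal sketch of the map combination described above, assuming a weighted geometric combination followed by normalization (the abstract does not specify the exact combination rule, so this is one plausible choice):

    ```python
    import numpy as np

    def fixation_probability_map(saliency, cognitive_bias, w=0.5):
        """Combine a bottom-up saliency map with a top-down bias map.

        A weighted geometric mean, normalized to sum to 1, yields a
        probability map over fixation locations (illustrative rule).
        """
        combined = saliency ** w * cognitive_bias ** (1 - w)
        return combined / combined.sum()

    rng = np.random.default_rng(0)
    saliency = rng.random((8, 8))        # stand-in for local-contrast saliency
    bias = np.ones((8, 8))               # flat bias: no scene knowledge...
    bias[:, 4:] = 3.0                    # ...except a pull toward the right half

    p = fixation_probability_map(saliency, bias)
    # Sample a sequence of fixation points from the probability map.
    flat_idx = rng.choice(p.size, size=5, p=p.ravel())
    fixations = np.unravel_index(flat_idx, p.shape)
    ```

    Sampling fixation sequences from the map is how such a model can be compared against measured eye-tracking scan paths.
    
    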

  12. Visual search for emotional expressions: Effect of stimulus set on anger and happiness superiority.

    PubMed

    Savage, Ruth A; Becker, Stefanie I; Lipp, Ottmar V

    2016-06-01

    Prior reports of preferential detection of emotional expressions in visual search have yielded inconsistent results, even for face stimuli that avoid obvious expression-related perceptual confounds. The current study investigated inconsistent reports of anger and happiness superiority effects using face stimuli drawn from the same database. Experiment 1 excluded procedural differences as a potential factor, replicating a happiness superiority effect with a procedure that had previously yielded an anger superiority effect. Experiments 2a and 2b confirmed that neither image colour nor poser gender accounted for the prior inconsistent findings. Experiments 3a and 3b identified the stimulus set as the critical variable, revealing happiness or anger superiority effects for two partially overlapping sets of face stimuli. The current results highlight the critical role of stimulus selection in the observation of happiness or anger superiority effects in visual search, even for face stimuli that avoid obvious expression-related perceptual confounds and are drawn from a single database. PMID:25861807

  13. On the selection and evaluation of visual display symbology Factors influencing search and identification times

    NASA Technical Reports Server (NTRS)

    Remington, Roger; Williams, Douglas

    1986-01-01

    Three single-target visual search tasks were used to evaluate a set of cathode-ray tube (CRT) symbols for a helicopter situation display. The search tasks were representative of the information extraction required in practice, and reaction time was used to measure the efficiency with which symbols could be located and identified. Familiar numeric symbols were responded to more quickly than graphic symbols. The addition of modifier symbols, such as a nearby flashing dot or surrounding square, had a greater disruptive effect on the graphic symbols than did the numeric characters. The results suggest that a symbol set is, in some respects, like a list that must be learned. Factors that affect the time to identify items in a memory task, such as familiarity and visual discriminability, also affect the time to identify symbols. This analogy has broad implications for the design of symbol sets. An attempt was made to model information access with this class of display.

  14. Pretraining Cortical Thickness Predicts Subsequent Perceptual Learning Rate in a Visual Search Task.

    PubMed

    Frank, Sebastian M; Reavis, Eric A; Greenlee, Mark W; Tse, Peter U

    2016-03-01

    We report that preexisting individual differences in the cortical thickness of brain areas involved in a perceptual learning task predict the subsequent perceptual learning rate. Participants trained in a motion-discrimination task involving visual search for a "V"-shaped target motion trajectory among inverted "V"-shaped distractor trajectories. Motion-sensitive area MT+ (V5) was functionally identified as critical to the task: after 3 weeks of training, activity increased in MT+ during task performance, as measured by functional magnetic resonance imaging. We computed the cortical thickness of MT+ from anatomical magnetic resonance imaging volumes collected before training started, and found that it significantly predicted subsequent perceptual learning rates in the visual search task. Participants with thicker neocortex in MT+ before training learned faster than those with thinner neocortex in that area. A similar association between cortical thickness and training success was also found in posterior parietal cortex (PPC). PMID:25576537

  15. Search Strategies of Visually Impaired Persons using a Camera Phone Wayfinding System

    PubMed Central

    Manduchi, R.; Coughlan, J.; Ivanchenko, V.

    2016-01-01

    We report new experiments conducted using a camera phone wayfinding system, which is designed to guide a visually impaired user to machine-readable signs (such as barcodes) labeled with special color markers. These experiments specifically investigate search strategies of such users detecting, localizing and touching color markers that have been mounted in various ways in different environments: in a corridor (either flush with the wall or mounted perpendicular to it) or in a large room with obstacles between the user and the markers. The results show that visually impaired users are able to reliably find color markers in all the conditions that we tested, using search strategies that vary depending on the environment in which they are placed. PMID:26949755

  16. Improvement in Visual Search with Practice: Mapping Learning-Related Changes in Neurocognitive Stages of Processing

    PubMed Central

    Clark, Kait; Appelbaum, L. Gregory; van den Berg, Berry; Mitroff, Stephen R.

    2015-01-01

    Practice can improve performance on visual search tasks; the neural mechanisms underlying such improvements, however, are not clear. Response time typically shortens with practice, but which components of the stimulus–response processing chain facilitate this behavioral change? Improved search performance could result from enhancements in various cognitive processing stages, including (1) sensory processing, (2) attentional allocation, (3) target discrimination, (4) motor-response preparation, and/or (5) response execution. We measured event-related potentials (ERPs) as human participants completed a five-day visual-search protocol in which they reported the orientation of a color popout target within an array of ellipses. We assessed changes in behavioral performance and in ERP components associated with various stages of processing. After practice, response time decreased in all participants (while accuracy remained consistent), and electrophysiological measures revealed modulation of several ERP components. First, amplitudes of the early sensory-evoked N1 component at 150 ms increased bilaterally, indicating enhanced visual sensory processing of the array. Second, the negative-polarity posterior–contralateral component (N2pc, 170–250 ms) was earlier and larger, demonstrating enhanced attentional orienting. Third, the amplitude of the sustained posterior contralateral negativity component (SPCN, 300–400 ms) decreased, indicating facilitated target discrimination. Finally, faster motor-response preparation and execution were observed after practice, as indicated by latency changes in both the stimulus-locked and response-locked lateralized readiness potentials (LRPs). These electrophysiological results delineate the functional plasticity in key mechanisms underlying visual search with high temporal resolution and illustrate how practice influences various cognitive and neural processing stages leading to enhanced behavioral performance. PMID:25834059

  17. From Foreground to Background: How Task-Neutral Context Influences Contextual Cueing of Visual Search

    PubMed Central

    Zang, Xuelian; Geyer, Thomas; Assumpção, Leonardo; Müller, Hermann J.; Shi, Zhuanghua

    2016-01-01

    Selective attention determines the effectiveness of implicit contextual learning (e.g., Jiang and Leung, 2005). Visual foreground-background segmentation, on the other hand, is a key process in the guidance of attention (Wolfe, 2003). In the present study, we examined the impact of foreground-background segmentation on contextual cueing of visual search in three experiments. A visual search display, consisting of distractor ‘L’s and a target ‘T’, was overlaid on a task-neutral cuboid on the same depth plane (Experiment 1), on stereoscopically separated depth planes (Experiment 2), or spread over the entire display on the same depth plane (Experiment 3). Half of the search displays contained repeated target-distractor arrangements, whereas the other half was always newly generated. The task-neutral cuboid was constant during an initial training session, but was either rotated by 90° or entirely removed in the subsequent test sessions. We found that the gains resulting from repeated presentation of display arrangements during training (i.e., contextual-cueing effects) were diminished when the cuboid was changed or removed in Experiment 1, but remained intact in Experiments 2 and 3 when the cuboid was placed in a different depth plane, or when the items were randomly spread over the whole display but not on the edges of the cuboid. These findings suggest that foreground-background segmentation occurs prior to contextual learning, and only objects/arrangements that are grouped as foreground are learned over the course of repeated visual search. PMID:27375530

  18. Crowding by a single bar: probing pattern recognition mechanisms in the visual periphery.

    PubMed

    Põder, Endel

    2014-01-01

    Whereas visual crowding does not greatly affect the detection of the presence of simple visual features, it heavily inhibits combining them into recognizable objects. Still, crowding effects have rarely been directly related to general pattern recognition mechanisms. In this study, pattern recognition mechanisms in the visual periphery were probed using a single crowding feature. Observers had to identify the orientation of a rotated T presented briefly at a peripheral location. Adjacent to the target, a single bar was presented. The bar was either horizontal or vertical and located in a random direction from the target. Such a crowding bar turns out to have very strong and regular effects on the identification of the target orientation. The observer's responses are determined by the approximate relative positions of basic visual features; exact image-based similarity to the target is not important. A version of the "standard model" of object recognition with second-order features explains the main regularities of the data. PMID:25378369

  19. Influence of being videotaped on the prevalence effect during visual search

    PubMed Central

    Miyazaki, Yuki

    2015-01-01

    Video monitoring modifies the task performance of those who are being monitored. The current study aims to prevent rare target-detection failures during visual search through the use of video monitoring. Targets are sometimes missed when their prevalence during visual search is extremely low (e.g., in airport baggage screenings). Participants performed a visual search in which they were required to discern the presence of a tool in the midst of other objects. The participants were monitored via video cameras as they performed the task in one session (the videotaped condition), and they performed the same task in another session without being monitored (the non-videotaped condition). The results showed that fewer miss errors occurred in the videotaped condition, regardless of target prevalence. It appears that the decrease in misses in the video monitoring condition resulted from a shift in criterion location. Video monitoring is considered useful in inducing accurate scanning. It is possible that the potential for evaluation involved in being observed motivates the participants to perform well and is related to the shift in criterion. PMID:25999895

  20. Reduced posterior parietal cortex activation after training on a visual search task.

    PubMed

    Bueichekú, Elisenda; Miró-Padilla, Anna; Palomar-García, María-Ángeles; Ventura-Campos, Noelia; Parcet, María-Antonia; Barrós-Loscertales, Alfonso; Ávila, César

    2016-07-15

    Gaining experience on a cognitive task improves behavioral performance and is thought to enhance brain efficiency. Despite the body of literature already published on the effects of training on brain activation, less research has been carried out on visual search attention processes under well-controlled conditions. Thirty-six healthy adults, divided into trained and control groups, completed a pre-post letter-based visual search task fMRI study in one day. Twelve letters were used as targets and ten as distractors. The trained group completed a training session (840 trials) with half the targets between scans. The effects of training were studied at the behavioral and brain levels by controlling for repetition effects using both between-subjects (trained vs. control groups) and within-subject (trained vs. untrained targets) controls. The trained participants' response times decreased by 31% as a result of training while their accuracy scores were maintained, whereas the control group hardly changed. Neural results revealed that brain changes associated with visual search training were limited to reduced activation in the posterior parietal cortex (PPC) when controlling for group, and also included inferior occipital areas when controlling for targets. The observed behavioral and brain changes are discussed in relation to the development of automatic behavior. The observed training-related decreases could reflect increased neural efficiency in key regions for task performance. PMID:27132048

  1. Modeling the effect of selection history on pop-out visual search.

    PubMed

    Tseng, Yuan-Chi; Glaser, Joshua I; Caddigan, Eamon; Lleras, Alejandro

    2014-01-01

    While attentional effects in visual selection tasks have traditionally been assigned "top-down" or "bottom-up" origins, more recently it has been proposed that there are three major factors affecting visual selection: (1) physical salience, (2) current goals and (3) selection history. Here, we look further into selection history by investigating Priming of Pop-out (POP) and the Distractor Preview Effect (DPE), two inter-trial effects that demonstrate the influence of recent history on visual search performance. Using the Ratcliff diffusion model, we model observed saccadic selections from an oddball search experiment that included a mix of both POP and DPE conditions. We find that the Ratcliff diffusion model can effectively model the manner in which selection history affects current attentional control in visual inter-trial effects. The model evidence shows that bias regarding the current trial's most likely target color is the most critical parameter underlying the effect of selection history. Our results are consistent with the view that the 3-item color-oddball task used for POP and DPE experiments is best understood as an attentional decision making task. PMID:24595032
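
    The role of a starting-point bias in a diffusion model, the parameter the modeling identified as carrying the selection-history effect, can be sketched with a toy simulation. The parameter values below are illustrative, not the fitted values from the paper.

    ```python
    import random

    def diffusion_trial(drift, bias=0.0, bound=1.0, dt=0.001, noise=1.0, rng=None):
        """Simulate one diffusion trial; returns (choice, reaction_time).

        Evidence x starts at `bias` and accumulates noisily until it hits
        +bound (choice 1) or -bound (choice -1).
        """
        rng = rng or random.Random()
        x, t = bias, 0.0
        sd = noise * dt ** 0.5
        while abs(x) < bound:
            x += drift * dt + rng.gauss(0.0, sd)
            t += dt
        return (1 if x >= bound else -1), t

    rng = random.Random(42)
    # A starting point shifted toward the bound for the recently primed
    # target color raises the proportion of choices at that bound, even
    # with zero drift (no stimulus evidence either way).
    unbiased = sum(diffusion_trial(0.0, bias=0.0, rng=rng)[0] == 1 for _ in range(200))
    biased = sum(diffusion_trial(0.0, bias=0.4, rng=rng)[0] == 1 for _ in range(200))
    ```

    Comparing fits with the bias free versus fixed is the standard way to test whether selection history acts on the starting point rather than on drift rate or threshold.
    
    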

  2. Allocation of cognitive resources in comparative visual search--individual and task dependent effects.

    PubMed

    Hardiess, Gregor; Mallot, Hanspeter A

    2015-08-01

    Behaviors recruit multiple, mutually substitutable types of cognitive resources (e.g., data acquisition and memorization in comparative visual search), and the allocation of resources is performed in a cost-optimizing way. If the costs associated with each type of resource are manipulated, e.g., by varying the complexity of the items studied or the visual separation of the arrays to be compared, corresponding adjustments of resource allocation ("trade-offs") have been demonstrated. Using between-subject designs, previous studies showed overall trade-off behavior but neglected the inter-individual variability of trade-off behavior. Here, we present a simplified paradigm for comparative visual search in which gaze measurements are replaced by the switching of a visual mask covering one stimulus array at a time. This paradigm allows for a full within-subject design. While overall trade-off curves could be reproduced, we found that each subject used a specific trade-off strategy, and these strategies differed substantially between subjects. Still, task-dependent adjustment of resource allocation can be demonstrated, but it accounts for only a minor part of the overall trade-off range. In addition, we show that the individual trade-offs were adjusted in an unconscious and rather intuitive way, enabling a robust manifestation of the selected strategy space. PMID:26093155

  3. Both memory and attention systems contribute to visual search for targets cued by implicitly learned context.

    PubMed

    Giesbrecht, Barry; Sy, Jocelyn L; Guerin, Scott A

    2013-06-01

    Environmental context learned without awareness can facilitate visual processing of goal-relevant information. According to one view, the benefit of implicitly learned context relies on the neural systems involved in spatial attention and hippocampus-mediated memory. While this view has received empirical support, it contradicts traditional models of hippocampal function. The purpose of the present work was to clarify the influence of spatial context on visual search performance and on brain structures involved in memory and attention. Event-related functional magnetic resonance imaging revealed that activity in the hippocampus as well as in visual and parietal cortex was modulated by learned visual context even though participants' subjective reports and performance on a post-experiment recognition task indicated no explicit knowledge of the learned context. Moreover, the magnitude of the initial selective hippocampus response predicted the magnitude of the behavioral benefit due to context observed at the end of the experiment. The results suggest that implicit contextual learning is mediated by attention and memory and that these systems interact to support search of our environment. PMID:23099047

  4. Use of a twin dataset to identify AMD-related visual patterns controlled by genetic factors

    NASA Astrophysics Data System (ADS)

    Quellec, Gwénolé; Abràmoff, Michael D.; Russell, Stephen R.

    2010-03-01

    The mapping of genotype to the phenotype of age-related macular degeneration (AMD) is expected to improve the diagnosis and treatment of the disease in the near future. In this study, we focused on the first step toward discovering this mapping: we identified visual patterns related to AMD that appear to be controlled by genetic factors, without explicitly relating them to the genes. For this purpose, we used a dataset of eye fundus photographs from 74 twin pairs, either monozygotic twins, who have the same genotype, or dizygotic twins, whose genes responsible for AMD are less likely to be identical. If we are able to differentiate monozygotic from dizygotic twins based on a given visual pattern, then this pattern is likely to be controlled by genetic factors. The main visible consequence of AMD is the appearance of drusen between the retinal pigment epithelium and Bruch's membrane. We developed two automated drusen detectors based on the wavelet transform: a shape-based detector for hard drusen, and a texture- and color-based detector for soft drusen. Forty visual features were evaluated at the locations of the automatically detected drusen. These features characterize the texture, shape, color, spatial distribution, or amount of drusen. A distance measure between twin pairs was defined for each visual feature; a smaller distance should be measured between monozygotic twins for visual features controlled by genetic factors. The predictions of several visual features (75.7% accuracy) are comparable to or better than the predictions of human experts.
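
    The twin-pair distance idea can be sketched with a toy computation. The subject identifiers, feature values, and the absolute-difference distance below are all invented for illustration; the study defines its own distance per feature.

    ```python
    import numpy as np

    def pair_distances(feature_by_subject, pairs):
        """Within-pair distance for one visual feature (absolute difference).

        feature_by_subject: dict mapping subject id -> feature value
        pairs: list of (subject_a, subject_b) twin pairs
        Smaller distances within monozygotic pairs than within dizygotic
        pairs suggest genetic control of the feature.
        """
        return np.array([abs(feature_by_subject[a] - feature_by_subject[b])
                         for a, b in pairs])

    # Invented toy data: a drusen-area feature for two MZ and two DZ pairs.
    feature = {"mz1a": 0.30, "mz1b": 0.32, "mz2a": 0.55, "mz2b": 0.52,
               "dz1a": 0.20, "dz1b": 0.45, "dz2a": 0.60, "dz2b": 0.35}
    mz = pair_distances(feature, [("mz1a", "mz1b"), ("mz2a", "mz2b")])
    dz = pair_distances(feature, [("dz1a", "dz1b"), ("dz2a", "dz2b")])
    ```

    With real data, thresholding such distances to classify pairs as MZ or DZ gives the kind of accuracy figure the abstract reports.
    
    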

  5. Model of visual contrast gain control and pattern masking

    NASA Technical Reports Server (NTRS)

    Watson, A. B.; Solomon, J. A.

    1997-01-01

    We have implemented a model of contrast gain control in human vision that incorporates a number of key features, including a contrast sensitivity function, multiple oriented bandpass channels, accelerating nonlinearities, and a divisive inhibitory gain control pool. The parameters of this model have been optimized through a fit to recent data that describe masking of a Gabor function by cosine and Gabor masks [J. M. Foley, "Human luminance pattern mechanisms: masking experiments require a new model," J. Opt. Soc. Am. A 11, 1710 (1994)]. The model achieves a good fit to the data. We also demonstrate how the concept of recruitment may accommodate a variant of this model in which excitatory and inhibitory paths share a common accelerating nonlinearity but which includes multiple channels tuned to different levels of contrast.
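
    The divisive gain-control stage can be sketched in the generic Foley-style form R = E^p / (c^q + sum of pooled responses^q). The exponents and constants below are placeholders, not the fitted parameters of the paper.

    ```python
    import numpy as np

    def channel_response(excitation, pool, p=2.4, q=2.0, c=0.01):
        """Divisive gain-control response of one channel (illustrative form).

        excitation: excitatory drive of this channel
        pool: iterable of responses feeding the inhibitory gain pool
        The accelerating exponents p, q and semi-saturation constant c are
        free parameters in models of this family.
        """
        return excitation ** p / (c ** q + np.sum(np.asarray(pool) ** q))

    # Masking demo: the same target excitation yields a smaller response
    # when a high-contrast mask also feeds the inhibitory pool.
    target = 0.1
    r_unmasked = channel_response(target, pool=[target])
    r_masked = channel_response(target, pool=[target, 0.5])
    ```

    Because p > q in the excitatory path of such models, the response can also facilitate at low mask contrasts before suppression dominates, which is the "dipper" shape masking data typically show.
    
    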

  6. Visual search and emotion: how children with autism spectrum disorders scan emotional scenes.

    PubMed

    Maccari, Lisa; Pasini, Augusto; Caroli, Emanuela; Rosa, Caterina; Marotta, Andrea; Martella, Diana; Fuentes, Luis J; Casagrande, Maria

    2014-11-01

    This study assessed visual search abilities, tested through the flicker task, in children diagnosed with autism spectrum disorders (ASDs). Twenty-two children diagnosed with ASD and 22 matched typically developing (TD) children were told to detect changes in objects of central interest or objects of marginal interest (MI) embedded in either emotion-laden (positive or negative) or neutral real-world pictures. The results showed that emotion-laden pictures equally interfered with performance of both ASD and TD children, slowing down reaction times compared with neutral pictures. Children with ASD were faster than TD children, particularly in detecting changes in MI objects, the most difficult condition. However, their performance was less accurate than performance of TD children just when the pictures were negative. These findings suggest that children with ASD have better visual search abilities than TD children only when the search is particularly difficult and requires strong serial search strategies. The emotional-social impairment that is usually considered as a typical feature of ASD seems to be limited to processing of negative emotional information. PMID:24898908

  7. Visual motion modulates pattern sensitivity ahead, behind, and beside motion.

    PubMed

    Arnold, Derek H; Marinovic, Welber; Whitney, David

    2014-05-01

    Retinal motion can modulate visual sensitivity. For instance, low contrast drifting waveforms (targets) can be easier to detect when abutting the leading edges of movement in adjacent high contrast waveforms (inducers), rather than the trailing edges. This target-inducer interaction is contingent on the adjacent waveforms being consistent with one another - in-phase as opposed to out-of-phase. It has been suggested that this happens because there is a perceptually explicit predictive signal at leading edges of motion that summates with low contrast physical input - a 'predictive summation'. Another possible explanation is a phase sensitive 'spatial summation', a summation of physical inputs spread across the retina (not predictive signals). This should be non-selective in terms of position - it should be evident at leading, adjacent, and at trailing edges of motion. To tease these possibilities apart, we examined target sensitivity at leading, adjacent, and trailing edges of motion. We also examined target sensitivity adjacent to flicker, and for a stimulus that is less susceptible to spatial summation, as it sums to grey across a small retinal expanse. We found evidence for spatial summation in all but the last condition. Finally, we examined sensitivity to an absence of signal at leading and trailing edges of motion, finding greater sensitivity at leading edges. These results are inconsistent with the existence of a perceptually explicit predictive signal in advance of drifting waveforms. Instead, we suggest that phase-contingent target-inducer modulations of sensitivity are explicable in terms of a directionally modulated spatial summation. PMID:24699250

  8. THE DEPENDENCE OF VISUAL SCANNING PERFORMANCE ON SEARCH DIRECTION AND DIFFICULTY

    PubMed Central

    Phillips, Matthew H.; Edelman, Jay A.

    2009-01-01

    Phillips & Edelman (2008) presented evidence that performance variability in a visual scanning task depended on oculomotor variables related to saccade amplitude rather than fixation duration, and that saccade-related metrics reflected perceptual span. Here, we extend these results by showing that even for extremely difficult searches trial-to-trial performance variability still depends on saccade-related metrics and not fixation duration. We also show that scanning speed is faster for horizontal than for vertical searches, and that these differences derive again from differences in saccade-based metrics and not from differences in fixation duration. We find perceptual span to be larger for horizontal than vertical searches, and approximately symmetric about the line of gaze. PMID:18640144

  9. The dynamics of attentional sampling during visual search revealed by Fourier analysis of periodic noise interference.

    PubMed

    Dugué, Laura; VanRullen, Rufin

    2014-01-01

    What are the temporal dynamics of perceptual sampling during visual search tasks, and how do they differ between a difficult (or inefficient) and an easy (or efficient) task? Does attention focus intermittently on the stimuli, or are the stimuli processed continuously over time? We addressed these questions by way of a new paradigm using periodic fluctuations of stimulus information during a difficult (color-orientation conjunction) and an easy (+ among Ls) search task. On each stimulus, we applied a dynamic visual noise that oscillated at a given frequency (2-20 Hz, 2-Hz steps) and phase (four cardinal phase angles) for 500 ms. We estimated the dynamics of attentional sampling by computing an inverse Fourier transform on subjects' d-primes. In both tasks, the sampling function presented a significant peak at 2 Hz; we showed that this peak could be explained by nonperiodic search strategies such as increased sensitivity to stimulus onset and offset. Specifically in the difficult task, however, a second, higher-frequency peak was observed at 9 to 10 Hz, with a similar phase for all subjects; this isolated frequency component necessarily entails oscillatory attentional dynamics. In a second experiment, we presented difficult search arrays with dynamic noise that was modulated by the previously obtained grand-average attention sampling function or by its converse function (in both cases omitting the 2 Hz component to focus on genuine oscillatory dynamics). We verified that performance was higher in the latter than in the former case, even for subjects who had not participated in the first experiment. This study supports the idea of a periodic sampling of attention during a difficult search task. Although further experiments will be needed to extend these findings to other search tasks, the present report validates the usefulness of this novel paradigm for measuring the temporal dynamics of attention. PMID:24525262
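The analysis step in this abstract, an inverse Fourier transform applied to behavioral d' values, can be sketched in a few lines: for each noise frequency, the d' values from the four phase conditions are combined into one complex Fourier coefficient, and summing those coefficients back over time yields a candidate attentional sampling function. The d' values below are invented for illustration; in the study they come from subjects' detection performance.

```python
import cmath
import random

random.seed(0)
freqs = list(range(2, 21, 2))          # noise frequencies, Hz (2-20 in 2-Hz steps)
phases = [0.0, 90.0, 180.0, 270.0]     # four cardinal phase angles, degrees

# Hypothetical sensitivity values d'(frequency, phase); real values would
# be measured behaviorally for each noise condition.
dprime = {(f, p): 1.5 + random.gauss(0, 0.2) for f in freqs for p in phases}

# Collapse the four phase conditions at each frequency into one complex
# Fourier coefficient describing how d' is modulated by noise phase.
coeffs = {f: sum(dprime[(f, p)] * cmath.exp(1j * cmath.pi * p / 180.0)
                 for p in phases)
          for f in freqs}

# Inverse transform: reconstruct a time-domain sampling function over the
# 500-ms stimulus window from the per-frequency coefficients.
def sampling(t):
    return sum((coeffs[f] * cmath.exp(2j * cmath.pi * f * t)).real
               for f in freqs)

profile = [sampling(k * 0.002) for k in range(251)]   # 0-500 ms in 2-ms steps
```

A peak in `profile` at a given periodicity (e.g., the 9-10 Hz component the authors report) would indicate oscillatory sampling at that rate.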

  10. Searching for Truth: Internet Search Patterns as a Method of Investigating Online Responses to a Russian Illicit Drug Policy Debate

    PubMed Central

    Gillespie, James A; Quinn, Casey

    2012-01-01

    Background This is a methodological study investigating the online responses to a national debate over an important health and social problem in Russia. Russia is the largest Internet market in Europe, exceeding Germany in the absolute number of users. However, Russia is unusual in that the main search provider is not Google, but Yandex. Objective This study had two main objectives: first, to validate Yandex search patterns against those provided by Google, and second, to test this method's adequacy for investigating online interest in a 2010 national debate over Russian illicit drug policy. We hoped to learn what search patterns and specific search terms could reveal about the relative importance and geographic distribution of interest in this debate. Methods A national drug debate, centering on the anti-drug campaigner Egor Bychkov, was one of the main Russian domestic news events of 2010. Public interest in this episode was accompanied by increased Internet search. First, we measured the search patterns for 13 search terms related to the Bychkov episode and concurrent domestic events by extracting data from Google Insights for Search (GIFS) and Yandex WordStat (YaW). We conducted Spearman rank correlations of the GIFS and YaW search data series. Second, we coded all 420 primary posts from Bychkov's personal blog between March 2010 and March 2012 to identify the main themes. Third, we compared GIFS and Yandex policies concerning the public release of search volume data. Finally, we established the relationship between salient drug issues and the Bychkov episode. Results We found a consistent pattern of strong to moderate positive correlations between Google and Yandex for the terms "Egor Bychkov" (r_s = 0.88, P < .001), "Bychkov" (r_s = 0.78, P < .001), and "Khimki" (r_s = 0.92, P < .001). Peak search volumes for the Bychkov episode were comparable to other prominent domestic political events during 2010. Monthly search counts were 146,689 for "Bychkov" and
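The validation statistic used here, Spearman rank correlation between two search-volume time series, is straightforward to compute by hand: rank each series (averaging ties) and take the Pearson correlation of the ranks. The weekly counts below are invented for illustration, not data from the study.

```python
def rank(xs):
    # Assign average ranks (1-based), handling ties.
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2.0 + 1.0
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    # Pearson correlation of the rank-transformed series.
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx)
           * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# Hypothetical weekly search volumes for one term on each engine.
google = [120, 340, 980, 400, 150, 90, 60]
yandex = [200, 500, 1500, 620, 210, 130, 100]
print(round(spearman(google, yandex), 2))   # identical rank orders -> 1.0
```

Because only ranks matter, the two engines' very different absolute volumes do not affect the correlation, which is what makes the statistic suitable for cross-engine validation.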

  11. Job Search Patterns of College Graduates: The Role of Social Capital

    ERIC Educational Resources Information Center

    Coonfield, Emily S.

    2012-01-01

    This dissertation addresses job search patterns of college graduates and the implications of social capital by race and class. The purpose of this study is to explore (1) how the job search transpires for recent college graduates, (2) how potential social networks in a higher educational context, like KU, may make a difference for students with…

  12. A Visualization System for Space-Time and Multivariate Patterns (VIS-STAMP)

    PubMed Central

    Guo, Diansheng; Chen, Jin; MacEachren, Alan M.; Liao, Ke

    2011-01-01

    The research reported here integrates computational, visual, and cartographic methods to develop a geovisual analytic approach for exploring and understanding spatio-temporal and multivariate patterns. The developed methodology and tools can help analysts investigate complex patterns across multivariate, spatial, and temporal dimensions via clustering, sorting, and visualization. Specifically, the approach involves a self-organizing map, a parallel coordinate plot, several forms of reorderable matrices (including several ordering methods), a geographic small multiple display, and a 2-dimensional cartographic color design method. The coupling among these methods leverages their independent strengths and facilitates a visual exploration of patterns that are difficult to discover otherwise. The visualization system we developed supports overview of complex patterns and, through a variety of interactions, enables users to focus on specific patterns and examine detailed views. We demonstrate the system with an application to the IEEE InfoVis 2005 Contest data set, which contains time-varying, geographically referenced, and multivariate data for technology companies in the US. PMID:17073369

  13. Patterned-String Tasks: Relation between Fine Motor Skills and Visual-Spatial Abilities in Parrots

    PubMed Central

    Krasheninnikova, Anastasia

    2013-01-01

    String-pulling and patterned-string tasks are often used to analyse perceptual and cognitive abilities in animals. In addition, the paradigm can be used to test the interrelation between visual-spatial and motor performance. Two Australian parrot species, the galah (Eolophus roseicapilla) and the cockatiel (Nymphicus hollandicus), forage on the ground, but only the galah uses its feet to manipulate food. I used a set of string pulling and patterned-string tasks to test whether usage of the feet during foraging is a prerequisite for solving the vertical string pulling problem. Indeed, the two species used techniques that clearly differed in the extent of beak-foot coordination but did not differ in terms of their success in solving the string pulling task. However, when the visual-spatial skills of the subjects were tested, the galahs outperformed the cockatiels. This supports the hypothesis that the fine motor skills needed for advanced beak-foot coordination may be interrelated with certain visual-spatial abilities needed for solving patterned-string tasks. This pattern was also found within each of the two species on the individual level: higher motor abilities positively correlated with performance in patterned-string tasks. This is the first evidence of an interrelation between visual-spatial and motor abilities in non-mammalian animals. PMID:24376885

  14. Visualizing Nanoscopic Topography and Patterns in Freely Standing Thin Films

    NASA Astrophysics Data System (ADS)

    Sharma, Vivek; Zhang, Yiran; Yilixiati, Subinuer

    Thin liquid films containing micelles, nanoparticles, polyelectrolyte-surfactant complexes and smectic liquid crystals undergo thinning in a discontinuous, step-wise fashion. The discontinuous jumps in thickness are often characterized by quantifying changes in the intensity of reflected monochromatic light, modulated by thin film interference, from a region of interest. Stratifying thin films exhibit a mosaic pattern in reflected white light microscopy, attributed to the coexistence of domains with various thicknesses, separated by steps. Using Interferometry Digital Imaging Optical Microscopy (IDIOM) protocols developed in the course of this study, we spatially resolve, for the first time, the landscape of stratifying freely standing thin films. We distinguish nanoscopic rims, mesas and craters, and follow their emergence and growth. In particular, for thin films containing micelles of sodium dodecyl sulfate (SDS), these topological features involve discontinuous thickness transitions with concentration-dependent steps of 5-25 nm. These non-flat features result from oscillatory, periodic, supramolecular structural forces that arise in confined fluids due to complex coupling of hydrodynamic and thermodynamic effects at the nanoscale.

  15. Student Written Errors and Teacher Marking: A Search for Patterns.

    ERIC Educational Resources Information Center

    Belanger, J. F.

    A study examined whether patterns exist in the kinds and amounts of writing errors students make and whether teachers follow any sort of pattern in correcting these errors. Sixty compositions, gathered from a twelfth grade class taught by one teacher, were analyzed using the "McGraw-Hill Handbook of English." Student written errors were classified…

  16. Visual illusions in predator-prey interactions: birds find moving patterned prey harder to catch.

    PubMed

    Hämäläinen, Liisa; Valkonen, Janne; Mappes, Johanna; Rojas, Bibiana

    2015-09-01

    Several antipredator strategies are related to prey colouration. Some colour patterns can create visual illusions during movement (such as motion dazzle), making it difficult for a predator to capture moving prey successfully. Experimental evidence about motion dazzle, however, is still very scarce and comes only from studies using human predators capturing moving prey items in computer games. We tested a motion dazzle effect using for the first time natural predators (wild great tits, Parus major). We used artificial prey items bearing three different colour patterns: uniform brown (control), black with elongated yellow pattern and black with interrupted yellow pattern. The last two resembled colour patterns of the aposematic, polymorphic dart-poison frog Dendrobates tinctorius. We specifically tested whether an elongated colour pattern could create visual illusions when combined with straight movement. Our results, however, do not support this hypothesis. We found no differences in the number of successful attacks towards prey items with different patterns (elongated/interrupted) moving linearly. Nevertheless, both prey types were significantly more difficult to catch compared to the uniform brown prey, indicating that both colour patterns could provide some benefit for a moving individual. Surprisingly, no effect of background (complex vs. plain) was found. This is the first experiment with moving prey showing that some colour patterns can affect avian predators' ability to capture moving prey, but the mechanisms lowering the capture rate are still poorly understood. PMID:25947086

  17. Optimization of boiling water reactor control rod patterns using linear search

    SciTech Connect

    Kiguchi, T.; Doi, K.; Fikuzaki, T.; Frogner, B.; Lin, C.; Long, A.B.

    1984-10-01

    A computer program for searching for the optimal control rod pattern has been developed. The program is able to find a control rod pattern whose resulting power distribution is optimal in the sense that it is the closest to the desired power distribution while satisfying all operational constraints. The search procedure consists of iterative use of two steps: sensitivity analyses of local power and thermal margins, using a three-dimensional reactor simulator, to build a simplified prediction model; and a linear search for the optimal control rod pattern with the simplified model. The optimal control rod pattern is found along the direction where the performance index gradient is steepest. This program has been verified to find the optimal control rod pattern through simulations using operational data from the Oyster Creek Reactor.
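The two-step procedure described, a linearized sensitivity prediction combined with a line search along the steepest-descent direction of the mismatch with the desired power distribution, can be illustrated with a toy model. The node and rod counts, sensitivity matrix, and power values below are all invented for illustration; the actual program works against a three-dimensional reactor simulator under operational constraints.

```python
# Linearized model: predicted power P = P0 + S @ dx for rod-pattern change dx.
def predict(p0, sens, dx):
    return [p + sum(s * d for s, d in zip(row, dx))
            for p, row in zip(p0, sens)]

# Performance index: squared deviation from the desired power distribution.
def mismatch(p, target):
    return sum((a - b) ** 2 for a, b in zip(p, target))

def line_search_step(p0, sens, target, dx):
    n_nodes, n_rods = len(p0), len(dx)
    # Gradient of the mismatch w.r.t. dx under the linear model.
    resid = [a - b for a, b in zip(predict(p0, sens, dx), target)]
    grad = [2 * sum(resid[i] * sens[i][j] for i in range(n_nodes))
            for j in range(n_rods)]
    # Scalar line search along the steepest-descent direction (-grad).
    best_alpha = 0.0
    best_cost = mismatch(predict(p0, sens, dx), target)
    alpha = 1.0
    for _ in range(20):
        trial = [d - alpha * g for d, g in zip(dx, grad)]
        cost = mismatch(predict(p0, sens, trial), target)
        if cost < best_cost:
            best_alpha, best_cost = alpha, cost
        alpha *= 0.5
    return [d - best_alpha * g for d, g in zip(dx, grad)]

# Toy example: 3 core nodes, 2 control rods.
p0 = [1.2, 1.0, 0.8]                       # current relative power
sens = [[-0.3, 0.0], [-0.1, -0.1], [0.0, -0.3]]   # d(power)/d(rod insertion)
target = [1.0, 1.0, 1.0]                   # desired flat distribution
dx = [0.0, 0.0]
for _ in range(50):
    dx = line_search_step(p0, sens, target, dx)
```

After the iterations, `predict(p0, sens, dx)` lies very close to `target`, mirroring how the full program drives the power distribution toward the desired one.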

  18. Effect of experimental scotoma size and shape on the binocular and monocular pattern visual evoked potential.

    PubMed

    Geer, I; Spafford, M M

    1994-01-01

    A small experimental central scotoma significantly attenuates the human pattern visual evoked potential (PVEP). The steady-state pattern visual evoked potential was recorded from seven visually normal adults who viewed a reversing checkerboard with 24' checks and a central scotoma that varied in size and shape. We found that square scotomas had to be at least 3 x 3 degrees to significantly (p < 0.05) attenuate the pattern visual evoked potential. Receptor density has been shown to be greater along the horizontal meridian than the vertical meridian. We hypothesized that this results in greater cortical representation of the horizontal meridian than the vertical meridian and, therefore, that the pattern visual evoked potential might be significantly attenuated by a smaller rectangular scotoma oriented along the horizontal meridian than along the vertical meridian. One dimension of the rectangular scotoma was fixed at either 1 degree or 3 degrees, while the other dimension was varied from 1 degree to 8 degrees. The threshold scotoma size that significantly (p < 0.05) attenuated the pattern visual evoked potential was a horizontal scotoma subtending 1 x 4 degrees and a vertical scotoma subtending 5 x 1 degree (vertical x horizontal). Meridional differences in cortical representation were not apparent for the larger scotoma series in which the fixed dimension subtended 3 degrees (3 x 2 degrees and 2 x 3 degrees). Further analysis of the data revealed that the apparent meridional difference for the 1 degree scotoma series was a function of data variability. The determinant of the PVEP amplitude was scotoma area, not orientation. Monocular and binocular threshold scotoma sizes were the same, which could be due to the level of binocular summation demonstrated by our subjects. PMID:7813381

  19. Using multidimensional scaling to quantify similarity in visual search and beyond.

    PubMed

    Hout, Michael C; Godwin, Hayward J; Fitzsimmons, Gemma; Robbins, Arryn; Menneer, Tamaryn; Goldinger, Stephen D

    2016-01-01

    Visual search is one of the most widely studied topics in vision science, both as an independent topic of interest, and as a tool for studying attention and visual cognition. A wide literature exists that seeks to understand how people find things under varying conditions of difficulty and complexity, and in situations ranging from the mundane (e.g., looking for one's keys) to those with significant societal importance (e.g., baggage or medical screening). A primary determinant of the ease and probability of success during search is the set of similarity relationships that exist in the search environment, such as the similarity between the background and the target, or the likeness of the non-targets to one another. A sense of similarity is often intuitive, but it is seldom quantified directly. This presents a problem in that similarity relationships are imprecisely specified, limiting the capacity of the researcher to adequately examine their influence. In this article, we present a novel approach to overcoming this problem that combines multi-dimensional scaling (MDS) analyses with behavioral and eye-tracking measurements. We propose a method whereby MDS can be repurposed to successfully quantify the similarity of experimental stimuli, thereby opening up theoretical questions in visual search and attention that cannot currently be addressed. These quantifications, in conjunction with behavioral and oculomotor measures, allow for critical observations about how similarity affects performance, information selection, and information processing. We provide a demonstration and tutorial of the approach, identify documented examples of its use, discuss how complementary computer vision methods could also be adopted, and close with a discussion of potential avenues for future application of this technique. PMID:26494381
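As a concrete illustration of the MDS step, classical (Torgerson) scaling turns a matrix of pairwise dissimilarity ratings into coordinates whose inter-point distances approximate those ratings. The sketch below recovers a one-dimensional configuration via double centering and power iteration; the three-stimulus dissimilarity matrix is invented, and a real analysis would use a library implementation (e.g., scikit-learn's MDS) with more dimensions.

```python
def classical_mds_1d(d, iters=200):
    """One-dimensional classical MDS from a symmetric dissimilarity matrix."""
    n = len(d)
    # B = -1/2 * J D^2 J : double-centered squared dissimilarities.
    d2 = [[d[i][j] ** 2 for j in range(n)] for i in range(n)]
    row = [sum(r) / n for r in d2]
    tot = sum(row) / n
    b = [[-0.5 * (d2[i][j] - row[i] - row[j] + tot) for j in range(n)]
         for i in range(n)]
    # Power iteration for the leading eigenvector/eigenvalue of B.
    # Start vector chosen so it is not orthogonal to the solution.
    v = [float(i + 1) for i in range(n)]
    for _ in range(iters):
        w = [sum(b[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    lam = sum(v[i] * sum(b[i][j] * v[j] for j in range(n)) for i in range(n))
    return [lam ** 0.5 * x for x in v]   # 1-D coordinates

# Toy dissimilarities for three stimuli lying on a line (A-B = 1, B-C = 2).
d = [[0, 1, 3],
     [1, 0, 2],
     [3, 2, 0]]
coords = classical_mds_1d(d)
```

For this toy matrix the distances between the recovered coordinates reproduce the input dissimilarities exactly, since the stimuli are consistent with a one-dimensional Euclidean layout.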

  20. Gender Differences in Patterns of Searching the Web

    ERIC Educational Resources Information Center

    Roy, Marguerite; Chi, Michelene T. H.

    2003-01-01

    There has been a national call for increased use of computers and technology in schools. Currently, however, little is known about how students use and learn from these technologies. This study explores how eighth-grade students use the Web to search for, browse, and find information in response to a specific prompt (how mosquitoes find their…

  1. Visual Learning Induces Changes in Resting-State fMRI Multivariate Pattern of Information.

    PubMed

    Guidotti, Roberto; Del Gratta, Cosimo; Baldassarre, Antonello; Romani, Gian Luca; Corbetta, Maurizio

    2015-07-01

    When measured with functional magnetic resonance imaging (fMRI) in the resting state (R-fMRI), spontaneous activity is correlated between brain regions that are anatomically and functionally related. Learning and/or task performance can induce modulation of the resting synchronization between brain regions. Moreover, at the neuronal level, spontaneous brain activity can replay patterns evoked by a previously presented stimulus. Here we test whether visual learning/task performance can induce a change in the patterns of coded information in R-fMRI signals, consistent with a role of spontaneous activity in representing task-relevant information. Human subjects underwent R-fMRI before and after perceptual learning on a novel visual shape orientation discrimination task. Task-evoked fMRI patterns to trained versus novel stimuli were recorded after learning was completed, and before the second R-fMRI session. Using multivariate pattern analysis on task-evoked signals, we found patterns that discriminated between trained and novel visual stimuli in several cortical regions: visual cortex (V3/V3A/V7); within the default mode network, precuneus and inferior parietal lobule; and, within the dorsal attention network, intraparietal sulcus. The accuracy of classification was strongly correlated with behavioral performance. Next, we measured multivariate patterns in R-fMRI signals before and after learning. The frequency and similarity of resting states representing the task/visual stimuli states increased post-learning in the same cortical regions recruited by the task. These findings support a representational role of spontaneous brain activity. PMID:26156982

  2. Color names, color categories, and color-cued visual search: Sometimes, color perception is not categorical

    PubMed Central

    Brown, Angela M; Lindsey, Delwin T; Guckes, Kevin M

    2011-01-01

    The relation between colors and their names is a classic case-study for investigating the Sapir-Whorf hypothesis that categorical perception is imposed on perception by language. Here, we investigate the Sapir-Whorf prediction that visual search for a green target presented among blue distractors (or vice versa) should be faster than search for a green target presented among distractors of a different color of green (or for a blue target among different blue distractors). Gilbert, Regier, Kay & Ivry (2006) reported that this Sapir-Whorf effect is restricted to the right visual field (RVF), because the major brain language centers are in the left cerebral hemisphere. We found no categorical effect at the Green|Blue color boundary, and no categorical effect restricted to the RVF. Scaling of perceived color differences by Maximum Likelihood Difference Scaling (MLDS) also showed no categorical effect, including no effect specific to the RVF. Two models fit the data: a color difference model based on MLDS and a standard opponent-colors model of color discrimination based on the spectral sensitivities of the cones. Neither of these models, nor any of our data, suggested categorical perception of colors at the Green|Blue boundary, in either visual field. PMID:21980188

  3. Learning and Retention of Concepts Formed from Unfamiliar Visual Patterns. Final Report.

    ERIC Educational Resources Information Center

    Lantz, Alma E.

    Two experiments were conducted to investigate the learning and retention of concepts formed from novel visual stimulus materials (wave-form patterns). The purpose of the first experiment was to scale sets of wave forms as a function of difficulty, i.e., subjects were shown a prototype wave form and were asked to give same-different judgments for…

  4. Patterns of Visual Attention to Faces and Objects in Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    McPartland, James C.; Webb, Sara Jane; Keehn, Brandon; Dawson, Geraldine

    2011-01-01

    This study used eye-tracking to examine visual attention to faces and objects in adolescents with autism spectrum disorder (ASD) and typical peers. Point of gaze was recorded during passive viewing of images of human faces, inverted human faces, monkey faces, three-dimensional curvilinear objects, and two-dimensional geometric patterns.…

  5. Nurses' Behaviors and Visual Scanning Patterns May Reduce Patient Identification Errors

    ERIC Educational Resources Information Center

    Marquard, Jenna L.; Henneman, Philip L.; He, Ze; Jo, Junghee; Fisher, Donald L.; Henneman, Elizabeth A.

    2011-01-01

    Patient identification (ID) errors occurring during the medication administration process can be fatal. The aim of this study is to determine whether differences in nurses' behaviors and visual scanning patterns during the medication administration process influence their capacities to identify patient ID errors. Nurse participants (n = 20)…

  6. Visual Scanning Patterns of Adolescents with Mental Retardation during Tracing and Copying Tasks.

    ERIC Educational Resources Information Center

    Kamon, Tetsuji; Fujita, Tsugumichi Peter

    1994-01-01

    Visual scanning patterns of 17 students with mental retardation and control groups matched for chronological or mental age were recorded during visuomotor tasks. Results suggested that subjects paid more attention to penpoints than to the succeeding or passed points of a model line, indicating that they have a poorer ability to process more than…

  7. STATIONARY PATTERN ADAPTATION AND THE EARLY COMPONENTS IN HUMAN VISUAL EVOKED POTENTIALS

    EPA Science Inventory

    Pattern-onset visual evoked potentials were elicited from humans by sinusoidal gratings of 0.5., 1, 2 and 4 cpd (cycles/degree) following adaptation to a blank field or one of the gratings. The wave forms recorded after blank field adaptation showed an early positive component, P...

  8. Flexibility and Coordination among Acts of Visualization and Analysis in a Pattern Generalization Activity

    ERIC Educational Resources Information Center

    Nilsson, Per; Juter, Kristina

    2011-01-01

    This study aims at exploring processes of flexibility and coordination among acts of visualization and analysis in students' attempt to reach a general formula for a three-dimensional pattern generalizing task. The investigation draws on a case-study analysis of two 15-year-old girls working together on a task in which they are asked to calculate…

  9. On Assisting a Visual-Facial Affect Recognition System with Keyboard-Stroke Pattern Information

    NASA Astrophysics Data System (ADS)

    Stathopoulou, I.-O.; Alepis, E.; Tsihrintzis, G. A.; Virvou, M.

    Towards realizing a multimodal affect recognition system, we are considering the advantages of assisting a visual-facial expression recognition system with keyboard-stroke pattern information. Our work is based on the assumption that the visual-facial and keyboard modalities are complementary to each other and that their combination can significantly improve the accuracy of affective user models. Specifically, we present and discuss the development and evaluation process of two corresponding affect recognition subsystems, with emphasis on the recognition of 6 basic emotional states, namely happiness, sadness, surprise, anger, and disgust, as well as the emotion-less state, which we refer to as neutral. We find that emotion recognition by the visual-facial modality can be aided greatly by keyboard-stroke pattern information and that the combination of the two modalities can lead to better results towards building a multimodal affect recognition system.

  10. User-assisted visual search and tracking across distributed multi-camera networks

    NASA Astrophysics Data System (ADS)

    Raja, Yogesh; Gong, Shaogang; Xiang, Tao

    2011-11-01

    Human CCTV operators face several challenges in their task which can lead to missed events, people or associations, including: (a) data overload in large distributed multi-camera environments; (b) short attention span; (c) limited knowledge of what to look for; and (d) lack of access to non-visual contextual intelligence to aid search. Developing a system to aid human operators and alleviate such burdens requires addressing the problem of automatic re-identification of people across disjoint camera views, a matching task made difficult by factors such as lighting, viewpoint and pose changes and for which absolute scoring approaches are not best suited. Accordingly, we describe a distributed multi-camera tracking (MCT) system to visually aid human operators in associating people and objects effectively over multiple disjoint camera views in a large public space. The system comprises three key novel components: (1) relative measures of ranking rather than absolute scoring to learn the best features for matching; (2) multi-camera behaviour profiling as higher-level knowledge to reduce the search space and increase the chance of finding correct matches; and (3) human-assisted data mining to interactively guide search and in the process recover missing detections and discover previously unknown associations. We provide an extensive evaluation of the greater effectiveness of the system as compared to existing approaches on industry-standard i-LIDS multi-camera data.

  11. Varying target prevalence reveals two, dissociable decision criteria in visual search

    PubMed Central

    Wolfe, Jeremy M; Van Wert, Michael J

    2009-01-01

    Target prevalence exerts a powerful influence on visual search behavior. In most visual search experiments, targets appear on at least 50% of trials [1–3]. However, when targets are rare (as in medical or airport screening), observers shift response criteria, leading to elevated rates of miss errors [4, 5]. Observers also speed their target-absent responses and may make more motor errors [6]. This could be a speed-accuracy tradeoff with fast, frequent absent responses producing more miss errors. Disproving this hypothesis, Experiment One shows that very high target prevalence (98%) shifts response criteria in the opposite direction, leading to elevated false alarms in a simulated baggage search task. However, the very frequent target present responses are not speeded. Rather, rare target absent responses are greatly slowed. In Experiment Two, prevalence was varied sinusoidally over 1000 trials as observers’ accuracy and reaction times (RTs) were measured. Observers’ criterion and target absent RTs tracked prevalence. Sensitivity (d′) and target-present RTs did not vary with prevalence [see also 7, 8, 9]. The results support a model in which prevalence influences two parameters: A decision criterion governing the series of perceptual decisions about each attended item and a quitting threshold that governs the timing of target-absent responses. Models in which target prevalence only influences an overall decision criterion are not supported. PMID:20079642
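The two-parameter account can be caricatured with a signal-detection toy: each attended item yields a noisy familiarity sample compared against a per-item decision criterion, and a quitting threshold caps how many items are inspected before an "absent" response. Everything below (the d' value, the criterion settings, the serial inspection scheme) is an illustrative assumption, not the authors' fitted model.

```python
import random

random.seed(1)

def trial(target_present, criterion, quit_after, d_prime=2.0):
    # Inspect items serially; each yields a noisy familiarity sample.
    # Respond "present" as soon as a sample exceeds the criterion;
    # respond "absent" after quit_after items (the quitting threshold).
    for i in range(quit_after):
        mean = d_prime if (target_present and i == 0) else 0.0
        if random.gauss(mean, 1.0) > criterion:
            return "present"
    return "absent"

def miss_rate(criterion, quit_after, n=5000):
    misses = sum(trial(True, criterion, quit_after) == "absent"
                 for _ in range(n))
    return misses / n

# A conservative criterion (as induced by rare targets) yields more misses
# than a liberal one (as induced by frequent targets).
rare_prevalence = miss_rate(criterion=1.5, quit_after=8)
high_prevalence = miss_rate(criterion=0.5, quit_after=8)
```

Lowering `quit_after` in the same simulation shortens "absent" responses, separating the criterion's effect on misses from the quitting threshold's effect on target-absent timing, which is the dissociation the abstract argues for.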

  12. Color Channels, Not Color Appearance or Color Categories, Guide Visual Search for Desaturated Color Targets

    PubMed Central

    Lindsey, Delwin T.; Brown, Angela M.; Reijnen, Ester; Rich, Anina N.; Kuzmova, Yoana I.; Wolfe, Jeremy M.

    2011-01-01

    In this article, we report that in visual search, desaturated reddish targets are much easier to find than other desaturated targets, even when perceptual differences between targets and distractors are carefully equated. Observers searched for desaturated targets among mixtures of white and saturated distractors. Reaction times were hundreds of milliseconds faster for the most effective (reddish) targets than for the least effective (purplish) targets. The advantage for desaturated reds did not reflect an advantage for the lexical category “pink,” because reaction times did not follow named color categories. Many pink stimuli were not found quickly, and many quickly found stimuli were not labeled “pink.” Other possible explanations (e.g., linear-separability effects) also failed. Instead, we propose that guidance of visual search for desaturated colors is based on a combination of low-level color-opponent signals that is different from the combinations that produce perceived color. We speculate that this guidance might reflect a specialization for human skin. PMID:20713637

  13. Set-size effects in simple visual search for contour curvature.

    PubMed

    Sakai, Koji; Morishita, Masanao; Matsumoto, Hirofumi

    2007-01-01

    In a visual-search paradigm, both perception and decision processes contribute to the set-size effects. Using yes-no search tasks with set sizes from 2 to 8 for contour curvature, we examined whether the set-size effects are predicted by either the limited-capacity model or the decision-noise model. There are limitations in perception and decision-making in the limited-capacity model, but only in decision-making in the decision-noise model. The results of four experiments showed that the slopes of the logarithm of threshold plotted against the logarithm of set size ranged from 0.24 to 0.32, whether the curvature was high or low, contour convexity was upward or downward, and the stimulus was masked or unmasked. These slopes were closer to the prediction of 0.23 by the decision-noise model than to that of 0.73 by the limited-capacity model. We interpret this as indicating that in simple visual search for contour curvature, decision noise mainly drives the set-size effects and perceptual capacity is not limited. PMID:17455749
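The model comparison reduces to fitting a power law: the least-squares slope of log threshold against log set size is compared with the two predictions (0.23 for decision noise, 0.73 for limited capacity). A minimal version of that fit, with invented threshold data chosen to follow an approximately 0.23 slope, looks like this:

```python
import math

# Hypothetical curvature thresholds at set sizes 2-8 (made-up values).
set_sizes = [2, 4, 6, 8]
thresholds = [0.050, 0.059, 0.065, 0.069]

def loglog_slope(xs, ys):
    # Least-squares slope of log(y) regressed on log(x).
    lx = [math.log(x) for x in xs]
    ly = [math.log(y) for y in ys]
    mx, my = sum(lx) / len(lx), sum(ly) / len(ly)
    num = sum((a - mx) * (b - my) for a, b in zip(lx, ly))
    den = sum((a - mx) ** 2 for a in lx)
    return num / den

slope = loglog_slope(set_sizes, thresholds)
```

A `slope` near 0.23 would favor the decision-noise model, as in the reported experiments; a slope near 0.73 would favor limited capacity.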

  14. Building ensemble representations: How the shape of preceding distractor distributions affects visual search.

    PubMed

    Chetverikov, Andrey; Campana, Gianluca; Kristjánsson, Árni

    2016-08-01

    Perception allows us to extract information about regularities in the environment. Observers can quickly determine summary statistics of a group of objects and detect outliers. The existing body of research has, however, not revealed how such ensemble representations develop over time. Moreover, the correspondence between the physical distribution of features in the external world and their potential internal representation as a probability density function (PDF) by the visual system is still unknown. Here, for the first time, we demonstrate that such internal PDFs are built during visual search and show how they can be assessed with repetition and role-reversal effects. Using singleton search for an oddly oriented target line among differently oriented distractors (a priming of pop-out paradigm), we test how different properties of previously observed distractor distributions (mean, variability, and shape) influence search times. Our results indicate that observers learn properties of distractor distributions over and above mean and variance; in fact, response times also depend on the shape of the preceding distractor distribution. Response times decrease as a function of target distance from the mean of preceding Gaussian distractor distributions, and the decrease is steeper when preceding distributions have small standard deviations. When preceding distributions are uniform, however, this decrease in response times can be described by a two-piece function corresponding to the uniform distribution's PDF. Moreover, following skewed distributions, the response-time function is skewed in accordance with the skew of the distribution. Indeed, internal PDFs seem to be specifically tuned to the observed feature distribution. PMID:27232163

  15. Patterns and Sequences of Multiple Query Reformulations in Web Searching: A Preliminary Study.

    ERIC Educational Resources Information Center

    Rieh, Soo Young; Xie, Hong

    2001-01-01

    Reports on patterns and sequences of query reformulation based on query logs from a Web search engine. Results show that while most query reformulation involves content changes, about 15% of reformulation is related to format modifications. Six patterns of query reformulation emerged as a result of sequence analysis: specified, parallel,…

  16. Visual Signals Vertically Extend the Perceptual Span in Searching a Text: A Gaze-Contingent Window Study

    ERIC Educational Resources Information Center

    Cauchard, Fabrice; Eyrolle, Helene; Cellier, Jean-Marie; Hyona, Jukka

    2010-01-01

    This study investigated the effect of visual signals on perceptual span in text search and the kinds of signal information that facilitate the search. Participants were asked to find answers to specific questions in chapter-length texts in either a normal or a window condition, where the text disappeared beyond a vertical 3 degrees gaze-contingent…

  17. Searching for patterns in remote sensing image databases using neural networks

    NASA Technical Reports Server (NTRS)

    Paola, Justin D.; Schowengerdt, Robert A.

    1995-01-01

    We have investigated a method, based on a successful neural network multispectral image classification system, of searching for single patterns in remote sensing databases. While defining the pattern to search for and the feature to be used for that search (spectral, spatial, temporal, etc.) is challenging, a more difficult task is selecting competing patterns to train against the desired pattern. Schemes for competing pattern selection, including random selection and human interpreted selection, are discussed in the context of an example detection of dense urban areas in Landsat Thematic Mapper imagery. When applying the search to multiple images, a simple normalization method can alleviate the problem of inconsistent image calibration. Another potential problem, that of highly compressed data, was found to have a minimal effect on the ability to detect the desired pattern. The neural network algorithm has been implemented using the PVM (Parallel Virtual Machine) library and nearly-optimal speedups have been obtained that help alleviate the long process of searching through imagery.
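    The "simple normalization method" for inconsistent image calibration is not specified in the abstract; one common stand-in is a per-image zero-mean, unit-variance rescaling of band values, sketched below with invented pixel values:

```python
def normalize_image(pixels):
    """Zero-mean, unit-variance rescaling of one image's pixel values."""
    n = len(pixels)
    mean = sum(pixels) / n
    std = (sum((p - mean) ** 2 for p in pixels) / n) ** 0.5 or 1.0
    return [(p - mean) / std for p in pixels]

# Same scene under two calibrations (constant bias offset, invented values)
scene_a = [10, 20, 30, 40]
scene_b = [110, 120, 130, 140]
norm_a, norm_b = normalize_image(scene_a), normalize_image(scene_b)
```

    After normalization the two differently calibrated "images" become identical, so a pattern detector trained on one transfers to the other.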

  18. Ideal and visual-search observers: accounting for anatomical noise in search tasks with planar nuclear imaging

    NASA Astrophysics Data System (ADS)

    Sen, Anando; Gifford, Howard C.

    2015-03-01

    Model observers have frequently been used for hardware optimization of imaging systems. For model observers to reliably mimic human performance it is important to account for the sources of variations in the images. Detection-localization tasks are complicated by anatomical noise present in the images. Several scanning observers have been proposed for such tasks. The most popular of these, the channelized Hotelling observer (CHO), incorporates anatomical variations through covariance matrices. We propose the visual-search (VS) observer as an alternative to the CHO to account for anatomical noise. The VS observer is a two-step process that first identifies suspicious tumor candidates and then performs a detailed analysis on them. The identification of suspicious candidates (search) implicitly accounts for anatomical noise. In this study we present a comparison of these two observers with human observers. The application considered is collimator optimization for planar nuclear imaging. Both observers show similar trends in performance, with the VS observer slightly closer to human performance.

  19. The importance of being expert: top-down attentional control in visual search with photographs.

    PubMed

    Hershler, Orit; Hochstein, Shaul

    2009-10-01

    Two observers looking at the same picture may not see the same thing. To avoid sensory overload, visual information is actively selected for further processing by bottom-up processes, originating within the visual image, and top-down processes, reflecting the motivation and past experiences of the observer. The latter processes could grant categories of significance to the observer a permanent attentional advantage. Nevertheless, evidence for a generalized top-down advantage for specific categories has been limited. In this study, bird and car experts searched for face, car, or bird photographs in a heterogeneous display of photographs of real objects. Bottom-up influences were ruled out by presenting both groups of experts with identical displays. Faces and targets of expertise had a clear advantage over novice targets, indicating a permanent top-down preference for favored categories. A novel type of analysis of reaction times over the visual field suggests that the advantage for expert objects is achieved by broader detection windows, allowing observers to scan greater parts of the visual field for the presence of favored targets during each fixation. PMID:19801608

  20. Distinct, but top-down modulable color and positional priming mechanisms in visual pop-out search.

    PubMed

    Geyer, Thomas; Müller, Hermann J

    2009-03-01

    Three experiments examined reaction time (RT) performance in visual pop-out search. Search displays comprised one color target and two distractors, presented at 24 possible locations on an ellipse. Experiment 1 showed that re-presentation of the target at a previous target location led to expedited RTs, whereas presentation of the target at a distractor location led to slowed RTs (relative to target presentation at a previously empty location). RTs were also faster when the color of the target was the same across consecutive trials, relative to a change of the target's color. This color priming was independent of the positional priming. Experiment 2 revealed larger positional facilitation, relative to Experiment 1, when position repetitions occurred more often than chance; analogously, Experiment 3 revealed stronger color priming effects when target color repetitions were more likely. These position and color manipulations did not change the pattern of color (Experiment 2) and positional priming effects (Experiment 3). While these results support the independence of color and positional priming effects (e.g., Maljkovic and Nakayama in Percept Psychophys 58:977-991, 1996), they also show that these (largely 'automatic') effects are top-down modulable when target position and color are predictable (e.g., Müller et al. in Vis Cogn 11:577-602, 2004). PMID:19082623

  1. Examining wide-arc digital breast tomosynthesis: optimization using a visual-search model observer

    NASA Astrophysics Data System (ADS)

    Das, Mini; Liang, Zhihua; Gifford, Howard C.

    2015-03-01

    Mathematical model observers are expected to assist in preclinical optimization of image acquisition and reconstruction parameters. A clinically realistic and robust model observer platform could help in multiparameter optimizations without requiring frequent human-observer validations. We are developing search-capable visual-search (VS) model observers with this potential. In this work, we present initial results on optimization of DBT scan angle and the number of projection views for low-contrast mass detection. Comparison with human-observer results shows very good agreement. These results point towards the benefits of using relatively wider arcs with fewer projection views per arc degree for improved mass detection. These results are particularly interesting considering that FDA-approved DBT systems such as the Hologic Selenia Dimensions use a narrow (15-degree) acquisition arc and one projection per arc degree.

  2. Low target prevalence is a stubborn source of errors in visual search tasks

    PubMed Central

    Wolfe, Jeremy M.; Horowitz, Todd S.; Van Wert, Michael J.; Kenner, Naomi M.; Place, Skyler S.; Kibbi, Nour

    2009-01-01

    In visual search tasks, observers look for targets in displays containing distractors. The likelihood that targets will be missed varies with target prevalence, the frequency with which targets are presented across trials. Miss error rates are much higher at low target prevalence (1–2%) than at high prevalence (50%). Unfortunately, low prevalence is characteristic of important search tasks like airport security and medical screening where miss errors are dangerous. A series of experiments shows that this prevalence effect is very robust. In signal detection terms, the prevalence effect can be explained as a criterion shift and not a change in sensitivity. Several efforts to induce observers to adopt a better criterion fail. However, a regime of brief retraining periods with high prevalence and full feedback allows observers to hold a good criterion during periods of low prevalence with no feedback. PMID:17999575
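    The criterion-shift account can be made concrete with the standard signal-detection formulas d' = z(H) - z(F) and c = -(z(H) + z(F))/2. A sketch with invented hit and false-alarm rates (not the paper's data):

```python
from statistics import NormalDist

z = NormalDist().inv_cdf  # standard normal quantile function

def dprime_and_criterion(hit_rate, fa_rate):
    """Signal-detection sensitivity d' and criterion c from hit and
    false-alarm rates: d' = z(H) - z(F), c = -(z(H) + z(F)) / 2."""
    return z(hit_rate) - z(fa_rate), -0.5 * (z(hit_rate) + z(fa_rate))

# Invented rates for illustration: similar sensitivity, shifted criterion
d_high, c_high = dprime_and_criterion(0.93, 0.16)  # high prevalence
d_low, c_low = dprime_and_criterion(0.62, 0.02)    # low prevalence
```

    With these numbers d' is nearly unchanged while c moves sharply conservative at low prevalence, which is the signature of a criterion shift rather than a sensitivity loss.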

  3. Paying Attention: Being a Naturalist and Searching for Patterns.

    ERIC Educational Resources Information Center

    Weisberg, Saul

    1996-01-01

    Discusses the importance of recognizing patterns in nature to help understand the interactions of living and non-living things. Cautions the student not to lose sight of the details when studying the big picture. Encourages development of the ability to identify local species. Suggests two activities to strengthen observation skills and to help in…

  4. Horizontal-vertical structure in the visual comparison of rigidly transformed patterns.

    PubMed

    Kahn, J I; Foster, D H

    1986-11-01

    Visual recognition of patterns reflected or rotated through 180 degrees (point-inverted) depends critically on their positional symmetry and separation in the field. A possible explanatory scheme suggested a description of internal pattern representation structures and simple internal operations that naturally involved a horizontal-vertical reference system. Predictions of the scheme were tested here in three experiments. Subjects made same-different judgments on pairs of random-dot patterns briefly presented in various arrangements and related by reflection, point-inversion, or identity transformation, or paired at random. Experiment 1 tested reflected patterns and verified the importance of orientation of the reflection axis relative to display-configuration axis. Experiment 2 demonstrated an oblique effect of configuration on performance with reflected patterns, but not with identical or point-inverted patterns. Experiment 3 demonstrated a vertical shift effect of configuration on performance with point-inverted patterns, but not with identical or reflected patterns. We concluded that in same-different pattern comparisons, a horizontal-vertical reference system appears fundamental in determining the nature of and operations upon internal pattern representations. PMID:2946799

  5. Beam angle optimization for intensity-modulated radiation therapy using a guided pattern search method

    NASA Astrophysics Data System (ADS)

    Rocha, Humberto; Dias, Joana M.; Ferreira, Brígida C.; Lopes, Maria C.

    2013-05-01

    Generally, the inverse planning of radiation therapy consists mainly of the fluence optimization. The beam angle optimization (BAO) in intensity-modulated radiation therapy (IMRT) consists of selecting appropriate radiation incidence directions and can improve the quality of IMRT plans, both by enhancing organ sparing and by improving tumor coverage. However, in clinical practice, most of the time, beam directions continue to be manually selected by the treatment planner without objective and rigorous criteria. The goal of this paper is to introduce a novel approach that uses beam’s-eye-view dose ray tracing metrics within a pattern search method framework in the optimization of the highly non-convex BAO problem. Pattern search methods are derivative-free optimization methods that require a few function evaluations to progress and converge and have the ability to better avoid local entrapment. The pattern search method framework is composed of a search step and a poll step at each iteration. The poll step performs a local search in a mesh neighborhood and ensures the convergence to a local minimizer or stationary point. The search step provides the flexibility for a global search since it allows searches away from the neighborhood of the current iterate. Beam’s-eye-view dose metrics assign a score to each radiation beam direction and can be used within the pattern search framework, furnishing a priori knowledge of the problem so that directions with larger dosimetric scores are tested first. A set of clinical cases of head-and-neck tumors treated at the Portuguese Institute of Oncology of Coimbra is used to discuss the potential of this approach in the optimization of the BAO problem.
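    The poll step this abstract describes can be sketched as a basic coordinate pattern search. The objective, step rule, and starting point below are illustrative stand-ins, not the authors' BAO formulation:

```python
def pattern_search(f, x0, step=1.0, tol=1e-6, max_iter=100000):
    """Minimal poll step: probe +/- step along each coordinate; move to the
    first improving point, otherwise shrink the mesh (no search step here)."""
    x, fx = list(x0), f(x0)
    for _ in range(max_iter):
        if step <= tol:
            break
        improved = False
        for i in range(len(x)):
            for delta in (step, -step):
                cand = x[:i] + [x[i] + delta] + x[i + 1:]
                if f(cand) < fx:
                    x, fx, improved = cand, f(cand), True
                    break
            if improved:
                break
        if not improved:
            step *= 0.5  # refine the mesh and poll again
    return x, fx

# Smooth toy objective standing in for a beam-angle score (illustrative only)
x_best, f_best = pattern_search(lambda v: (v[0] - 3) ** 2 + (v[1] + 1) ** 2, [0.0, 0.0])
```

    No derivatives are used: the mesh shrinks only when no poll direction improves, which is what yields the stationary-point convergence the abstract mentions.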

  6. Case study of visualizing global user download patterns using Google Earth and NASA World Wind

    NASA Astrophysics Data System (ADS)

    Zong, Ziliang; Job, Joshua; Zhang, Xuesong; Nijim, Mais; Qin, Xiao

    2012-01-01

    Geo-visualization is significantly changing the way we view spatial data and discover information. On the one hand, a large number of spatial data are generated every day. On the other hand, these data are not well utilized due to the lack of free and easily used data-visualization tools. This becomes even worse when most of the spatial data remains in the form of plain text such as log files. This paper describes a way of visualizing massive plain-text spatial data at no cost by utilizing Google Earth and NASA World Wind. We illustrate our methods by visualizing over 170,000 global download requests for satellite images maintained by the Earth Resources Observation and Science (EROS) Center of U.S. Geological Survey (USGS). Our visualization results identify the most popular satellite images around the world and discover the global user download patterns. The benefits of this research are: 1. assisting in improving the satellite image downloading services provided by USGS, and 2. providing a proxy for analyzing the "hot spot" areas of research. Most importantly, our methods demonstrate an easy way to geo-visualize massive textual spatial data, which is highly applicable to mining spatially referenced data and information on a wide variety of research domains (e.g., hydrology, agriculture, atmospheric science, natural hazard, and global climate change).
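    Visualizing plain-text logs in Google Earth amounts to emitting KML. A minimal sketch; the `lat,lon,count` line format is an assumption, since the abstract does not specify the USGS log layout:

```python
def log_to_kml(log_text):
    """Convert 'lat,lon,count' log lines into a minimal KML document.
    Note that KML orders coordinates as lon,lat[,alt]."""
    marks = []
    for line in log_text.strip().splitlines():
        lat, lon, count = (field.strip() for field in line.split(","))
        marks.append(f"<Placemark><name>{count} downloads</name>"
                     f"<Point><coordinates>{lon},{lat},0</coordinates></Point></Placemark>")
    return ('<?xml version="1.0" encoding="UTF-8"?>'
            '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>'
            + "".join(marks) + "</Document></kml>")

# Hypothetical log excerpt, not real USGS data
kml = log_to_kml("45.5,-122.6,1200\n-33.9,18.4,300")
```

    The resulting file opens directly in Google Earth; NASA World Wind can load KML through its KML support layer.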

  7. Visual pattern discrimination by population retinal ganglion cells' activities during natural movie stimulation.

    PubMed

    Zhang, Ying-Ying; Wang, Ru-Bin; Pan, Xiao-Chuan; Gong, Hai-Qing; Liang, Pei-Ji

    2014-02-01

    In the visual system, neurons often fire in synchrony, and it is believed that synchronous activities of group neurons are more efficient than single cell response in transmitting neural signals to down-stream neurons. However, whether dynamic natural stimuli are encoded by dynamic spatiotemporal firing patterns of synchronous group neurons still needs to be investigated. In this paper we recorded the activities of population ganglion cells in bullfrog retina in response to time-varying natural images (natural scene movie) using multi-electrode arrays. In response to some different brief section pairs of the movie, synchronous groups of retinal ganglion cells (RGCs) fired with similar but different spike events. We attempted to discriminate the movie sections based on temporal firing patterns of single cells and spatiotemporal firing patterns of the synchronous groups of RGCs characterized by a measurement of subsequence distribution discrepancy. The discrimination performance was assessed by a classification method based on Support Vector Machines. Our results show that different movie sections of the natural movie elicited reliable dynamic spatiotemporal activity patterns of the synchronous RGCs, which are more efficient in discriminating different movie sections than the temporal patterns of the single cells' spike events. These results suggest that, during natural vision, the down-stream neurons may decode the visual information from the dynamic spatiotemporal patterns of the synchronous group of RGCs' activities. PMID:24465283

  8. Landmark Based Shape Analysis for Cerebellar Ataxia Classification and Cerebellar Atrophy Pattern Visualization

    PubMed Central

    Yang, Zhen; Abulnaga, S. Mazdak; Carass, Aaron; Kansal, Kalyani; Jedynak, Bruno M.; Onyike, Chiadi; Ying, Sarah H.; Prince, Jerry L.

    2016-01-01

    Cerebellar dysfunction can lead to a wide range of movement disorders. Studying the cerebellar atrophy pattern associated with different cerebellar disease types can potentially help in diagnosis, prognosis, and treatment planning. In this paper, we present a landmark based shape analysis pipeline to classify healthy control and different ataxia types and to visualize the characteristic cerebellar atrophy patterns associated with different types. A highly informative feature representation of the cerebellar structure is constructed by extracting dense homologous landmarks on the boundary surfaces of cerebellar sub-structures. A diagnosis group classifier based on this representation is built using partial least square dimension reduction and regularized linear discriminant analysis. The characteristic atrophy pattern for an ataxia type is visualized by sampling along the discriminant direction between healthy controls and the ataxia type. Experimental results show that the proposed method can successfully classify healthy controls and different ataxia types. The visualized cerebellar atrophy patterns were consistent with the regional volume decreases observed in previous studies, but the proposed method provides intuitive and detailed understanding about changes of overall size and shape of the cerebellum, as well as that of individual lobules. PMID:27303111

  9. Landmark based shape analysis for cerebellar ataxia classification and cerebellar atrophy pattern visualization

    NASA Astrophysics Data System (ADS)

    Yang, Zhen; Abulnaga, S. Mazdak; Carass, Aaron; Kansal, Kalyani; Jedynak, Bruno M.; Onyike, Chiadi; Ying, Sarah H.; Prince, Jerry L.

    2016-03-01

    Cerebellar dysfunction can lead to a wide range of movement disorders. Studying the cerebellar atrophy pattern associated with different cerebellar disease types can potentially help in diagnosis, prognosis, and treatment planning. In this paper, we present a landmark based shape analysis pipeline to classify healthy control and different ataxia types and to visualize the characteristic cerebellar atrophy patterns associated with different types. A highly informative feature representation of the cerebellar structure is constructed by extracting dense homologous landmarks on the boundary surfaces of cerebellar sub-structures. A diagnosis group classifier based on this representation is built using partial least square dimension reduction and regularized linear discriminant analysis. The characteristic atrophy pattern for an ataxia type is visualized by sampling along the discriminant direction between healthy controls and the ataxia type. Experimental results show that the proposed method can successfully classify healthy controls and different ataxia types. The visualized cerebellar atrophy patterns were consistent with the regional volume decreases observed in previous studies, but the proposed method provides intuitive and detailed understanding about changes of overall size and shape of the cerebellum, as well as that of individual lobules.

  10. Incidental learning speeds visual search by lowering response thresholds, not by improving efficiency: Evidence from eye movements

    PubMed Central

    Hout, Michael C.; Goldinger, Stephen D.

    2011-01-01

    When observers search for a target object, they incidentally learn the identities and locations of “background” objects in the same display. This learning can facilitate search performance, eliciting faster reaction times for repeated displays (Hout & Goldinger, 2010). Despite these findings, visual search has been successfully modeled using architectures that maintain no history of attentional deployments; they are amnesic (e.g., Guided Search Theory; Wolfe, 2007). In the current study, we asked two questions: (1) under what conditions does such incidental learning occur, and (2) what does viewing behavior reveal about the efficiency of attentional deployments over time? In two experiments, we tracked eye movements during repeated visual search, and we tested incidental memory for repeated non-target objects. Across conditions, the consistency of search sets and spatial layouts were manipulated to assess their respective contributions to learning. Using viewing behavior, we contrasted three potential accounts for faster searching with experience. The results indicate that learning does not result in faster object identification or greater search efficiency. Instead, familiar search arrays appear to allow faster resolution of search decisions, whether targets are present or absent. PMID:21574743

  11. Can intertrial priming account for the similarity effect in visual search?

    PubMed

    Becker, Stefanie I; Ansorge, Ulrich; Horstmann, Gernot

    2009-07-01

    In a visual search task, a salient distractor often elongates response times (RTs) even when it is task-irrelevant. These distraction costs are larger when the irrelevant distractor is similar than when dissimilar to the target. In the present study, we tested whether this similarity effect is mostly due to more frequent oculomotor capture by target-similar versus target-dissimilar distractors (contingent capture hypothesis), or to elongated dwell times on target-similar versus dissimilar distractors (attentional disengagement hypothesis), by measuring the eye movements of the observers during visual search. The results showed that similar distractors were both selected more frequently, and produced longer dwell times than dissimilar distractors. However, attentional capture contributed more to the similarity effect than disengagement. The results of a second experiment showed that stronger capture by similar than dissimilar distractors in part reflected intertrial priming effects: distractors which had the same colour as the target on the previous trial were selected more frequently than distractors with a different colour. These priming effects were however too small to account fully for the similarity effect. More importantly, the results indicated that allegedly stimulus-driven intertrial priming effects and allegedly top-down controlled similarity effects may be mediated by the same underlying mechanism. PMID:19358862

  12. Differential roles of the fan-shaped body and the ellipsoid body in Drosophila visual pattern memory.

    PubMed

    Pan, Yufeng; Zhou, Yanqiong; Guo, Chao; Gong, Haiyun; Gong, Zhefeng; Liu, Li

    2009-05-01

    The central complex is a prominent structure in the Drosophila brain. Visual learning experiments in the flight simulator, with flies with genetically altered brains, revealed that two groups of horizontal neurons in one of its substructures, the fan-shaped body, were required for Drosophila visual pattern memory. However, little is known about the role of other components of the central complex for visual pattern memory. Here we show that a small set of neurons in the ellipsoid body, which is another substructure of the central complex and connected to the fan-shaped body, is also required for visual pattern memory. Localized expression of rutabaga adenylyl cyclase in either the fan-shaped body or the ellipsoid body is sufficient to rescue the memory defect of the rut(2080) mutant. We then performed RNA interference of rutabaga in either structure and found that they both were required for visual pattern memory. Additionally, we tested the above rescued flies under several visual pattern parameters, such as size, contour orientation, and vertical compactness, and revealed differential roles of the fan-shaped body and the ellipsoid body for visual pattern memory. Our study defines a complex neural circuit in the central complex for Drosophila visual pattern memory. PMID:19389914

  13. A Convergence Analysis of Unconstrained and Bound Constrained Evolutionary Pattern Search

    SciTech Connect

    Hart, W.E.

    1999-04-22

    The authors present and analyze a class of evolutionary algorithms for unconstrained and bound constrained optimization on R^n: evolutionary pattern search algorithms (EPSAs). EPSAs adaptively modify the step size of the mutation operator in response to the success of previous optimization steps. The design of EPSAs is inspired by recent analyses of pattern search methods. They show that EPSAs can be cast as stochastic pattern search methods, and they use this observation to prove that EPSAs have a probabilistic weak stationary point convergence theory. This work provides the first convergence analysis for a class of evolutionary algorithms that guarantees convergence almost surely to a stationary point of a nonconvex objective function.
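    The step-size adaptation that distinguishes EPSAs can be illustrated with a (1+1)-style loop that expands the mutation step after success and contracts it after failure. This is a simplified stand-in, not Hart's exact algorithm; the expansion/contraction factors and seed are invented:

```python
import random

def evolutionary_pattern_search(f, x0, step=1.0, tol=1e-8, max_iter=100000, seed=1):
    """(1+1)-style search with an adaptive mutation step: expand the step
    after a successful mutation, contract it after a failure. A simplified
    stand-in for the EPSA idea, not the paper's exact algorithm."""
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    for _ in range(max_iter):
        if step <= tol:
            break
        cand = [xi + rng.gauss(0.0, step) for xi in x]
        fc = f(cand)
        if fc < fx:
            x, fx = cand, fc
            step *= 1.5   # success: search more boldly
        else:
            step *= 0.85  # failure: refine
    return x, fx

# Smooth toy objective; mutation step tracks the remaining distance to the optimum
x_best, f_best = evolutionary_pattern_search(lambda v: sum(vi * vi for vi in v), [5.0, -4.0])
```

    Because the step can only shrink persistently when mutations stop succeeding, the step size acts like the mesh parameter of a pattern search, which is the casting the paper exploits.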

  14. Gene Expression Browser: large-scale and cross-experiment microarray data integration, management, search & visualization

    PubMed Central

    2010-01-01

    Background In the last decade, a large amount of microarray gene expression data has been accumulated in public repositories. Integrating and analyzing high-throughput gene expression data have become key activities for exploring gene functions, gene networks and biological pathways. Effectively utilizing these invaluable microarray data remains challenging due to a lack of powerful tools to integrate large-scale gene-expression information across diverse experiments and to search and visualize a large number of gene-expression data points. Results Gene Expression Browser is a microarray data integration, management and processing system with web-based search and visualization functions. An innovative method has been developed to define a treatment over a control for every microarray experiment to standardize and make microarray data from different experiments homogeneous. In the browser, data are pre-processed offline and the resulting data points are visualized online with a 2-layer dynamic web display. Users can view all treatments over control that affect the expression of a selected gene via Gene View, and view all genes that change in a selected treatment over control via treatment over control View. Users can also check the changes of expression profiles of a set of either the treatments over control or genes via Slide View. In addition, the relationships between genes and treatments over control are computed according to gene expression ratio and are shown as co-responsive genes and co-regulation treatments over control. Conclusion Gene Expression Browser is composed of a set of software tools, including a data extraction tool, a microarray data-management system, a data-annotation tool, a microarray data-processing pipeline, and a data search & visualization tool. The browser is deployed as a free public web service (http://www.ExpressionBrowser.com) that integrates 301 ATH1 gene microarray experiments from public data repositories (viz. the Gene

  15. iPixel: a visual content-based and semantic search engine for retrieving digitized mammograms by using collective intelligence.

    PubMed

    Alor-Hernández, Giner; Pérez-Gallardo, Yuliana; Posada-Gómez, Rubén; Cortes-Robles, Guillermo; Rodríguez-González, Alejandro; Aguilar-Laserre, Alberto A

    2012-09-01

    Nowadays, traditional search engines such as Google, Yahoo and Bing facilitate the retrieval of information in the format of images, but the results are not always useful for the users. This is mainly due to two problems: (1) the semantic keywords are not taken into consideration and (2) it is not always possible to establish a query using the image features. This issue has been covered in different domains in order to develop content-based image retrieval (CBIR) systems. The expert community has focussed their attention on the healthcare domain, where a lot of visual information for medical analysis is available. This paper provides a solution called iPixel Visual Search Engine, which involves semantics and content issues in order to search for digitized mammograms. iPixel offers the possibility of retrieving mammogram features using collective intelligence and implementing a CBIR algorithm. Our proposal compares not only features with similar semantic meaning, but also visual features. In this sense, the comparisons are made in different ways: by the number of regions per image, by maximum and minimum size of regions per image and by average intensity level of each region. iPixel Visual Search Engine supports the medical community in differential diagnoses related to the diseases of the breast. The iPixel Visual Search Engine has been validated by experts in the healthcare domain, such as radiologists, in addition to experts in digital image analysis. PMID:22656866

  16. Pattern drilling exploration: Optimum pattern types and hole spacings when searching for elliptical shaped targets

    USGS Publications Warehouse

    Drew, L.J.

    1979-01-01

    In this study the selection of the optimum type of drilling pattern to be used when exploring for elliptical shaped targets is examined. The rhombic pattern is optimal when the targets are known to have a preferred orientation. Situations can also be found where a rectangular pattern is as efficient as the rhombic pattern. A triangular or square drilling pattern should be used when the orientations of the targets are unknown. The way in which the optimum hole spacing varies as a function of (1) the cost of drilling, (2) the value of the targets, (3) the shape of the targets, and (4) the target occurrence probabilities was determined for several examples. Bayes' rule was used to show how target occurrence probabilities can be revised within a multistage pattern drilling scheme. ?? 1979 Plenum Publishing Corporation.
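    The Bayes'-rule revision of a target occurrence probability after successive dry holes can be sketched as below; the prior and the miss probability are invented for illustration, not taken from the study:

```python
def revised_probability(prior, p_miss_given_target):
    """Posterior P(target present) after one dry hole, via Bayes' rule.
    A dry hole occurs with probability p_miss_given_target when a target
    is present, and with certainty when it is not."""
    p_dry = p_miss_given_target * prior + (1.0 - prior)
    return p_miss_given_target * prior / p_dry

# Invented multistage example: 30% prior, three dry holes, each hole
# having a 40% chance of missing a target that is actually present
p = 0.30
for _ in range(3):
    p = revised_probability(p, 0.40)
```

    Each dry hole lowers the occurrence probability, which in a multistage scheme feeds back into the decision of whether further drilling at the current spacing is worth its cost.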

  17. Comparison of visualized turbine endwall secondary flows and measured heat transfer patterns

    NASA Astrophysics Data System (ADS)

    Gaugler, R. E.; Russell, L. M.

    1983-03-01

    Various flow visualization techniques were used to define the secondary flows near the endwall in a large turbine cascade from which heat transfer data had been obtained. A comparison of the visualized flow patterns and the measured Stanton number distribution was made for cases where the inlet Reynolds number and exit Mach number were matched. Flows were visualized by using neutrally buoyant helium-filled soap bubbles, by using smoke from oil soaked cigars, and by a few techniques using permanent marker pen ink dots and synthetic wintergreen oil. Details of the horseshoe vortex and secondary flows can be directly compared with the heat transfer distribution. Near the cascade entrance there is an obvious correlation between the two sets of data, but well into the passage the effect of secondary flow is not as obvious.

  18. Comparison of visualized turbine endwall secondary flows and measured heat transfer patterns

    NASA Astrophysics Data System (ADS)

    Gaugler, R. E.; Russell, L. M.

    1984-01-01

    Various flow visualization techniques were used to define the secondary flows near the endwall in a large turbine cascade from which heat transfer data had been obtained. A comparison of the visualized flow patterns and the measured Stanton number distribution was made for cases where the inlet Reynolds number and exit Mach number were matched. Flows were visualized by using neutrally buoyant helium-filled soap bubbles, by using smoke from oil soaked cigars, and by a few techniques using permanent marker pen ink dots and synthetic wintergreen oil. Details of the horseshoe vortex and secondary flows can be directly compared with the heat transfer distribution. Near the cascade entrance there is an obvious correlation between the two sets of data, but well into the passage the effect of secondary flow is not as obvious. Previously announced in STAR as N83-14435

  19. Comparison of visualized turbine endwall secondary flows and measured heat transfer patterns

    NASA Technical Reports Server (NTRS)

    Gaugler, R. E.; Russell, L. M.

    1983-01-01

Various flow visualization techniques were used to define the secondary flows near the endwall in a large-scale turbine cascade used to obtain heat transfer data. A comparison of the visualized flow patterns and the measured Stanton number distribution was made for cases where the inlet Reynolds number and exit Mach number were matched. Flows were visualized by using neutrally buoyant helium-filled soap bubbles, by using smoke from oil-soaked cigars, and by a few techniques using permanent marker pen ink dots and synthetic wintergreen oil. Details of the horseshoe vortex and secondary flows can be directly compared with the heat transfer distribution. Near the cascade entrance there is an obvious correlation between the two sets of data, but well into the passage the effect of secondary flow is not as obvious.

  20. Modeling peripheral visual acuity enables discovery of gaze strategies at multiple time scales during natural scene search

    PubMed Central

    Ramkumar, Pavan; Fernandes, Hugo; Kording, Konrad; Segraves, Mark

    2015-01-01

Like humans, monkeys make saccades nearly three times a second. To understand the factors guiding this frequent decision, computational models of vision attempt to predict fixation locations using bottom-up visual features and top-down goals. How do the relative influences of these factors evolve over multiple time scales? Here we analyzed visual features at fixations using a retinal transform that provides realistic visual acuity by suitably degrading visual information in the periphery. In a task in which monkeys searched for a Gabor target in natural scenes, we characterized the relative importance of bottom-up and task-relevant influences by decoding fixated from nonfixated image patches based on visual features. At fast time scales, we found that search strategies can vary over the course of a single trial, with locations of higher saliency, target similarity, edge energy, and orientedness looked at later in the trial. At slow time scales, we found that search strategies can be refined over several weeks of practice, and the influence of target orientation was significant only in the latter of two search tasks. Critically, these results were not observed without applying the retinal transform. Our results suggest that saccade-guidance strategies become apparent only when models take into account the degraded visual representation in the periphery. PMID:25814545
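
    The decoding of fixated from nonfixated patches can be illustrated with a minimal statistic. The sketch below is a deliberate simplification of such an analysis: it scores a single hypothetical feature (edge energy, with made-up values) by the area under the ROC curve, where 0.5 means the feature carries no information about where the eyes land.

    ```python
    # Simplified "decoding" statistic, assuming feature values at fixated and
    # nonfixated patches are available as plain numbers: the area under the
    # ROC curve (AUC) for one hypothetical feature. AUC = 0.5 means the
    # feature does not distinguish fixated from nonfixated patches.

    def auc(fixated, nonfixated):
        """P(a random fixated patch scores above a random nonfixated one);
        ties count half."""
        wins = sum((f > n) + 0.5 * (f == n)
                   for f in fixated for n in nonfixated)
        return wins / (len(fixated) * len(nonfixated))

    edge_energy_fixated = [0.9, 0.7, 0.8, 0.6]   # feature at fixated patches (made up)
    edge_energy_random = [0.5, 0.4, 0.6, 0.3]    # feature at control patches (made up)
    print(auc(edge_energy_fixated, edge_energy_random))  # → 0.96875
    ```

    Computing such a score separately for early versus late fixations within a trial is one simple way to expose the time-scale effects the abstract describes.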

  1. Patterns on serpentine shapes elicit visual attention in marmosets (Callithrix jacchus).

    PubMed

    Wombolt, Jessica R; Caine, Nancy G

    2016-09-01

Given the prevalence of threatening snakes in the evolutionary history and modern-day environments of human and nonhuman primates, sensory and perceptual abilities that allow for quick detection of, and appropriate response to, snakes are likely to have evolved. Many studies have demonstrated that primates recognize snakes faster than other stimuli, and it is suggested that the unique serpentine shape is responsible for this quick detection. However, many serpentine shapes in the environment (e.g., vines) are not threatening; therefore, other cues must be used to distinguish threatening from benign serpentine objects. In two experiments, we systematically evaluated how common marmosets (Callithrix jacchus) visually attend to specific snake-like features. In the first experiment, we examined whether skin pattern is a cue that elicits increased visual inspection of serpentine shapes by measuring the amount of time the marmosets looked into a blind before, during, and after presentation of clay models with and without patterns. The marmosets spent the most time looking at the objects, both serpentine and triangular, that were etched with scales, suggesting that something may be uniquely salient about scales in evoking attention. In contrast, they showed relatively little interest in the unpatterned serpentine and control (a triangle) stimuli. In experiment 2, we replicated and extended the results of experiment 1 by adding additional stimulus conditions. We found that patterns on a serpentine shape generated more inspection than the same patterns on a triangular shape. We were unable to confirm that a scaled pattern is unique in its ability to elicit visual interest; the scaled models elicited looking times similar to those of line and star patterns. Our data provide a foundation for future research to examine how snakes are detected and identified by primates. Am. J. Primatol. 78:928-936, 2016. © 2016 Wiley Periodicals, Inc. PMID:27225979

  2. Micro and regular saccades across the lifespan during a visual search of "Where's Waldo" puzzles.

    PubMed

    Port, Nicholas L; Trimberger, Jane; Hitzeman, Steve; Redick, Bryan; Beckerman, Stephen

    2016-01-01

Despite the fact that different aspects of visual-motor control mature at different rates and aging is associated with declines in both sensory and motor function, little is known about the relationship between microsaccades and either development or aging. Using a sample of 343 individuals ranging in age from 4 to 66 and a task that has been shown to elicit a high frequency of microsaccades (solving Where's Waldo puzzles), we explored microsaccade frequency and kinematics (main sequence curves) as a function of age. Taking advantage of the large size of our dataset (183,893 saccades), we also address (a) the saccade amplitude limit at which video eye trackers are able to accurately measure microsaccades and (b) the degree and consistency of saccade kinematics at varying amplitudes and directions. Using a modification of the Engbert-Mergenthaler saccade detector, we found that even the smallest amplitude movements (0.25-0.5°) demonstrate basic saccade kinematics. With regard to development and aging, both microsaccade and regular saccade frequency exhibited a very small increase across the lifespan. Visual search ability, as with many other aspects of visual performance, exhibited a U-shaped function over the lifespan. Finally, both large horizontal and moderate vertical directional biases were detected for all saccade sizes. PMID:26049037
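
    The velocity-threshold idea behind the Engbert-Mergenthaler family of detectors can be sketched as follows. This is a simplified one-dimensional version (the published detectors work in two dimensions and add duration and binocularity criteria); the sampling rate, jitter trace, and threshold multiplier are illustrative values, not the study's.

    ```python
    # One-dimensional sketch of a velocity-threshold detector: a sample is
    # saccadic when its velocity exceeds `lam` times a median-based (robust)
    # estimate of the velocity spread. Sampling rate (500 Hz), the jitter
    # trace, and the step size are illustrative values only.

    def detect_saccade_samples(x, dt=1 / 500.0, lam=6.0):
        """Return indices of position samples whose velocity is suprathreshold."""
        v = [(x[i + 1] - x[i - 1]) / (2 * dt) for i in range(1, len(x) - 1)]
        med = sorted(v)[len(v) // 2]
        # median-based standard deviation: robust to the saccade samples themselves
        msd = sorted((vi - med) ** 2 for vi in v)[len(v) // 2] ** 0.5
        return [i + 1 for i, vi in enumerate(v) if abs(vi) > lam * msd]

    # Synthetic trace: small periodic jitter plus a rapid 0.5 deg shift at sample 20.
    jitter = [0.0, 0.001, 0.0, -0.001]
    x = [jitter[i % 4] + (0.5 if i >= 20 else 0.0) for i in range(40)]
    print(detect_saccade_samples(x))  # → [19, 20]
    ```

    The median-based spread estimate is the key design choice: an ordinary standard deviation would be inflated by the saccade itself, raising the threshold and hiding small movements.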

  3. Visual Circuit Development Requires Patterned Activity Mediated by Retinal Acetylcholine Receptors

    PubMed Central

    Burbridge, Timothy J.; Xu, Hong-Ping; Ackman, James B.; Ge, Xinxin; Zhang, Yueyi; Ye, Mei-Jun; Zhou, Z. Jimmy; Xu, Jian; Contractor, Anis; Crair, Michael C.

    2014-01-01

The elaboration of nascent synaptic connections into highly ordered neural circuits is an integral feature of the developing vertebrate nervous system. In sensory systems, patterned spontaneous activity before the onset of sensation is thought to influence this process, but this conclusion remains controversial largely due to the inherent difficulty recording neural activity in early development. Here, we describe novel genetic and pharmacological manipulations of spontaneous retinal activity, assayed in vivo, that demonstrate a causal link between retinal waves and visual circuit refinement. We also report a decoupling of downstream activity in retinorecipient regions of the developing brain after retinal wave disruption. Significantly, we show that the spatiotemporal characteristics of retinal waves affect the development of specific visual circuits. These results conclusively establish retinal waves as necessary and instructive for circuit refinement in the developing nervous system and reveal how neural circuits adjust to altered patterns of activity prior to experience. PMID:25466916

  4. Visual Scanning Patterns during the Dimensional Change Card Sorting Task in Children with Autism Spectrum Disorder

    PubMed Central

    Yi, Li; Liu, Yubing; Li, Yunyi; Fan, Yuebo; Huang, Dan; Gao, Dingguo

    2012-01-01

Impaired cognitive flexibility in children with autism spectrum disorder (ASD) has been reported in previous literature. The present study explored ASD children's visual scanning patterns during the Dimensional Change Card Sorting (DCCS) task using an eye-tracking technique. ASD and typically developing (TD) children completed the standardized DCCS procedure on the computer while their eye movements were tracked. Behavioral results confirmed previous findings on ASD children's deficits in executive function. ASD children's visual scanning patterns also revealed some specific underlying processes in the DCCS task compared to TD children. For example, ASD children looked at the correct card for a shorter time in the postswitch phase and spent more time on blank areas than TD children did. ASD children also did not show a bias to the color dimension as TD children did. The correlations between behavioral performance and eye movements are also discussed. PMID:23050145

  5. Visual circuit development requires patterned activity mediated by retinal acetylcholine receptors.

    PubMed

    Burbridge, Timothy J; Xu, Hong-Ping; Ackman, James B; Ge, Xinxin; Zhang, Yueyi; Ye, Mei-Jun; Zhou, Z Jimmy; Xu, Jian; Contractor, Anis; Crair, Michael C

    2014-12-01

    The elaboration of nascent synaptic connections into highly ordered neural circuits is an integral feature of the developing vertebrate nervous system. In sensory systems, patterned spontaneous activity before the onset of sensation is thought to influence this process, but this conclusion remains controversial, largely due to the inherent difficulty recording neural activity in early development. Here, we describe genetic and pharmacological manipulations of spontaneous retinal activity, assayed in vivo, that demonstrate a causal link between retinal waves and visual circuit refinement. We also report a decoupling of downstream activity in retinorecipient regions of the developing brain after retinal wave disruption. Significantly, we show that the spatiotemporal characteristics of retinal waves affect the development of specific visual circuits. These results conclusively establish retinal waves as necessary and instructive for circuit refinement in the developing nervous system and reveal how neural circuits adjust to altered patterns of activity prior to experience. PMID:25466916

  6. Giant honeybees (Apis dorsata) mob wasps away from the nest by directed visual patterns.

    PubMed

    Kastberger, Gerald; Weihmann, Frank; Zierler, Martina; Hötzl, Thomas

    2014-11-01

The open nesting behaviour of giant honeybees (Apis dorsata) accounts for the evolution of a series of defence strategies to protect the colonies from predation. In particular, the concerted action of shimmering behaviour is known to effectively confuse and repel predators. In shimmering, bees on the nest surface flip their abdomens in a highly coordinated manner to generate Mexican wave-like patterns. The paper documents a further capacity of this kind of collective defence: the visual patterns of shimmering waves align in their directional characteristics with the projected flight manoeuvres of the wasps when preying in front of the bees' nest. The honeybees here take advantage of a threefold asymmetry intrinsic to the prey-predator interaction: (a) the visual patterns of shimmering turn faster than the wasps on their flight path, (b) they "follow" the wasps more persistently (up to 100 ms) than the wasps "follow" the shimmering patterns (up to 40 ms) and (c) the shimmering patterns align with the wasps' flight in all directions at the same strength, whereas the wasps have some preference for horizontal correspondence. The findings give evidence that shimmering honeybees utilize directional alignment to enforce their repelling power against preying wasps. This phenomenon can be identified as predator driving, which is generally associated with mobbing behaviour (particularly known in selfish herds of vertebrate species) and which has, until now, not been reported in insects. PMID:25169944

  7. Giant honeybees (Apis dorsata) mob wasps away from the nest by directed visual patterns

    NASA Astrophysics Data System (ADS)

    Kastberger, Gerald; Weihmann, Frank; Zierler, Martina; Hötzl, Thomas

    2014-08-01

The open nesting behaviour of giant honeybees (Apis dorsata) accounts for the evolution of a series of defence strategies to protect the colonies from predation. In particular, the concerted action of shimmering behaviour is known to effectively confuse and repel predators. In shimmering, bees on the nest surface flip their abdomens in a highly coordinated manner to generate Mexican wave-like patterns. The paper documents a further capacity of this kind of collective defence: the visual patterns of shimmering waves align in their directional characteristics with the projected flight manoeuvres of the wasps when preying in front of the bees' nest. The honeybees here take advantage of a threefold asymmetry intrinsic to the prey-predator interaction: (a) the visual patterns of shimmering turn faster than the wasps on their flight path, (b) they "follow" the wasps more persistently (up to 100 ms) than the wasps "follow" the shimmering patterns (up to 40 ms) and (c) the shimmering patterns align with the wasps' flight in all directions at the same strength, whereas the wasps have some preference for horizontal correspondence. The findings give evidence that shimmering honeybees utilize directional alignment to enforce their repelling power against preying wasps. This phenomenon can be identified as predator driving, which is generally associated with mobbing behaviour (particularly known in selfish herds of vertebrate species) and which has, until now, not been reported in insects.

  8. Giant honeybees (Apis dorsata) mob wasps away from the nest by directed visual patterns

    NASA Astrophysics Data System (ADS)

    Kastberger, Gerald; Weihmann, Frank; Zierler, Martina; Hötzl, Thomas

    2014-11-01

The open nesting behaviour of giant honeybees (Apis dorsata) accounts for the evolution of a series of defence strategies to protect the colonies from predation. In particular, the concerted action of shimmering behaviour is known to effectively confuse and repel predators. In shimmering, bees on the nest surface flip their abdomens in a highly coordinated manner to generate Mexican wave-like patterns. The paper documents a further capacity of this kind of collective defence: the visual patterns of shimmering waves align in their directional characteristics with the projected flight manoeuvres of the wasps when preying in front of the bees' nest. The honeybees here take advantage of a threefold asymmetry intrinsic to the prey-predator interaction: (a) the visual patterns of shimmering turn faster than the wasps on their flight path, (b) they "follow" the wasps more persistently (up to 100 ms) than the wasps "follow" the shimmering patterns (up to 40 ms) and (c) the shimmering patterns align with the wasps' flight in all directions at the same strength, whereas the wasps have some preference for horizontal correspondence. The findings give evidence that shimmering honeybees utilize directional alignment to enforce their repelling power against preying wasps. This phenomenon can be identified as predator driving, which is generally associated with mobbing behaviour (particularly known in selfish herds of vertebrate species) and which has, until now, not been reported in insects.

  9. Exploratory Data Analysis Using a Dedicated Visualization App: Looking for Patterns in Volcanic Activity

    NASA Astrophysics Data System (ADS)

    van Manen, S. M.; Chen, S.

    2015-12-01

Here we present an App designed to visualize and identify patterns in volcanic activity during the last ten years. It visualizes VEI (volcanic explosivity index) levels, population size, frequency of activity, and geographic region, and is designed to address the issue of oversampling of data. A large data set is often hard to digest without visual aid, appearing scattered at first glance. This App serves as a model that solves this issue and can be applied to other data. To enable users to quickly assess the large data set, it breaks down the apparently chaotic abundance of information into categories and graphic indicators: color is used to indicate the VEI level, size for the population within 5 km of a volcano, line thickness for frequency of activity, and a grid to pinpoint a volcano's latitude. The categories and layers within them can be turned on and off by the user, enabling them to scroll through and compare different layers of data. By visualizing the data this way, patterns begin to emerge. For example, certain geographic regions had more explosive eruptions than others. Another good example is that low-frequency, larger-impact volcanic eruptions occurred more irregularly than smaller-impact eruptions, which had more stable frequencies. Although these findings are not unexpected, the easy-to-navigate App does showcase the potential of data visualization for the rapid appraisal of complex and abundant multi-dimensional geoscience data.

  10. Object-scene relationships vary the magnitude of target prevalence effects in visual search.

    PubMed

    Beanland, Vanessa; Le, Rebecca K; Byrne, Jamie E M

    2016-06-01

Efficiency of visual search in real-world tasks is affected by several factors, including scene context and target prevalence. Observers are more efficient at detecting target objects in congruent locations, and less efficient at detecting rare targets. Although target prevalence and placement often covary, previous research has investigated context and prevalence effects independently. We conducted 2 experiments to explore the potential interaction between scene context and target prevalence effects. In Experiment 1, we varied target prevalence (high, low) and context (congruent, incongruent), and, for congruent contexts, target location (typical, atypical). Experiment 2 focused on the interaction between target prevalence (high, low) and location (typical, atypical) for congruent contexts, and recorded observers' eye movements to examine search strategies. Observers were poorer at detecting low versus high prevalence targets; however, prevalence effects were significantly reduced for targets in typical, congruent locations compared with atypical or incongruent locations. Eye movement analyses in Experiment 2 revealed this was related to observers dwelling disproportionately on the most typical target locations within a scene. This suggests that a byproduct of contextual guidance within scenes is that placing targets in unexpected or atypical locations will further increase miss rates for uncommon targets, which has implications for real-world situations in which rare targets appear in unexpected locations. Although prevalence effects are robust, our results suggest potential for mitigating the negative consequences of low prevalence through targeted training that teaches observers where to focus their search. (PsycINFO Database Record) PMID:26618623

  11. Visual search is postponed during the period of the AB: An event-related potential study.

    PubMed

    Lagroix, Hayley E P; Grubert, Anna; Spalek, Thomas M; Di Lollo, Vincent; Eimer, Martin

    2015-08-01

    In the phenomenon known as the attentional blink (AB), perception of the second of two rapidly sequential targets (T2) is impaired when presented shortly after the first (T1). Studies in which T2 consisted of a pop-out search array provided evidence suggesting that visual search is postponed during the AB. In the present work, we used behavioral and electrophysiological measures to test this postponement hypothesis. The behavioral measure was reaction time (RT) to T2; the electrophysiological measure was the onset latency of an ERP index of attentional selection, known as the N2pc. Consistent with the postponement hypothesis, both measures were delayed during the AB. The delay in N2pc was substantially shorter than that in RT, pointing to multiple sources of delay in the chain of processing events, as distinct from the single source postulated in current theories of the AB. Finally, the finding that the N2pc was delayed during the AB strongly suggests that attention is involved in the processing of pop-out search arrays. PMID:25871502

  12. SVM-based visual-search model observers for PET tumor detection

    NASA Astrophysics Data System (ADS)

    Gifford, Howard C.; Sen, Anando; Azencott, Robert

    2015-03-01

    Many search-capable model observers follow task paradigms that specify clinically unrealistic prior knowledge about the anatomical backgrounds in study images. Visual-search (VS) observers, which implement distinct, feature-based candidate search and analysis stages, may provide a means of avoiding such paradigms. However, VS observers that conduct single-feature analysis have not been reliable in the absence of any background information. We investigated whether a VS observer based on multifeature analysis can overcome this background dependence. The testbed was a localization ROC (LROC) study with simulated whole-body PET images. Four target-dependent morphological features were defined in terms of 2D cross-correlations involving a known tumor profile and the test image. The feature values at the candidate locations in a set of training images were fed to a support-vector machine (SVM) to compute a linear discriminant that classified locations as tumor-present or tumor-absent. The LROC performance of this SVM-based VS observer was compared against the performances of human observers and a pair of existing model observers.
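
    The two-stage structure described here (candidate search, then feature-based analysis with a trained linear discriminant) can be sketched as follows. For brevity a single feature is used, a cross-correlation with a known tumor profile, and the discriminant weight and bias are hand-picked stand-ins for the values an SVM would learn from the training images; the study itself used four such features.

    ```python
    # Structural sketch of the VS observer's analysis stage: score candidate
    # locations with a linear discriminant over morphological features. The
    # single feature, the toy image, and the weight/bias are illustrative
    # stand-ins, not the study's actual features or trained SVM values.

    def cross_correlation(image, profile, r, c):
        """Feature: 2D cross-correlation of the profile with the image patch
        centred at (r, c). Inputs are lists of lists of floats."""
        h, w = len(profile), len(profile[0])
        return sum(profile[i][j] * image[r - h // 2 + i][c - w // 2 + j]
                   for i in range(h) for j in range(w))

    def score_candidates(image, profile, candidates, weight=1.0, bias=-1.0):
        """Linear discriminant on the feature; positive score = 'tumor present'."""
        return {(r, c): weight * cross_correlation(image, profile, r, c) + bias
                for r, c in candidates}

    image = [[0.1] * 7 for _ in range(7)]             # flat background (made up)
    image[2][2] = 1.0                                 # a bright "tumor" pixel
    profile = [[0.0, 1.0, 0.0], [1.0, 1.0, 1.0], [0.0, 1.0, 0.0]]
    scores = score_candidates(image, profile, [(2, 2), (5, 5)])
    print(scores[(2, 2)] > 0, scores[(5, 5)] > 0)     # → True False
    ```

    In the study, the feature vector at each candidate location is fed to the SVM-trained discriminant; the highest-scoring location then drives the localization response.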

  13. Linking pattern completion in the hippocampus to predictive coding in visual cortex.

    PubMed

    Hindy, Nicholas C; Ng, Felicia Y; Turk-Browne, Nicholas B

    2016-05-01

    Models of predictive coding frame perception as a generative process in which expectations constrain sensory representations. These models account for expectations about how a stimulus will move or change from moment to moment, but do not address expectations about what other, distinct stimuli are likely to appear based on prior experience. We show that such memory-based expectations in human visual cortex are related to the hippocampal mechanism of pattern completion. PMID:27065363

  14. Pattern reversal visual evoked potentials in Japanese patients with multiple sclerosis.

    PubMed Central

    Shibasaki, H; Kuroiwa, Y

    1982-01-01

    Forty-seven Japanese patients with multiple sclerosis, 29 probable (clinically definite) and 18 possible, were studied by black-and-white checkerboard pattern reversal visual evoked potential and were compared with a control group of 20 healthy young adults. The major positive peak (P100) was found to be abnormal in 70% of all cases, 90% of probable cases and 39% of possible cases. P100 was delayed in 38% of all cases and was absent in 23% of all cases. None of the eyes showing a flat pattern response was in the acute stage of optic neuritis. The percentage of cases with no response (23% of all cases) was greater than any of the previously reported series from Western countries, substantiating the previously reported clinical features of oriental multiple sclerosis. The pattern response was absent only when testing eyes with severe visual impairment, whereas delayed latency of P100 was seen regardless of the severity of visual impairment, suggesting the usefulness of P100 latency for detecting subclinical optic nerve lesions. PMID:7161609

  15. Self-Organization of Spatio-Temporal Hierarchy via Learning of Dynamic Visual Image Patterns on Action Sequences.

    PubMed

    Jung, Minju; Hwang, Jungsik; Tani, Jun

    2015-01-01

    It is well known that the visual cortex efficiently processes high-dimensional spatial information by using a hierarchical structure. Recently, computational models that were inspired by the spatial hierarchy of the visual cortex have shown remarkable performance in image recognition. Up to now, however, most biological and computational modeling studies have mainly focused on the spatial domain and do not discuss temporal domain processing of the visual cortex. Several studies on the visual cortex and other brain areas associated with motor control support that the brain also uses its hierarchical structure as a processing mechanism for temporal information. Based on the success of previous computational models using spatial hierarchy and temporal hierarchy observed in the brain, the current report introduces a novel neural network model for the recognition of dynamic visual image patterns based solely on the learning of exemplars. This model is characterized by the application of both spatial and temporal constraints on local neural activities, resulting in the self-organization of a spatio-temporal hierarchy necessary for the recognition of complex dynamic visual image patterns. The evaluation with the Weizmann dataset in recognition of a set of prototypical human movement patterns showed that the proposed model is significantly robust in recognizing dynamically occluded visual patterns compared to other baseline models. Furthermore, an evaluation test for the recognition of concatenated sequences of those prototypical movement patterns indicated that the model is endowed with a remarkable capability for the contextual recognition of long-range dynamic visual image patterns. PMID:26147887
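
    The notion of a temporal constraint on local neural activities can be illustrated with leaky-integrator units. This is the generic continuous-time update, not the paper's full network: a unit's time constant tau bounds how fast its activity can change, so stacking small-tau (fast) and large-tau (slow) units is one way to obtain a temporal hierarchy. The tau values and the input below are illustrative.

    ```python
    # Sketch of the "temporal constraint" idea: leaky-integrator units whose
    # time constant tau sets how quickly activity can change, so layers with
    # larger tau respond on slower time scales. Generic CTRNN-style update;
    # the tau values and step input are illustrative only.

    def leaky_step(u, inp, tau):
        """One Euler update of state u toward input inp; larger tau = slower."""
        return u + (inp - u) / tau

    fast, slow = 0.0, 0.0
    trace_fast, trace_slow = [], []
    for t in range(10):
        inp = 1.0 if t < 5 else 0.0        # brief step input
        fast = leaky_step(fast, inp, tau=2.0)
        slow = leaky_step(slow, inp, tau=20.0)
        trace_fast.append(fast)
        trace_slow.append(slow)
    # the fast unit tracks the input closely; the slow unit smooths over it
    print(trace_fast[4] > trace_slow[4], trace_fast[9] < trace_slow[9])  # → True True
    ```

    Slow units integrate over many frames of fast-unit activity, which is the mechanism that lets the higher levels of such a hierarchy represent long-range context in dynamic visual patterns.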

  16. Self-Organization of Spatio-Temporal Hierarchy via Learning of Dynamic Visual Image Patterns on Action Sequences

    PubMed Central

    Jung, Minju; Hwang, Jungsik; Tani, Jun

    2015-01-01

    It is well known that the visual cortex efficiently processes high-dimensional spatial information by using a hierarchical structure. Recently, computational models that were inspired by the spatial hierarchy of the visual cortex have shown remarkable performance in image recognition. Up to now, however, most biological and computational modeling studies have mainly focused on the spatial domain and do not discuss temporal domain processing of the visual cortex. Several studies on the visual cortex and other brain areas associated with motor control support that the brain also uses its hierarchical structure as a processing mechanism for temporal information. Based on the success of previous computational models using spatial hierarchy and temporal hierarchy observed in the brain, the current report introduces a novel neural network model for the recognition of dynamic visual image patterns based solely on the learning of exemplars. This model is characterized by the application of both spatial and temporal constraints on local neural activities, resulting in the self-organization of a spatio-temporal hierarchy necessary for the recognition of complex dynamic visual image patterns. The evaluation with the Weizmann dataset in recognition of a set of prototypical human movement patterns showed that the proposed model is significantly robust in recognizing dynamically occluded visual patterns compared to other baseline models. Furthermore, an evaluation test for the recognition of concatenated sequences of those prototypical movement patterns indicated that the model is endowed with a remarkable capability for the contextual recognition of long-range dynamic visual image patterns. PMID:26147887

  17. Case study of visualizing global user download patterns using Google Earth and NASA World Wind

    SciTech Connect

    Zong, Ziliang; Job, Joshua; Zhang, Xuesong; Nijim, Mais; Qin, Xiao

    2012-10-09

Geo-visualization is significantly changing the way we view spatial data and discover information. On the one hand, a large number of spatial data are generated every day. On the other hand, these data are not well utilized due to the lack of free and easily used data-visualization tools. This becomes even worse when most of the spatial data remains in the form of plain text such as log files. This paper describes a way of visualizing massive plain-text spatial data at no cost by utilizing Google Earth and NASA World Wind. We illustrate our methods by visualizing over 170,000 global download requests for satellite images maintained by the Earth Resources Observation and Science (EROS) Center of the U.S. Geological Survey (USGS). Our visualization results identify the most popular satellite images around the world and discover the global user download patterns. The benefits of this research are: (1) assisting in improving the satellite image downloading services provided by USGS, and (2) providing a proxy for analyzing hot-spot areas of research. Most importantly, our methods demonstrate an easy way to geovisualize massive textual spatial data, which is highly applicable to mining spatially referenced data and information in a wide variety of research domains (e.g., hydrology, agriculture, atmospheric science, natural hazards, and global climate change).
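
    The logs-to-geovisualization step can be sketched with standard KML, which Google Earth loads directly (and which World Wind clients can also display). The `"<timestamp> <lat> <lon> <scene_id>"` log format below is invented for illustration; only the parsing line would change for the actual EROS logs.

    ```python
    # Sketch: aggregate per-location download counts from plain-text logs
    # and emit KML placemarks. The 4-field log format is an assumption made
    # for illustration, not the real EROS log layout.

    from collections import Counter

    def logs_to_kml(lines):
        counts = Counter()
        for line in lines:
            _, lat, lon, _ = line.split()          # assumed "<ts> <lat> <lon> <id>"
            counts[(float(lat), float(lon))] += 1
        placemarks = "".join(
            f"<Placemark><name>{n} downloads</name>"
            f"<Point><coordinates>{lon},{lat},0</coordinates></Point></Placemark>"
            for (lat, lon), n in sorted(counts.items()))
        return ('<?xml version="1.0" encoding="UTF-8"?>'
                '<kml xmlns="http://www.opengis.net/kml/2.2">'
                f'<Document>{placemarks}</Document></kml>')

    lines = ["2012-01-01T00:00Z 38.9 -77.0 LT50150332011",
             "2012-01-02T00:00Z 38.9 -77.0 LT50150332011",
             "2012-01-03T00:00Z -33.9 18.4 LE70170832012"]
    kml = logs_to_kml(lines)
    print("2 downloads" in kml)  # → True
    ```

    Note that KML's `<coordinates>` element takes longitude before latitude; getting that order wrong is the most common mistake when generating KML from lat/lon logs.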

  18. Differential Expression Patterns of occ1-Related Genes in Adult Monkey Visual Cortex

    PubMed Central

    Takahata, Toru; Komatsu, Yusuke; Watakabe, Akiya; Hashikawa, Tsutomu; Tochitani, Shiro

    2009-01-01

We have previously revealed that occ1 is preferentially expressed in the primary visual area (V1) of the monkey neocortex. In our attempt to identify more area-selective genes in the macaque neocortex, we found that testican-1, an occ1-related gene, and its family members also exhibit characteristic expression patterns along the visual pathway. The expression levels of testican-1 and testican-2 mRNAs as well as that of occ1 mRNA start off high in V1, progressively decrease along the ventral visual pathway, and end up low in the temporal areas. Complementary to them, the neuronal expression of SPARC mRNA is abundant in the association areas and scarce in V1. Whereas occ1, testican-1, and testican-2 mRNAs are preferentially distributed in thalamorecipient layers including "blobs," SPARC mRNA expression avoids these layers. Neither SC1 nor testican-3 mRNA expression is selective to particular areas, but SC1 mRNA is abundantly observed in blobs. The expression of occ1, testican-1, testican-2, and SC1 mRNAs was downregulated after monocular tetrodotoxin injection. These results resonate with previous works on chemical and functional gradients along the primate occipitotemporal visual pathway and raise the possibility that these gradients and functional architecture may be related to the visual activity-dependent expression of these extracellular matrix glycoproteins. PMID:19073625

  19. Irrelevant singletons in visual search do not capture attention but can produce nonspatial filtering costs.

    PubMed

    Wykowska, Agnieszka; Schubö, Anna

    2011-03-01

    It is not clear how salient distractors affect visual processing. The debate concerning the issue of whether irrelevant salient items capture spatial attention [e.g., Theeuwes, J., Atchley, P., & Kramer, A. F. On the time course of top-down and bottom-up control of visual attention. In S. Monsell & J. Driver (Eds.), Attention and performance XVIII: Control of cognitive performance (pp. 105-124). Cambridge, MA: MIT Press, 2000] or produce only nonspatial interference in the form of, for example, filtering costs [Folk, Ch. L., & Remington, R. Top-down modulation of preattentive processing: Testing the recovery account of contingent capture. Visual Cognition, 14, 445-465, 2006] has not yet been settled. The present ERP study examined deployment of attention in visual search displays that contained an additional irrelevant singleton. Display-locked N2pc showed that attention was allocated to the target and not to the irrelevant singleton. However, the onset of the N2pc to the target was delayed when the irrelevant singleton was presented in the opposite hemifield relative to the same hemifield. Thus, although attention was successfully focused on the target, the irrelevant singleton produced some interference resulting in a delayed allocation of attention to the target. A subsequent probe discrimination task allowed for locking ERPs to probe onsets and investigating the dynamics of sensory gain control for probes appearing at relevant (target) or irrelevant (singleton distractor) positions. Probe-locked P1 showed sensory gain for probes positioned at the target location but no such effect for irrelevant singletons in the additional singleton condition. Taken together, the present data support the claim that irrelevant singletons do not capture attention. If they produce any interference, it is rather due to nonspatial filtering costs. PMID:19929330

  20. Flexible Feature-Based Inhibition in Visual Search Mediates Magnified Impairments of Selection: Evidence from Carry-Over Effects under Dynamic Preview-Search Conditions

    ERIC Educational Resources Information Center

    Andrews, Lucy S.; Watson, Derrick G.; Humphreys, Glyn W.; Braithwaite, Jason J.

    2011-01-01

    Evidence for inhibitory processes in visual search comes from studies using preview conditions, where responses to new targets are delayed if they carry a featural attribute belonging to the old distractor items that are currently being ignored--the negative carry-over effect (Braithwaite, Humphreys, & Hodsoll, 2003). We examined whether…

  1. Responses of neurones in the cat's visual cerebral cortex to relative movement of patterns

    PubMed Central

    Burns, B. Delisle; Gassanov, U.; Webb, A. C.

    1972-01-01

    1. We have investigated the responses of single neurones in the visual cerebral cortex of the unanaesthetized, isolated cat's forebrain to excitation of one retina with patterned light. The responses of twenty-six cells to the relative movement of two patterns in the visual field have been recorded. 2. We used several forms of relative movement for stimulation, but all of them involved a change in the separation of two parallel and straight light-dark edges. 3. Responses to this form of stimulation were compared with the responses of the same cells to simple movement, that is, movement of the same patterns without change of distance between their borders. 4. All cells showed a response to relative movement that differed from their response to simple movement. 5. The time-locked phasic response differed in 54% of the cells tested. Of cells responding in this way, 83% of tests produced an increased phasic response. 6. Relative movement brought about changes in the mean frequency of discharge in 96% of the cells tested. 82% of these cells responded with an increased rate of firing. 7. Movement relative to a coarse background pattern affected more neurones and produced a greater change in their behaviour than did movement relative to a fine-grained pattern. 8. The neurones tested represented the central part of the visual field (0-10°); while all were affected by relative movement, those representing points furthest from the optic axis appeared to be most susceptible (we found no correlation between size of receptive field and distance from the optic axis). PMID:5083167

  2. The guidance of spatial attention during visual search for color combinations and color configurations.

    PubMed

    Berggren, Nick; Eimer, Martin

    2016-09-01

Representations of target-defining features (attentional templates) guide the selection of target objects in visual search. We used behavioral and electrophysiological measures to investigate how such search templates control the allocation of attention in search tasks where targets are defined by the combination of 2 colors or by a specific spatial configuration of these colors. Target displays were preceded by spatially uninformative cue displays that contained items in 1 or both target-defining colors. Experiments 1 and 2 demonstrated that, during search for color combinations, attention is initially allocated independently and in parallel to all objects with target-matching colors, but is then rapidly withdrawn from objects that only have 1 of the 2 target colors. In Experiment 3, targets were defined by a particular spatial configuration of 2 colors, and could be accompanied by nontarget objects with a different configuration of the same colors. Attentional guidance processes were unable to distinguish between these 2 types of objects. Both attracted attention equally when they appeared in a cue display, and both received parallel focal-attentional processing and were encoded into working memory when they were presented in the same target display. Results demonstrate that attention can be guided simultaneously by multiple features from the same dimension, but that these guidance processes have no access to the spatial-configural properties of target objects. They suggest that attentional templates do not represent target objects in an integrated pictorial fashion, but contain separate representations of target-defining features. (PsycINFO Database Record) PMID:26962846

  3. Structator: fast index-based search for RNA sequence-structure patterns

    PubMed Central

    2011-01-01

Background The secondary structure of RNA molecules is intimately related to their function and often more conserved than the sequence. Hence, the important task of searching databases for RNAs requires matching sequence-structure patterns. Unfortunately, current tools for this task have, in the best case, a running time that is only linear in the size of sequence databases. Furthermore, established index data structures for fast sequence matching, like suffix trees or arrays, cannot benefit from the complementarity constraints introduced by the secondary structure of RNAs. Results We present a novel method and readily applicable software for time efficient matching of RNA sequence-structure patterns in sequence databases. Our approach is based on affix arrays, a recently introduced index data structure, preprocessed from the target database. Affix arrays support bidirectional pattern search, which is required for efficiently handling the structural constraints of the pattern. Structural patterns like stem-loops can be matched inside out, such that the loop region is matched first and then the pairing bases on the boundaries are matched consecutively. This allows base pairing information to be exploited for search space reduction and leads to an expected running time that is sublinear in the size of the sequence database. The incorporation of a new chaining approach in the search of RNA sequence-structure patterns enables the description of molecules folding into complex secondary structures with multiple ordered patterns. The chaining approach removes spurious matches from the set of intermediate results, in particular of patterns with little specificity. In benchmark experiments on the Rfam database, our method runs up to two orders of magnitude faster than previous methods. Conclusions The presented method's sublinear expected running time makes it well suited for RNA sequence-structure pattern matching in large sequence databases. RNA molecules containing several
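The inside-out matching idea for stem-loops can be illustrated with a short sketch: anchor the loop sequence first, then extend outward while requiring Watson-Crick complementary pairs. This is a simplified linear scan for illustration only (`find_stem_loops` and `COMPLEMENT` are hypothetical names, not Structator's API); Structator itself achieves sublinear expected time via affix arrays.

```python
# Watson-Crick base pairing for RNA.
COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def find_stem_loops(seq, loop, stem_len):
    """Return (start, end) spans of stem-loops in `seq`: stem_len paired
    bases, then `loop` verbatim, then the complementary closing stem.
    Matching proceeds inside out: locate the loop, then extend outward."""
    hits = []
    pos = seq.find(loop)
    while pos != -1:
        left, right = pos - 1, pos + len(loop)
        pairs = 0
        # Extend outward from the loop, checking complementarity.
        while pairs < stem_len and left >= 0 and right < len(seq):
            if COMPLEMENT.get(seq[left]) != seq[right]:
                break
            left -= 1
            right += 1
            pairs += 1
        if pairs == stem_len:
            hits.append((left + 1, right))  # inclusive start, exclusive end
        pos = seq.find(loop, pos + 1)
    return hits
```

Failing a single pairing check prunes the candidate immediately, which is the search-space reduction the abstract describes.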

  4. Neural structures involved in visual search guidance by reward-enhanced contextual cueing of the target location.

    PubMed

    Pollmann, Stefan; Eštočinová, Jana; Sommer, Susanne; Chelazzi, Leonardo; Zinke, Wolf

    2016-01-01

    Spatial contextual cueing reflects an incidental form of learning that occurs when spatial distractor configurations are repeated in visual search displays. Recently, it was reported that the efficiency of contextual cueing can be modulated by reward. We replicated this behavioral finding and investigated its neural basis with fMRI. Reward value was associated with repeated displays in a learning session. The effect of reward value on context-guided visual search was assessed in a subsequent fMRI session without reward. Structures known to support explicit reward valuation, such as ventral frontomedial cortex and posterior cingulate cortex, were modulated by incidental reward learning. Contextual cueing, leading to more efficient search, went along with decreased activation in the visual search network. Retrosplenial cortex played a special role in that it showed both a main effect of reward and a reward×configuration interaction and may thereby be a central structure for the reward modulation of context-guided visual search. PMID:26427645

  5. Spatial patterns of visual cortical fast EEG during conditioned reflex in a rhesus monkey.

    PubMed

    Freeman, W J; van Dijk, B W

    1987-10-01

A preliminary assay was made of the existence of time-space coherence patterns of fast EEG activity in the visual cortex of a Rhesus monkey. The primary intent of the present study was to evaluate the similarities and differences in relation to the olfactory bulb, where such coherences have been described and have been demonstrated to be associated with behaviour. Segments 1.5 s in duration were recorded simultaneously without averaging from 16 to 35 subdural electrodes fixed over the left occipital lobe in an array 3.6 cm X 2.8 cm. Each segment was taken during the delivery of a visual conditioned stimulus (CS) and the performance of a conditioned response (CR) by a well-trained Rhesus monkey. The EEGs appeared chaotic with irregular bursts lasting 75-200 ms, resembling those in the olfactory EEG but with lower peak frequencies. Fourier spectra showed broad distributions of power resembling '1/f noise' with multiple peaks in the range of 20-40 Hz. Time intervals were selected where coherent activity seemed to be present at a number of electrodes. A dominant component waveform that was common to all channels was extracted by principal components analysis (PCA) of each segment. The distribution of the power of this component across the electrodes (the factor loadings) was used to describe the spatial pattern of the coherent cortical activity. Statistical analyses suggested that different patterns could be associated with the CS and the CR, as has been found in the olfactory system. These patterns remained stable over a 6 week recording interval. The patterns can be better discriminated when the factor loadings of each channel are normalized to zero mean and unit variance, to discard a basic pattern of power distribution, which may reflect anatomical and electrode positioning factors unrelated to behavioral information processing by the cortex. The wide spatial distribution of the common patterns found suggests that EEG patterns that manifest differing states
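The analysis pipeline described here, extracting a dominant waveform common to all channels and using its per-channel loadings (normalized to zero mean and unit variance) as the spatial pattern, can be sketched with a standard PCA via SVD. This is a generic illustration assuming NumPy, not the authors' analysis code:

```python
import numpy as np

def dominant_component(eeg):
    """eeg: (n_channels, n_samples) array. Returns the dominant common
    waveform and z-scored per-channel loadings (the spatial pattern)."""
    centered = eeg - eeg.mean(axis=1, keepdims=True)
    # Rows are channels; the first right singular vector is the waveform
    # common to all channels, and u[:, 0] * s[0] are its factor loadings.
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    waveform = vt[0]
    loadings = u[:, 0] * s[0]
    # Normalize loadings to zero mean / unit variance, as in the abstract,
    # to discard the baseline power distribution across electrodes.
    z = (loadings - loadings.mean()) / loadings.std()
    return waveform, z
```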

  6. Probability cueing influences miss rate and decision criterion in visual searches

    PubMed Central

    Ishibashi, Kazuya; Kita, Shinichi

    2014-01-01

    In visual search tasks, the ratio of target-present to target-absent trials has an important effect on miss rates. The low prevalence effect indicates that we are more likely to miss a target when it occurs rarely rather than frequently. In this study, we examined whether probability cueing modulates the miss rate and the observer's criterion. The results indicated that probability cueing affects miss rates, the average observer's criterion, and reaction time for target-absent trials. These results clearly demonstrate that probability cueing modulates two parameters (i.e., the decision criterion and the quitting threshold) and produces a low prevalence effect. Taken together, the current study and previous studies suggest that the miss rate is not just affected by global prevalence; it is also affected by probability cueing. PMID:25469223
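The decision criterion and sensitivity referred to above are standard signal detection theory quantities; a minimal sketch of how they are computed from hit and false-alarm rates (generic textbook formulas, not the authors' analysis code):

```python
from statistics import NormalDist

def sdt_measures(hit_rate, fa_rate):
    """Signal detection theory: sensitivity d' and criterion c.
    A positive c indicates a conservative observer, biased toward
    'absent' responses (more misses), as in the low prevalence effect."""
    z = NormalDist().inv_cdf  # inverse standard normal CDF
    d_prime = z(hit_rate) - z(fa_rate)
    c = -(z(hit_rate) + z(fa_rate)) / 2
    return d_prime, c
```

For example, a hit rate of .70 with a false-alarm rate of .20 yields a positive criterion (conservative bias), whereas a symmetric .80/.20 observer is unbiased (c = 0).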

  7. HSI-Find: A Visualization and Search Service for Terascale Spectral Image Catalogs

    NASA Astrophysics Data System (ADS)

    Thompson, D. R.; Smith, A. T.; Castano, R.; Palmer, E. E.; Xing, Z.

    2013-12-01

Imaging spectrometers are remote sensing instruments commonly deployed on aircraft and spacecraft. They provide surface reflectance in hundreds of wavelength channels, creating data cubes known as hyperspectral images. This rich compositional information makes them powerful tools for planetary and terrestrial science. These data products can be challenging to interpret because they contain datapoints numbering in the thousands (Dawn VIR) or millions (AVIRIS-C). Cross-image studies or exploratory searches involving more than one scene are rare; data volumes are often tens of GB per image and typical consumer-grade computers cannot store more than a handful of images in RAM. Visualizing the information in a single scene is challenging since the human eye can only distinguish three color channels out of the hundreds available. To date, analysis has been performed mostly on single images using purpose-built software tools that require extensive training and commercial licenses. The HSIFind software suite provides a scalable distributed solution to the problem of visualizing and searching large catalogs of spectral image data. It consists of a RESTful web service that communicates with a javascript-based browser client. The software provides basic visualization through an intuitive visual interface, allowing users with minimal training to explore the images or view selected spectra. Users can accumulate a library of spectra from one or more images and use these to search for similar materials. The result appears as an intensity map showing the extent of a spectral feature in a scene. Continuum removal can isolate diagnostic absorption features. The server-side mapping algorithm uses an efficient matched filter algorithm that can process a megapixel image cube in just a few seconds. This enables real-time interaction, leading to a new way of interacting with the data: the user can launch a search with a single mouse click and see the resulting map in seconds
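The server-side mapping step can be illustrated with a classical matched filter: whiten pixel spectra by the background covariance, then project each pixel onto the whitened target direction. This is a textbook formulation assumed for illustration; it is not HSIFind's actual implementation, and `matched_filter_map` is a hypothetical name.

```python
import numpy as np

def matched_filter_map(cube, target):
    """cube: (n_pixels, n_bands) reflectance; target: (n_bands,) spectrum.
    Classical matched filter: estimate the background mean and covariance,
    whiten, and score each pixel against the target spectrum."""
    mu = cube.mean(axis=0)
    x = cube - mu
    # Regularize the covariance slightly so the solve is well conditioned.
    cov = np.cov(x, rowvar=False) + 1e-6 * np.eye(cube.shape[1])
    w = np.linalg.solve(cov, target - mu)
    # Normalized so a pixel exactly matching the target scores ~1.
    return x @ w / ((target - mu) @ w)
```

Because the per-image statistics (`mu`, `cov`) can be precomputed, each user query reduces to one solve and one matrix-vector product, which is what makes second-scale interactive response plausible for a megapixel cube.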

  8. The Autism-Spectrum Quotient and Visual Search: Shallow and Deep Autistic Endophenotypes.

    PubMed

    Gregory, B L; Plaisted-Grant, K C

    2016-05-01

    A high Autism-Spectrum Quotient (AQ) score (Baron-Cohen et al. in J Autism Dev Disord 31(1):5-17, 2001) is increasingly used as a proxy in empirical studies of perceptual mechanisms in autism. Several investigations have assessed perception in non-autistic people measured for AQ, claiming the same relationship exists between performance on perceptual tasks in high-AQ individuals as observed in autism. We question whether the similarity in performance by high-AQ individuals and autistics reflects the same underlying perceptual cause in the context of two visual search tasks administered to a large sample of typical individuals assessed for AQ. Our results indicate otherwise and that deploying the AQ as a proxy for autism introduces unsubstantiated assumptions about high-AQ individuals, the endophenotypes they express, and their relationship to Autistic Spectrum Conditions (ASC) individuals. PMID:24077740

  9. NABIC: A New Access Portal to Search, Visualize, and Share Agricultural Genomics Data

    PubMed Central

    Seol, Young-Joo; Lee, Tae-Ho; Park, Dong-Suk; Kim, Chang-Kug

    2016-01-01

    The National Agricultural Biotechnology Information Center developed an access portal to search, visualize, and share agricultural genomics data with a focus on South Korean information and resources. The portal features an agricultural biotechnology database containing a wide range of omics data from public and proprietary sources. We collected 28.4 TB of data from 162 agricultural organisms, with 10 types of omics data comprising next-generation sequencing sequence read archive, genome, gene, nucleotide, DNA chip, expressed sequence tag, interactome, protein structure, molecular marker, and single-nucleotide polymorphism datasets. Our genomic resources contain information on five animals, seven plants, and one fungus, which is accessed through a genome browser. We also developed a data submission and analysis system as a web service, with easy-to-use functions and cutting-edge algorithms, including those for handling next-generation sequencing data. PMID:26848255

  10. Simultaneous tDCS-fMRI Identifies Resting State Networks Correlated with Visual Search Enhancement

    PubMed Central

    Callan, Daniel E.; Falcone, Brian; Wada, Atsushi; Parasuraman, Raja

    2016-01-01

This study uses simultaneous transcranial direct current stimulation (tDCS) and functional MRI (fMRI) to investigate tDCS modulation of resting state activity and connectivity that underlies enhancement in behavioral performance. The experiment consisted of three sessions within the fMRI scanner in which participants conducted a visual search task: Session 1: Pre-training (no performance feedback), Session 2: Training (performance feedback given), Session 3: Post-training (no performance feedback). Resting state activity was recorded during the last 5 min of each session. During the 2nd session one group of participants underwent 1 mA tDCS stimulation and another underwent sham stimulation over the right posterior parietal cortex. Resting state spontaneous activity, as measured by fractional amplitude of low frequency fluctuations (fALFF), for session 2 showed significant differences between the tDCS stim and sham groups in the precuneus. Resting state functional connectivity from the precuneus to the substantia nigra, a subcortical dopaminergic region, was found to correlate with future improvement in visual search task performance for the stim over the sham group during active stimulation in session 2. The after-effect of stimulation on resting state functional connectivity was measured following a post-training experimental session (session 3). The left cerebellum Lobule VIIa Crus I showed performance related enhancement in resting state functional connectivity for the tDCS stim over the sham group. The ability to relate an individual's strength of resting state functional connectivity during tDCS to future enhancement in behavioral performance has wide-ranging implications for neuroergonomic as well as therapeutic and rehabilitative applications. PMID:27014014

  11. Colour and pattern change against visually heterogeneous backgrounds in the tree frog Hyla japonica

    PubMed Central

    Kang, Changku; Kim, Ye Eun; Jang, Yikweon

    2016-01-01

Colour change in animals can be adaptive phenotypic plasticity in heterogeneous environments. Camouflage through background colour matching has been considered a primary force that drives the evolution of colour changing ability. However, the mechanism by which animals change their colour and patterns under visually heterogeneous backgrounds (i.e. consisting of more than one colour) has only been identified in limited taxa. Here, we investigated the colour change process of the Japanese tree frog (Hyla japonica) against patterned backgrounds and elucidated how the expression of dorsal patterns changes against various achromatic/chromatic backgrounds with/without patterns. Our main findings are i) frogs primarily responded to the achromatic differences in background, ii) their contrasting dorsal patterns were conditionally expressed dependent on the brightness of backgrounds, iii) against a mixed coloured background, frogs adopted intermediate forms between two colours. Using predator (avian and snake) vision models, we determined that colour differences against different backgrounds yielded perceptible changes in dorsal colours. We also found substantial individual variation in colour changing ability and the levels of dorsal pattern expression between individuals. We discuss the possibility of correlational selection on colour changing ability and resting behaviour that maintains the high variation in colour changing ability within a population. PMID:26932675

  12. Colour and pattern change against visually heterogeneous backgrounds in the tree frog Hyla japonica.

    PubMed

    Kang, Changku; Kim, Ye Eun; Jang, Yikweon

    2016-01-01

Colour change in animals can be adaptive phenotypic plasticity in heterogeneous environments. Camouflage through background colour matching has been considered a primary force that drives the evolution of colour changing ability. However, the mechanism by which animals change their colour and patterns under visually heterogeneous backgrounds (i.e. consisting of more than one colour) has only been identified in limited taxa. Here, we investigated the colour change process of the Japanese tree frog (Hyla japonica) against patterned backgrounds and elucidated how the expression of dorsal patterns changes against various achromatic/chromatic backgrounds with/without patterns. Our main findings are i) frogs primarily responded to the achromatic differences in background, ii) their contrasting dorsal patterns were conditionally expressed dependent on the brightness of backgrounds, iii) against a mixed coloured background, frogs adopted intermediate forms between two colours. Using predator (avian and snake) vision models, we determined that colour differences against different backgrounds yielded perceptible changes in dorsal colours. We also found substantial individual variation in colour changing ability and the levels of dorsal pattern expression between individuals. We discuss the possibility of correlational selection on colour changing ability and resting behaviour that maintains the high variation in colour changing ability within a population. PMID:26932675

  13. Search and retrieval of plasma wave forms: Structural pattern recognition approach

    NASA Astrophysics Data System (ADS)

    Dormido-Canto, S.; Farias, G.; Vega, J.; Dormido, R.; Sánchez, J.; Duro, N.; Santos, M.; Martin, J. A.; Pajares, G.

    2006-10-01

    Databases for fusion experiments are designed to store several million wave forms. Temporal evolution signals show the same patterns under the same plasma conditions and, therefore, pattern recognition techniques can allow identification of similar plasma behaviors. Further developments in this area must be focused on four aspects: large databases, feature extraction, similarity function, and search/retrieval efficiency. This article describes an approach for pattern searching within wave forms. The technique is performed in three stages. Firstly, the signals are filtered. Secondly, signals are encoded according to a discrete set of values (code alphabet). Finally, pattern recognition is carried out via string comparisons. The definition of code alphabets enables the description of wave forms as strings, instead of representing the signals in terms of multidimensional data vectors. An alphabet of just five letters can be enough to describe any signal. In this way, signals can be stored as a sequence of characters in a relational database, thereby allowing the use of powerful structured query languages to search for patterns and also ensuring quick data access.
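The encode-then-search idea described above can be sketched as follows: quantize each waveform into a string over a five-letter alphabet, then locate patterns by plain substring search (the query-language analogue would be a `LIKE` match). The quantization scheme and names are illustrative, not the authors' code.

```python
def encode(signal, lo, hi, alphabet="ABCDE"):
    """Quantize a waveform into a string over a small code alphabet by
    amplitude level, using a fixed [lo, hi) range shared by all signals."""
    n = len(alphabet)
    span = (hi - lo) or 1.0
    out = []
    for v in signal:
        idx = int((v - lo) / span * n)
        out.append(alphabet[max(0, min(idx, n - 1))])  # clamp to range
    return "".join(out)

def find_pattern(db_signal, pattern, lo, hi):
    """Encode both signals with the same levels, then reduce pattern
    recognition to substring search on the encoded strings."""
    s, p = encode(db_signal, lo, hi), encode(pattern, lo, hi)
    hits, start = [], s.find(p)
    while start != -1:
        hits.append(start)
        start = s.find(p, start + 1)
    return hits
```

Storing signals as character strings is what lets a relational database index them and answer pattern queries with ordinary string operators.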

  14. A reference web architecture and patterns for real-time visual analytics on large streaming data

    NASA Astrophysics Data System (ADS)

    Kandogan, Eser; Soroker, Danny; Rohall, Steven; Bak, Peter; van Ham, Frank; Lu, Jie; Ship, Harold-Jeffrey; Wang, Chun-Fu; Lai, Jennifer

    2013-12-01

    Monitoring and analysis of streaming data, such as social media, sensors, and news feeds, has become increasingly important for business and government. The volume and velocity of incoming data are key challenges. To effectively support monitoring and analysis, statistical and visual analytics techniques need to be seamlessly integrated; analytic techniques for a variety of data types (e.g., text, numerical) and scope (e.g., incremental, rolling-window, global) must be properly accommodated; interaction, collaboration, and coordination among several visualizations must be supported in an efficient manner; and the system should support the use of different analytics techniques in a pluggable manner. Especially in web-based environments, these requirements pose restrictions on the basic visual analytics architecture for streaming data. In this paper we report on our experience of building a reference web architecture for real-time visual analytics of streaming data, identify and discuss architectural patterns that address these challenges, and report on applying the reference architecture for real-time Twitter monitoring and analysis.
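Of the analytic scopes mentioned (incremental, rolling-window, global), a rolling-window aggregate is the simplest to sketch: each arriving event updates the statistic in O(1), which is what keeps per-event cost bounded as volume and velocity grow. The class below is a generic illustration, not part of the paper's reference architecture.

```python
from collections import deque

class RollingMean:
    """Rolling-window mean over the last `window` events, updated in O(1)
    per arriving item, suitable for a streaming dashboard tile."""
    def __init__(self, window):
        self.window = window
        self.buf = deque()
        self.total = 0.0

    def push(self, value):
        """Ingest one event and return the current windowed mean."""
        self.buf.append(value)
        self.total += value
        if len(self.buf) > self.window:
            self.total -= self.buf.popleft()  # evict the oldest event
        return self.total / len(self.buf)
```

A pluggable analytics layer of the kind the paper describes would expose many such aggregators behind a common push-style interface, so visualizations can subscribe to whichever statistic they render.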

  15. Mouse V1 population correlates of visual detection rely on heterogeneity within neuronal response patterns

    PubMed Central

    Montijn, Jorrit S; Goltstein, Pieter M; Pennartz, Cyriel MA

    2015-01-01

    Previous studies have demonstrated the importance of the primary sensory cortex for the detection, discrimination, and awareness of visual stimuli, but it is unknown how neuronal populations in this area process detected and undetected stimuli differently. Critical differences may reside in the mean strength of responses to visual stimuli, as reflected in bulk signals detectable in functional magnetic resonance imaging, electro-encephalogram, or magnetoencephalography studies, or may be more subtly composed of differentiated activity of individual sensory neurons. Quantifying single-cell Ca2+ responses to visual stimuli recorded with in vivo two-photon imaging, we found that visual detection correlates more strongly with population response heterogeneity rather than overall response strength. Moreover, neuronal populations showed consistencies in activation patterns across temporally spaced trials in association with hit responses, but not during nondetections. Contrary to models relying on temporally stable networks or bulk signaling, these results suggest that detection depends on transient differentiation in neuronal activity within cortical populations. DOI: http://dx.doi.org/10.7554/eLife.10163.001 PMID:26646184

  16. Visual search, movement behaviour and boat control during the windward mark rounding in sailing.

    PubMed

    Pluijms, Joost P; Cañal-Bruland, Rouwen; Hoozemans, Marco J M; Savelsbergh, Geert J P

    2015-01-01

    In search of key-performance predictors in sailing, we examined to what degree visual search, movement behaviour and boat control contribute to skilled performance while rounding the windward mark. To this end, we analysed 62 windward mark roundings sailed without opponents and 40 windward mark roundings sailed with opponents while competing in small regattas. Across conditions, results revealed that better performances were related to gazing more to the tangent point during the actual rounding. More specifically, in the condition without opponents, skilled performance was associated with gazing more outside the dinghy during the actual rounding, while in the condition with opponents, superior performance was related to gazing less outside the dinghy. With respect to movement behaviour, superior performance was associated with the release of the trimming lines close to rounding the mark. In addition, better performances were related to approaching the mark with little heel, yet heeling the boat more to the windward side when being close to the mark. Potential implications for practice are suggested for each phase of the windward mark rounding. PMID:25105956

  17. Long-term priming of visual search prevails against the passage of time and counteracting instructions.

    PubMed

    Kruijne, Wouter; Meeter, Martijn

    2016-08-01

Studies on intertrial priming have shown that in visual search experiments, the preceding trial automatically affects search performance: facilitating it when the target features repeat and giving rise to switch costs when they change, the so-called (short-term) intertrial priming effect. These effects also occur at longer time scales: When 1 of 2 possible target colors is more frequent during an experiment block, this results in a prolonged and persistent facilitation for the color that was biased, long after the frequency bias is gone, so-called long-term priming. In this study, we explore the robustness of such long-term priming. In Experiment 1, participants were fully informed of the bias and instructed to prioritize the other, unbiased color. Despite these instructions, long-term priming of the biased color persisted in this block, suggesting that guidance by long-term priming is an implicit effect. In Experiment 2, long-term priming was built up in 1 experimental session and was then assessed in a second session a week later. Long-term priming persisted across this week, emphasizing that long-term priming is truly a phenomenon of long-term memory. The results support the view that priming results from the automatic and implicit retrieval of memory traces of past trials. (PsycINFO Database Record) PMID:26866654

  18. Active training and driving-specific feedback improve older drivers' visual search prior to lane changes

    PubMed Central

    2012-01-01

Background Driving retraining classes may offer an opportunity to attenuate some effects of aging that may alter driving skills. Unfortunately, there is evidence that classroom programs (driving refresher courses) do not improve the driving performance of older drivers. The aim of the current study was to evaluate if simulator training sessions with video-based feedback can modify visual search behaviors of older drivers while changing lanes in urban driving. Methods In order to evaluate the effectiveness of the video-based feedback training, 10 older drivers who received a driving refresher course and feedback about their driving performance were tested with an on-road standardized evaluation before and after participating in a simulator training program (Feedback group). Their results were compared to a Control group (12 older drivers) who received the same refresher course and in-simulator active practice as the Feedback group without receiving driving-specific feedback. Results After attending the training program, the Control group showed no increase in the frequency of the visual inspection of three regions of interest (rear view and left side mirrors, and blind spot). In contrast, for the Feedback group, combining active training and driving-specific feedback increased the frequency of blind spot inspection by 100% (from 32.3% to 64.9% verification before changing lanes). Conclusions These results suggest that simulator training combined with driving-specific feedback helped older drivers to improve their visual inspection strategies, and that in-simulator training transferred positively to on-road driving. In order to be effective, it is claimed that driving programs should include active practice sessions with driving-specific feedback. Simulators offer a unique environment for developing such programs adapted to older drivers' needs. PMID:22385499

  19. Visualization of flow patterns induced by an impinging jet issuing from a circular planform

    NASA Astrophysics Data System (ADS)

    Saripalli, K. R.

    1983-12-01

    A four-jet impingement flow with application to high-performance VTOL aircraft is investigated. Flow visualization studies were conducted with water as the working medium. Photographs of different cross sections of the flow are presented to describe the properties of the fountain upwash and the stagnation-line patterns. The visualization technique involves the introduction of fluorescein-sodium, a fluorescent dye, into the jet flow and illumination by a sheet of light obtained by spreading a laser beam. Streak-line photographs were also taken using air bubbles as tracer particles. The strength and orientation of the fountain(s) were observed for different heights of the nozzle configuration above the ground and inclination angles of the forward nozzles.

  20. Pattern motion selectivity of spiking outputs and local field potentials in macaque visual cortex.

    PubMed

    Khawaja, Farhan A; Tsui, James M G; Pack, Christopher C

    2009-10-28

    The dorsal pathway of the primate visual cortex is involved in the processing of motion signals that are useful for perception and behavior. Along this pathway, motion information is first measured by the primary visual cortex (V1), which sends specialized projections to extrastriate regions such as the middle temporal area (MT). Previous work with plaid stimuli has shown that most V1 neurons respond to the individual components of moving stimuli, whereas some MT neurons are capable of estimating the global motion of the pattern. In this work, we show that the majority of neurons in the medial superior temporal area (MST), which receives input from MT, have this pattern-selective property. Interestingly, the local field potentials (LFPs) measured simultaneously with the spikes often exhibit properties similar to that of the presumptive feedforward input to each area: in the high-gamma frequency band, the LFPs in MST are as component selective as the spiking outputs of MT, and MT LFPs have plaid responses that are similar to the spiking outputs of V1. In the lower LFP frequency bands (beta and low gamma), component selectivity is very common, and pattern selectivity is almost entirely absent in both MT and MST. Together, these results suggest a surprisingly strong link between the sensory tuning of cortical LFPs and afferent inputs, with important implications for the interpretation of imaging studies and for models of cortical function. PMID:19864582

  1. The role of pattern recognition in creative problem solving: a case study in search of new mathematics for biology.

    PubMed

    Hong, Felix T

    2013-09-01

    Rosen classified sciences into two categories: formalizable and unformalizable. Whereas formalizable sciences expressed in terms of mathematical theories were highly valued by Rutherford, Hutchins pointed out that unformalizable parts of soft sciences are of genuine interest and importance. Attempts to build mathematical theories for biology in the past century were met with only modest and sporadic successes, confined to simple systems. In this article, a qualitative model of humans' high creativity is presented as a starting point to consider whether the gap between soft and hard sciences is bridgeable. Simonton's chance-configuration theory, which mimics the process of evolution, was modified and improved. By treating problem solving as a process of pattern recognition, the known dichotomy of visual thinking vs. verbal thinking can be recast in terms of analog pattern recognition (non-algorithmic process) and digital pattern recognition (algorithmic process), respectively. Additional concepts commonly encountered in computer science, operations research and artificial intelligence were also invoked: heuristic searching, parallel and sequential processing. The refurbished chance-configuration model is now capable of explaining several long-standing puzzles in human cognition: a) why novel discoveries often came without prior warning, b) why some creators had no ideas about the source of inspiration even after the fact, c) why some creators were consistently luckier than others, and, last but not least, d) why it was so difficult to explain what intuition, inspiration, insight, hunch, serendipity, etc. are all about. The predictive power of the present model was tested by means of resolving Zeno's paradox of Achilles and the Tortoise after one deliberately invoked visual thinking. Additional evidence of its predictive power must await future large-scale field studies. The analysis was further generalized to constructions of scientific theories in general. This approach…

  2. Why Do We Move Our Eyes while Trying to Remember? The Relationship between Non-Visual Gaze Patterns and Memory

    ERIC Educational Resources Information Center

    Micic, Dragana; Ehrlichman, Howard; Chen, Rebecca

    2010-01-01

    Non-visual gaze patterns (NVGPs) involve saccades and fixations that spontaneously occur in cognitive activities that are not ostensibly visual. While reasons for their appearance remain obscure, convergent empirical evidence suggests that NVGPs change according to processing requirements of tasks. We examined NVGPs in tasks with long-term memory…

  3. Differential Roles of the Fan-Shaped Body and the Ellipsoid Body in "Drosophila" Visual Pattern Memory

    ERIC Educational Resources Information Center

    Pan, Yufeng; Zhou, Yanqiong; Guo, Chao; Gong, Haiyun; Gong, Zhefeng; Liu, Li

    2009-01-01

    The central complex is a prominent structure in the "Drosophila" brain. Visual learning experiments in the flight simulator, with flies with genetically altered brains, revealed that two groups of horizontal neurons in one of its substructures, the fan-shaped body, were required for "Drosophila" visual pattern memory. However, little is known…

  4. Children with Fetal Alcohol Syndrome and Fetal Alcohol Effects: Patterns of Performance on IQ and Visual Motor Ability.

    ERIC Educational Resources Information Center

    Kopera-Frye, Karen; Zielinski, Sharon

    This study explored relationships between intelligence and visual motor ability and patterns of impairment of visual motor ability in children prenatally affected by alcohol. Fourteen children (mean age 8.2 years) diagnosed with fetal alcohol syndrome (FAS) and 50 children with possible fetal alcohol effects (FAE) were assessed with the Bender…

  5. Model-based analysis of pattern motion processing in mouse primary visual cortex

    PubMed Central

    Muir, Dylan R.; Roth, Morgane M.; Helmchen, Fritjof; Kampa, Björn M.

    2015-01-01

    Neurons in sensory areas of neocortex exhibit responses tuned to specific features of the environment. In visual cortex, information about features such as edges or textures with particular orientations must be integrated to recognize a visual scene or object. Connectivity studies in rodent cortex have revealed that neurons make specific connections within sub-networks sharing common input tuning. In principle, this sub-network architecture enables local cortical circuits to integrate sensory information. However, whether feature integration indeed occurs locally in rodent primary sensory areas has not been examined directly. We studied local integration of sensory features in primary visual cortex (V1) of the mouse by presenting drifting grating and plaid stimuli, while recording the activity of neuronal populations with two-photon calcium imaging. Using a Bayesian model-based analysis framework, we classified single-cell responses as being selective for either individual grating components or for moving plaid patterns. Rather than relying on trial-averaged responses, our model-based framework takes into account single-trial responses and can easily be extended to consider any number of arbitrary predictive models. Our analysis method was able to successfully classify significantly more responses than traditional partial correlation (PC) analysis, and provides a rigorous statistical framework to rank any number of models and reject poorly performing models. We also found a large proportion of cells that respond strongly to only one stimulus class. In addition, a quarter of selectively responding neurons had more complex responses that could not be explained by any simple integration model. Our results show that a broad range of pattern integration processes already take place at the level of V1. This diversity of integration is consistent with processing of visual inputs by local sub-networks within V1 that are tuned to combinations of sensory features. PMID
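The traditional partial correlation (PC) analysis that the abstract compares against can be sketched as follows. This is the standard procedure from the plaid literature, not the paper's Bayesian framework; the toy tuning-curve values, the function name, and the 1.28 criterion on Fisher-transformed scores are illustrative assumptions:

```python
import numpy as np

def partial_corr_classify(resp, comp_pred, patt_pred, crit=1.28):
    """Classify a plaid response as pattern- or component-selective via
    partial correlations (a sketch of the traditional PC analysis)."""
    rp = np.corrcoef(resp, patt_pred)[0, 1]
    rc = np.corrcoef(resp, comp_pred)[0, 1]
    rpc = np.corrcoef(patt_pred, comp_pred)[0, 1]
    # partial correlations: each prediction with the other partialed out
    Rp = (rp - rc * rpc) / np.sqrt((1 - rc**2) * (1 - rpc**2))
    Rc = (rc - rp * rpc) / np.sqrt((1 - rp**2) * (1 - rpc**2))
    # Fisher z-transform; the standard error of z is 1/sqrt(n - 3)
    n = len(resp)
    zp = np.arctanh(Rp) * np.sqrt(n - 3)
    zc = np.arctanh(Rc) * np.sqrt(n - 3)
    if zp - zc > crit:
        return "pattern"
    if zc - zp > crit:
        return "component"
    return "unclassed"

# toy tuning curves over six plaid directions (illustrative values)
resp = [1, 2, 3, 4, 5, 7]
patt_pred = [1, 2, 3, 4, 5, 6]
comp_pred = [2, 1, 4, 3, 6, 5]
print(partial_corr_classify(resp, comp_pred, patt_pred))  # -> pattern
```

Because each cell gets only a single score pair, this analysis cannot rank multiple candidate models or use single-trial variability, which is the gap the abstract's model-based framework addresses.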

  6. Pattern electroretinogram (PERG) and pattern visual evoked potential (PVEP) in the early stages of Alzheimer’s disease

    PubMed Central

    Lubiński, Wojciech; Potemkowski, Andrzej; Honczarenko, Krystyna

    2010-01-01

    Alzheimer’s disease (AD) is one of the most common causes of dementia in the world. Patients with AD frequently complain of vision disturbances that do not manifest as changes in routine ophthalmological examination findings. The main causes of these disturbances are neuropathological changes in the visual cortex, although abnormalities in the retina and optic nerve cannot be excluded. Pattern electroretinogram (PERG) and pattern visual evoked potential (PVEP) tests are commonly used in ophthalmology to estimate bioelectrical function of the retina and optic nerve. The aim of this study was to determine whether retinal and optic nerve function, measured by PERG and PVEP tests, is changed in individuals in the early stages of AD with normal routine ophthalmological examination results. Standard PERG and PVEP tests were performed in 30 eyes of 30 patients with the early stages of AD. The results were compared to 30 eyes of 30 normal healthy controls. PERG and PVEP tests were recorded in accordance with the International Society for Clinical Electrophysiology of Vision (ISCEV) standards. Additionally, neural conduction was measured using retinocortical time (RCT): the difference between P100-wave latency in PVEP and P50-wave implicit time in PERG. Statistically significant changes were detected in the PERG test, the PVEP test, and RCT. In the PERG examination, increased P50-wave implicit time (P < 0.03) and amplitude reductions in the P50- and N95-waves (P < 0.0001) were observed. In the PVEP examination, increased P100-wave latency (P < 0.0001) was found. A significant increase in RCT (P < 0.0001) was observed. The most prevalent features were amplitude reduction in the N95-wave and increased latency of the P100-wave, which were seen in 56.7% (17/30) of the AD eyes. In patients with the early stages of AD and normal routine ophthalmological examination results, dysfunction of the retinal ganglion cells as well as of the optic nerve is present, as detected by PERG and PVEP
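The retinocortical time (RCT) measure described above is simply the difference of two latencies; a minimal sketch (the example values are hypothetical, not data from the study):

```python
def retinocortical_time(p100_latency_ms, p50_implicit_ms):
    """RCT: PVEP P100-wave latency minus PERG P50-wave implicit time (ms),
    an index of neural conduction between retina and visual cortex."""
    return p100_latency_ms - p50_implicit_ms

# hypothetical single-eye measurements, in milliseconds
print(retinocortical_time(112.0, 55.0))  # -> 57.0
```

Because the P50 component reflects largely retinal (ganglion cell) activity and P100 reflects the cortical response, subtracting the two isolates conduction time beyond the retina.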

  7. Scavengers on the Move: Behavioural Changes in Foraging Search Patterns during the Annual Cycle

    PubMed Central

    López-López, Pascual; Benavent-Corai, José; García-Ripollés, Clara; Urios, Vicente

    2013-01-01

    Background Optimal foraging theory predicts that animals will tend to maximize foraging success by optimizing search strategies. However, how organisms detect sparsely distributed food resources remains an open question. When targets are sparse and unpredictably distributed, a Lévy strategy should maximize foraging success. By contrast, when resources are abundant and regularly distributed, simple Brownian random movement should be sufficient. Although very different groups of organisms exhibit Lévy motion, the shift from a Lévy to a Brownian search strategy has been suggested to depend on internal and external factors such as sex, prey density, or environmental context. However, animal response at the individual level has received little attention. Methodology/Principal Findings We used GPS satellite-telemetry data of Egyptian vultures Neophron percnopterus to examine movement patterns at the individual level during consecutive years, with particular interest in the variations in foraging search patterns during the different periods of the annual cycle (i.e. breeding vs. non-breeding). Our results show that vultures followed a Brownian search strategy in their wintering sojourn in Africa, whereas they exhibited a more complex foraging search pattern at breeding grounds in Europe, including Lévy motion. Interestingly, our results showed that individuals shifted between search strategies within the same period of the annual cycle in successive years. Conclusions/Significance Results could be primarily explained by the different environmental conditions in which foraging activities occur. However, the high degree of behavioural flexibility exhibited during the breeding period, in contrast to the non-breeding period, is striking, suggesting that environmental conditions alone do not explain individuals' behaviour and that individuals' cognitive abilities (e.g., memory effects) could also play an important role. Our results support the growing awareness of the role of
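The Lévy-versus-Brownian distinction in this abstract comes down to the distribution of step lengths: heavy-tailed power-law steps produce occasional very long relocations, whereas Brownian steps stay near a fixed scale. A minimal sketch of the two movement models (the exponent, scale, and function name are illustrative assumptions, not the study's fitted parameters):

```python
import numpy as np

rng = np.random.default_rng(0)

def walk(n_steps, levy=True, mu=2.0, x_min=1.0):
    """2-D random walk. With levy=True, step lengths follow a power law
    P(l) ~ l**(-mu) for l >= x_min (Levy-like, heavy-tailed); otherwise
    steps have a fixed scale (Brownian-like). Headings are uniform."""
    if levy:
        # inverse-transform sampling: l = x_min * (1-u)**(-1/(mu-1))
        u = rng.random(n_steps)
        lengths = x_min * (1.0 - u) ** (-1.0 / (mu - 1.0))
    else:
        lengths = np.abs(rng.normal(0.0, x_min, n_steps))
    angles = rng.uniform(0.0, 2.0 * np.pi, n_steps)
    steps = np.column_stack((lengths * np.cos(angles),
                             lengths * np.sin(angles)))
    return np.cumsum(steps, axis=0)  # cumulative positions

levy_path = walk(1000, levy=True)
brownian_path = walk(1000, levy=False)
```

In telemetry studies such as this one, the observed step-length distribution is typically fit against both models and the better-supported one (e.g. by likelihood) identifies the search strategy.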

  8. A Clash of Bottom-Up and Top-Down Processes in Visual Search: The Reversed Letter Effect Revisited

    ERIC Educational Resources Information Center

    Zhaoping, Li; Frith, Uta

    2011-01-01

    It is harder to find the letter "N" among its mirror reversals than vice versa, an inconvenient finding for bottom-up saliency accounts based on primary visual cortex (V1) mechanisms. However, in line with this account, we found that in dense search arrays, gaze first landed on either target equally fast. Remarkably, after first landing, gaze…

  9. Practice Makes Improvement: How Adults with Autism Out-Perform Others in a Naturalistic Visual Search Task

    ERIC Educational Resources Information Center

    Gonzalez, Cleotilde; Martin, Jolie M.; Minshew, Nancy J.; Behrmann, Marlene

    2013-01-01

    People with autism spectrum disorder (ASD) often exhibit superior performance in visual search compared to others. However, most studies demonstrating this advantage have employed simple, uncluttered images with fully visible targets. We compare the performance of high-functioning adults with ASD and matched controls on a naturalistic luggage…

  10. How Prior Knowledge and Colour Contrast Interfere Visual Search Processes in Novice Learners: An Eye Tracking Study

    ERIC Educational Resources Information Center

    Sonmez, Duygu; Altun, Arif; Mazman, Sacide Guzin

    2012-01-01

    This study investigates how prior content knowledge and prior exposure to microscope slides on the phases of mitosis affect students' visual search strategies and their ability to differentiate cells that are going through any phases of mitosis. Two different sets of microscope slide views were used for this purpose; with high and low colour…

  11. Effects of light touch on postural sway and visual search accuracy: A test of functional integration and resource competition hypotheses.

    PubMed

    Chen, Fu-Chen; Chen, Hsin-Lin; Tu, Jui-Hung; Tsai, Chia-Liang

    2015-09-01

    People often multi-task in their daily life. However, the mechanisms for the interaction between simultaneous postural and non-postural tasks have been controversial over the years. The present study investigated the effects of light digital touch on both postural sway and visual search accuracy for the purpose of assessing two hypotheses (functional integration and resource competition), which may explain the interaction between postural sway and the performance of a non-postural task. Participants (n=42, 20 male and 22 female) were asked to inspect a blank sheet of paper or visually search for target letters in a text block while a fingertip was in light contact with a stable surface (light touch, LT), or with both arms hanging at the sides of the body (no touch, NT). The results showed significant main effects of LT on reducing the magnitude of postural sway as well as enhancing visual search accuracy compared with the NT condition. The findings support the hypothesis of functional integration, demonstrating that postural sway can be modulated to improve the performance of a visual search task. PMID:26112777

  12. Cross-Trial Priming of Element Positions in Visual Pop-Out Search is Dependent on Stimulus Arrangement

    ERIC Educational Resources Information Center

    Geyer, Thomas; Muller, Hermann J.; Krummenacher, Joseph

    2007-01-01

    Two experiments examined cross-trial positional priming (V. Maljkovic & K. Nakayama, 1994, 1996, 2000) in visual pop-out search. Experiment 1 used regularly arranged target and distractor displays, as in previous studies. Reaction times were expedited when the target appeared at a previous target location (facilitation relative to neutral…

  13. Age-Related Occipito-Temporal Hypoactivation during Visual Search: Relationships between mN2pc Sources and Performance

    ERIC Educational Resources Information Center

    Lorenzo-Lopez, L.; Gutierrez, R.; Moratti, S.; Maestu, F.; Cadaveira, F.; Amenedo, E.

    2011-01-01

    Recently, an event-related potential (ERP) study (Lorenzo-Lopez et al., 2008) provided evidence that normal aging significantly delays and attenuates the electrophysiological correlate of the allocation of visuospatial attention (N2pc component) during a feature-detection visual search task. To further explore the effects of normal aging on the…

  14. The contribution of coping-related variables and heart rate variability to visual search performance under pressure.

    PubMed

    Laborde, Sylvain; Lautenbach, Franziska; Allen, Mark S

    2015-02-01

    Visual search performance under pressure is explored within the predictions of the neurovisceral integration model. The experimental aims of this study were: 1) to investigate the contribution of coping-related variables to baseline, task, and reactivity (task-baseline) high-frequency heart rate variability (HF-HRV), and 2) to investigate the contribution of coping-related variables and HF-HRV to visual search performance under pressure. Participants (n=96) completed self-report measures of coping-related variables (emotional intelligence, coping style, perceived stress intensity, perceived control of stress, coping effectiveness, challenge and threat, and attention strategy) and HF-HRV was measured during a visual search task under pressure. The data show that baseline HF-HRV was predicted by a trait coping-related variable, task HF-HRV was predicted by a combination of trait and state coping-related variables, and reactivity HF-HRV was predicted by a state coping-related variable. Visual search performance was predicted by coping-related variables but not by HF-HRV. PMID:25481358

  15. Spatial Heterogeneity and Imperfect Mixing in Chemical Reactions: Visualization of Density-Driven Pattern Formation

    DOE PAGES Beta

    Sobel, Sabrina G.; Hastings, Harold M.; Testa, Matthew

    2009-01-01

    Imperfect mixing is a concern in industrial processes, everyday processes (mixing paint, bread machines), and in understanding salt water-fresh water mixing in ecosystems. The effects of imperfect mixing become evident in the unstirred ferroin-catalyzed Belousov-Zhabotinsky reaction, the prototype for chemical pattern formation. Over time, waves of oxidation (high ferriin concentration, blue) propagate into a background of low ferriin concentration (red); their structure reflects in part the history of mixing in the reaction vessel. However, it may be difficult to separate mixing effects from reaction effects. We describe a simpler model system for visualizing density-driven pattern formation in an essentially unmixed chemical system: the reaction of pale yellow Fe3+ with colorless SCN− to form the blood-red Fe(SCN)2+ complex ion in aqueous solution. Careful addition of one drop of Fe(NO3)3 to KSCN yields striped patterns after several minutes. The patterns appear reminiscent of Rayleigh-Taylor instabilities and convection rolls, arguing that pattern formation is caused by density-driven mixing.

  16. Visual Search in Ecological and Non-Ecological Displays: Evidence for a Non-Monotonic Effect of Complexity on Performance

    PubMed Central

    Chassy, Philippe; Gobet, Fernand

    2013-01-01

    Considerable research has been carried out on visual search, with single or multiple targets. However, most studies have used artificial stimuli with low ecological validity. In addition, little is known about the effects of target complexity and expertise in visual search. Here, we investigate visual search in three conditions of complexity (detecting a king, detecting a check, and detecting a checkmate) with chess players of two levels of expertise (novices and club players). Results show that the influence of target complexity depends on level of structure of the visual display. Different functional relationships were found between artificial (random chess positions) and ecologically valid (game positions) stimuli: With artificial, but not with ecologically valid stimuli, a “pop out” effect was present when a target was visually more complex than distractors but could be captured by a memory chunk. This suggests that caution should be exercised when generalising from experiments using artificial stimuli with low ecological validity to real-life stimuli. PMID:23320084

  17. Target templates: the precision of mental representations affects attentional guidance and decision-making in visual search

    PubMed Central

    Hout, Michael C.; Goldinger, Stephen D.

    2014-01-01

    When people look for things in the environment, they use target templates—mental representations of the objects they are attempting to locate—to guide attention and to assess incoming visual input as potential targets. However, unlike laboratory participants, searchers in the real world rarely have perfect knowledge regarding the potential appearance of targets. In seven experiments, we examined how the precision of target templates affects the ability to conduct visual search. Specifically, we degraded template precision in two ways: 1) by contaminating searchers’ templates with inaccurate features, and 2) by introducing extraneous features to the template that were unhelpful. We recorded eye movements to allow inferences regarding the relative extents to which attentional guidance and decision-making are hindered by template imprecision. Our findings support a dual-function theory of the target template and highlight the importance of examining template precision in visual search. PMID:25214306

  18. The Attentional Fields of Visual Search in Simultanagnosia and Healthy Individuals: How Object and Space Attention Interact.

    PubMed

    Khan, A Z; Prost-Lefebvre, M; Salemme, R; Blohm, G; Rossetti, Y; Tilikete, C; Pisella, L

    2016-03-01

    Simultanagnosia is a deficit in which patients are unable to perceive multiple objects simultaneously. To date, it remains disputed whether this deficit results from disrupted object or space perception. We asked both healthy participants as well as a patient with simultanagnosia to perform different visual search tasks of variable difficulty. We also modulated the number of objects (target and distracters) presented. For healthy participants, we found that each visual search task was performed with a specific "attentional field" depending on the difficulty of visual object processing but not on the number of objects falling within this "working space." This was demonstrated by measuring the cost in reaction times using different gaze-contingent visible window sizes. We found that bilateral damage to the superior parietal lobule impairs the spatial integration of separable features (within-object processing), shrinking the attentional field in which a target can be detected, but causing no deficit in processing multiple objects per se. PMID:25840422

  19. Arousal and locomotion make distinct contributions to cortical activity patterns and visual encoding.

    PubMed

    Vinck, Martin; Batista-Brito, Renata; Knoblich, Ulf; Cardin, Jessica A

    2015-05-01

    Spontaneous and sensory-evoked cortical activity is highly state-dependent, yet relatively little is known about transitions between distinct waking states. Patterns of activity in mouse V1 differ dramatically between quiescence and locomotion, but this difference could be explained by either motor feedback or a change in arousal levels. We recorded single cells and local field potentials from area V1 in mice head-fixed on a running wheel and monitored pupil diameter to assay arousal. Using naturally occurring and induced state transitions, we dissociated arousal and locomotion effects in V1. Arousal suppressed spontaneous firing and strongly altered the temporal patterning of population activity. Moreover, heightened arousal increased the signal-to-noise ratio of visual responses and reduced noise correlations. In contrast, increased firing in anticipation of and during movement was attributable to locomotion effects. Our findings suggest complementary roles of arousal and locomotion in promoting functional flexibility in cortical circuits. PMID:25892300

  20. Effect of refractive error on visual evoked potentials with pattern stimulation in dogs.

    PubMed

    Ito, Yosuke; Maehara, Seiya; Itoh, Yoshiki; Matsui, Ai; Hayashi, Miri; Kubo, Akira; Uchide, Tsuyoshi

    2016-04-01

    The purpose of this study was to investigate the effects of refractive error on canine visual evoked potentials with pattern stimulation (P-VEP). Six normal beagle dogs were used. The refractive power of the recorded eyes was measured by skiascopy. The refractive power was corrected to -4 diopters (D) to +2 D using contact lenses. P-VEP was recorded at each refractive power. The stimulus pattern size and distance were 50.3 arc-min and 50 cm, respectively. The P100 appeared at almost 100 msec at -2 D (at which the stimulus monitor was in focus). There was significant prolongation of the P100 implicit time at -4, -3, 0 and +1 D compared with -2 D. We concluded that the refractive power of the eye affected the P100 implicit time in canine P-VEP recording. PMID:26655769