Science.gov

Sample records for visual search patterns

  1. Drivers’ Visual Search Patterns during Overtaking Maneuvers on Freeway

    PubMed Central

    Zhang, Wenhui; Dai, Jing; Pei, Yulong; Li, Penghui; Yan, Ying; Chen, Xinqiang

    2016-01-01

    Drivers gather traffic information primarily by means of their vision. Especially during complicated maneuvers, such as overtaking, they need to perceive a variety of characteristics, including the lateral and longitudinal distances to other vehicles, the speed of other vehicles, lane occupancy, and so on, to avoid crashes. The primary objective of this study is to examine the appropriate visual search patterns during overtaking maneuvers on freeways. We designed a series of driving simulation experiments in which the type and speed of the leading vehicle were considered as two influential factors. One hundred and forty participants took part in the study. The participants overtook the leading vehicles as they normally would, and their eye movements were recorded with an eye tracker. The results show that participants' gaze durations and saccade durations followed normal distribution patterns and that saccade angles followed a log-normal distribution pattern. It was observed that the type of leading vehicle significantly impacted the drivers' gaze duration and gaze frequency. As the speed of a leading vehicle increased, subjects' saccade durations became longer and saccade angles became larger. In addition, the initial and destination lanes were found to be key areas with the highest visual allocation proportion, accounting for more than 65% of total visual allocation. Subjects tended to shift their viewpoints more frequently between the initial lane and destination lane in order to search for crucial traffic information. However, they seldom shifted their viewpoints directly between the two wing mirrors. PMID:27869764
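    The distributional claims above (normally distributed gaze and saccade durations, log-normally distributed saccade angles) can be checked with standard fitting tools. A minimal sketch in Python, assuming eye-tracker samples are already available as arrays; the variable names and synthetic data are illustrative, not the study's measurements.

    ```python
    # Hedged sketch: fit the distribution families reported in the abstract to
    # placeholder eye-tracking samples and check the fits with K-S tests.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    gaze_durations_ms = rng.normal(350, 80, 500)       # placeholder samples
    saccade_angles_deg = rng.lognormal(1.5, 0.6, 500)  # placeholder samples

    mu, sigma = stats.norm.fit(gaze_durations_ms)
    shape, loc, scale = stats.lognorm.fit(saccade_angles_deg, floc=0)
    print(f"gaze duration ~ N({mu:.1f}, {sigma:.1f}^2)")
    print(f"saccade angle ~ LogNormal(shape={shape:.2f}, scale={scale:.2f})")

    # Kolmogorov-Smirnov tests against the fitted distributions.
    print(stats.kstest(gaze_durations_ms, "norm", args=(mu, sigma)))
    print(stats.kstest(saccade_angles_deg, "lognorm", args=(shape, loc, scale)))
    ```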

  2. Statistical patterns of visual search for hidden objects

    PubMed Central

    Credidio, Heitor F.; Teixeira, Elisângela N.; Reis, Saulo D. S.; Moreira, André A.; Andrade Jr, José S.

    2012-01-01

    The movement of the eyes has been the subject of intensive research as a way to elucidate inner mechanisms of cognitive processes. A cognitive task that is rather frequent in our daily life is the visual search for hidden objects. Here we investigate through eye-tracking experiments the statistical properties associated with the search for target images embedded in a landscape of distractors. Specifically, our results show that the twofold process of eye movement, composed of sequences of fixations (small steps) intercalated by saccades (longer jumps), displays characteristic statistical signatures. While the saccadic jumps follow a log-normal distribution of distances, which is typical of multiplicative processes, the lengths of the smaller steps in the fixation trajectories are consistent with a power-law distribution. Moreover, the present analysis reveals a clear transition from a directional serial search to an isotropic random movement as the difficulty level of the search task is increased. PMID:23226829
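    For the power-law signature of fixation-trajectory step lengths reported above, a standard estimator is the maximum-likelihood (Hill/Clauset) exponent. A minimal sketch under assumptions: a chosen lower cutoff x_min and synthetic Pareto data stand in for real fixation steps.

    ```python
    # Hedged sketch: MLE of the exponent alpha in p(x) ~ x^(-alpha) for x >= x_min.
    import numpy as np

    def powerlaw_alpha(steps, x_min):
        """Hill/Clauset maximum-likelihood estimate of the power-law exponent."""
        x = np.asarray(steps, dtype=float)
        x = x[x >= x_min]
        return 1.0 + len(x) / np.sum(np.log(x / x_min))

    rng = np.random.default_rng(1)
    # Placeholder "fixation steps": Pareto samples with true exponent alpha = 2.5.
    fixation_steps = 0.1 * (1.0 + rng.pareto(1.5, size=2000))
    print("estimated alpha:", round(powerlaw_alpha(fixation_steps, x_min=0.1), 2))
    ```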

  3. Mining compact bag-of-patterns for low bit rate mobile visual search.

    PubMed

    Ji, Rongrong; Duan, Ling-Yu; Chen, Jie; Huang, Tiejun; Gao, Wen

    2014-07-01

    Visual patterns, i.e., high-order combinations of visual words, contribute to a discriminative abstraction of the high-dimensional bag-of-words image representation. However, existing visual patterns are built upon the 2D photographic concurrences of visual words, which is ill-posed compared with their real-world 3D concurrences, since words from different objects or at different depths might be incorrectly bound into an identical pattern. On the other hand, designing compact descriptors from the mined patterns is left open. To address both issues, in this paper we propose a novel compact bag-of-patterns (CBoP) descriptor with an application to low bit rate mobile landmark search. First, to overcome the ill-posed 2D photographic configuration, we build up a 3D point cloud from the reference images of each landmark, so that more accurate pattern candidates can be extracted from the 3D concurrences of visual words. A novel gravity distance metric is then proposed to mine discriminative visual patterns. Second, we obtain a compact image description by introducing the CBoP descriptor. CBoP is computed by sparse coding over the mined visual patterns, which maximally reconstructs the original bag-of-words histogram with a minimum coding length. We developed a low bit rate mobile landmark search prototype, in which the CBoP descriptor is extracted and sent directly from the mobile end to reduce the query delivery latency. CBoP performance is quantified on several large-scale benchmarks with comparisons to state-of-the-art compact descriptors, topic features, and hashing descriptors. We report accuracy comparable to that of the bag-of-words histogram over a million-scale visual vocabulary, at a much higher descriptor compression rate (approximately 100 bits) than the state-of-the-art bag-of-words compression scheme.
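    The coding step described above (reconstructing a bag-of-words histogram from mined patterns with a short sparse code) can be sketched with off-the-shelf sparse coding. The dictionary, histogram, and sparsity level below are invented placeholders; the authors' gravity-distance pattern mining and coding-length objective are not reproduced here.

    ```python
    # Hedged sketch: sparse-code a bag-of-words histogram over a "visual pattern"
    # dictionary and measure how well the sparse code reconstructs it.
    import numpy as np
    from sklearn.decomposition import SparseCoder

    rng = np.random.default_rng(0)
    n_words, n_patterns = 1000, 64

    # Each row is one (hypothetical) mined visual pattern over the word vocabulary.
    pattern_dictionary = np.abs(rng.normal(size=(n_patterns, n_words)))
    pattern_dictionary /= np.linalg.norm(pattern_dictionary, axis=1, keepdims=True)

    bow_histogram = np.abs(rng.normal(size=(1, n_words)))   # query image histogram

    coder = SparseCoder(dictionary=pattern_dictionary,
                        transform_algorithm="omp",
                        transform_n_nonzero_coefs=8)        # compactness knob
    code = coder.transform(bow_histogram)                   # shape: (1, n_patterns)

    reconstruction = code @ pattern_dictionary
    error = np.linalg.norm(bow_histogram - reconstruction)
    print("non-zeros:", np.count_nonzero(code), "reconstruction error:", round(error, 3))
    ```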

  4. Chess players' eye movements reveal rapid recognition of complex visual patterns: Evidence from a chess-related visual search task.

    PubMed

    Sheridan, Heather; Reingold, Eyal M

    2017-03-01

    To explore the perceptual component of chess expertise, we monitored the eye movements of expert and novice chess players during a chess-related visual search task that tested anecdotal reports that a key differentiator of chess skill is the ability to visualize the complex moves of the knight piece. Specifically, chess players viewed an array of four minimized chessboards, and they rapidly searched for the target board that allowed a knight piece to reach a target square in three moves. On each trial, there was only one target board (i.e., the "Yes" board), and for the remaining "lure" boards, the knight's path was blocked on either the first move (the "Easy No" board) or the second move (i.e., "the Difficult No" board). As evidence that chess experts can rapidly differentiate complex chess-related visual patterns, the experts (but not the novices) showed longer first-fixation durations on the "Yes" board relative to the "Difficult No" board. Moreover, as hypothesized, the task strongly differentiated chess skill: Reaction times were more than four times faster for the experts relative to novices, and reaction times were correlated with within-group measures of expertise (i.e., official chess ratings, number of hours of practice). These results indicate that a key component of chess expertise is the ability to rapidly recognize complex visual patterns.

  5. Changes in visual search patterns of pathology residents as they gain experience

    NASA Astrophysics Data System (ADS)

    Krupinski, Elizabeth A.; Weinstein, Ronald S.

    2011-03-01

    The goal of this study was to examine and characterize changes in the ways that pathology residents examine digital or "virtual" slides as they gain more experience. A series of 20 digitized breast biopsy virtual slides (half benign and half malignant) were shown to 6 pathology residents at three points in time: at the beginning of their first year of residency, at the beginning of the second year, and at the beginning of the third year. Their task was to examine each image and select three areas that they would most want to zoom in on in order to view the diagnostic detail at higher resolution. Eye position was recorded as they scanned each image. The data indicate that with each successive year of experience, the residents' search patterns do change. Overall, it takes significantly less time to view an individual slide and decide where to zoom, significantly fewer fixations are generated overall, and there is less examination of non-diagnostic areas. Essentially, the residents' search becomes much more efficient and after only one year closely resembles that of an expert pathologist. These findings are similar to those in radiology, and support the theory that an important aspect of the development of expertise is improved pattern recognition (taking in more information during the initial Gestalt or gist view) as well as improved allocation of attention and visual processing resources.

  6. Neural Network for Visual Search Classification

    DTIC Science & Technology

    2007-11-02

    A neural network is used to perform visual search classification. The neural network consists of a learning vector quantization (LVQ) network and a single-layer perceptron. The objective of this neural network is to classify the various human visual search patterns into predetermined classes. The classes signify the different search strategies used by individuals to scan the same target pattern. The input search patterns are quantified with respect to an ideal search pattern, determined by the user. A supervised learning rule,
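    A learning vector quantization stage of the kind described above can be sketched with the plain LVQ1 update rule. The feature encoding, class labels, and synthetic data below are assumptions made for illustration, not the report's actual design.

    ```python
    # Hedged sketch: LVQ1 on quantified "search pattern" feature vectors.
    import numpy as np

    def train_lvq1(X, y, n_protos_per_class=2, lr=0.05, epochs=30, seed=0):
        rng = np.random.default_rng(seed)
        protos, proto_labels = [], []
        for c in np.unique(y):
            idx = rng.choice(np.flatnonzero(y == c), n_protos_per_class, replace=False)
            protos.append(X[idx].copy())
            proto_labels.append(np.full(n_protos_per_class, c))
        protos, proto_labels = np.vstack(protos), np.concatenate(proto_labels)
        for _ in range(epochs):
            for i in rng.permutation(len(X)):
                j = np.argmin(np.linalg.norm(protos - X[i], axis=1))  # nearest prototype
                sign = 1.0 if proto_labels[j] == y[i] else -1.0       # attract or repel
                protos[j] += sign * lr * (X[i] - protos[j])
        return protos, proto_labels

    def predict_lvq(protos, proto_labels, X):
        d = np.linalg.norm(X[:, None, :] - protos[None, :, :], axis=2)
        return proto_labels[np.argmin(d, axis=1)]

    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(0, 1, (50, 4)), rng.normal(3, 1, (50, 4))])
    y = np.repeat([0, 1], 50)   # two hypothetical search-strategy classes
    protos, labels = train_lvq1(X, y)
    print("training accuracy:", np.mean(predict_lvq(protos, labels, X) == y))
    ```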

  7. Understanding visual search patterns of dermatologists assessing pigmented skin lesions before and after online training.

    PubMed

    Krupinski, Elizabeth A; Chao, Joseph; Hofmann-Wellenhof, Rainer; Morrison, Lynne; Curiel-Lewandrowski, Clara

    2014-12-01

    The goal of this investigation was to explore the feasibility of characterizing the visual search characteristics of dermatologists evaluating images corresponding to single pigmented skin lesions (PSLs) (close-ups and dermoscopy) as a venue to improve training programs for dermoscopy. Two board-certified dermatologists and two dermatology residents participated in a phased study. In phase I, they viewed a series of 20 PSL cases ranging from benign nevi to melanoma. The close-up and dermoscopy images of each PSL were evaluated sequentially and rated individually as benign or malignant, while eye position was recorded. Subsequently, the participating subjects completed an online dermoscopy training module that included a pre- and post-test assessing their dermoscopy skills (phase II). Three months later, the subjects repeated their assessment of the 20 PSLs presented during phase I of the study. Significant differences in viewing time and eye-position parameters were observed as a function of level of expertise. Overall, dermatologists searched more efficiently than residents, generating fewer fixations with shorter dwells. Fixations and dwells associated with decisions that changed from benign to malignant, or vice versa, between photographic and dermoscopic viewing were longer than for any other decision, indicating increased visual processing for those decisions. These differences in visual search may have implications for developing tools to teach dermatologists and residents how to better utilize dermoscopy in clinical practice.

  8. Reconsidering Visual Search

    PubMed Central

    2015-01-01

    The visual search paradigm has had an enormous impact in many fields. A theme running through this literature has been the distinction between preattentive and attentive processing, which I refer to as the two-stage assumption. Under this assumption, slopes of response time against set size are used to determine whether attention is needed for a given task or not. Even though many findings question this two-stage assumption, it still has enormous influence, determining decisions on whether papers are published or research is funded. The results described here show that the two-stage assumption leads to very different conclusions about the operation of attention for identical search tasks based only on changes in response (presence/absence versus go/no-go responses). Slopes are therefore an ambiguous measure of attentional involvement. Overall, the results suggest that the two-stage model cannot explain all findings on visual search, and they highlight how slopes of response time against set size should be used only with caution. PMID:27551357
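    The "slope" at issue here is the linear coefficient of mean response time regressed on set size. A minimal sketch with placeholder condition means:

    ```python
    # Hedged sketch: compute an RT x set-size search slope from condition means.
    import numpy as np

    set_sizes = np.array([4, 8, 12, 16])
    mean_rt_ms = np.array([520, 575, 640, 690])   # placeholder condition means

    slope, intercept = np.polyfit(set_sizes, mean_rt_ms, deg=1)
    print(f"search slope: {slope:.1f} ms/item, intercept: {intercept:.0f} ms")
    # Under the two-stage assumption a near-zero slope is read as "preattentive"
    # and a steep slope as "attentive"; the paper argues against this inference.
    ```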

  9. Supporting Web Search with Visualization

    NASA Astrophysics Data System (ADS)

    Hoeber, Orland; Yang, Xue Dong

    One of the fundamental goals of Web-based support systems is to promote and support human activities on the Web. The focus of this Chapter is on the specific activities associated with Web search, with special emphasis given to the use of visualization to enhance the cognitive abilities of Web searchers. An overview of information retrieval basics, along with a focus on Web search and the behaviour of Web searchers is provided. Information visualization is introduced as a means for supporting users as they perform their primary Web search tasks. Given the challenge of visualizing the primarily textual information present in Web search, a taxonomy of the information that is available to support these tasks is given. The specific challenges of representing search information are discussed, and a survey of the current state-of-the-art in visual Web search is introduced. This Chapter concludes with our vision for the future of Web search.

  10. Evolutionary pattern search algorithms

    SciTech Connect

    Hart, W.E.

    1995-09-19

    This paper defines a class of evolutionary algorithms called evolutionary pattern search algorithms (EPSAs) and analyzes their convergence properties. This class of algorithms is closely related to evolutionary programming, evolution strategies, and real-coded genetic algorithms. EPSAs are self-adapting systems that modify the step size of the mutation operator in response to the success of previous optimization steps. The rule used to adapt the step size can be used to provide a stationary point convergence theory for EPSAs on any continuous function. This convergence theory is based on an extension of the convergence theory for generalized pattern search methods. An experimental analysis of the performance of EPSAs demonstrates that these algorithms can perform a level of global search that is comparable to that of canonical EAs. We also describe a stopping rule for EPSAs, which reliably terminated near stationary points in our experiments. This is the first stopping rule for any class of EAs that can terminate at a given distance from stationary points.
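    The self-adapting mutation step described above can be illustrated with a minimal (1+1)-style loop using a success-based adaptation rule. This is a generic sketch of the idea, not Hart's EPSA or its stopping rule.

    ```python
    # Hedged sketch: success-based step-size adaptation on a toy objective.
    import numpy as np

    def adaptive_es(f, x0, step=1.0, iters=2000, seed=0):
        rng = np.random.default_rng(seed)
        x = np.asarray(x0, dtype=float)
        fx = f(x)
        for _ in range(iters):
            cand = x + step * rng.standard_normal(x.shape)   # mutate
            fc = f(cand)
            if fc < fx:                      # success: accept and enlarge the step
                x, fx, step = cand, fc, step * 1.1
            else:                            # failure: shrink the step
                step *= 0.9
            if step < 1e-10:                 # small step size as a stationarity proxy
                break
        return x, fx, step

    sphere = lambda v: float(np.sum(np.square(v)))
    print(adaptive_es(sphere, x0=[2.0, -3.0]))
    ```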

  11. Visual search in virtual environments

    NASA Astrophysics Data System (ADS)

    Stark, Lawrence W.; Ezumi, Koji; Nguyen, Tho; Paul, R.; Tharp, Gregory K.; Yamashita, H. I.

    1992-08-01

    A key task in virtual environments is visual search. To obtain quantitative measures of human performance and documentation of visual search strategies, we have used three experimental arrangements--eye, head, and mouse control of viewing windows--by exploiting various combinations of helmet-mounted displays, graphics workstations, and eye movement tracking facilities. We contrast two different categories of viewing strategies: one for 2D pictures with large numbers of targets and clutter scattered randomly; the other for quasi-natural 3D scenes with targets and non-targets placed in realistic, sensible positions. Different searching behaviors emerge from these contrasting search conditions, reflecting different visual and perceptual modes. A regular 'searchpattern' is a systematic, repetitive, idiosyncratic sequence of movements carrying the eye to cover the entire 2D scene. Irregular 'searchpatterns' take advantage of wide windows and the wide human visual lobe; here, hierarchical detection and recognition are performed with the appropriate capabilities of the 'two visual systems'. The 'searchpath', also efficient, repetitive, and idiosyncratic, provides only a small set of fixations to continually check the smaller number of targets in the naturalistic 3D scene; likely, searchpaths are driven by top-down spatial models. If the viewed object is known and can be named, then a hypothesized, top-down cognitive model drives active looking in the 'scanpath' mode, again continually checking important subfeatures of the object. Spatial models for searchpaths may be primitive predecessors, in the evolutionary history of animals, of cognitive models for scanpaths.

  12. Binocularity and visual search-Revisited.

    PubMed

    Zou, Bochao; Utochkin, Igor S; Liu, Yue; Wolfe, Jeremy M

    2017-02-01

    Binocular rivalry is a phenomenon of visual competition in which perception alternates between two monocular images. When the two eyes' images differ only in luminance, observers may perceive shininess, a form of rivalry called binocular luster. Does dichoptic information guide attention in visual search? Wolfe and Franzel (Perception & Psychophysics, 44(1), 81-93, 1988) reported that rivalry could guide attention only weakly, but that luster (shininess) "popped out," producing very shallow Reaction Time (RT) × Set Size functions. In this study, we have revisited the topic with new and improved stimuli. By using a checkerboard pattern in rivalry experiments, we found that search for rivalry can be more efficient (16 ms/item) than search for a standard rivalrous grating (30 ms/item). The checkerboard may reduce distracting orientation signals that masked the salience of rivalry between simple orthogonal gratings. Lustrous stimuli did not pop out when potential contrast and luminance artifacts were reduced. However, search efficiency was substantially improved when luster was added to the search target. Both rivalry and luster tasks can produce search asymmetries, as is characteristic of guiding features in search. These results suggest that interocular differences that produce rivalry or luster can guide attention, but these effects are relatively weak and can be hidden by other features, such as luminance and orientation, in visual search tasks.

  13. Spatial Selectivity in Visual Search.

    DTIC Science & Technology

    1980-10-01

    A persistent finding in visual search experiments is the display size effect. In general, as the number of nontarget display characters (distractors) ... probability that at least one distractor will be mistaken for a target. This increase in "noise" in the decision process leads to decreases in detection ... Shiffrin (1977) call consistent mapping (CM), in which target and distractor characters never exchange roles. CM training leads to "automatic detection"

  14. Development of a Computerized Visual Search Test

    ERIC Educational Resources Information Center

    Reid, Denise; Babani, Harsha; Jon, Eugenia

    2009-01-01

    Visual attention and visual search are features of visual perception, essential for attending to and scanning one's environment while engaging in daily occupations. This study describes the development of a novel web-based test of visual search. The development information, including the format of the test, is described. The test was designed…

  15. Designing a Visual Interface for Online Searching.

    ERIC Educational Resources Information Center

    Lin, Xia

    1999-01-01

    "MedLine Search Assistant" is a new interface for MEDLINE searching that improves both search precision and recall by helping the user convert a free text search to a controlled vocabulary-based search in a visual environment. Features of the interface are described, followed by details of the conceptual design and the physical design of…

  16. Interhemispheric integration in visual search

    PubMed Central

    Shipp, Stewart

    2011-01-01

    The search task of Luck, Hillyard, Mangun and Gazzaniga (1989) was optimised to test for the presence of a bilateral field advantage in the visual search capabilities of normal subjects. The modified design used geometrically regular arrays of 2, 4 or 8 items restricted to hemifields delineated by the vertical or horizontal meridian; the target, if present, appeared at one of two fixed positions per quadrant at an eccentricity of 11 deg. Group and individual performance data were analysed in terms of the slope of response time against display-size functions (‘RT slope’). Averaging performance across all conditions save display mode (bilateral vs. unilateral) revealed a significant bilateral advantage in the form of a 21% increase in apparent item scanning speed for target detection; in the absence of a target, bilateral displays gave a 5% increase in speed that was not significant. Factor analysis by ANOVA confirmed this main effect of display mode, and also revealed several higher order interactions with display geometry, indicating that the bilateral advantage was masked at certain target positions by a crowding-like effect. In a numerical model of search efficiency (i.e. RT slope), bilateral advantage was parameterised by an interhemispheric ‘transfer factor’ (T) that governs the strength of the ipsilateral representation of distractors, and modifies the level of intrahemispheric competition with the target. The factor T was found to be higher in superior field than inferior field; this result held for the modelled data of each individual subject, as well as the group, representing a uniform tendency for the bilateral advantage to be more prominent in inferior field. In fact statistical analysis and modelling of search efficiency showed that the geometrical display factors (target polar and quadrantic location, and associated crowding effects) were all remarkably consistent across subjects. Greater variability was inferred within a fixed, decisional

  17. Distributed Search and Pattern Matching

    NASA Astrophysics Data System (ADS)

    Ahmed, Reaz; Boutaba, Raouf

    Peer-to-peer (P2P) technology has triggered a wide range of distributed applications including file-sharing, distributed XML databases, distributed computing, server-less web publishing and networked resource/service sharing. Despite the diversity in applications, these systems share common requirements for searching due to transitory node populations and content volatility. In such a dynamic environment, users do not have exact information about available resources, and queries are based on partial information. This requires the search mechanism to be flexible. On the other hand, the search mechanism is required to be bandwidth-efficient to support large networks. A variety of search techniques has been proposed to provide satisfactory solutions to the conflicting requirements of search efficiency and flexibility. This chapter highlights the search requirements in large-scale distributed systems and the ability of existing distributed search techniques to satisfy these requirements. Representative search techniques from three application domains, namely P2P content sharing, service discovery and distributed XML databases, are considered. An abstract problem formulation called Distributed Pattern Matching (DPM) is presented as well. The DPM framework can be used as a common ground for addressing the search problem in these three application domains.

  18. Collinearity Impairs Local Element Visual Search

    ERIC Educational Resources Information Center

    Jingling, Li; Tseng, Chia-Huei

    2013-01-01

    In visual searches, stimuli following the law of good continuity attract attention to the global structure and receive attentional priority. Also, targets that have unique features are of high feature contrast and capture attention in visual search. We report on a salient global structure combined with a high orientation contrast to the…

  19. Visual Search for Faces with Emotional Expressions

    ERIC Educational Resources Information Center

    Frischen, Alexandra; Eastwood, John D.; Smilek, Daniel

    2008-01-01

    The goal of this review is to critically examine contradictory findings in the study of visual search for emotionally expressive faces. Several key issues are addressed: Can emotional faces be processed preattentively and guide attention? What properties of these faces influence search efficiency? Is search moderated by the emotional state of the…

  20. Visual Scan Adaptation During Repeated Visual Search

    DTIC Science & Technology

    2010-01-01

    repeated distractor-target configurations both require environmental stability. For stable distractor-target configurations, Chun and Jiang (1998) have ... demonstrated search time savings from repeating distractor-target configurations, and Song and Jiang (2005) demonstrated that as little as 25% of the ... search environment (i.e., two distractor locations and the target location out of 12 total locations per trial) repeated from trial to trial resulted

  1. Searching social networks for subgraph patterns

    NASA Astrophysics Data System (ADS)

    Ogaard, Kirk; Kase, Sue; Roy, Heather; Nagi, Rakesh; Sambhoos, Kedar; Sudit, Moises

    2013-06-01

    Software tools for Social Network Analysis (SNA) are being developed which support various types of analysis of social networks extracted from social media websites (e.g., Twitter). Once extracted and stored in a database, such social networks are amenable to analysis by SNA software. This data analysis often involves searching for occurrences of various subgraph patterns (i.e., graphical representations of entities and relationships). The authors have developed the Graph Matching Toolkit (GMT), which provides an intuitive Graphical User Interface (GUI) for a heuristic graph matching algorithm called the Truncated Search Tree (TruST) algorithm. GMT is a visual interface for graph matching algorithms processing large social networks. GMT enables an analyst to draw a subgraph pattern by using a mouse to select categories and labels for nodes and links from drop-down menus. GMT then executes the TruST algorithm to find the top five occurrences of the subgraph pattern within the social network stored in the database. GMT was tested using a simulated counter-insurgency dataset consisting of cellular phone communications within a populated area of operations in Iraq. The results indicated that GMT (when executing the TruST graph matching algorithm) is a time-efficient approach to searching large social networks. GMT's visual interface to a graph matching algorithm enables intelligence analysts to quickly analyze and summarize the large amounts of data necessary to produce actionable intelligence.
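    The kind of labeled subgraph query GMT supports can be sketched with networkx's exact subgraph isomorphism matcher. The TruST heuristic itself is not reimplemented here, and the node categories below are invented.

    ```python
    # Hedged sketch: find occurrences of a labeled subgraph pattern in a toy network.
    import networkx as nx
    from networkx.algorithms import isomorphism

    # Toy "social network" with categorized nodes.
    G = nx.Graph()
    G.add_nodes_from([(1, {"category": "person"}), (2, {"category": "person"}),
                      (3, {"category": "phone"}), (4, {"category": "location"})])
    G.add_edges_from([(1, 3), (2, 3), (2, 4)])

    # Subgraph pattern: two persons linked through one phone.
    P = nx.Graph()
    P.add_nodes_from([("a", {"category": "person"}), ("b", {"category": "person"}),
                      ("c", {"category": "phone"})])
    P.add_edges_from([("a", "c"), ("b", "c")])

    matcher = isomorphism.GraphMatcher(
        G, P, node_match=isomorphism.categorical_node_match("category", None))
    for mapping in matcher.subgraph_isomorphisms_iter():
        print(mapping)   # maps network nodes onto pattern nodes for each occurrence
    ```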

  2. Individual differences predict low prevalence visual search performance.

    PubMed

    Peltier, Chad; Becker, Mark W

    2017-01-01

    Critical real-world visual search tasks such as radiology and baggage screening rely on the detection of rare targets. When targets are rare, observers search for a relatively short amount of time and have a high miss rate, a pattern of results known as the low prevalence effect. Attempts to improve the search for rare targets have been unsuccessful or resulted in an increase in detections at the price of more false alarms. As an alternative to improving visual search performance through experimental manipulations, an individual differences approach found that those with higher working memory capacity were better at finding rare targets. We build on the individual differences approach and assess 141 observers' visual working memory capacity (vWMC), vigilance, attentional control, big five personality traits, and performance in both high and low prevalence search tasks. vWMC, vigilance, attentional control, high prevalence visual search performance, and level of introversion were all significant predictors of low prevalence search accuracy, and together account for more than 50% of the variance in search performance. With the exception of vigilance, these factors are also significant predictors of reaction time; better performance was associated with longer reaction times, suggesting these factors identify observers who maintain relatively high quitting thresholds, even with low target prevalence. Our results suggest that a quick and easy-to-administer battery of tasks can identify observers who are likely to perform well in low prevalence search tasks, and these predictor variables are associated with higher quitting thresholds, leading to higher accuracy.
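    The predictive analysis described above amounts to a multiple regression of low-prevalence accuracy on the individual-difference battery. A minimal sketch with invented data, reusing the abstract's predictor names:

    ```python
    # Hedged sketch: OLS regression of low-prevalence accuracy on predictors.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 141
    df = pd.DataFrame({
        "vwmc": rng.normal(size=n),
        "vigilance": rng.normal(size=n),
        "attentional_control": rng.normal(size=n),
        "high_prev_accuracy": rng.normal(size=n),
        "introversion": rng.normal(size=n),
    })
    # Placeholder outcome loosely related to two of the predictors.
    df["low_prev_accuracy"] = (0.3 * df["vwmc"] + 0.2 * df["vigilance"]
                               + rng.normal(scale=0.8, size=n))

    X = sm.add_constant(df.drop(columns="low_prev_accuracy"))
    model = sm.OLS(df["low_prev_accuracy"], X).fit()
    print(model.summary())   # R-squared plays the role of "variance accounted for"
    ```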

  3. Visual Thinking Patterns. [Teaching Guide.

    ERIC Educational Resources Information Center

    Crosbie, Helen

    Theories and techniques for fostering creativity are described because all students, regardless of intelligence or talent, have artistic ability that should be developed. Four basic visual viewpoints have been identified: the expressive colorist, the hands-on formist, the neat observant designer, and the pattern-oriented draftsperson. These visual…

  4. Visual Testing: Searching for Guidelines.

    ERIC Educational Resources Information Center

    Van Gendt, Kitty; Verhagen, Plon

    An experiment was conducted to investigate the influence of the variables "realism" and "context" on the performance of biology students on a visual test about the anatomy of a rat. The instruction was primarily visual with additional verbal information like Latin names and practical information about the learning task: dissecting a rat to gain…

  5. LoyalTracker: Visualizing Loyalty Dynamics in Search Engines.

    PubMed

    Shi, Conglei; Wu, Yingcai; Liu, Shixia; Zhou, Hong; Qu, Huamin

    2014-12-01

    The huge amount of user log data collected by search engine providers creates new opportunities to understand user loyalty and defection behavior at an unprecedented scale. However, it also poses a great challenge to analyze this behavior and glean insights from such complex, large-scale data. In this paper, we introduce LoyalTracker, a visual analytics system to track user loyalty and switching behavior towards multiple search engines from the vast amount of user log data. We propose a new interactive visualization technique (flow view) based on a flow metaphor, which conveys a proper visual summary of the dynamics of user loyalty of thousands of users over time. Two other visualization techniques, a density map and a word cloud, are integrated to enable analysts to gain further insights into the patterns identified by the flow view. Case studies and interviews with domain experts were conducted to demonstrate the usefulness of our technique in understanding user loyalty and switching behavior in search engines.

  6. Visualizing Dynamic Bitcoin Transaction Patterns.

    PubMed

    McGinn, Dan; Birch, David; Akroyd, David; Molina-Solana, Miguel; Guo, Yike; Knottenbelt, William J

    2016-06-01

    This work presents a systemic top-down visualization of Bitcoin transaction activity to explore dynamically generated patterns of algorithmic behavior. Bitcoin dominates the cryptocurrency markets and presents researchers with a rich source of real-time transactional data. The pseudonymous yet public nature of the data presents opportunities for the discovery of human and algorithmic behavioral patterns of interest to many parties such as financial regulators, protocol designers, and security analysts. However, retaining visual fidelity to the underlying data to retain a fuller understanding of activity within the network remains challenging, particularly in real time. We expose an effective force-directed graph visualization employed in our large-scale data observation facility to accelerate this data exploration and derive useful insight among domain experts and the general public alike. The high-fidelity visualizations demonstrated in this article allowed for collaborative discovery of unexpected high frequency transaction patterns, including automated laundering operations, and the evolution of multiple distinct algorithmic denial of service attacks on the Bitcoin network.

  7. Visualizing Dynamic Bitcoin Transaction Patterns

    PubMed Central

    McGinn, Dan; Birch, David; Akroyd, David; Molina-Solana, Miguel; Guo, Yike; Knottenbelt, William J.

    2016-01-01

    This work presents a systemic top-down visualization of Bitcoin transaction activity to explore dynamically generated patterns of algorithmic behavior. Bitcoin dominates the cryptocurrency markets and presents researchers with a rich source of real-time transactional data. The pseudonymous yet public nature of the data presents opportunities for the discovery of human and algorithmic behavioral patterns of interest to many parties such as financial regulators, protocol designers, and security analysts. However, retaining visual fidelity to the underlying data to retain a fuller understanding of activity within the network remains challenging, particularly in real time. We expose an effective force-directed graph visualization employed in our large-scale data observation facility to accelerate this data exploration and derive useful insight among domain experts and the general public alike. The high-fidelity visualizations demonstrated in this article allowed for collaborative discovery of unexpected high frequency transaction patterns, including automated laundering operations, and the evolution of multiple distinct algorithmic denial of service attacks on the Bitcoin network. PMID:27441715

  8. Visual search engine for product images

    NASA Astrophysics Data System (ADS)

    Lin, Xiaofan; Gokturk, Burak; Sumengen, Baris; Vu, Diem

    2008-01-01

    Nowadays there are many product comparison web sites, but most of them use only text information. This paper introduces a novel visual search engine for product images, which provides a brand-new way of visually locating products through Content-based Image Retrieval (CBIR) technology. We discuss the unique technical challenges, solutions, and experimental results in the design and implementation of this system.

  9. Pattern Search Algorithms for Bound Constrained Minimization

    NASA Technical Reports Server (NTRS)

    Lewis, Robert Michael; Torczon, Virginia

    1996-01-01

    We present a convergence theory for pattern search methods for solving bound constrained nonlinear programs. The analysis relies on the abstract structure of pattern search methods and an understanding of how the pattern interacts with the bound constraints. This analysis makes it possible to develop pattern search methods for bound constrained problems while only slightly restricting the flexibility present in pattern search methods for unconstrained problems. We prove global convergence despite the fact that pattern search methods do not have explicit information concerning the gradient and its projection onto the feasible region and consequently are unable to enforce explicitly a notion of sufficient feasible decrease.
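    A compass-style pattern search for a bound-constrained problem can be sketched in a few lines: poll the coordinate directions, clip trial points to the bounds, and contract the step length after unsuccessful polls. This illustrates the class of methods analyzed, not the paper's specific algorithm.

    ```python
    # Hedged sketch: compass/pattern search with simple bound handling.
    import numpy as np

    def pattern_search(f, x0, lower, upper, step=1.0, tol=1e-8, max_iter=10000):
        x = np.clip(np.asarray(x0, dtype=float), lower, upper)
        fx = f(x)
        directions = np.vstack([np.eye(len(x)), -np.eye(len(x))])
        for _ in range(max_iter):
            improved = False
            for d in directions:
                trial = np.clip(x + step * d, lower, upper)   # stay feasible
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
                    break
            if not improved:
                step *= 0.5            # contract the pattern
                if step < tol:         # small step length signals (near-)stationarity
                    break
        return x, fx

    quad = lambda v: (v[0] - 2.0) ** 2 + (v[1] + 1.0) ** 2
    print(pattern_search(quad, x0=[0.0, 0.0], lower=[-1.0, -1.0], upper=[1.5, 1.5]))
    ```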

  10. The effect of visualization on visual search performance: Does visualization trump vision?

    PubMed

    Clarke, Alasdair D F; Barr, Courtney; Hunt, Amelia R

    2016-11-01

    Striking results recently demonstrated that visualizing search for a target can facilitate visual search for that target on subsequent trials (Reinhart et al. 2015). This visualization benefit was even greater than the benefit of actually repeating search for the target. We registered a close replication and generalization of the original experiment. Our results show clear benefits of repeatedly searching for the same target, but we found no benefit associated with visualization. The difficulty of the search task and the ability to monitor compliance with instructions to visualize are both possible explanations for the failure to replicate, and both should be carefully considered in future research exploring this interesting phenomenon.

  11. Temporal stability of visual search-driven biometrics

    NASA Astrophysics Data System (ADS)

    Yoon, Hong-Jun; Carmichael, Tandy R.; Tourassi, Georgia

    2015-03-01

    Previously, we have shown the potential of using an individual's visual search pattern as a possible biometric. That study focused on viewing images displaying dot-patterns with different spatial relationships to determine which pattern can be more effective in establishing the identity of an individual. In this follow-up study we investigated the temporal stability of this biometric. We performed an experiment with 16 individuals asked to search for a predetermined feature of a random-dot pattern as we tracked their eye movements. Each participant completed four testing sessions consisting of two dot patterns repeated twice. One dot pattern displayed concentric circles shifted to the left or right side of the screen overlaid with visual noise, and participants were asked which side the circles were centered on. The second dot-pattern displayed a number of circles (between 0 and 4) scattered on the screen overlaid with visual noise, and participants were asked how many circles they could identify. Each session contained 5 untracked tutorial questions and 50 tracked test questions (200 total tracked questions per participant). To create each participant's "fingerprint", we constructed a Hidden Markov Model (HMM) from the gaze data representing the underlying visual search and cognitive process. The accuracy of the derived HMM models was evaluated using cross-validation for various time-dependent train-test conditions. Subject identification accuracy ranged from 17.6% to 41.8% for all conditions, which is significantly higher than random guessing (1/16 = 6.25%). The results suggest that visual search pattern is a promising, temporally stable personalized fingerprint of perceptual organization.
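    The fingerprinting idea above (fit a per-subject hidden Markov model to gaze data, then assign a held-out session to whichever subject's model scores it highest) can be sketched with hmmlearn. Feature choice, state count, and data below are assumptions; the study's actual model construction and cross-validation are not reproduced.

    ```python
    # Hedged sketch: per-subject Gaussian HMMs over gaze coordinates, with
    # identification by maximum log-likelihood on an unlabeled probe session.
    import numpy as np
    from hmmlearn.hmm import GaussianHMM

    def fit_subject_model(gaze_xy, n_states=3, seed=0):
        model = GaussianHMM(n_components=n_states, covariance_type="diag",
                            n_iter=50, random_state=seed)
        model.fit(gaze_xy)             # gaze_xy: (n_samples, 2) fixation coordinates
        return model

    def identify(models, gaze_xy):
        scores = {sid: m.score(gaze_xy) for sid, m in models.items()}
        return max(scores, key=scores.get)   # subject whose model fits best

    rng = np.random.default_rng(0)
    # Placeholder gaze traces for two subjects with different spatial biases.
    train = {"s1": rng.normal([0.3, 0.5], 0.1, (400, 2)),
             "s2": rng.normal([0.7, 0.4], 0.1, (400, 2))}
    models = {sid: fit_subject_model(xy, seed=i)
              for i, (sid, xy) in enumerate(train.items())}
    probe = rng.normal([0.3, 0.5], 0.1, (100, 2))    # unlabeled session
    print("identified as:", identify(models, probe))
    ```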

  12. Temporal Stability of Visual Search-Driven Biometrics

    SciTech Connect

    Yoon, Hong-Jun; Carmichael, Tandy; Tourassi, Georgia

    2015-01-01

    Previously, we have shown the potential of using an individual's visual search pattern as a possible biometric. That study focused on viewing images displaying dot-patterns with different spatial relationships to determine which pattern can be more effective in establishing the identity of an individual. In this follow-up study we investigated the temporal stability of this biometric. We performed an experiment with 16 individuals asked to search for a predetermined feature of a random-dot pattern as we tracked their eye movements. Each participant completed four testing sessions consisting of two dot patterns repeated twice. One dot pattern displayed concentric circles shifted to the left or right side of the screen overlaid with visual noise, and participants were asked which side the circles were centered on. The second dot-pattern displayed a number of circles (between 0 and 4) scattered on the screen overlaid with visual noise, and participants were asked how many circles they could identify. Each session contained 5 untracked tutorial questions and 50 tracked test questions (200 total tracked questions per participant). To create each participant's "fingerprint", we constructed a Hidden Markov Model (HMM) from the gaze data representing the underlying visual search and cognitive process. The accuracy of the derived HMM models was evaluated using cross-validation for various time-dependent train-test conditions. Subject identification accuracy ranged from 17.6% to 41.8% for all conditions, which is significantly higher than random guessing (1/16 = 6.25%). The results suggest that visual search pattern is a promising, fairly stable personalized fingerprint of perceptual organization.

  13. Words, shape, visual search and visual working memory in 3-year-old children

    PubMed Central

    Vales, Catarina; Smith, Linda B.

    2014-01-01

    Do words cue children’s visual attention, and if so, what are the relevant mechanisms? Across four experiments, 3-year-old children (N = 163) were tested in visual search tasks in which targets were cued with only a visual preview versus a visual preview and a spoken name. The experiments were designed to determine whether labels facilitated search times and to examine one route through which labels could have their effect: By influencing the visual working memory representation of the target. The targets and distractors were pictures of instances of basic-level known categories and the labels were the common name for the target category. We predicted that the label would enhance the visual working memory representation of the target object, guiding attention to objects that better matched the target representation. Experiments 1 and 2 used conjunctive search tasks, and Experiment 3 varied shape discriminability between targets and distractors. Experiment 4 compared the effects of labels to repeated presentations of the visual target, which should also influence the working memory representation of the target. The overall pattern fits contemporary theories of how the contents of visual working memory interact with visual search and attention, and shows that even in very young children heard words affect the processing of visual information. PMID:24720802

  14. Words, shape, visual search and visual working memory in 3-year-old children.

    PubMed

    Vales, Catarina; Smith, Linda B

    2015-01-01

    Do words cue children's visual attention, and if so, what are the relevant mechanisms? Across four experiments, 3-year-old children (N = 163) were tested in visual search tasks in which targets were cued with only a visual preview versus a visual preview and a spoken name. The experiments were designed to determine whether labels facilitated search times and to examine one route through which labels could have their effect: By influencing the visual working memory representation of the target. The targets and distractors were pictures of instances of basic-level known categories and the labels were the common name for the target category. We predicted that the label would enhance the visual working memory representation of the target object, guiding attention to objects that better matched the target representation. Experiments 1 and 2 used conjunctive search tasks, and Experiment 3 varied shape discriminability between targets and distractors. Experiment 4 compared the effects of labels to repeated presentations of the visual target, which should also influence the working memory representation of the target. The overall pattern fits contemporary theories of how the contents of visual working memory interact with visual search and attention, and shows that even in very young children heard words affect the processing of visual information.

  15. Visual acuity, color vision, and visual search performance at sea.

    PubMed

    Donderi, D C

    1994-03-01

    Visual acuity and color vision were tested during a search and rescue exercise at sea. Fifty-seven watchkeepers searched for orange and yellow life rafts during daylight and for lighted and unlighted life rafts at night with night vision goggles. There were 588 individual watches of one hour each. Measures of wind, waves, and weather were used as covariates. Daytime percentage detection was positively correlated with low-contrast visual acuity and negatively correlated with error scores on Dvorine pseudoisochromatic plates and the Farnsworth color test. Performance was better during the first half-hour of the watch. Efficiency calculations show that color vision selective screening at one standard deviation above the mean would increase daylight search performance by 10% and that one standard deviation visual acuity selection screening would increase performance by 12%. There was no relationship between either acuity or color vision and life raft detection using night vision goggles.

  16. Functional brain organization of preparatory attentional control in visual search.

    PubMed

    Bourke, Patrick; Brown, Steven; Ngan, Elton; Liotti, Mario

    2013-09-12

    Looking for an object that may be present in a cluttered visual display requires an advanced specification of that object to be created and then matched against the incoming visual input. Here, fast event-related fMRI was used to identify the brain networks that are active when preparing to search for a visual target. By isolating the preparation phase of the task it has been possible to show that for an identical stimulus, different patterns of cortical activation occur depending on whether participants anticipate a 'feature' or a 'conjunction' search task. When anticipating a conjunction search task, there was more robust activation in ventral occipital areas, new activity in the transverse occipital sulci and right posterior intraparietal sulcus. In addition, preparing for either type of search activated ventral striatum and lateral cerebellum. These results suggest that when participants anticipate a demanding search task, they develop a different advanced representation of a visually identical target stimulus compared to when they anticipate a nondemanding search.

  17. How visual search relates to visual diagnostic performance: a narrative systematic review of eye-tracking research in radiology.

    PubMed

    van der Gijp, A; Ravesloot, C J; Jarodzka, H; van der Schaaf, M F; van der Schaaf, I C; van Schaik, J P J; Ten Cate, Th J

    2016-07-19

    Eye tracking research has been conducted for decades to gain understanding of visual diagnosis such as in radiology. For educational purposes, it is important to identify visual search patterns that are related to high perceptual performance and to identify effective teaching strategies. This review of eye-tracking literature in the radiology domain aims to identify visual search patterns associated with high perceptual performance. Databases PubMed, EMBASE, ERIC, PsycINFO, Scopus and Web of Science were searched using 'visual perception' OR 'eye tracking' AND 'radiology' and synonyms. Two authors independently screened search results and included eye tracking studies concerning visual skills in radiology published between January 1, 1994 and July 31, 2015. Two authors independently assessed study quality with the Medical Education Research Study Quality Instrument, and extracted study data with respect to design, participant and task characteristics, and variables. A thematic analysis was conducted to extract and arrange study results, and a textual narrative synthesis was applied for data integration and interpretation. The search resulted in 22 relevant full-text articles. Thematic analysis resulted in six themes that informed the relation between visual search and level of expertise: (1) time on task, (2) eye movement characteristics of experts, (3) differences in visual attention, (4) visual search patterns, (5) search patterns in cross sectional stack imaging, and (6) teaching visual search strategies. Expert search was found to be characterized by a global-focal search pattern, which represents an initial global impression, followed by a detailed, focal search-to-find mode. Specific task-related search patterns, like drilling through CT scans and systematic search in chest X-rays, were found to be related to high expert levels. One study investigated teaching of visual search strategies, and did not find a significant effect on perceptual performance. Eye

  18. Visualizing Search Behavior with Adaptive Discriminations

    PubMed Central

    Cook, Robert G.; Qadri, Muhammad A. J.

    2014-01-01

    We examined different aspects of the visual search behavior of a pigeon using an open-ended, adaptive testing procedure controlled by a genetic algorithm. The animal had to accurately search for and peck a gray target element randomly located from among a variable number of surrounding darker and lighter distractor elements. Display composition was controlled by a genetic algorithm involving the multivariate configuration of different parameters or genes (number of distractors, element size, shape, spacing, target brightness, and distractor brightness). Sessions were composed of random displays, testing randomized combinations of these genes, and selected displays, representing the varied descendants of displays correctly identified by the pigeon. Testing a larger number of random displays than done previously, it was found that the bird’s solution to the search task was highly stable and did not change with extensive experience in the task. The location and shape of this attractor was visualized using multivariate behavioral surfaces in which element size and the number of distractors were the most important factors controlling search accuracy and search time. The resulting visualizations of the bird’s search behavior are discussed with reference to the potential of using adaptive, open-ended experimental techniques for investigating animal cognition and their implications for Bond and Kamil’s innovative development of virtual ecologies using an analogous methodology. PMID:24370702
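    The generate-test-vary loop that the genetic algorithm performs over display "genes" can be sketched as below. The gene ranges, mutation scheme, and the stand-in for the pigeon's response are all invented for illustration.

    ```python
    # Hedged sketch: mutate display parameter vectors and keep varying the
    # descendants of displays the (simulated) observer got correct.
    import numpy as np

    GENES = ["n_distractors", "element_size", "spacing",
             "target_brightness", "distractor_brightness"]
    BOUNDS = np.array([[4, 40], [5, 30], [1, 10], [0.2, 0.8], [0.0, 1.0]], dtype=float)
    rng = np.random.default_rng(0)

    def random_display():
        return BOUNDS[:, 0] + rng.random(len(GENES)) * (BOUNDS[:, 1] - BOUNDS[:, 0])

    def mutate(parent, sigma=0.1):
        child = parent + rng.normal(0, sigma, len(GENES)) * (BOUNDS[:, 1] - BOUNDS[:, 0])
        return np.clip(child, BOUNDS[:, 0], BOUNDS[:, 1])

    def simulated_correct(display):
        # Stand-in observer: accuracy falls with more, larger distractor elements.
        p = 1.0 / (1.0 + 0.03 * display[0] + 0.02 * display[1])
        return rng.random() < p

    population = [random_display() for _ in range(20)]
    for generation in range(10):
        solved = [d for d in population if simulated_correct(d)]
        parents = solved if solved else population
        population = [mutate(parents[rng.integers(len(parents))]) for _ in range(20)]
    print(dict(zip(GENES, np.round(np.mean(population, axis=0), 2))))
    ```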

  19. Pattern Search Methods for Linearly Constrained Minimization

    NASA Technical Reports Server (NTRS)

    Lewis, Robert Michael; Torczon, Virginia

    1998-01-01

    We extend pattern search methods to linearly constrained minimization. We develop a general class of feasible point pattern search algorithms and prove global convergence to a Karush-Kuhn-Tucker point. As in the case of unconstrained minimization, pattern search methods for linearly constrained problems accomplish this without explicit recourse to the gradient or the directional derivative. Key to the analysis of the algorithms is the way in which the local search patterns conform to the geometry of the boundary of the feasible region.

  20. On the Local Convergence of Pattern Search

    NASA Technical Reports Server (NTRS)

    Dolan, Elizabeth D.; Lewis, Robert Michael; Torczon, Virginia; Bushnell, Dennis M. (Technical Monitor)

    2000-01-01

    We examine the local convergence properties of pattern search methods, complementing the previously established global convergence properties for this class of algorithms. We show that the step-length control parameter which appears in the definition of pattern search algorithms provides a reliable asymptotic measure of first-order stationarity. This gives an analytical justification for a traditional stopping criterion for pattern search methods. Using this measure of first-order stationarity, we analyze the behavior of pattern search in the neighborhood of an isolated local minimizer. We show that a recognizable subsequence converges r-linearly to the minimizer.

  1. Visual pattern degradation based image quality assessment

    NASA Astrophysics Data System (ADS)

    Wu, Jinjian; Li, Leida; Shi, Guangming; Lin, Weisi; Wan, Wenfei

    2015-08-01

    In this paper, we introduce a visual pattern degradation based full-reference (FR) image quality assessment (IQA) method. Research on visual recognition indicates that the human visual system (HVS) is highly adaptive at extracting visual structures for scene understanding. Existing structure degradation based IQA methods mainly take local luminance contrast to represent structure, and measure quality as degradation of luminance contrast. In this paper, we suggest that structure includes not only luminance contrast but also orientation information. Therefore, we analyze the orientation characteristic for structure description. Inspired by the orientation selectivity mechanism in the primary visual cortex, we introduce a novel visual pattern to represent the structure of a local region. Then, quality is measured as the degradation of both luminance contrast and visual pattern. Experimental results on five benchmark databases demonstrate that the proposed visual pattern can effectively represent visual structure and that the proposed IQA method performs better than existing IQA metrics.

  2. Selective scanpath repetition during memory-guided visual search

    PubMed Central

    Wynn, Jordana S.; Bone, Michael B.; Dragan, Michelle C.; Hoffman, Kari L.; Buchsbaum, Bradley R.; Ryan, Jennifer D.

    2016-01-01

    Visual search efficiency improves with repetition of a search display, yet the mechanisms behind these processing gains remain unclear. According to Scanpath Theory, memory retrieval is mediated by repetition of the pattern of eye movements or “scanpath” elicited during stimulus encoding. Using this framework, we tested the prediction that scanpath recapitulation reflects relational memory guidance during repeated search events. Younger and older subjects were instructed to find changing targets within flickering naturalistic scenes. Search efficiency (search time, number of fixations, fixation duration) and scanpath similarity (repetition) were compared across age groups for novel (V1) and repeated (V2) search events. Younger adults outperformed older adults on all efficiency measures at both V1 and V2, while the search time benefit for repeated viewing (V1–V2) did not differ by age. Fixation-binned scanpath similarity analyses revealed repetition of initial and final (but not middle) V1 fixations at V2, with older adults repeating more initial V1 fixations than young adults. In young adults only, early scanpath similarity correlated negatively with search time at test, indicating increased efficiency, whereas the similarity of V2 fixations to middle V1 fixations predicted poor search performance. We conclude that scanpath compression mediates increased search efficiency by selectively recapitulating encoding fixations that provide goal-relevant input. Extending Scanpath Theory, results suggest that scanpath repetition varies as a function of time and memory integrity. PMID:27570471
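    Scanpath similarity can be computed in many ways; one simple sketch bins fixations into a spatial grid and applies a string edit distance to the resulting cell sequences. This is illustrative only and does not reproduce the fixation-binned analysis used in the study.

    ```python
    # Hedged sketch: grid-based scanpath similarity between two viewings.
    import numpy as np

    def to_grid_sequence(fixations_xy, n_bins=5):
        """Map (x, y) fixations in [0, 1] to a sequence of grid-cell indices."""
        cells = np.clip((np.asarray(fixations_xy) * n_bins).astype(int), 0, n_bins - 1)
        return [int(gx * n_bins + gy) for gx, gy in cells]

    def levenshtein(a, b):
        d = np.arange(len(b) + 1, dtype=int)
        for i, ca in enumerate(a, 1):
            prev, d[0] = d[0], i
            for j, cb in enumerate(b, 1):
                prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1, prev + (ca != cb))
        return int(d[-1])

    def scanpath_similarity(v1_xy, v2_xy):
        s1, s2 = to_grid_sequence(v1_xy), to_grid_sequence(v2_xy)
        return 1.0 - levenshtein(s1, s2) / max(len(s1), len(s2))

    v1 = [(0.1, 0.1), (0.5, 0.5), (0.8, 0.2), (0.9, 0.9)]   # encoding fixations
    v2 = [(0.1, 0.1), (0.5, 0.5), (0.9, 0.9)]               # repeated viewing
    print("scanpath similarity:", scanpath_similarity(v1, v2))
    ```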

  3. Visual search strategy and perceptual organization covary with individual preference and structural complexity.

    PubMed

    Hogeboom, M; van Leeuwen, C

    1997-02-01

    The pattern of interactions between visual search profile (serial versus nonserial) and perceptual organization type (functionally local versus global) was investigated in three experiments. The task was a matching judgment performed on two separate figures that fit in a jigsaw-puzzle fashion. Reaction times and errors showed that serial search preferably goes together with local organization and nonserial search with global organization. Choice of search profile and organization type depend on task and individual preference. Structural complexity in the target area reinforces the combination of local organization and serial search. This combination was also chosen by subjects preferring accuracy. The combination of global organization and nonserial search was chosen with simpler targets or by subjects preferring speed. The results support an interactive notion of perceptual organization and search. Perceptual organization type is determined by individual preferences in combination with visual search task demands; visual search is guided by the specific organization of the stimulus pattern.

  4. Cultural Differences in Visual Search for Geometric Figures.

    PubMed

    Ueda, Yoshiyuki; Chen, Lei; Kopecky, Jonathon; Cramer, Emily S; Rensink, Ronald A; Meyer, David E; Kitayama, Shinobu; Saiki, Jun

    2017-03-25

    While some studies suggest cultural differences in visual processing, others do not, possibly because the complexity of their tasks draws upon high-level factors that could obscure such effects. To control for this, we examined cultural differences in visual search for geometric figures, a relatively simple task for which the underlying mechanisms are reasonably well known. We replicated earlier results showing that North Americans had a reliable search asymmetry for line length: Search for long among short lines was faster than vice versa. In contrast, Japanese participants showed no asymmetry. This difference did not appear to be affected by stimulus density. Other kinds of stimuli resulted in other patterns of asymmetry differences, suggesting that these are not due to factors such as analytic/holistic processing but are based instead on the target-detection process. In particular, our results indicate that at least some cultural differences reflect different ways of processing early-level features, possibly in response to environmental factors.

  5. Persistence in eye movement during visual search

    NASA Astrophysics Data System (ADS)

    Amor, Tatiana A.; Reis, Saulo D. S.; Campos, Daniel; Herrmann, Hans J.; Andrade, José S.

    2016-02-01

    As with any cognitive task, visual search involves a number of underlying processes that cannot be directly observed and measured. In this sense, the movement of the eyes certainly represents the most explicit and closest connection we can get to the inner mechanisms governing this cognitive activity. Here we show that the process of eye movement during visual search, consisting of sequences of fixations intercalated by saccades, exhibits distinctive persistent behaviors. Initially, by focusing on saccadic directions and intersaccadic angles, we show that the probability distributions of these measures reveal a clear preference of participants towards a reading-like mechanism (geometrical persistence), whose features and potential advantages for searching/foraging are discussed. We then perform a Multifractal Detrended Fluctuation Analysis (MF-DFA) over the time series of jump magnitudes in the eye trajectory and find that it exhibits a typical multifractal behavior arising from the sequential combination of saccades and fixations. By inspecting the time series composed of only fixational movements, our results reveal instead a monofractal behavior with a Hurst exponent indicating the presence of long-range power-law positive correlations (statistical persistence). We expect that our methodological approach can be adopted as a way to understand persistence and strategy-planning during visual search.
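    In its simplest, monofractal form the analysis mentioned above reduces to ordinary detrended fluctuation analysis (DFA); MF-DFA generalizes it with q-th-order moments. The sketch below estimates a DFA exponent from a synthetic series and is not the authors' pipeline.

    ```python
    # Hedged sketch: first-order DFA exponent of a 1-D series (e.g., jump magnitudes).
    import numpy as np

    def dfa_exponent(series, scales=(16, 32, 64, 128, 256)):
        x = np.asarray(series, dtype=float)
        profile = np.cumsum(x - x.mean())              # integrated, mean-removed series
        flucts = []
        for s in scales:
            n_seg = len(profile) // s
            segments = profile[:n_seg * s].reshape(n_seg, s)
            t = np.arange(s)
            rms = []
            for seg in segments:                       # linearly detrend each window
                coef = np.polyfit(t, seg, 1)
                rms.append(np.sqrt(np.mean((seg - np.polyval(coef, t)) ** 2)))
            flucts.append(np.mean(rms))
        # Slope of log F(s) vs log s; ~Hurst exponent for stationary noise-like series.
        return np.polyfit(np.log(scales), np.log(flucts), 1)[0]

    rng = np.random.default_rng(0)
    print("DFA exponent (white noise, expected ~0.5):",
          round(dfa_exponent(rng.standard_normal(4096)), 2))
    ```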

  6. Persistence in eye movement during visual search.

    PubMed

    Amor, Tatiana A; Reis, Saulo D S; Campos, Daniel; Herrmann, Hans J; Andrade, José S

    2016-02-11

    Like any cognitive task, visual search involves a number of underlying processes that cannot be directly observed and measured. In this way, the movement of the eyes certainly represents the most explicit and closest connection we can get to the inner mechanisms governing this cognitive activity. Here we show that the process of eye movement during visual search, consisting of sequences of fixations intercalated by saccades, exhibits distinctive persistent behaviors. Initially, by focusing on saccadic directions and intersaccadic angles, we disclose that the probability distributions of these measures show a clear preference of participants towards a reading-like mechanism (geometrical persistence), whose features and potential advantages for searching/foraging are discussed. We then perform a Multifractal Detrended Fluctuation Analysis (MF-DFA) over the time series of jump magnitudes in the eye trajectory and find that it exhibits a typical multifractal behavior arising from the sequential combination of saccades and fixations. By inspecting the time series composed of only fixational movements, our results reveal instead a monofractal behavior with a Hurst exponent greater than 1/2, which indicates the presence of long-range power-law positive correlations (statistical persistence). We expect that our methodological approach can be adopted as a way to understand persistence and strategy-planning during visual search.

  7. Persistence in eye movement during visual search

    PubMed Central

    Amor, Tatiana A.; Reis, Saulo D. S.; Campos, Daniel; Herrmann, Hans J.; Andrade, José S.

    2016-01-01

    Like any cognitive task, visual search involves a number of underlying processes that cannot be directly observed and measured. In this way, the movement of the eyes certainly represents the most explicit and closest connection we can get to the inner mechanisms governing this cognitive activity. Here we show that the process of eye movement during visual search, consisting of sequences of fixations intercalated by saccades, exhibits distinctive persistent behaviors. Initially, by focusing on saccadic directions and intersaccadic angles, we disclose that the probability distributions of these measures show a clear preference of participants towards a reading-like mechanism (geometrical persistence), whose features and potential advantages for searching/foraging are discussed. We then perform a Multifractal Detrended Fluctuation Analysis (MF-DFA) over the time series of jump magnitudes in the eye trajectory and find that it exhibits a typical multifractal behavior arising from the sequential combination of saccades and fixations. By inspecting the time series composed of only fixational movements, our results reveal instead a monofractal behavior with a Hurst exponent greater than 1/2, which indicates the presence of long-range power-law positive correlations (statistical persistence). We expect that our methodological approach can be adopted as a way to understand persistence and strategy-planning during visual search. PMID:26864680

  8. Investigation of Neural Strategies of Visual Search

    NASA Technical Reports Server (NTRS)

    Krauzlis, Richard J.

    2003-01-01

    The goal of this project was to measure how neurons in the superior colliculus (SC) change their activity during a visual search task. Specifically, we proposed to measure how the activity of these neurons was altered by the discriminability of visual targets and to test how these changes might predict the changes in the subject's performance. The primary rationale for this study was that understanding how the information encoded by these neurons constrains overall search performance would foster the development of better models of human performance. Work performed during the period supported by this grant has achieved these aims. First, we have recorded from neurons in the SC during a visual search task in which the difficulty of the task and the performance of the subject were systematically varied. The results from these single-neuron physiology experiments show that prior to eye movement onset, the difference in activity across the ensemble of neurons reaches a fixed threshold value, reflecting the operation of a winner-take-all mechanism. Second, we have developed a model of eye movement decisions based on the principle of winner-take-all. The model incorporates the idea that the overt saccade choice reflects only one of the multiple saccades prepared during visual discrimination, consistent with our physiological data. The value of the model is that, unlike previous models, it is able to account for both the latency and the percent correct of saccade choices.
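
    A minimal sketch of the winner-take-all idea described above: several competing saccade plans accumulate noisy evidence, and the first to reach a fixed threshold determines both the latency and the choice. All parameter values below are assumptions for illustration, not the authors' fitted model.

      # Hedged sketch of a winner-take-all race among saccade choices.
      import numpy as np

      def race_trial(drifts, threshold=1.0, noise=0.5, dt=0.001, rng=None):
          """Accumulate noisy evidence; return (latency s, index of winner)."""
          rng = rng or np.random.default_rng()
          acc = np.zeros(len(drifts))
          t = 0.0
          while True:
              acc += np.asarray(drifts) * dt + noise * np.sqrt(dt) * rng.normal(size=len(drifts))
              t += dt
              if acc.max() >= threshold:
                  return t, int(acc.argmax())

      # target (index 0) has a higher drift rate than three distractors
      rng = np.random.default_rng(1)
      trials = [race_trial([2.0, 1.2, 1.2, 1.2], rng=rng) for _ in range(500)]
      latencies = [t for t, _ in trials]
      accuracy = np.mean([w == 0 for _, w in trials])
      print(f"mean latency {np.mean(latencies):.3f} s, percent correct {100 * accuracy:.1f}%")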

  9. Top-down visual search in Wimmelbild

    NASA Astrophysics Data System (ADS)

    Bergbauer, Julia; Tari, Sibel

    2013-03-01

    Wimmelbild, which means "teeming figure picture," is a popular genre of visual puzzles: abundant masses of small figures are brought together in complex arrangements to make one scene. It is a picture-hunt game. We discuss what types of computations/processes could underlie the discovery of figures that are hidden due to the distracting influence of the context. One thing is for sure: the processes are unlikely to be purely bottom-up. One possibility is to re-arrange parts and see what happens; as this idea is linked to creativity, there are abundant examples of unconventional part re-organization in modern art. A second possibility is to define what to look for, that is, to formulate the search as a top-down process. We address top-down visual search in Wimmelbild with the help of diffuse distance and curvature coding fields.

  10. Guided Text Search Using Adaptive Visual Analytics

    SciTech Connect

    Steed, Chad A; Symons, Christopher T; Senter, James K; DeNap, Frank A

    2012-10-01

    This research demonstrates the promise of augmenting interactive visualizations with semi-supervised machine learning techniques to improve the discovery of significant associations and insights in the search and analysis of textual information. More specifically, we have developed a system called Gryffin that hosts a unique collection of techniques that facilitate individualized investigative search pertaining to an ever-changing set of analytical questions over an indexed collection of open-source documents related to critical national infrastructure. The Gryffin client hosts dynamic displays of the search results via focus+context record listings, temporal timelines, term-frequency views, and multiple coordinated views. Furthermore, as the analyst interacts with the display, the interactions are recorded and used to label the search records. These labeled records are then used to drive semi-supervised machine learning algorithms that re-rank the unlabeled search records such that potentially relevant records are moved to the top of the record listing. Gryffin is described in the context of the daily tasks encountered at the US Department of Homeland Security's Fusion Center, with which we are collaborating in its development. The resulting system is capable of addressing the analysts' information overload that can be directly attributed to the deluge of information that must be addressed in the search and investigative analysis of textual information.
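
    A hedged sketch of the interaction-driven re-ranking step described above: a few records labeled from analyst interactions drive a semi-supervised model that scores the unlabeled records, which are then re-ordered. The feature choice, model, and toy documents are illustrative assumptions, not the Gryffin implementation.

      # Sketch: semi-supervised re-ranking from a handful of interaction labels.
      import numpy as np
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.semi_supervised import LabelSpreading

      docs = [
          "pipeline valve maintenance report",      # labeled relevant (1)
          "holiday staffing schedule memo",         # labeled irrelevant (0)
          "valve corrosion inspection findings",    # unlabeled
          "cafeteria menu update",                  # unlabeled
          "pump station incident summary",          # unlabeled
      ]
      y = np.array([1, 0, -1, -1, -1])              # -1 marks unlabeled records

      X = TfidfVectorizer().fit_transform(docs).toarray()
      model = LabelSpreading(kernel="rbf", gamma=1.0).fit(X, y)

      scores = model.label_distributions_[:, 1]     # probability of "relevant"
      for i in np.argsort(-scores):                 # re-rank: most relevant first
          print(f"{scores[i]:.2f}  {docs[i]}")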

  11. Similarity relations in visual search predict rapid visual categorization

    PubMed Central

    Mohan, Krithika; Arun, S. P.

    2012-01-01

    How do we perform rapid visual categorization? It is widely thought that categorization involves evaluating the similarity of an object to other category items, but the underlying features and similarity relations remain unknown. Here, we hypothesized that categorization performance is based on perceived similarity relations between items within and outside the category. To this end, we measured the categorization performance of human subjects on three diverse visual categories (animals, vehicles, and tools) and across three hierarchical levels (superordinate, basic, and subordinate levels among animals). For the same subjects, we measured their perceived pair-wise similarities between objects using a visual search task. Regardless of category and hierarchical level, we found that the time taken to categorize an object could be predicted using its similarity to members within and outside its category. We were able to account for several classic categorization phenomena, such as (a) the longer times required to reject category membership; (b) the longer times to categorize atypical objects; and (c) differences in performance across tasks and across hierarchical levels. These categorization times were also accounted for by a model that extracts coarse structure from an image. The striking agreement observed between categorization and visual search suggests that these two disparate tasks depend on a shared coarse object representation. PMID:23092947

  12. Race Guides Attention in Visual Search

    PubMed Central

    Otten, Marte

    2016-01-01

    It is known that faces are rapidly and even unconsciously categorized into social groups (black vs. white, male vs. female). Here, I test whether preferences for specific social groups guide attention, using a visual search paradigm. In Experiment 1 participants searched displays of neutral faces for an angry or frightened target face. Black target faces were detected more efficiently than white targets, indicating that black faces attracted more attention. Experiment 2 showed that attention differences between black and white faces were correlated with individual differences in automatic race preference. In Experiment 3, using happy target faces, the attentional preference for black over white faces was eliminated. Taken together, these results suggest that automatic preferences for social groups guide attention to individuals from negatively valenced groups, when people are searching for a negative emotion such as anger or fear. PMID:26900957

  13. An active visual search interface for Medline.

    PubMed

    Xuan, Weijian; Dai, Manhong; Mirel, Barbara; Wilson, Justin; Athey, Brian; Watson, Stanley J; Meng, Fan

    2007-01-01

    Searching the Medline database is almost a daily necessity for many biomedical researchers. However, available Medline search solutions are mainly designed for the quick retrieval of a small set of most relevant documents. Because of this search model, they are not suitable for the large-scale exploration of literature and the underlying biomedical conceptual relationships, which are common tasks in the age of high throughput experimental data analysis and cross-discipline research. We try to develop a new Medline exploration approach by incorporating interactive visualization together with powerful grouping, summary, sorting and active external content retrieval functions. Our solution, PubViz, is based on the FLEX platform designed for interactive web applications and its prototype is publicly available at: http://brainarray.mbni.med.umich.edu/Brainarray/DataMining/PubViz.

  14. A Visual Search Tool for Early Elementary Science Students.

    ERIC Educational Resources Information Center

    Revelle, Glenda; Druin, Allison; Platner, Michele; Bederson, Ben; Hourcade, Juan Pablo; Sherman, Lisa

    2002-01-01

    Reports on the development of a visual search interface called "SearchKids" to support children ages 5-10 years in their efforts to find animals in a hierarchical information structure. Investigates whether children can construct search queries to conduct complex searches if sufficiently supported both visually and conceptually. (Contains 27…

  15. Adding a visualization feature to web search engines: it's time.

    PubMed

    Wong, Pak Chung

    2008-01-01

    It's widely recognized that all Web search engines today are almost identical in presentation layout and behavior. In fact, the same presentation approach has been applied to depicting search engine results pages (SERPs) since the first Web search engine launched in 1993. In this Visualization Viewpoints article, I propose to add a visualization feature to Web search engines and suggest that the new addition can improve search engines' performance and capabilities, which in turn lead to better Web search technology.

  16. Transition between different search patterns in human online search behavior

    NASA Astrophysics Data System (ADS)

    Wang, Xiangwen; Pleimling, Michel

    2015-03-01

    We investigate human online search behavior by analyzing data sets from different search engines. Based on the comparison of the results from several click-through data sets collected in different years, we observe a transition of the search pattern from a Lévy-flight-like behavior to a Brownian-motion-type behavior as the search engine algorithms improve. This result is consistent with findings in animal foraging processes. A more detailed analysis shows that the human search patterns are more complex than simple Lévy flights or Brownian motions. Notable differences between the behaviors of different individuals can be observed in many quantities. This work is in part supported by the US National Science Foundation through Grant DMR-1205309.
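
    One common way to separate Lévy-flight-like from Brownian-like behavior is to fit a power-law exponent to the tail of the step-length distribution (Lévy flights correspond to exponents between 1 and 3). The sketch below uses a Hill-type maximum-likelihood estimate on synthetic steps; it is illustrative only and not the authors' click-through analysis.

      # Hedged sketch: tail-exponent estimate for step lengths.
      import numpy as np

      def tail_exponent(steps, xmin):
          """MLE (Hill-type) estimate of mu for P(x) ~ x^-mu, x >= xmin."""
          tail = np.asarray([s for s in steps if s >= xmin], dtype=float)
          return 1.0 + len(tail) / np.sum(np.log(tail / xmin))

      rng = np.random.default_rng(2)
      levy_like = rng.pareto(1.5, size=20000) + 1.0          # heavy-tailed steps, mu ~ 2.5
      brownian_like = np.abs(rng.normal(0, 1, size=20000))   # light-tailed steps
      print("heavy tail mu:", round(tail_exponent(levy_like, xmin=1.0), 2))
      print("Gaussian 'mu':", round(tail_exponent(brownian_like, xmin=1.0), 2))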

  17. Cardiac and Respiratory Responses During Visual Search in Nonretarded Children and Retarded Adolescents

    ERIC Educational Resources Information Center

    Porges, Stephen W.; Humphrey, Mary M.

    1977-01-01

    The relationship between physiological response patterns and mental competence was investigated by evaluating heart rate and respiratory responses during a sustained visual-search task in 29 nonretarded grade school children and 16 retarded adolescents. (Author)

  18. Reader error, object recognition, and visual search

    NASA Astrophysics Data System (ADS)

    Kundel, Harold L.

    2004-05-01

    Small abnormalities such as hairline fractures, lung nodules and breast tumors are missed by competent radiologists with sufficient frequency to make them a matter of concern to the medical community; not only because they lead to litigation but also because they delay patient care. It is very easy to attribute misses to incompetence or inattention. To do so may be placing an unjustified stigma on the radiologists involved and may allow other radiologists to continue a false optimism that it can never happen to them. This review presents some of the fundamentals of visual system function that are relevant to understanding the search for and the recognition of small targets embedded in complicated but meaningful backgrounds like chests and mammograms. It presents a model for visual search that postulates a pre-attentive global analysis of the retinal image followed by foveal checking fixations and eventually discovery scanning. The model will be used to differentiate errors of search, recognition and decision making. The implications for computer aided diagnosis and for functional workstation design are discussed.

  19. Visual Templates in Pattern Generalization Activity

    ERIC Educational Resources Information Center

    Rivera, F. D.

    2010-01-01

    In this research article, I present evidence of the existence of visual templates in pattern generalization activity. Such templates initially emerged from a 3-week design-driven classroom teaching experiment on pattern generalization involving linear figural patterns and were assessed for existence in a clinical interview that was conducted four…

  20. Pattern Recognition For Automatic Visual Inspection

    NASA Astrophysics Data System (ADS)

    Fu, K. S.

    1982-11-01

    Three major approaches to pattern recognition, (1) template matching, (2) the decision-theoretic approach, and (3) the structural and syntactic approach, are briefly introduced. The application of these approaches to automatic visual inspection of manufactured products is then reviewed. A more general method for automatic visual inspection of IC chips is then proposed. Several practical examples are included for illustration.
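
    A minimal sketch of the template-matching approach listed above: slide a known-good reference patch over the inspected image and score each location by normalized cross-correlation. The toy image and scoring details are illustrative assumptions, not the method proposed in the record.

      # Sketch: template matching by normalized cross-correlation (pure numpy).
      import numpy as np

      def normxcorr(image, template):
          """Normalized cross-correlation map of a template over an image (valid mode)."""
          th, tw = template.shape
          t = template - template.mean()
          out = np.zeros((image.shape[0] - th + 1, image.shape[1] - tw + 1))
          for i in range(out.shape[0]):
              for j in range(out.shape[1]):
                  w = image[i:i + th, j:j + tw]
                  wc = w - w.mean()
                  denom = np.sqrt((wc ** 2).sum() * (t ** 2).sum())
                  out[i, j] = (wc * t).sum() / denom if denom > 0 else 0.0
          return out

      rng = np.random.default_rng(3)
      scene = rng.random((60, 60))
      template = scene[20:28, 30:38].copy()          # a known-good reference patch
      scores = normxcorr(scene, template)
      print("best match at", np.unravel_index(scores.argmax(), scores.shape))  # ~ (20, 30)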

  1. Signatures of chaos in animal search patterns.

    PubMed

    Reynolds, Andy M; Bartumeus, Frederic; Kölzsch, Andrea; van de Koppel, Johan

    2016-03-29

    One key objective of the emerging discipline of movement ecology is to link animal movement patterns to underlying biological processes, including those operating at the neurobiological level. Nonetheless, little is known about the physiological basis of animal movement patterns, and the underlying search behaviour. Here we demonstrate the hallmarks of chaotic dynamics in the movement patterns of mud snails (Hydrobia ulvae) moving in controlled experimental conditions, observed in the temporal dynamics of turning behaviour. Chaotic temporal dynamics are known to occur in pacemaker neurons in molluscs, but there have been no studies reporting on whether chaotic properties are manifest in the movement patterns of molluscs. Our results suggest that complex search patterns, like the Lévy walks made by mud snails, can have their mechanistic origins in chaotic neuronal processes. This possibility calls for new research on the coupling between neurobiology and motor properties.

  2. Signatures of chaos in animal search patterns

    PubMed Central

    Reynolds, Andy M; Bartumeus, Frederic; Kölzsch, Andrea; van de Koppel, Johan

    2016-01-01

    One key objective of the emerging discipline of movement ecology is to link animal movement patterns to underlying biological processes, including those operating at the neurobiological level. Nonetheless, little is known about the physiological basis of animal movement patterns, and the underlying search behaviour. Here we demonstrate the hallmarks of chaotic dynamics in the movement patterns of mud snails (Hydrobia ulvae) moving in controlled experimental conditions, observed in the temporal dynamics of turning behaviour. Chaotic temporal dynamics are known to occur in pacemaker neurons in molluscs, but there have been no studies reporting on whether chaotic properties are manifest in the movement patterns of molluscs. Our results suggest that complex search patterns, like the Lévy walks made by mud snails, can have their mechanistic origins in chaotic neuronal processes. This possibility calls for new research on the coupling between neurobiology and motor properties. PMID:27019951

  3. Investigating attention in complex visual search

    PubMed Central

    Kovach, Christopher K.; Adolphs, Ralph

    2015-01-01

    How we attend to and search for objects in the real world is influenced by a host of low-level and higher-level factors whose interactions are poorly understood. The vast majority of studies approach this issue by experimentally controlling one or two factors in isolation, often under conditions with limited ecological validity. We present a comprehensive regression framework, together with a matlab-implemented toolbox, which allows concurrent factors influencing saccade targeting to be more clearly distinguished. Based on the idea of gaze selection as a point process, the framework allows each putative factor to be modeled as a covariate in a generalized linear model, and its significance to be evaluated with model-based hypothesis testing. We apply this framework to visual search for faces as an example and demonstrate its power in detecting effects of eccentricity, inversion, task congruency, emotional expression, and serial fixation order on the targeting of gaze. Among other things, we find evidence for multiple goal-related and goal-independent processes that operate with distinct visuotopy and time course. PMID:25499190
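
    A hedged sketch of the regression idea described above: each candidate item on each fixation becomes a row, covariates (here eccentricity and task congruency, both synthetic) enter a generalized linear model, and the fitted coefficients indicate how strongly each factor predicts gaze selection. This is illustrative only, not the authors' toolbox.

      # Sketch: GLM (logistic link) for covariates of saccade targeting.
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(4)
      n = 2000
      eccentricity = rng.uniform(0, 10, n)          # degrees of visual angle (assumed)
      congruent = rng.integers(0, 2, n)             # 1 if the item matches the search goal
      logit = 0.5 - 0.3 * eccentricity + 1.2 * congruent
      selected = rng.random(n) < 1 / (1 + np.exp(-logit))   # simulated gaze selection

      X = sm.add_constant(np.column_stack([eccentricity, congruent]))
      fit = sm.GLM(selected.astype(float), X, family=sm.families.Binomial()).fit()
      print(fit.params)                             # recovers roughly [0.5, -0.3, 1.2]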

  4. Recognition of Facially Expressed Emotions and Visual Search Strategies in Adults with Asperger Syndrome

    ERIC Educational Resources Information Center

    Falkmer, Marita; Bjallmark, Anna; Larsson, Matilda; Falkmer, Torbjorn

    2011-01-01

    Can the disadvantages persons with Asperger syndrome frequently experience with reading facially expressed emotions be attributed to a different visual perception, affecting their scanning patterns? Visual search strategies, particularly regarding the importance of information from the eye area, and the ability to recognise facially expressed…

  5. Visual search behaviour during laparoscopic cadaveric procedures

    NASA Astrophysics Data System (ADS)

    Dong, Leng; Chen, Yan; Gale, Alastair G.; Rees, Benjamin; Maxwell-Armstrong, Charles

    2014-03-01

    Laparoscopic surgery provides a very complex example of medical image interpretation. The task entails visually examining a display that portrays the laparoscopic procedure from a varying viewpoint; eye-hand coordination; complex 3D interpretation of the 2D display imagery; efficient and safe usage of appropriate surgical tools; as well as other factors. Training in laparoscopic surgery typically entails practice using surgical simulators. Another approach is to use cadavers. Viewing previously recorded laparoscopic operations is also a viable additional approach, and to examine this a study was undertaken to determine what differences exist between where surgeons look during actual operations and where they look when simply viewing the same pre-recorded operations. It was hypothesised that there would be differences related to the different experimental conditions; however, the relative nature of such differences was unknown. The visual search behaviour of two experienced surgeons was recorded as they performed three types of laparoscopic operations on a cadaver. The operations were also digitally recorded. Subsequently, they viewed the recordings of their operations, again whilst their eye movements were monitored. Differences were found in various eye movement parameters between where the surgeons looked when performing the operations and where they looked when simply watching the recordings of those operations. It is argued that this reflects the different perceptual motor skills pertinent to the different situations. The relevance of this for surgical training is explored.

  6. Visual search and eye movements in novel and familiar contexts

    NASA Astrophysics Data System (ADS)

    McDermott, Kyle; Mulligan, Jeffrey B.; Bebis, George; Webster, Michael A.

    2006-02-01

    Adapting to the visual characteristics of a specific environment may facilitate detecting novel stimuli within that environment. We monitored eye movements while subjects searched for a color target on familiar or unfamiliar color backgrounds, in order to test for these performance changes and to explore whether they reflect changes in salience from adaptation vs. changes in search strategies or perceptual learning. The target was an ellipse of variable color presented at a random location on a dense background of ellipses. In one condition, the colors of the background varied along either the LvsM or SvsLM cardinal axes. Observers adapted by viewing a rapid succession of backgrounds drawn from one color axis, and then searched for a target on a background from the same or different color axis. Searches were monitored with a Cambridge Research Systems Video Eyetracker. Targets were located more quickly on the background axis that observers were pre-exposed to, confirming that this exposure can improve search efficiency for stimuli that differ from the background. However, eye movement patterns (e.g. fixation durations and saccade magnitudes) did not clearly differ across the two backgrounds, suggesting that how the novel and familiar backgrounds were sampled remained similar. In a second condition, we compared search on a nonselective color background drawn from a circle of hues at fixed contrast. Prior exposure to this background did not facilitate search compared to an achromatic adapting field, suggesting that subjects were not simply learning the specific colors defining the background distributions. Instead, results for both conditions are consistent with a selective adaptation effect that enhances the salience of novel stimuli by partially discounting the background.

  7. Innate Visual Learning through Spontaneous Activity Patterns

    PubMed Central

    Albert, Mark V.; Schnabel, Adam; Field, David J.

    2008-01-01

    Patterns of spontaneous activity in the developing retina, LGN, and cortex are necessary for the proper development of visual cortex. With these patterns intact, the primary visual cortices of many newborn animals develop properties similar to those of the adult cortex but without the training benefit of visual experience. Previous models have demonstrated how V1 responses can be initialized through mechanisms specific to development and prior to visual experience, such as using axonal guidance cues or relying on simple, pairwise correlations on spontaneous activity with additional developmental constraints. We argue that these spontaneous patterns may be better understood as part of an “innate learning” strategy, which learns similarly on activity both before and during visual experience. With an abstraction of spontaneous activity models, we show how the visual system may be able to bootstrap an efficient code for its natural environment prior to external visual experience, and we continue the same refinement strategy upon natural experience. The patterns are generated through simple, local interactions and contain the same relevant statistical properties of retinal waves and hypothesized waves in the LGN and V1. An efficient encoding of these patterns resembles a sparse coding of natural images by producing neurons with localized, oriented, bandpass structure—the same code found in early visual cortical cells. We address the relevance of higher-order statistical properties of spontaneous activity, how this relates to a system that may adapt similarly on activity prior to and during natural experience, and how these concepts ultimately relate to an efficient coding of our natural world. PMID:18670593

  8. Global Statistical Learning in a Visual Search Task

    ERIC Educational Resources Information Center

    Jones, John L.; Kaschak, Michael P.

    2012-01-01

    Locating a target in a visual search task is facilitated when the target location is repeated on successive trials. Global statistical properties also influence visual search, but have often been confounded with local regularities (i.e., target location repetition). In two experiments, target locations were not repeated for four successive trials,…

  9. Eye Movements Reveal How Task Difficulty Moulds Visual Search

    ERIC Educational Resources Information Center

    Young, Angela H.; Hulleman, Johan

    2013-01-01

    In two experiments we investigated the relationship between eye movements and performance in visual search tasks of varying difficulty. Experiment 1 provided evidence that a single process is used for search among static and moving items. Moreover, we estimated the functional visual field (FVF) from the gaze coordinates and found that its size…

  10. Visual Search Deficits Are Independent of Magnocellular Deficits in Dyslexia

    ERIC Educational Resources Information Center

    Wright, Craig M.; Conlon, Elizabeth G.; Dyck, Murray

    2012-01-01

    The aim of this study was to investigate the theory that visual magnocellular deficits seen in groups with dyslexia are linked to reading via the mechanisms of visual attention. Visual attention was measured with a serial search task and magnocellular function with a coherent motion task. A large group of children with dyslexia (n = 70) had slower…

  11. A consistent but non-coincident visual pattern facilitates the learning of spatial relations among locations.

    PubMed

    Katz, Scott S; Brown, Michael F; Sturz, Bradley R

    2014-02-01

    Human participants searched in a dynamic three-dimensional computer-generated virtual-environment open-field search task for four hidden goal locations arranged in a diamond configuration located in a 5 × 5 matrix of raised bins. Participants were randomly assigned to one of two groups: visual pattern or visual random. All participants experienced 30 trials in which four goal locations maintained the same spatial relations to each other (i.e., a diamond pattern), but this diamond pattern moved to random locations within the 5 × 5 matrix from trial to trial. For participants in the visual pattern group, four locations were marked in a distinct color and arranged in a diamond pattern that moved to a random location independent of the hidden spatial pattern from trial to trial throughout the experimental session. For participants in the visual random group, four random locations were marked with a distinct color and moved to random locations independent from the hidden spatial pattern from trial to trial throughout the experimental session. As a result, the visual cues for the visual pattern group were consistent but not coincident with the hidden spatial pattern, whereas the visual cues for the visual random group were neither consistent nor coincident with the hidden spatial pattern. Results indicated that participants in both groups learned the spatial configuration of goal locations and that the presence of consistent but noncoincident visual cues facilitated the learning of spatial relations among locations.

  12. [Responses of squirrel visual cortex neurons to patterned visual stimuli].

    PubMed

    Supin, A Ia

    1975-01-01

    The responses of visual cortical neurons to patterned visual stimuli were studied in the squirrel Sciurus vulgaris. Direction-selective, orientation-selective, and non-selective neurons were observed. Most direction-selective and non-selective neurons were sensitive to high speeds of stimulus movement (hundreds of deg/s). The direction-selective neurons exhibited their selectivity at such high speeds in spite of the short time the stimulus spent moving through the receptive field. Orientation-selective neurons (with simple or complex receptive fields) were sensitive to lower speeds of stimulus movement (tens of deg/s). Some mechanisms underlying the described properties are discussed.

  13. Words, Shape, Visual Search and Visual Working Memory in 3-Year-Old Children

    ERIC Educational Resources Information Center

    Vales, Catarina; Smith, Linda B.

    2015-01-01

    Do words cue children's visual attention, and if so, what are the relevant mechanisms? Across four experiments, 3-year-old children (N = 163) were tested in visual search tasks in which targets were cued with only a visual preview versus a visual preview and a spoken name. The experiments were designed to determine whether labels facilitated…

  14. Asynchronous parallel pattern search for nonlinear optimization

    SciTech Connect

    P. D. Hough; T. G. Kolda; V. J. Torczon

    2000-01-01

    Parallel pattern search (PPS) can be quite useful for engineering optimization problems characterized by a small number of variables (say 10--50) and by expensive objective function evaluations such as complex simulations that take from minutes to hours to run. However, PPS, which was originally designed for execution on homogeneous and tightly coupled parallel machines, is not well suited to the more heterogeneous, loosely coupled, and even fault-prone parallel systems available today. Specifically, PPS is hindered by synchronization penalties and cannot recover in the event of a failure. The authors introduce a new asynchronous, fault-tolerant parallel pattern search (APPS) method and demonstrate its effectiveness on both simple test problems and some engineering optimization problems.
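
    For orientation, the sketch below implements a plain synchronous compass/pattern search on a smooth test function; the asynchronous, fault-tolerant machinery that the abstract introduces is not shown, and the step sizes and test function are illustrative assumptions.

      # Sketch: serial compass (pattern) search, the building block behind PPS/APPS.
      import numpy as np

      def pattern_search(f, x0, step=1.0, tol=1e-6, max_iter=10000):
          x, fx = np.asarray(x0, dtype=float), f(x0)
          n = len(x)
          directions = np.vstack([np.eye(n), -np.eye(n)])     # the compass pattern
          for _ in range(max_iter):
              improved = False
              for d in directions:                            # poll the pattern points
                  trial = x + step * d
                  ft = f(trial)
                  if ft < fx:
                      x, fx, improved = trial, ft, True
                      break
              if not improved:
                  step *= 0.5                                 # contract on failure
                  if step < tol:
                      break
          return x, fx

      quad = lambda v: (v[0] - 3.0) ** 2 + (v[1] + 1.0) ** 2  # toy objective
      print(pattern_search(quad, [0.0, 0.0]))                 # approaches (3, -1)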

  15. Online multiple kernel similarity learning for visual search.

    PubMed

    Xia, Hao; Hoi, Steven C H; Jin, Rong; Zhao, Peilin

    2014-03-01

    Recent years have witnessed a number of studies on distance metric learning to improve visual similarity search in content-based image retrieval (CBIR). Despite their successes, most existing methods on distance metric learning are limited in two aspects. First, they usually assume the target proximity function follows the family of Mahalanobis distances, which limits their capacity of measuring similarity of complex patterns in real applications. Second, they often cannot effectively handle the similarity measure of multimodal data that may originate from multiple resources. To overcome these limitations, this paper investigates an online kernel similarity learning framework for learning kernel-based proximity functions which goes beyond the conventional linear distance metric learning approaches. Based on the framework, we propose a novel online multiple kernel similarity (OMKS) learning method which learns a flexible nonlinear proximity function with multiple kernels to improve visual similarity search in CBIR. We evaluate the proposed technique for CBIR on a variety of image data sets in which encouraging results show that OMKS outperforms the state-of-the-art techniques significantly.
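
    A hedged sketch of the core idea of a multiple-kernel proximity function, sim(q, x) = sum_k w_k K_k(q, x) with nonnegative weights; the online update rules of OMKS are omitted, and the kernels, weights, and data below are illustrative assumptions, not the authors' method.

      # Sketch: similarity scoring with a weighted combination of kernels.
      import numpy as np

      def rbf(a, b, gamma):      return np.exp(-gamma * np.sum((a - b) ** 2))
      def poly(a, b, degree=2):  return (1.0 + a @ b) ** degree

      def multi_kernel_sim(q, x, weights, gammas=(0.1, 1.0)):
          kernels = [rbf(q, x, gammas[0]), rbf(q, x, gammas[1]), poly(q, x)]
          w = np.asarray(weights) / np.sum(weights)      # keep weights on the simplex
          return float(w @ kernels)

      rng = np.random.default_rng(5)
      query = rng.random(16)                             # e.g. an image feature vector
      gallery = rng.random((100, 16))
      weights = [0.5, 0.3, 0.2]                          # assumed; would be learned online
      scores = [multi_kernel_sim(query, x, weights) for x in gallery]
      print("top-5 retrieved indices:", np.argsort(scores)[::-1][:5])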

  16. Online Multiple Kernel Similarity Learning for Visual Search.

    PubMed

    Xia, Hao; Hoi, Steven C H; Jin, Rong; Zhao, Peilin

    2013-08-13

    Recent years have witnessed a number of studies on distance metric learning to improve visual similarity search in Content-Based Image Retrieval (CBIR). Despite their popularity and success, most existing methods on distance metric learning are limited in two aspects. First, they typically assume the target proximity function follows the family of Mahalanobis distances, which limits their capacity of measuring similarity of complex patterns in real applications. Second, they often cannot effectively handle the similarity measure of multi-modal data that may originate from multiple resources. To overcome these limitations, this paper investigates an online kernel ranking framework for learning kernel-based proximity functions, which goes beyond the conventional linear distance metric learning approaches. Based on the framework, we propose a novel Online Multiple Kernel Ranking (OMKR) method, which learns a flexible nonlinear proximity function with multiple kernels to improve visual similarity search in CBIR. We evaluate the proposed technique for CBIR on a variety of image data sets, in which encouraging results show that OMKR outperforms the state-of-the-art techniques significantly.

  17. Visual search in a forced-choice paradigm

    NASA Technical Reports Server (NTRS)

    Holmgren, J. E.

    1974-01-01

    The processing of visual information was investigated in the context of two visual search tasks. The first was a forced-choice task in which one of two alternative letters appeared in a visual display of from one to five letters. The second task included trials on which neither of the two alternatives was present in the display. Search rates were estimated from the slopes of best linear fits to response latencies plotted as a function of the number of items in the visual display. These rates were found to be much slower than those estimated in yes-no search tasks. This result was interpreted as indicating that the processes underlying visual search in yes-no and forced-choice tasks are not the same.
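
    A minimal sketch of how search rates are typically estimated in such tasks: fit a straight line to response latency as a function of display size and read the slope in ms per item. The latencies below are made up for illustration, not the record's data.

      # Sketch: search rate as the slope of RT versus display size.
      import numpy as np

      display_size = np.array([1, 2, 3, 4, 5])
      mean_rt_ms = np.array([520, 565, 610, 660, 700])      # hypothetical latencies

      slope, intercept = np.polyfit(display_size, mean_rt_ms, 1)
      print(f"search rate ~ {slope:.0f} ms/item, intercept ~ {intercept:.0f} ms")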

  18. Searching for intellectual turning points: Progressive knowledge domain visualization

    PubMed Central

    Chen, Chaomei

    2004-01-01

    This article introduces a previously undescribed method for progressively visualizing the evolution of a knowledge domain's cocitation network. The method first derives a sequence of cocitation networks from a series of equal-length time interval slices. These time-registered networks are merged and visualized in a panoramic view in such a way that intellectually significant articles can be identified based on their visually salient features. The method is applied to a cocitation study of the superstring field in theoretical physics. The study focuses on the search for articles that triggered two superstring revolutions. Visually salient nodes in the panoramic view are identified, and the nature of their intellectual contributions is validated by leading scientists in the field. The analysis has demonstrated that a search for intellectual turning points can be narrowed down to visually salient nodes in the visualized network. The method provides a promising way to simplify otherwise cognitively demanding tasks to a search for landmarks, pivots, and hubs. PMID:14724295
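
    A hedged sketch of the merging step described above: cocitation networks from successive time slices are composed into one panoramic network, and nodes that bridge the slices (candidate turning points) are ranked by betweenness centrality. The tiny edge lists are made up for illustration and are not the article's data or algorithm.

      # Sketch: merge time-sliced cocitation networks and rank bridging nodes.
      import networkx as nx

      slice_1995 = nx.Graph([("A", "B"), ("B", "C"), ("A", "C")])   # early cluster
      slice_2000 = nx.Graph([("C", "D"), ("D", "E"), ("E", "F")])   # later cluster

      merged = nx.compose(slice_1995, slice_2000)    # time-registered networks merged
      centrality = nx.betweenness_centrality(merged)

      # Article "C" links the two eras, so it surfaces as a candidate turning point.
      for node, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
          print(node, round(score, 2))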

  19. A novel visualization model for web search results.

    PubMed

    Nguyen, Tien N; Zhang, Jin

    2006-01-01

    This paper presents an interactive visualization system, named WebSearchViz, for visualizing Web search results and facilitating users' navigation and exploration. The metaphor in our model is the solar system with its planets and asteroids revolving around the sun. Location, color, movement, and spatial distance of objects in the visual space are used to represent the semantic relationships between a query and relevant Web pages. In particular, the movement of objects and their speeds add a new dimension to the visual space, illustrating the degree of relevance between a query and Web search results in the context of users' subjects of interest. By interacting with the visual space, users are able to observe the semantic relevance between a query and a resulting Web page with respect to their subjects of interest, context information, or concern. Users' subjects of interest can be dynamically changed, redefined, added, or deleted from the visual space.

  20. Visual search demands dictate reliance on working memory storage.

    PubMed

    Luria, Roy; Vogel, Edward K

    2011-04-20

    Previous research suggested that working memory (WM) does not play any significant role in visual search. In three experiments, we investigated the search difficulty and individual differences in WM capacity as determinants of WM involvement during visual search tasks, using both behavioral and electrophysiological markers [i.e., the contralateral delay activity (CDA), which is a marker for WM capacity allocation]. Human participants performed a visual search task that contained a target, neutral distractors, and a flanker distractor. Overall, we found that, as the search difficulty increased (as indicated by longer reaction times), so did the role of WM in performing the search task (as indicated by larger CDA amplitudes). Moreover, the results pinpoint a dissociation between the two types of factors that determined the WM involvement in the search process. Namely, individual differences in WM capacity and search difficulty independently affected the degree to which the search process relied on WM. Instead of showing a progressive role, individual differences in WM capacity correlated with the search efficiency in all search conditions (i.e., easy, medium, and difficult). Counterintuitively, individuals with high WM capacity generally relied less on WM during the search task.

  1. The Serial Process in Visual Search

    ERIC Educational Resources Information Center

    Gilden, David L.; Thornton, Thomas L.; Marusich, Laura R.

    2010-01-01

    The conditions for serial search are described. A multiple target search methodology (Thornton & Gilden, 2007) is used to home in on the simplest target/distractor contrast that effectively mandates a serial scheduling of attentional resources. It is found that serial search is required when (a) targets and distractors are mirror twins, and…

  2. Numerosity estimates for attended and unattended items in visual search.

    PubMed

    Kelley, Troy D; Cassenti, Daniel N; Marusich, Laura R; Ghirardelli, Thomas G

    2017-03-20

    The goal of this research was to examine memories created for the number of items during a visual search task. Participants performed a visual search task for a target defined by a single feature (Experiment 1A), by a conjunction of features (Experiment 1B), or by a specific spatial configuration of features (Experiment 1C). On some trials following the search task, subjects were asked to recall the total number of items in the previous display. In all search types, participants underestimated the total number of items, but the severity of the underestimation varied depending on the efficiency of the search. In three follow-up studies (Experiments 2A, 2B, and 2C) using the same visual stimuli, the participants' only task was to estimate the number of items on each screen. Participants still underestimated the numerosity of the items, although the degree of underestimation was smaller than in the search tasks and did not depend on the type of visual stimuli. In Experiment 3, participants were asked to recall the number of items in a display only once. Subjects still displayed a tendency to underestimate, indicating that the underestimation effects seen in Experiments 1A-1C were not attributable to knowledge of the estimation task. The degree of underestimation depends on the efficiency of the search task, with more severe underestimation in efficient search tasks. This suggests that the lower attentional demands of very efficient searches lead to less encoding of numerosity of the distractor set.

  3. Global Image Dissimilarity in Macaque Inferotemporal Cortex Predicts Human Visual Search Efficiency

    PubMed Central

    Sripati, Arun P.; Olson, Carl R.

    2010-01-01

    Finding a target in a visual scene can be easy or difficult depending on the nature of the distractors. Research in humans has suggested that search is more difficult the more similar the target and distractors are to each other. However, it has not yielded an objective definition of similarity. We hypothesized that visual search performance depends on similarity as determined by the degree to which two images elicit overlapping patterns of neuronal activity in visual cortex. To test this idea, we recorded from neurons in monkey inferotemporal cortex (IT) and assessed visual search performance in humans using pairs of images formed from the same local features in different global arrangements. The ability of IT neurons to discriminate between two images was strongly predictive of the ability of humans to discriminate between them during visual search, accounting overall for 90% of the variance in human performance. A simple physical measure of global similarity – the degree of overlap between the coarse footprints of a pair of images – largely explains both the neuronal and the behavioral results. To explain the relation between population activity and search behavior, we propose a model in which the efficiency of global oddball search depends on contrast-enhancing lateral interactions in high-order visual cortex. PMID:20107054
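
    A hedged sketch of a coarse-footprint overlap measure in the spirit of the one described above: block-average each image to a coarse silhouette and take a normalized overlap of the two silhouettes. The blur scale and normalization are assumptions, not the authors' exact measure.

      # Sketch: overlap of coarse footprints of two images.
      import numpy as np

      def coarse_footprint(img, block=8):
          """Downsample a 2-D array by block-averaging to get a coarse footprint."""
          h, w = (img.shape[0] // block) * block, (img.shape[1] // block) * block
          x = img[:h, :w].reshape(h // block, block, w // block, block)
          return x.mean(axis=(1, 3))

      def footprint_overlap(img_a, img_b, block=8):
          a, b = coarse_footprint(img_a, block), coarse_footprint(img_b, block)
          return float(np.minimum(a, b).sum() / np.maximum(a, b).sum())

      rng = np.random.default_rng(6)
      shape_a = (rng.random((64, 64)) > 0.7).astype(float)
      shape_b = np.roll(shape_a, 4, axis=1)                  # same parts, shifted arrangement
      print("overlap:", round(footprint_overlap(shape_a, shape_b), 3))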

  4. 'Where' and 'what' in visual search.

    PubMed

    Atkinson, J; Braddick, O J

    1989-01-01

    A line segment target can be detected among distractors of a different orientation by a fast 'preattentive' process. One view is that this depends on detection of a 'feature gradient', which enables subjects to locate where the target is without necessarily identifying what it is. An alternative view is that a target can be identified as distinctive in a particular 'feature map' without subjects knowing where it is in that map. Experiments are reported in which briefly exposed arrays of line segments were followed by a pattern mask, and the threshold stimulus-mask interval was determined for three tasks: 'what'--subjects reported whether the target was vertical or horizontal among oblique distractors; 'coarse where'--subjects reported whether the target was in the upper or lower half of the array; 'fine where'--subjects reported whether or not the target was in a set of four particular array positions. The threshold interval was significantly lower for the 'coarse where' than for the 'what' task, indicating that, even though localization in this task depends on the target's orientation difference, this localization is possible without absolute identification of target orientation. However, for the 'fine where' task, intervals as long as or longer than those for the 'what' task were required. It appears either that different localization processes work at different levels of resolution, or that a single localization process, independent of identification, can increase its resolution at the expense of processing speed. These possibilities are discussed in terms of distinct neural representations of the visual field and fixed or variable localization processes acting upon them.

  5. Development of Anticipatory Visual Search in One-Year-Olds.

    ERIC Educational Resources Information Center

    Shimada, Shoko; Sano, Ryogoro

    The purpose of this study was to longitudinally examine the development of anticipatory visual search and to find out the effects of preceding experiences upon the search during the second year of life. The sample consisted of 18 Japanese firstborn nonretarded children from middle-class families who were individually tested at 11, 12, 14, 16, 22,…

  6. Changing Perspective: Zooming in and out during Visual Search

    ERIC Educational Resources Information Center

    Solman, Grayden J. F.; Cheyne, J. Allan; Smilek, Daniel

    2013-01-01

    Laboratory studies of visual search are generally conducted in contexts with a static observer vantage point, constrained by a fixation cross or a headrest. In contrast, in many naturalistic search settings, observers freely adjust their vantage point by physically moving through space. In two experiments, we evaluate behavior during free vantage…

  7. Conjunctive Visual Search in Individuals with and without Mental Retardation

    ERIC Educational Resources Information Center

    Carlin, Michael; Chrysler, Christina; Sullivan, Kate

    2007-01-01

    A comprehensive understanding of the basic visual and cognitive abilities of individuals with mental retardation is critical for understanding the basis of mental retardation and for the design of remediation programs. We assessed visual search abilities in individuals with mild mental retardation and in MA- and CA-matched comparison groups. Our…

  8. When are abrupt onsets found efficiently in complex visual search? Evidence from multielement asynchronous dynamic search.

    PubMed

    Kunar, Melina A; Watson, Derrick G

    2014-02-01

    Previous work has found that search principles derived from simple visual search tasks do not necessarily apply to more complex search tasks. Using a Multielement Asynchronous Dynamic (MAD) visual search task, in which high numbers of stimuli could be moving, stationary, and/or changing in luminance, Kunar and Watson (M. A. Kunar & D. G. Watson, 2011, Visual search in a Multi-element Asynchronous Dynamic (MAD) world, Journal of Experimental Psychology: Human Perception and Performance, Vol. 37, pp. 1017-1031) found that, unlike in previous work, participants missed a higher number of targets, with search for moving items being worse than for static items, and that there was no benefit for finding targets that showed a luminance onset. In the present research, we investigated why luminance onsets do not capture attention and whether luminance onsets can ever capture attention in MAD search. Experiment 1 investigated whether blinking stimuli, which abruptly offset for 100 ms before reonsetting--conditions known to produce attentional capture in simpler visual search tasks--captured attention in MAD search, and Experiments 2-5 investigated whether giving participants advance knowledge and preexposure to the blinking cues produced efficient search for blinking targets. Experiments 6-9 investigated whether unique luminance onsets, unique motion, or unique stationary items captured attention. The results showed that luminance onsets captured attention in MAD search only when they were unique, consistent with a top-down unique feature hypothesis.

  9. Individual differences and metacognitive knowledge of visual search strategy.

    PubMed

    Proulx, Michael J

    2011-01-01

    A crucial ability for an organism is to orient toward important objects and to ignore temporarily irrelevant objects. Attention provides the perceptual selectivity necessary to filter an overwhelming input of sensory information to allow for efficient object detection. Although much research has examined visual search and the 'template' of attentional set that allows for target detection, the behavior of individual subjects often reveals the limits of experimental control of attention. Few studies have examined important aspects such as individual differences and metacognitive strategies. The present study analyzes the data from two visual search experiments for a conjunctively defined target (Proulx, 2007). The data revealed attentional capture blindness, individual differences in search strategies, and a significant rate of metacognitive errors for the assessment of the strategies employed. These results highlight a challenge for visual attention studies to account for individual differences in search behavior and distractibility, and participants that do not (or are unable to) follow instructions.

  10. Individual Differences and Metacognitive Knowledge of Visual Search Strategy

    PubMed Central

    Proulx, Michael J.

    2011-01-01

    A crucial ability for an organism is to orient toward important objects and to ignore temporarily irrelevant objects. Attention provides the perceptual selectivity necessary to filter an overwhelming input of sensory information to allow for efficient object detection. Although much research has examined visual search and the ‘template’ of attentional set that allows for target detection, the behavior of individual subjects often reveals the limits of experimental control of attention. Few studies have examined important aspects such as individual differences and metacognitive strategies. The present study analyzes the data from two visual search experiments for a conjunctively defined target (Proulx, 2007). The data revealed attentional capture blindness, individual differences in search strategies, and a significant rate of metacognitive errors for the assessment of the strategies employed. These results highlight a challenge for visual attention studies to account for individual differences in search behavior and distractibility, and participants that do not (or are unable to) follow instructions. PMID:22066030

  11. Testing the controllability of contextual cuing of visual search

    PubMed Central

    Luque, David; Vadillo, Miguel A.; Lopez, Francisco J.; Alonso, Rafael; Shanks, David R.

    2017-01-01

    Locating a target among distractors improves when the configuration of distractors consistently cues the target’s location across search trials, an effect called contextual cuing of visual search (CC). The important issue of whether CC is automatic has previously been studied by asking whether it can occur implicitly (outside awareness). Here we ask the novel question: is CC of visual search controllable? In 3 experiments participants were exposed to a standard CC procedure during Phase 1. In Phase 2, they localized a new target, embedded in configurations (including the previous target) repeated from Phase 1. Despite robust contextual cuing, congruency effects – which would imply the orientation of attention towards the old target in repeated configurations – were found in none of the experiments. The results suggest that top-down control can be exerted over contextually-guided visual search. PMID:28045108

  12. A ‘snapshot’ of the visual search behaviours of medical sonographers

    PubMed Central

    Brennan, Patrick C; Pietrzyk, Mariusz; Clarke, Jillian; Chekaluk, Eugene

    2015-01-01

    Introduction: Visual search is a task that humans perform in everyday life. Whether it involves looking for a pen on a desk or a mass in a mammogram, the cognitive and perceptual processes that underpin these tasks are identical. Radiologists are experts in the visual search of medical images, and studies on their visual search behaviours have revealed some interesting findings with regard to diagnostic errors. In Australia, within the modality of ultrasound, sonographers perform the diagnostic scan, select images, and present them to the radiologist for reporting. Therefore the visual task and the potential for errors are similar to those of a radiologist. Our aim was to explore and understand the detection, localisation and eye-gaze behaviours of a group of qualified sonographers. Method: We measured clinical performance and analysed diagnostic errors by presenting fifty sonographic breast images, varying in cancer presence and degree of difficulty, to a group of sonographers in their clinical workplace. For a subset of sonographers, we obtained eye-tracking metrics such as time to first fixation, total visit duration, and cumulative dwell-time heat maps. Results: The results indicate that the sonographers' clinical performance was high and the eye-tracking metrics showed diagnostic error types similar to those found in studies on radiologist visual search. Conclusion: This study informs us about sonographer visual search patterns and highlights possible ways to improve diagnostic performance via targeted education. PMID:28191244
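
    A minimal sketch of two of the gaze metrics mentioned above, time to first fixation on a region of interest and total visit duration, computed from a list of fixation records; the field names, region, and sample data are assumptions for illustration.

      # Sketch: region-of-interest gaze metrics from fixation records.
      from dataclasses import dataclass

      @dataclass
      class Fixation:
          t_start_ms: float
          duration_ms: float
          x: float
          y: float

      def in_roi(fix, roi):                      # roi = (x0, y0, x1, y1)
          x0, y0, x1, y1 = roi
          return x0 <= fix.x <= x1 and y0 <= fix.y <= y1

      def roi_metrics(fixations, roi):
          hits = [f for f in fixations if in_roi(f, roi)]
          if not hits:
              return None, 0.0
          time_to_first = hits[0].t_start_ms     # fixations assumed in temporal order
          total_visit = sum(f.duration_ms for f in hits)
          return time_to_first, total_visit

      fixations = [Fixation(0, 180, 100, 80), Fixation(200, 240, 410, 305),
                   Fixation(460, 300, 420, 310), Fixation(780, 150, 90, 75)]
      lesion_roi = (400, 300, 440, 330)          # hypothetical region around a lesion
      print(roi_metrics(fixations, lesion_roi))  # -> (200, 540)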

  13. Human visual search: a two state process

    NASA Astrophysics Data System (ADS)

    Cartier, Joan F.; Hsu, David H.

    1995-05-01

    In searching a field of view for an object of interest, observers appear to alternate between two states: wandering (rapid saccades) and examining (focusing on an attractive region). This observation is based on eye-tracker measurements and is consistent with the model proposed by J. F. Nicoll, which describes search as a competition between points of interest for the observer's attention. In this paper, search is represented as a random process -- a random walk in which the observers exist in one of two states until they quit: they are either examining a point of interest or wandering around looking for one. When wandering, the observers skip rapidly from point to point. When examining, they move more slowly, because detection discrimination requires additional or different thought processes. An interesting consequence of the two-state approach is that the random walk must have two time constants -- the time constant for fast (wandering) movements and a different time constant for slow (examining) movements. We describe a technique which can be used to separate raw eye-tracker data collected in a search experiment into the wandering and examining states. Then we postulate the relationship of the probability of wandering (or examining) to the attractiveness of the image. We use a clutter metric to estimate the relative attractiveness of the target and the competing clutter. We find that the clutter metric predicts the time spent in the two states fairly well.
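
    A hedged sketch of one simple way to separate eye-tracker fixations into wandering and examining states, using a dwell-time threshold; the threshold value and the data are assumptions for illustration, not the authors' separation technique.

      # Sketch: two-state labeling of fixations by dwell time.
      import numpy as np

      def label_states(fixation_durations_ms, threshold_ms=250):
          """Short fixations -> wandering (0); long fixations -> examining (1)."""
          d = np.asarray(fixation_durations_ms, dtype=float)
          return (d >= threshold_ms).astype(int)

      def mean_dwell_per_state(durations_ms, states):
          d = np.asarray(durations_ms, dtype=float)
          return {int(s): d[states == s].mean() for s in np.unique(states)}

      durations = [120, 90, 450, 380, 110, 520, 95, 300]     # hypothetical dwell times
      states = label_states(durations)
      print(states)                                          # [0 0 1 1 0 1 0 1]
      print(mean_dwell_per_state(durations, states))         # two distinct time scales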

  14. Rapid and implicit effects of color category on visual search

    NASA Astrophysics Data System (ADS)

    Yokoi, Kenji; Watanabe, Katsumi; Saida, Shinya

    2012-07-01

    Many studies suggest that the color category influences visual perception. It is also well known that oculomotor control and visual attention are closely linked. In order to clarify the temporal characteristics of color categorization, we investigated eye movements during color visual search. Eight color disks were presented briefly for 20-320 ms, and the subject was instructed to gaze at a target shown prior to the trial. We found that the color category of the target modulated eye movements significantly when the stimulus was displayed for more than 40 ms, and that categorization could be completed within 80 ms. With the 20 ms presentation, search performance was at chance level; however, the first saccadic latency suggested that the color category had an effect on visual attention. These results suggest that color categorization affects the guidance of visual attention rapidly and implicitly.

  15. A neural basis for real-world visual search in human occipitotemporal cortex.

    PubMed

    Peelen, Marius V; Kastner, Sabine

    2011-07-19

    Mammals are highly skilled in rapidly detecting objects in cluttered natural environments, a skill necessary for survival. What are the neural mechanisms mediating detection of objects in natural scenes? Here, we use human brain imaging to address the role of top-down preparatory processes in the detection of familiar object categories in real-world environments. Brain activity was measured while participants were preparing to detect highly variable depictions of people or cars in natural scenes that were new to the participants. The preparation to detect objects of the target category, in the absence of visual input, evoked activity patterns in visual cortex that resembled the response to actual exemplars of the target category. Importantly, the selectivity of multivoxel preparatory activity patterns in object-selective cortex (OSC) predicted target detection performance. By contrast, preparatory activity in early visual cortex (V1) was negatively related to search performance. Additional behavioral results suggested that the dissociation between OSC and V1 reflected the use of different search strategies, linking OSC preparatory activity to relatively abstract search preparation and V1 to more specific imagery-like preparation. Finally, whole-brain searchlight analyses revealed that, in addition to OSC, response patterns in medial prefrontal cortex distinguished the target categories based on the search cues alone, suggesting that this region may constitute a top-down source of preparatory activity observed in visual cortex. These results indicate that in naturalistic situations, when the precise visual characteristics of target objects are not known in advance, preparatory activity at higher levels of the visual hierarchy selectively mediates visual search.

  16. Visual Search in a Multi-Element Asynchronous Dynamic (MAD) World

    ERIC Educational Resources Information Center

    Kunar, Melina A.; Watson, Derrick G.

    2011-01-01

    In visual search tasks participants search for a target among distractors in strictly controlled displays. We show that visual search principles observed in these tasks do not necessarily apply in more ecologically valid search conditions, using dynamic and complex displays. A multi-element asynchronous dynamic (MAD) visual search was developed in…

  17. Visual Search and the Collapse of Categorization

    ERIC Educational Resources Information Center

    Smith, J. David; Redford, Joshua S.; Gent, Lauren C.; Washburn, David A.

    2005-01-01

    Categorization researchers typically present single objects to be categorized. But real-world categorization often involves object recognition within complex scenes. It is unknown how the processes of categorization stand up to visual complexity or why they fail facing it. The authors filled this research gap by blending the categorization and…

  18. Vocal Dynamic Visual Pattern for voice characterization

    NASA Astrophysics Data System (ADS)

    Dajer, M. E.; Andrade, F. A. S.; Montagnoli, A. N.; Pereira, J. C.; Tsuji, D. H.

    2011-12-01

    Voice assessment requires simple and painless exams. Modern technologies provide the necessary resources for voice signal processing. Techniques based on nonlinear dynamics seem to assess the complexity of voice more accurately than other methods. The vocal dynamic visual pattern (VDVP) is based on nonlinear methods and provides qualitative and quantitative information. Here we characterize healthy and Reinke's edema voices by means of perturbation measures and VDVP analysis. VDVP and jitter show different results for the two groups, while amplitude perturbation shows no difference. We suggest that VDVP analysis improves and complements the evaluation methods available to clinicians.
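
    As a rough illustration of the perturbation side of such an analysis, the sketch below computes a local jitter percentage from a sequence of estimated glottal periods. The formula used (mean absolute difference of consecutive periods divided by the mean period) is the common "local jitter" definition and is an assumption here; the record does not state which jitter variant was computed.

```python
# Hedged sketch: local jitter (%) from estimated glottal cycle periods.
# Assumes the common "local jitter" definition; the record does not state
# which perturbation measure variant was actually used.
import numpy as np

def local_jitter_percent(periods_ms):
    periods = np.asarray(periods_ms, dtype=float)
    diffs = np.abs(np.diff(periods))            # |T_i - T_{i+1}|
    return 100.0 * diffs.mean() / periods.mean()

# Example: a mildly perturbed 5 ms (200 Hz) voice.
rng = np.random.default_rng(1)
periods = 5.0 + rng.normal(0.0, 0.03, size=100)
print(f"local jitter ~ {local_jitter_percent(periods):.2f} %")
```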

  19. History effects in visual search for monsters: search times, choice biases, and liking.

    PubMed

    Chetverikov, Andrey; Kristjansson, Árni

    2015-02-01

    Repeating targets and distractors on consecutive visual search trials facilitates search performance, whereas switching targets and distractors harms search. In addition, search repetition leads to biases in free choice tasks, in that previously attended targets are more likely to be chosen than distractors. Another line of research has shown that attended items receive high liking ratings, whereas ignored distractors are rated negatively. Potential relations between the three effects are unclear, however. Here we simultaneously measured repetition benefits and switching costs for search times, choice biases, and liking ratings in color singleton visual search for "monster" shapes. We showed that targets are liked less when expectations from search repetition are violated than when they are not. Choice biases were, on the other hand, affected by distractor repetition, but not by target/distractor switches. Target repetition speeded search times but had little influence on choice or liking. Our findings suggest that choice biases reflect distractor inhibition, and liking reflects the conflict associated with attending to previously inhibited stimuli, while speeded search follows both target and distractor repetition. Our results support the newly proposed affective-feedback-of-hypothesis-testing account of cognition and, additionally, shed new light on the priming of visual search.

  20. Impaired serial visual search in children with developmental dyslexia.

    PubMed

    Sireteanu, Ruxandra; Goebel, Claudia; Goertz, Ralf; Werner, Ingeborg; Nalewajko, Magdalena; Thiel, Aylin

    2008-12-01

    In order to test the hypothesis of attentional deficits in dyslexia, we investigated the performance of children with developmental dyslexia on a number of visual search tasks. When tested with conjunction tasks for orientation and form using complex, letter-like material, dyslexic children showed an increased number of errors accompanied by faster reaction times in comparison to control children matched to the dyslexics on age, gender, and intelligence. On conjunction tasks for orientation and color, dyslexic children were also less accurate, but showed slower reaction times than the age-matched control children. These differences between the two groups decreased with increasing age. In contrast to these differences, the performance of dyslexic children in feature search tasks was similar to that of control children. These results suggest that children with developmental dyslexia present selective deficits in complex serial visual search tasks, implying impairment in goal-directed, sustained visual attention.

  1. Eye Movements and Visual Search: A Bibliography,

    DTIC Science & Technology

    1983-01-01

    [Excerpt of indexed bibliography entries, e.g.: 117 Carpenter, R.H.S., "Oculomotor Procrastination," in D. F. Fisher, R. A. Monty, & J. W. Senders (Eds.), Eye Movements: Cognition and ...; Affect and Cognition: The 17th Annual Carnegie Symposium on Cognition, Lawrence Erlbaum, 1982, New Jersey; 134 Clement, W.F.; Graham, D. ... Entries carry subject codes such as IMQ ("Image quality": objective measures), IND ("Individual differences": inter-subject variance in vision and visual task performance), and INS.]

  2. Rapid Resumption of Interrupted Search Is Independent of Age-Related Improvements in Visual Search

    ERIC Educational Resources Information Center

    Lleras, Alejandro; Porporino, Mafalda; Burack, Jacob A.; Enns, James T.

    2011-01-01

    In this study, 7-19-year-olds performed an interrupted visual search task in two experiments. Our question was whether the tendency to respond within 500 ms after a second glimpse of a display (the "rapid resumption" effect ["Psychological Science", 16 (2005) 684-688]) would increase with age in the same way as overall search efficiency. The…

  3. Segmentation by depth does not always facilitate visual search.

    PubMed

    Finlayson, Nonie J; Remington, Roger W; Retell, James D; Grove, Philip M

    2013-07-11

    In visual search, target detection times are relatively insensitive to set size when targets and distractors differ on a single feature dimension. Search can be confined to only those elements sharing a single feature, such as color (Egeth, Virzi, & Garbart, 1984). These findings have been taken as evidence that elementary feature dimensions support a parallel segmentation of a scene into discrete sets of items. Here we explored if relative depth (signaled by binocular disparity) could support a similar parallel segmentation by examining the effects of distributing distracting elements across two depth planes. Three important empirical findings emerged. First, when the target was a feature singleton on the target depth plane, but a conjunction search among distractors on the nontarget plane, search efficiency increased compared to a single depth plane. Second, benefits of segmentation in depth were only observed when the target depth plane was known in advance. Third, no benefit of segmentation in depth was observed when both planes required a conjunction search, even with prior knowledge of the target depth plane. Overall, the benefit of distributing the elements of a search set across two depth planes was observed only when the two planes differed both in binocular disparity and in the elementary feature composition of individual elements. We conclude that segmentation of the search array into two depth planes can facilitate visual search, but unlike color or other elementary properties, does not provide an automatic, preattentive segmentation.

  4. Visual Exploratory Search of Relationship Graphs on Smartphones

    PubMed Central

    Ouyang, Jianquan; Zheng, Hao; Kong, Fanbin; Liu, Tianming

    2013-01-01

    This paper presents a novel framework for Visual Exploratory Search of Relationship Graphs on Smartphones (VESRGS) that is composed of three major components: inference and representation of semantic relationship graphs on the Web via meta-search, visual exploratory search of relationship graphs through both querying and browsing strategies, and human-computer interaction via the multi-touch interface and mobile Internet on smartphones. In comparison with traditional lookup search methodologies, the proposed VESRGS system is characterized by the following perceived advantages: 1) it infers rich semantic relationships between the querying keywords and other related concepts from large-scale meta-search results from the Google, Yahoo! and Bing search engines, and represents semantic relationships via graphs; 2) the exploratory search approach empowers users to naturally and effectively explore and discover knowledge in a rich information world of interlinked relationship graphs in a personalized fashion; 3) it effectively takes advantage of smartphones' user-friendly interfaces, ubiquitous Internet connectivity, and portability. Our extensive experimental results have demonstrated that the VESRGS framework can significantly improve the users' capability of seeking the most relevant relationship information to their own specific needs. We envision that the VESRGS framework can be a starting point for future exploration of novel, effective search strategies in the mobile Internet era. PMID:24223936

  5. Color, form and luminance capture attention in visual search.

    PubMed

    Turatto, M; Galfano, G

    2000-01-01

    Extant models of visual attention predict that a salient element should produce bottom-up activation leading to stimulus-driven attentional capture (e.g. Cave, 1999). However, apart from onset, previous work manipulating set size in visual search has failed to provide empirical evidence for this kind of capture. Using a method based on a single set size in which the target-singleton distance was varied, we explored whether, in a serial search task, attentional capture is triggered by static discontinuities such as those generated through the manipulation of color, form, and luminance. The results suggest that these physical properties are indeed able to capture attention automatically.

  6. Learned face-voice pairings facilitate visual search

    PubMed Central

    Zweig, L. Jacob; Suzuki, Satoru; Grabowecky, Marcia

    2014-01-01

    Voices provide a rich source of information that is important for identifying individuals and for social interaction. During search for a face in a crowd, voices often accompany visual information and they facilitate localization of the sought individual. However, it is unclear whether this facilitation occurs primarily because the voice cues the location of the face or because it also increases the salience of the associated face. Here we demonstrate that a voice that provides no location information nonetheless facilitates visual search for an associated face. We trained novel face/voice associations and verified learning using a two-alternative forced-choice task in which participants had to correctly match a presented voice to the associated face. Following training, participants searched for a previously learned target face among other faces while hearing one of the following sounds (localized at the center of the display): a congruent-learned voice, an incongruent but familiar voice, an unlearned and unfamiliar voice, or a time-reversed voice. Only the congruent-learned voice speeded visual search for the associated face. This result suggests that voices facilitate visual detection of associated faces, potentially by increasing their visual salience, and that the underlying crossmodal associations can be established through brief training. PMID:25023955

  7. Learned face-voice pairings facilitate visual search.

    PubMed

    Zweig, L Jacob; Suzuki, Satoru; Grabowecky, Marcia

    2015-04-01

    Voices provide a rich source of information that is important for identifying individuals and for social interaction. During search for a face in a crowd, voices often accompany visual information, and they facilitate localization of the sought-after individual. However, it is unclear whether this facilitation occurs primarily because the voice cues the location of the face or because it also increases the salience of the associated face. Here we demonstrate that a voice that provides no location information nonetheless facilitates visual search for an associated face. We trained novel face-voice associations and verified learning using a two-alternative forced choice task in which participants had to correctly match a presented voice to the associated face. Following training, participants searched for a previously learned target face among other faces while hearing one of the following sounds (localized at the center of the display): a congruent learned voice, an incongruent but familiar voice, an unlearned and unfamiliar voice, or a time-reversed voice. Only the congruent learned voice speeded visual search for the associated face. This result suggests that voices facilitate the visual detection of associated faces, potentially by increasing their visual salience, and that the underlying crossmodal associations can be established through brief training.

  8. Visual search is influenced by 3D spatial layout.

    PubMed

    Finlayson, Nonie J; Grove, Philip M

    2015-10-01

    Many activities necessitate the deployment of attention to specific distances and directions in our three-dimensional (3D) environment. However, most research on how attention is deployed is conducted with two dimensional (2D) computer displays, leaving a large gap in our understanding about the deployment of attention in 3D space. We report how each of four parameters of 3D visual space influence visual search: 3D display volume, distance in depth, number of depth planes, and relative target position in depth. Using a search task, we find that visual search performance depends on 3D volume, relative target position in depth, and number of depth planes. Our results demonstrate an asymmetrical preference for targets in the front of a display unique to 3D search and show that arranging items into more depth planes reduces search efficiency. Consistent with research using 2D displays, we found slower response times to find targets in displays with larger 3D volumes compared with smaller 3D volumes. Finally, in contrast to the importance of target depth relative to other distractors, target depth relative to the fixation point did not affect response times or search efficiency.

  9. Multiple mobile robots real-time visual search algorithm

    NASA Astrophysics Data System (ADS)

    Yan, Caixia; Zhan, Qiang

    2010-08-01

    A real-time visual locating system for multiple mobile robots is introduced, in which a global search algorithm and a track search algorithm are combined to identify the real-time position and orientation (pose) of the robots. A switching strategy between the two algorithms is given to ensure accuracy and improve retrieval speed. A grid search approach is used to identify targets when searching globally. Using the location in the previous frame, the maximum speed, and the frame time interval, the track search determines the area in which the target robot may appear in the next frame, and a new search is then performed only in that area. Global search is used if the target robot was not found in the previous search; otherwise, track search is used. Experiments on the static and dynamic recognition of three robots show the method to be highly precise, fast, stable, and easy to extend, meeting all the design requirements.
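
    The switching logic described above can be sketched as follows. The bounding region (previous position plus maximum speed times frame interval) follows the record's description, but the function names and the `detect_in_region` helper, including its return convention, are hypothetical.

```python
# Hedged sketch of the global-vs-track search switching strategy described
# in the record. `detect_in_region` is a hypothetical detector that returns
# a robot pose (x, y, theta) or None; the real system's interfaces are not
# given in the abstract.
from typing import Optional, Tuple

Pose = Tuple[float, float, float]          # x, y, heading

def track_region(prev: Pose, v_max: float, dt: float) -> Tuple[float, float, float]:
    """Circle (cx, cy, radius) where the robot may appear in the next frame."""
    x, y, _ = prev
    return x, y, v_max * dt

def locate(frame, prev: Optional[Pose], v_max: float, dt: float,
           detect_in_region) -> Optional[Pose]:
    if prev is not None:
        # Track search: look only inside the reachable region.
        pose = detect_in_region(frame, track_region(prev, v_max, dt))
        if pose is not None:
            return pose
    # Fall back to global grid search over the whole frame.
    return detect_in_region(frame, None)
```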

  10. Visual search deficits are independent of magnocellular deficits in dyslexia.

    PubMed

    Wright, Craig M; Conlon, Elizabeth G; Dyck, Murray

    2012-04-01

    The aim of this study was to investigate the theory that visual magnocellular deficits seen in groups with dyslexia are linked to reading via the mechanisms of visual attention. Visual attention was measured with a serial search task and magnocellular function with a coherent motion task. A large group of children with dyslexia (n = 70) had slower serial search times than a control group of typical readers. However, the effect size was small (ηp² = 0.05), indicating considerable overlap between the groups. When the dyslexia sample was split into those with or without a magnocellular deficit, there was no difference in visual search reaction time between either group and controls. The data suggest that magnocellular sensitivity and visual spatial attention weaknesses are independent of one another. They also provide more evidence of heterogeneity in response to psychophysical tasks in groups with dyslexia. Alternative explanations for poor performance on visual attention tasks are proposed along with avenues for future research.

  11. Eye-Search: A web-based therapy that improves visual search in hemianopia

    PubMed Central

    Ong, Yean-Hoon; Jacquin-Courtois, Sophie; Gorgoraptis, Nikos; Bays, Paul M; Husain, Masud; Leff, Alexander P

    2015-01-01

    Persisting hemianopia frequently complicates lesions of the posterior cerebral hemispheres, leaving patients impaired on a range of key activities of daily living. Practice-based therapies designed to induce compensatory eye movements can improve hemianopic patients' visual function, but are not readily available. We used a web-based therapy (Eye-Search) that retrains visual search saccades into patients' blind hemifield. A group of 78 suitable hemianopic patients took part. After therapy (800 trials over 11 days), search times into their impaired hemifield improved by an average of 24%. Patients also reported improvements in a subset of visually guided everyday activities, suggesting that Eye-Search therapy affects real-world outcomes. PMID:25642437

  12. The impact of expert visual guidance on trainee visual search strategy, visual attention and motor skills

    PubMed Central

    Leff, Daniel R.; James, David R. C.; Orihuela-Espina, Felipe; Kwok, Ka-Wai; Sun, Loi Wah; Mylonas, George; Athanasiou, Thanos; Darzi, Ara W.; Yang, Guang-Zhong

    2015-01-01

    Minimally invasive and robotic surgery changes the capacity for surgical mentors to guide their trainees with the control customary to open surgery. This neuroergonomic study aims to assess a “Collaborative Gaze Channel” (CGC); which detects trainer gaze-behavior and displays the point of regard to the trainee. A randomized crossover study was conducted in which twenty subjects performed a simulated robotic surgical task necessitating collaboration either with verbal (control condition) or visual guidance with CGC (study condition). Trainee occipito-parietal (O-P) cortical function was assessed with optical topography (OT) and gaze-behavior was evaluated using video-oculography. Performance during gaze-assistance was significantly superior [biopsy number: (mean ± SD): control = 5.6 ± 1.8 vs. CGC = 6.6 ± 2.0; p < 0.05] and was associated with significantly lower O-P cortical activity [ΔHbO2 mMol × cm [median (IQR)] control = 2.5 (12.0) vs. CGC 0.63 (11.2), p < 0.001]. A random effect model (REM) confirmed the association between guidance mode and O-P excitation. Network cost and global efficiency were not significantly influenced by guidance mode. A gaze channel enhances performance, modulates visual search, and alleviates the burden in brain centers subserving visual attention and does not induce changes in the trainee’s O-P functional network observable with the current OT technique. The results imply that through visual guidance, attentional resources may be liberated, potentially improving the capability of trainees to attend to other safety critical events during the procedure. PMID:26528160

  13. The role of memory for visual search in scenes.

    PubMed

    Le-Hoa Võ, Melissa; Wolfe, Jeremy M

    2015-03-01

    Many daily activities involve looking for something. The ease with which these searches are performed often allows one to forget that searching represents complex interactions between visual attention and memory. Although a clear understanding exists of how search efficiency will be influenced by visual features of targets and their surrounding distractors or by the number of items in the display, the role of memory in search is less well understood. Contextual cueing studies have shown that implicit memory for repeated item configurations can facilitate search in artificial displays. When searching more naturalistic environments, other forms of memory come into play. For instance, semantic memory provides useful information about which objects are typically found where within a scene, and episodic scene memory provides information about where a particular object was seen the last time a particular scene was viewed. In this paper, we will review work on these topics, with special emphasis on the role of memory in guiding search in organized, real-world scenes.

  14. Vigilance, visual search and attention in an agricultural task.

    PubMed

    Hartley, L R; Arnold, P K; Kobryn, H; Macleod, C

    1989-03-01

    In a fragile agricultural environment, such as that of Western Australia (WA), introduced exotic plant species present a serious environmental and economic threat. Skeleton weed, Chondrilla juncea, a Mediterranean daisy, was accidentally introduced into WA in 1963 and competes with cash crops such as wheat. When the weed is observed in the fields, farms are quarantined and mechanised teams search for the infestations in order to destroy them. Since the search process requires attention, visual search and vigilance, the present investigators conducted a number of controlled field trials to identify the importance of these factors in detection of the weed. The paper describes the basic hit rate, the vigilance decrement, the effects of search party size and target size, and some data on the effect of solar illumination of the target. Several recommendations have been made and incorporated in the search programme, and some laboratory studies were undertaken to answer questions arising.

  15. In search of the emotional face: anger versus happiness superiority in visual search.

    PubMed

    Savage, Ruth A; Lipp, Ottmar V; Craig, Belinda M; Becker, Stefanie I; Horstmann, Gernot

    2013-08-01

    Previous research has provided inconsistent results regarding visual search for emotional faces, yielding evidence for either anger superiority (i.e., more efficient search for angry faces) or happiness superiority effects (i.e., more efficient search for happy faces), suggesting that these results do not reflect on emotional expression, but on emotion (un-)related low-level perceptual features. The present study investigated possible factors mediating anger/happiness superiority effects; specifically search strategy (fixed vs. variable target search; Experiment 1), stimulus choice (Nimstim database vs. Ekman & Friesen database; Experiments 1 and 2), and emotional intensity (Experiment 3 and 3a). Angry faces were found faster than happy faces regardless of search strategy using faces from the Nimstim database (Experiment 1). By contrast, a happiness superiority effect was evident in Experiment 2 when using faces from the Ekman and Friesen database. Experiment 3 employed angry, happy, and exuberant expressions (Nimstim database) and yielded anger and happiness superiority effects, respectively, highlighting the importance of the choice of stimulus materials. Ratings of the stimulus materials collected in Experiment 3a indicate that differences in perceived emotional intensity, pleasantness, or arousal do not account for differences in search efficiency. Across three studies, the current investigation indicates that prior reports of anger or happiness superiority effects in visual search are likely to reflect on low-level visual features associated with the stimulus materials used, rather than on emotion.

  16. LASAGNA-Search: an integrated web tool for transcription factor binding site search and visualization.

    PubMed

    Lee, Chih; Huang, Chun-Hsi

    2013-03-01

    The release of ChIP-seq data from the ENCyclopedia Of DNA Elements (ENCODE) and Model Organism ENCyclopedia Of DNA Elements (modENCODE) projects has significantly increased the amount of transcription factor (TF) binding affinity information available to researchers. However, scientists still routinely use TF binding site (TFBS) search tools to scan unannotated sequences for TFBSs, particularly when searching for lesser-known TFs or TFs in organisms for which ChIP-seq data are unavailable. The sequence analysis often involves multiple steps such as TF model collection, promoter sequence retrieval, and visualization; thus, several different tools are required. We have developed a novel integrated web tool named LASAGNA-Search that allows users to perform TFBS searches without leaving the web site. LASAGNA-Search uses the LASAGNA (Length-Aware Site Alignment Guided by Nucleotide Association) algorithm for TFBS alignment. Important features of LASAGNA-Search include (i) acceptance of unaligned variable-length TFBSs, (ii) a collection of 1726 TF models, (iii) automatic promoter sequence retrieval, (iv) visualization in the UCSC Genome Browser, and (v) gene regulatory network inference and visualization based on binding specificities. LASAGNA-Search is freely available at http://biogrid.engr.uconn.edu/lasagna_search/.
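
    For readers unfamiliar with what a TFBS scan computes, the sketch below scores sequence windows against a toy position frequency matrix using log-odds. This is a generic PWM scan for illustration only, not the LASAGNA alignment algorithm, and the count matrix, pseudocount, and background frequency are invented.

```python
# Hedged sketch: generic log-odds PWM scan of a DNA sequence. This is a
# textbook-style illustration of TFBS scoring, not the LASAGNA algorithm;
# the 4 x L count matrix and pseudocount are invented for the example.
import math

BASES = "ACGT"
COUNTS = [  # toy position frequency matrix, columns = motif positions
    [8, 1, 0, 9],   # A
    [1, 0, 9, 0],   # C
    [0, 8, 1, 1],   # G
    [1, 1, 0, 0],   # T
]

def log_odds_pwm(counts, pseudo=0.5, background=0.25):
    ncols = len(counts[0])
    pwm = []
    for col in range(ncols):
        total = sum(counts[b][col] for b in range(4)) + 4 * pseudo
        pwm.append({BASES[b]: math.log2((counts[b][col] + pseudo) / total / background)
                    for b in range(4)})
    return pwm

def scan(seq, pwm):
    w = len(pwm)
    return [(i, sum(pwm[j][seq[i + j]] for j in range(w)))
            for i in range(len(seq) - w + 1)]

pwm = log_odds_pwm(COUNTS)
hits = scan("TTAGCAAGCATTTT", pwm)
best = max(hits, key=lambda h: h[1])
print("best window starts at", best[0], "score", round(best[1], 2))
```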

  17. Evolutionary Visual Exploration: Evaluation of an IEC Framework for Guided Visual Search.

    PubMed

    Boukhelifa, N; Bezerianos, A; Cancino, W; Lutton, E

    2017-01-01

    We evaluate and analyse a framework for evolutionary visual exploration (EVE) that guides users in exploring large search spaces. EVE uses an interactive evolutionary algorithm to steer the exploration of multidimensional data sets toward two-dimensional projections that are interesting to the analyst. Our method smoothly combines automatically calculated metrics and user input in order to propose pertinent views to the user. In this article, we revisit this framework and a prototype application that was developed as a demonstrator, and summarise our previous study with domain experts and its main findings. We then report on results from a new user study with a clearly predefined task, which examines how users leverage the system and how the system evolves to match their needs. While we previously showed that using EVE, domain experts were able to formulate interesting hypotheses and reach new insights when exploring freely, our new findings indicate that users, guided by the interactive evolutionary algorithm, are able to converge quickly to an interesting view of their data when a clear task is specified. We provide a detailed analysis of how users interact with an evolutionary algorithm and how the system responds to their exploration strategies and evaluation patterns. Our work aims at building a bridge between the domains of visual analytics and interactive evolution. The benefits are numerous, in particular for evaluating interactive evolutionary computation (IEC) techniques based on user study methodologies.

  18. A neural network for visual pattern recognition

    SciTech Connect

    Fukushima, K.

    1988-03-01

    A modeling approach, which is a synthetic approach using neural network models, continues to gain importance. In the modeling approach, the authors study how to interconnect neurons to synthesize a brain model, which is a network with the same functions and abilities as the brain. The relationship between modeling neural networks and neurophysiology resembles that between theoretical physics and experimental physics. Modeling takes a synthetic approach, while neurophysiology or psychology takes an analytical approach. Modeling neural networks is useful in explaining the brain and also in engineering applications. It brings the results of neurophysiological and psychological research to engineering applications in the most direct way possible. This article discusses a neural network model thus obtained, a model with selective attention in visual pattern recognition.

  19. Functional Connectivity Between Superior Parietal Lobule and Primary Visual Cortex "at Rest" Predicts Visual Search Efficiency.

    PubMed

    Bueichekú, Elisenda; Ventura-Campos, Noelia; Palomar-García, María-Ángeles; Miró-Padilla, Anna; Parcet, María-Antonia; Ávila, César

    2015-10-01

    Spatiotemporal activity that emerges spontaneously "at rest" has been proposed to reflect individual a priori biases in cognitive processing. This research focused on testing neurocognitive models of visual attention by studying the functional connectivity (FC) of the superior parietal lobule (SPL), given its central role in establishing priority maps during visual search tasks. Twenty-three human participants completed a functional magnetic resonance imaging session that featured a resting-state scan, followed by a visual search task based on the alphanumeric category effect. As expected, the behavioral results showed longer reaction times and more errors for the within-category (i.e., searching a target letter among letters) than the between-category search (i.e., searching a target letter among numbers). The within-category condition was related to greater activation of the superior and inferior parietal lobules, occipital cortex, inferior frontal cortex, dorsal anterior cingulate cortex, and the superior colliculus than the between-category search. The resting-state FC analysis of the SPL revealed a broad network that included connections with the inferotemporal cortex, dorsolateral prefrontal cortex, and dorsal frontal areas like the supplementary motor area and frontal eye field. Noteworthy, the regression analysis revealed that the more efficient participants in the visual search showed stronger FC between the SPL and areas of primary visual cortex (V1) related to the search task. We shed some light on how the SPL establishes a priority map of the environment during visual attention tasks and how FC is a valuable tool for assessing individual differences while performing cognitive tasks.

  20. Perceptual basis of redundancy gains in visual pop-out search.

    PubMed

    Töllner, Thomas; Zehetleitner, Michael; Krummenacher, Joseph; Müller, Hermann J

    2011-01-01

    The redundant-signals effect (RSE) refers to a speed-up of RT when the response is triggered by two, rather than just one, response-relevant target elements. Although there is agreement that in the visual modality RSEs observed with dimensionally redundant signals originating from the same location are generated by coactive processing architectures, there has been a debate as to the exact stage(s)--preattentive versus postselective--of processing at which coactivation arises. To determine the origin(s) of redundancy gains in visual pop-out search, the present study combined mental chronometry with electrophysiological markers that reflect purely preattentive perceptual (posterior-contralateral negativity [PCN]), preattentive and postselective perceptual plus response selection-related (stimulus-locked lateralized readiness potential [LRP]), or purely response production-related processes (response-locked LRP). As expected, there was an RSE on target detection RTs, with evidence for coactivation. At the electrophysiological level, this pattern was mirrored by an RSE in PCN latencies, whereas stimulus-locked LRP latencies showed no RSE over and above the PCN effect. Also, there was no RSE on the response-locked LRPs. This pattern demonstrates a major contribution of preattentive perceptual processing stages to the RSE in visual pop-out search, consistent with parallel-coactive coding of target signals in multiple visual dimensions [Müller, H. J., Heller, D., & Ziegler, J. Visual search for singleton feature targets within and across feature dimensions.
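
    The record reports evidence for coactivation without spelling out the behavioral test. Coactivation in redundant-signals paradigms is commonly assessed by checking for violations of Miller's (1982) race-model inequality; the sketch below illustrates that generic check under the assumption that such a test applies, using synthetic RT data.

```python
# Hedged sketch: testing for violations of Miller's (1982) race-model
# inequality, a common way to infer coactivation from redundant-signals RTs.
# The record does not state that this exact test was used; the RT data here
# are synthetic placeholders.
import numpy as np

def ecdf(rts, t):
    rts = np.sort(np.asarray(rts, dtype=float))
    return np.searchsorted(rts, t, side="right") / len(rts)

def race_model_violation(rt_redundant, rt_single1, rt_single2, probe_times):
    """Positive values mean P(RT<=t | redundant) exceeds the race-model bound."""
    return [ecdf(rt_redundant, t) - min(1.0, ecdf(rt_single1, t) + ecdf(rt_single2, t))
            for t in probe_times]

rng = np.random.default_rng(2)
rt_red = rng.normal(420, 40, 300)      # redundant-target trials (ms)
rt_s1 = rng.normal(480, 50, 300)       # single-target condition 1
rt_s2 = rng.normal(490, 50, 300)       # single-target condition 2
times = np.percentile(np.concatenate([rt_red, rt_s1, rt_s2]), range(5, 100, 10))
print([round(v, 3) for v in race_model_violation(rt_red, rt_s1, rt_s2, times)])
```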

  1. Eye movements during visual search in patients with glaucoma

    PubMed Central

    2012-01-01

    Background Glaucoma has been shown to lead to disability in many daily tasks, including visual search. This study aims to determine whether the saccadic eye movements of people with glaucoma differ from those of people with normal vision, and to investigate the association between eye movements and impaired visual search. Methods Forty patients (mean age: 67 [SD: 9] years) with a range of glaucomatous visual field (VF) defects in both eyes (mean best eye mean deviation [MD]: –5.9 [SD: 5.4] dB) and 40 age-similar people with normal vision (mean age: 66 [SD: 10] years) were timed as they searched for a series of target objects in computer-displayed photographs of real-world scenes. Eye movements were simultaneously recorded using an eye tracker. Average number of saccades per second, average saccade amplitude and average search duration across trials were recorded. These response variables were compared with measurements of VF and contrast sensitivity. Results The average rate of saccades made by the patient group was significantly smaller than that made by controls during the visual search task (P = 0.02; mean reduction of 5.6% [95% CI: 0.1 to 10.4%]). There was no difference in average saccade amplitude between the patients and the controls (P = 0.09). Average number of saccades was weakly correlated with aspects of visual function, with patients with worse contrast sensitivity (PR logCS; Spearman’s rho: 0.42; P = 0.006) and more severe VF defects (best eye MD; Spearman’s rho: 0.34; P = 0.037) tending to make fewer eye movements during the task. Average detection time in the search task was associated with the average rate of saccades in the patient group (Spearman’s rho = −0.65; P < 0.001), but this was not apparent in the controls. Conclusions The average rate of saccades made during visual search by this group of patients was lower than that made by people with normal vision of a similar average age. There was wide

  2. Searching for Pulsars Using Image Pattern Recognition

    NASA Astrophysics Data System (ADS)

    Zhu, W. W.; Berndsen, A.; Madsen, E. C.; Tan, M.; Stairs, I. H.; Brazier, A.; Lazarus, P.; Lynch, R.; Scholz, P.; Stovall, K.; Ransom, S. M.; Banaszak, S.; Biwer, C. M.; Cohen, S.; Dartez, L. P.; Flanigan, J.; Lunsford, G.; Martinez, J. G.; Mata, A.; Rohr, M.; Walker, A.; Allen, B.; Bhat, N. D. R.; Bogdanov, S.; Camilo, F.; Chatterjee, S.; Cordes, J. M.; Crawford, F.; Deneva, J. S.; Desvignes, G.; Ferdman, R. D.; Freire, P. C. C.; Hessels, J. W. T.; Jenet, F. A.; Kaplan, D. L.; Kaspi, V. M.; Knispel, B.; Lee, K. J.; van Leeuwen, J.; Lyne, A. G.; McLaughlin, M. A.; Siemens, X.; Spitler, L. G.; Venkataraman, A.

    2014-02-01

    In the modern era of big data, many fields of astronomy are generating huge volumes of data, the analysis of which can sometimes be the limiting factor in research. Fortunately, computer scientists have developed powerful data-mining techniques that can be applied to various fields. In this paper, we present a novel artificial intelligence (AI) program that identifies pulsars from recent surveys by using image pattern recognition with deep neural nets—the PICS (Pulsar Image-based Classification System) AI. The AI mimics human experts and distinguishes pulsars from noise and interference by looking for patterns from candidate plots. Different from other pulsar selection programs that search for expected patterns, the PICS AI is taught the salient features of different pulsars from a set of human-labeled candidates through machine learning. The training candidates are collected from the Pulsar Arecibo L-band Feed Array (PALFA) survey. The information from each pulsar candidate is synthesized in four diagnostic plots, which consist of image data with up to thousands of pixels. The AI takes these data from each candidate as its input and uses thousands of such candidates to train its ~9000 neurons. The deep neural networks in this AI system grant it superior ability to recognize various types of pulsars as well as their harmonic signals. The trained AI's performance has been validated with a large set of candidates from a different pulsar survey, the Green Bank North Celestial Cap survey. In this completely independent test, the PICS ranked 264 out of 277 pulsar-related candidates, including all 56 previously known pulsars and 208 of their harmonics, in the top 961 (1%) of 90,008 test candidates, missing only 13 harmonics. The first non-pulsar candidate appears at rank 187, following 45 pulsars and 141 harmonics. In other words, 100% of the pulsars were ranked in the top 1% of all candidates, while 80% were ranked higher than any noise or interference. The

  3. Searching for pulsars using image pattern recognition

    SciTech Connect

    Zhu, W. W.; Berndsen, A.; Madsen, E. C.; Tan, M.; Stairs, I. H.; Brazier, A.; Lazarus, P.; Lynch, R.; Scholz, P.; Stovall, K.; Cohen, S.; Dartez, L. P.; Lunsford, G.; Martinez, J. G.; Mata, A.; Ransom, S. M.; Banaszak, S.; Biwer, C. M.; Flanigan, J.; Rohr, M. E-mail: berndsen@phas.ubc.ca; and others

    2014-02-01

    In the modern era of big data, many fields of astronomy are generating huge volumes of data, the analysis of which can sometimes be the limiting factor in research. Fortunately, computer scientists have developed powerful data-mining techniques that can be applied to various fields. In this paper, we present a novel artificial intelligence (AI) program that identifies pulsars from recent surveys by using image pattern recognition with deep neural nets—the PICS (Pulsar Image-based Classification System) AI. The AI mimics human experts and distinguishes pulsars from noise and interference by looking for patterns from candidate plots. Different from other pulsar selection programs that search for expected patterns, the PICS AI is taught the salient features of different pulsars from a set of human-labeled candidates through machine learning. The training candidates are collected from the Pulsar Arecibo L-band Feed Array (PALFA) survey. The information from each pulsar candidate is synthesized in four diagnostic plots, which consist of image data with up to thousands of pixels. The AI takes these data from each candidate as its input and uses thousands of such candidates to train its ∼9000 neurons. The deep neural networks in this AI system grant it superior ability to recognize various types of pulsars as well as their harmonic signals. The trained AI's performance has been validated with a large set of candidates from a different pulsar survey, the Green Bank North Celestial Cap survey. In this completely independent test, the PICS ranked 264 out of 277 pulsar-related candidates, including all 56 previously known pulsars and 208 of their harmonics, in the top 961 (1%) of 90,008 test candidates, missing only 13 harmonics. The first non-pulsar candidate appears at rank 187, following 45 pulsars and 141 harmonics. In other words, 100% of the pulsars were ranked in the top 1% of all candidates, while 80% were ranked higher than any noise or interference. The
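
    As a toy illustration of the image-pattern classification idea behind PICS, and emphatically not its actual deep architecture (which is only loosely characterized in the records above), the sketch below trains a one-hidden-layer network on synthetic flattened "diagnostic plot" pixel vectors. All sizes, labels, and data are placeholders.

```python
# Hedged sketch: a tiny one-hidden-layer classifier on flattened "diagnostic
# plot" pixels, standing in for the image-pattern-recognition idea behind
# PICS. Architecture, sizes, and data are invented placeholders; the real
# system uses much larger deep/convolutional networks.
import numpy as np

rng = np.random.default_rng(3)
n, d, h = 400, 16 * 16, 32                    # candidates, flattened pixels, hidden units
X = rng.normal(size=(n, d))
y = (X[:, :8].sum(axis=1) > 0).astype(float)  # fake "pulsar-like" label

W1 = rng.normal(0, 0.1, (d, h))
b1 = np.zeros(h)
W2 = rng.normal(0, 0.1, h)
b2 = 0.0

def forward(X):
    hidden = np.tanh(X @ W1 + b1)
    return hidden, 1.0 / (1.0 + np.exp(-(hidden @ W2 + b2)))

lr = 0.1
for _ in range(300):                          # plain batch gradient descent
    hidden, p = forward(X)
    grad_out = (p - y) / n                    # d(cross-entropy)/d(logit)
    grad_hidden = np.outer(grad_out, W2) * (1 - hidden ** 2)
    W2 -= lr * hidden.T @ grad_out
    b2 -= lr * grad_out.sum()
    W1 -= lr * X.T @ grad_hidden
    b1 -= lr * grad_hidden.sum(axis=0)

_, p = forward(X)
print("training accuracy ~", round(float(((p > 0.5) == y).mean()), 2))
```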

  4. Visual pattern discovery in timed event data

    NASA Astrophysics Data System (ADS)

    Schaefer, Matthias; Wanner, Franz; Mansmann, Florian; Scheible, Christian; Stennett, Verity; Hasselrot, Anders T.; Keim, Daniel A.

    2011-01-01

    Business processes have tremendously changed the way large companies conduct their business: the integration of information systems into the workflows of their employees ensures a high service level and thus high customer satisfaction. One core aspect of business process engineering is the events that steer the workflows and trigger internal processes. Strict requirements on interval-scaled temporal patterns, which are common in time series, are thereby relaxed through the ordinal character of such events. It is this additional degree of freedom that opens unexplored possibilities for visualizing event data. In this paper, we present a flexible and novel system to find significant events, event clusters and event patterns. Each event is represented as a small rectangle, which is colored according to categorical, ordinal or interval-scaled metadata. Depending on the analysis task, different layout functions are used to highlight either the ordinal character of the data or temporal correlations. The system has built-in features for ordering customers or event groups according to the similarity of their event sequences, temporal gap alignment and stacking of co-occurring events. Two characteristically different case studies, dealing with business process events and news articles, demonstrate the capabilities of our system to explore event data.

  5. Crowded visual search in children with normal vision and children with visual impairment.

    PubMed

    Huurneman, Bianca; Cox, Ralf F A; Vlaskamp, Björn N S; Boonstra, F Nienke

    2014-03-01

    This study investigates the influence of oculomotor control, crowding, and attentional factors on visual search in children with normal vision ([NV], n=11), children with visual impairment without nystagmus ([VI-nys], n=11), and children with VI with accompanying nystagmus ([VI+nys], n=26). Exclusion criteria for children with VI were: multiple impairments and visual acuity poorer than 20/400 or better than 20/50. Three search conditions were presented: a row with homogeneous distractors, a matrix with homogeneous distractors, and a matrix with heterogeneous distractors. Element spacing was manipulated in 5 steps from 2 to 32 minutes of arc. Symbols were sized 2 times the threshold acuity to guarantee visibility for the VI groups. During simple row and matrix search with homogeneous distractors children in the VI+nys group were less accurate than children with NV at smaller spacings. Group differences were even more pronounced during matrix search with heterogeneous distractors. Search times were longer in children with VI compared to children with NV. The more extended impairments during serial search reveal greater dependence on oculomotor control during serial compared to parallel search.

  6. Mining visual collocation patterns via self-supervised subspace learning.

    PubMed

    Yuan, Junsong; Wu, Ying

    2012-04-01

    Traditional text data mining techniques are not directly applicable to image data which contain spatial information and are characterized by high-dimensional visual features. It is not a trivial task to discover meaningful visual patterns from images because the content variations and spatial dependence in visual data greatly challenge most existing data mining methods. This paper presents a novel approach to coping with these difficulties for mining visual collocation patterns. Specifically, the novelty of this work lies in the following new contributions: 1) a principled solution to the discovery of visual collocation patterns based on frequent itemset mining and 2) a self-supervised subspace learning method to refine the visual codebook by feeding back discovered patterns via subspace learning. The experimental results show that our method can discover semantically meaningful patterns efficiently and effectively.
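
    The first contribution above rests on frequent itemset mining over visual words. The sketch below is a minimal Apriori-style miner over toy "transactions" of visual-word IDs; it is a generic illustration with invented inputs and omits the spatial dependence handling and self-supervised codebook refinement that the paper adds.

```python
# Hedged sketch: a minimal Apriori-style frequent itemset miner over toy
# "transactions" of visual-word IDs. Generic illustration only; the paper's
# method additionally handles spatial dependence and codebook refinement.
def frequent_itemsets(transactions, min_support=2, max_size=3):
    transactions = [frozenset(t) for t in transactions]
    # Start from frequent single items.
    items = {i for t in transactions for i in t}
    current = {frozenset([i]) for i in items}
    result = {}
    size = 1
    while current and size <= max_size:
        counts = {c: sum(c <= t for t in transactions) for c in current}
        frequent = {c: n for c, n in counts.items() if n >= min_support}
        result.update(frequent)
        # Candidate generation: union pairs of frequent itemsets of size k
        # to form candidates of size k + 1.
        size += 1
        current = {a | b for a in frequent for b in frequent
                   if len(a | b) == size}
    return result

# Each transaction = visual-word IDs found in one local image region.
regions = [{1, 2, 5}, {1, 2, 7}, {1, 2, 5, 9}, {2, 5}, {3, 4}]
for pattern, support in sorted(frequent_itemsets(regions).items(), key=lambda x: -x[1]):
    print(sorted(pattern), "support", support)
```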

  7. Time course of target recognition in visual search.

    PubMed

    Kotowicz, Andreas; Rutishauser, Ueli; Koch, Christof

    2010-01-01

    Visual search is a ubiquitous task of great importance: it allows us to quickly find the objects that we are looking for. During active search for an object (target), eye movements are made to different parts of the scene. Fixation locations are chosen based on a combination of information about the target and the visual input. At the end of a successful search, the eyes typically fixate on the target. But does this imply that target identification occurs while looking at it? The duration of a typical fixation (approximately 170 ms) and the neuronal latencies of both the oculomotor system and the visual stream indicate that there might not be enough time to do so. Previous studies have suggested the following solution to this dilemma: the target is identified extrafoveally, and this event triggers a saccade towards the target location. However, this has not been experimentally verified. Here we test the hypothesis that subjects recognize the target before they look at it, using a search display of oriented colored bars. Using a gaze-contingent real-time technique, we prematurely stopped search shortly after subjects fixated the target. Afterwards, we asked subjects to identify the target location. We find that subjects can identify the target location even when fixating on the target for less than 10 ms. Longer fixations on the target do not increase detection performance but increase confidence. In contrast, subjects cannot perform this task if they are not allowed to move their eyes. Thus, information about the target during conjunction search for colored oriented bars can, in some circumstances, be acquired at least one fixation ahead of reaching the target. The final fixation serves to increase confidence rather than performance, illustrating a distinct role of the final fixation in the subjective judgment of confidence rather than accuracy.

  8. Perspective: n-type oxide thermoelectrics via visual search strategies

    NASA Astrophysics Data System (ADS)

    Xing, Guangzong; Sun, Jifeng; Ong, Khuong P.; Fan, Xiaofeng; Zheng, Weitao; Singh, David J.

    2016-05-01

    We discuss and present search strategies for finding new thermoelectric compositions based on first principles electronic structure and transport calculations. We illustrate them by application to a search for potential n-type oxide thermoelectric materials. This includes a screen based on visualization of electronic energy isosurfaces. We report compounds that show potential as thermoelectric materials along with detailed properties, including SrTiO3, which is a known thermoelectric, and appropriately doped KNbO3 and rutile TiO2.

  9. The Efficiency of a Visual Skills Training Program on Visual Search Performance

    PubMed Central

    Krzepota, Justyna; Zwierko, Teresa; Puchalska-Niedbał, Lidia; Markiewicz, Mikołaj; Florkiewicz, Beata; Lubiński, Wojciech

    2015-01-01

    In this study, we conducted an experiment in which we analyzed the possibilities to develop visual skills by specifically targeted training of visual search. The aim of our study was to investigate whether, for how long and to what extent a training program for visual functions could improve visual search. The study involved 24 healthy students from the Szczecin University who were divided into two groups: experimental (12) and control (12). In addition to regular sports and recreational activities of the curriculum, the subjects of the experimental group also participated in 8-week long training with visual functions, 3 times a week for 45 min. The Signal Test of the Vienna Test System was performed four times: before entering the study, after first 4 weeks of the experiment, immediately after its completion and 4 weeks after the study terminated. The results of this experiment proved that an 8-week long perceptual training program significantly differentiated the plot of visual detecting time. For the visual detecting time changes, the first factor, Group, was significant as a main effect (F(1,22)=6.49, p<0.05) as well as the second factor, Training (F(3,66)=5.06, p<0.01). The interaction between the two factors (Group vs. Training) of perceptual training was F(3,66)=6.82 (p<0.001). Similarly, for the number of correct reactions, there was a main effect of a Group factor (F(1,22)=23.40, p<0.001), a main effect of a Training factor (F(3,66)=11.60, p<0.001) and a significant interaction between factors (Group vs. Training) (F(3,66)=10.33, p<0.001). Our study suggests that 8-week training of visual functions can improve visual search performance. PMID:26240666

  10. Information-Limited Parallel Processing in Difficult Heterogeneous Covert Visual Search

    ERIC Educational Resources Information Center

    Dosher, Barbara Anne; Han, Songmei; Lu, Zhong-Lin

    2010-01-01

    Difficult visual search is often attributed to time-limited serial attention operations, although neural computations in the early visual system are parallel. Using probabilistic search models (Dosher, Han, & Lu, 2004) and a full time-course analysis of the dynamics of covert visual search, we distinguish unlimited capacity parallel versus serial…

  11. Top-down guidance in visual search for facial expressions.

    PubMed

    Hahn, Sowon; Gronlund, Scott D

    2007-02-01

    Using a visual search paradigm, we investigated how a top-down goal modified attentional bias for threatening facial expressions. In two experiments, participants searched for a facial expression either based on stimulus characteristics or a top-down goal. In Experiment 1 participants searched for a discrepant facial expression in a homogenous crowd of faces. Consistent with previous research, we obtained a shallower response time (RT) slope when the target face was angry than when it was happy. In Experiment 2, participants searched for a specific type of facial expression (allowing a top-down goal). When the display included a target, we found a shallower RT slope for the angry than for the happy face search. However, when an angry or happy face was present in the display in opposition to the task goal, we obtained equivalent RT slopes, suggesting that the mere presence of an angry face in opposition to the task goal did not support the well-known angry face superiority effect. Furthermore, RT distribution analyses supported the special status of an angry face only when it was combined with the top-down goal. On the basis of these results, we suggest that a threatening facial expression may guide attention as a high-priority stimulus in the absence of a specific goal; however, in the presence of a specific goal, the efficiency of facial expression search is dependent on the combined influence of a top-down goal and the stimulus characteristics.

  12. Feedback strategies for visual search in airframe structural inspection.

    PubMed

    Gramopadhye, A K; Drury, C G; Sharit, J

    1997-05-01

    Feedback of information has consistently shown positive results in human inspection, provided it is given in a timely and appropriate manner. Feedback serves as the basis of most training schemes; traditionally this has been performance feedback. Other forms of feedback which provide strategy information rather than performance information may have a role in improving inspection. This study compared performance feedback and cognitive feedback in a realistic simulation of an aircraft structural inspection task. Performance (time, errors) feedback showed the greatest improvements in performance measures. Cognitive feedback enhanced efficiency measures of search strategy. When cognitive feedback consisted of visual representations of the path and the coverage of the search sequence, subjects also were able to use this task information to improve their search performance.

  13. Action Properties of Object Images Facilitate Visual Search.

    PubMed

    Gomez, Michael A; Snow, Jacqueline C

    2017-03-06

    There is mounting evidence that constraints from action can influence the early stages of object selection, even in the absence of any explicit preparation for action. Here, we examined whether action properties of images can influence visual search, and whether such effects are modulated by hand preference. Observers searched for an oddball target among 3 distractors. The search arrays consisted either of images of graspable "handles" ("action-related" stimuli), or images that were otherwise identical to the handles but in which the semicircular fulcrum element was reoriented so that the stimuli no longer looked like graspable objects ("non-action-related" stimuli). In Experiment 1, right-handed observers, who have been shown previously to prefer to use the right hand over the left for manual tasks, were faster to detect targets in action-related versus non-action-related arrays, and showed a response time (reaction time [RT]) advantage for rightward- versus leftward-oriented action-related handles. In Experiment 2, left-handed observers, who have been shown to use the left and right hands relatively equally in manual tasks, were also faster to detect targets in the action-related versus non-action-related arrays, but RTs were equally fast for rightward- and leftward-oriented handle targets. Together, our results suggest that action properties in images, and constraints for action imposed by preferences for manual interaction with objects, can influence attentional selection in the context of visual search.

  14. Automatic guidance of attention during real-world visual search.

    PubMed

    Seidl-Rathkopf, Katharina N; Turk-Browne, Nicholas B; Kastner, Sabine

    2015-08-01

    Looking for objects in cluttered natural environments is a frequent task in everyday life. This process can be difficult, because the features, locations, and times of appearance of relevant objects often are not known in advance. Thus, a mechanism by which attention is automatically biased toward information that is potentially relevant may be helpful. We tested for such a mechanism across five experiments by engaging participants in real-world visual search and then assessing attentional capture for information that was related to the search set but was otherwise irrelevant. Isolated objects captured attention while preparing to search for objects from the same category embedded in a scene, as revealed by lower detection performance (Experiment 1A). This capture effect was driven by a central processing bottleneck rather than the withdrawal of spatial attention (Experiment 1B), occurred automatically even in a secondary task (Experiment 2A), and reflected enhancement of matching information rather than suppression of nonmatching information (Experiment 2B). Finally, attentional capture extended to objects that were semantically associated with the target category (Experiment 3). We conclude that attention is efficiently drawn towards a wide range of information that may be relevant for an upcoming real-world visual search. This mechanism may be adaptive, allowing us to find information useful for our behavioral goals in the face of uncertainty.

  15. Automatic guidance of attention during real-world visual search

    PubMed Central

    Seidl-Rathkopf, Katharina N.; Turk-Browne, Nicholas B.; Kastner, Sabine

    2015-01-01

    Looking for objects in cluttered natural environments is a frequent task in everyday life. This process can be difficult, as the features, locations, and times of appearance of relevant objects are often not known in advance. A mechanism by which attention is automatically biased toward information that is potentially relevant may thus be helpful. Here we tested for such a mechanism across five experiments by engaging participants in real-world visual search and then assessing attentional capture for information that was related to the search set but was otherwise irrelevant. Isolated objects captured attention while preparing to search for objects from the same category embedded in a scene, as revealed by lower detection performance (Experiment 1A). This capture effect was driven by a central processing bottleneck rather than the withdrawal of spatial attention (Experiment 1B), occurred automatically even in a secondary task (Experiment 2A), and reflected enhancement of matching information rather than suppression of non-matching information (Experiment 2B). Finally, attentional capture extended to objects that were semantically associated with the target category (Experiment 3). We conclude that attention is efficiently drawn towards a wide range of information that may be relevant for an upcoming real-world visual search. This mechanism may be adaptive, allowing us to find information useful for our behavioral goals in the face of uncertainty. PMID:25898897

  16. Age mediation of frontoparietal activation during visual feature search.

    PubMed

    Madden, David J; Parks, Emily L; Davis, Simon W; Diaz, Michele T; Potter, Guy G; Chou, Ying-hui; Chen, Nan-kuei; Cabeza, Roberto

    2014-11-15

    Activation of frontal and parietal brain regions is associated with attentional control during visual search. We used fMRI to characterize age-related differences in frontoparietal activation in a highly efficient feature search task, detection of a shape singleton. On half of the trials, a salient distractor (a color singleton) was present in the display. The hypothesis was that frontoparietal activation mediated the relation between age and attentional capture by the salient distractor. Participants were healthy, community-dwelling individuals, 21 younger adults (19-29 years of age) and 21 older adults (60-87 years of age). Top-down attention, in the form of target predictability, was associated with an improvement in search performance that was comparable for younger and older adults. The increase in search reaction time (RT) associated with the salient distractor (attentional capture), standardized to correct for generalized age-related slowing, was greater for older adults than for younger adults. On trials with a color singleton distractor, search RT increased as a function of increasing activation in frontal regions, for both age groups combined, suggesting increased task difficulty. Mediational analyses disconfirmed the hypothesized model, in which frontal activation mediated the age-related increase in attentional capture, but supported an alternative model in which age was a mediator of the relation between frontal activation and capture.

  17. Visual search strategies and decision making in baseball batting.

    PubMed

    Takeuchi, Takayuki; Inomata, Kimihiro

    2009-06-01

    The goal was to examine differences between expert and nonexpert baseball batters in visual search strategies during the preparatory phase of the pitcher's delivery, and in the accuracy and timing of swing judgments during the ball's trajectory. Fourteen members of a college team (Expert group) and graduate and college students (Nonexpert group) were asked to observe 10 pitches thrown by a pitcher and respond by pushing a button attached to a bat when they thought the bat should be swung to meet the ball (swing judgment). Their eye movements and the accuracy and timing of their swing judgments were measured. Before the ball was released, the Expert group shifted their point of observation from proximal parts of the pitcher's body, such as the head, chest, or trunk, to the pitching arm and the release point, whereas the Nonexpert group remained focused on the head and face. The Expert group's swing judgments were significantly more accurate and significantly earlier. Expert baseball batters thus used visual search strategies that targeted specific cues (the pitcher's arm) and were more accurate and quicker at decision making than Nonexpert batters.

  18. Getting satisfied with "satisfaction of search": How to measure errors during multiple-target visual search.

    PubMed

    Biggs, Adam T

    2017-03-28

    Visual search studies are common in cognitive psychology, and the results generally focus upon accuracy, response times, or both. Most research has focused upon search scenarios where no more than 1 target will be present for any single trial. However, if multiple targets can be present on a single trial, it introduces an additional source of error because the found target can interfere with subsequent search performance. These errors have been studied thoroughly in radiology for decades, although their emphasis in cognitive psychology studies has been more recent. One particular issue with multiple-target search is that these subsequent search errors (i.e., specific errors which occur following a found target) are measured differently by different studies. There is currently no guidance as to which measurement method is best or what impact different measurement methods could have upon various results and conclusions. The current investigation provides two efforts to address these issues. First, the existing literature is reviewed to clarify the appropriate scenarios where subsequent search errors could be observed. Second, several different measurement methods are used with several existing datasets to contrast and compare how each method would have affected the results and conclusions of those studies. The evidence is then used to provide appropriate guidelines for measuring multiple-target search errors in future studies.

  19. Visual tracking method based on cuckoo search algorithm

    NASA Astrophysics Data System (ADS)

    Gao, Ming-Liang; Yin, Li-Ju; Zou, Guo-Feng; Li, Hai-Tao; Liu, Wei

    2015-07-01

    Cuckoo search (CS) is a new meta-heuristic optimization algorithm based on the obligate brood-parasitic behavior of some cuckoo species in combination with the Lévy flight behavior of some birds and fruit flies. It has been found to be efficient in solving global optimization problems. Here, CS is applied to the visual tracking problem. The relationship between optimization and visual tracking is examined, and the sensitivity and adjustment of the CS parameters within the tracking system are studied experimentally. To demonstrate the ability of the CS-based tracker, its tracking accuracy and speed are compared with six state-of-the-art trackers: particle filter, mean-shift, PSO, ensemble tracker, fragments tracker, and compressive tracker. The comparative results show that the CS-based tracker outperforms the other trackers.
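
    The abstract gives no implementation details, but the generic CS loop is short. Below is a minimal, illustrative sketch of cuckoo search with Lévy-flight steps, minimizing a stand-in quadratic cost in place of a tracking appearance score; all function names and parameter values are assumptions, not the authors' code.

```python
import numpy as np
from math import gamma, pi, sin

def levy_step(dim, beta=1.5, rng=None):
    """Draw one Levy-flight step via Mantegna's algorithm."""
    rng = np.random.default_rng() if rng is None else rng
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

def cuckoo_search(cost, dim, n_nests=25, pa=0.25, alpha=0.01, iters=200, seed=0):
    """Generic cuckoo search: Levy-flight proposals plus abandonment of poor nests."""
    rng = np.random.default_rng(seed)
    nests = rng.uniform(-1.0, 1.0, (n_nests, dim))       # candidate solutions ("nests")
    fitness = np.array([cost(n) for n in nests])
    for _ in range(iters):
        best = nests[np.argmin(fitness)]
        # Levy-flight moves biased toward the current best nest
        new = nests + alpha * levy_step(dim, rng=rng) * (nests - best)
        new_fit = np.array([cost(n) for n in new])
        better = new_fit < fitness
        nests[better], fitness[better] = new[better], new_fit[better]
        # A fraction pa of nests is abandoned and rebuilt at random positions
        abandon = rng.random(n_nests) < pa
        if abandon.any():
            nests[abandon] = rng.uniform(-1.0, 1.0, (int(abandon.sum()), dim))
            fitness[abandon] = np.array([cost(n) for n in nests[abandon]])
    k = int(np.argmin(fitness))
    return nests[k], fitness[k]

# Example: a simple quadratic standing in for a tracking cost surface
x_best, f_best = cuckoo_search(lambda x: float(np.sum(x ** 2)), dim=4)
```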

  20. Audio-Visual Object Search is Changed by Bilingual Experience

    PubMed Central

    Chabal, Sarah; Schroeder, Scott R.; Marian, Viorica

    2015-01-01

    The current study examined the impact of language experience on the ability to efficiently search for objects in the face of distractions. Monolingual and bilingual participants completed an ecologically-valid, object-finding task that contained conflicting, consistent, or neutral auditory cues. Bilinguals were faster than monolinguals at locating the target item, and eye-movements revealed that this speed advantage was driven by bilinguals’ ability to overcome interference from visual distractors and focus their attention on the relevant object. Bilinguals fixated the target object more often than did their monolingual peers, who, in contrast, attended more to a distracting image. Moreover, bilinguals’, but not monolinguals’, object-finding ability was positively associated with their executive control ability. We conclude that bilinguals’ executive control advantages extend to real-world visual processing and object finding within a multi-modal environment. PMID:26272368

  1. Visual search and location probability learning from variable perspectives.

    PubMed

    Jiang, Yuhong V; Swallow, Khena M; Capistrano, Christian G

    2013-05-28

    Do moving observers code attended locations relative to the external world or relative to themselves? To address this question we asked participants to conduct visual search on a tabletop. The search target was more likely to occur in some locations than others. Participants walked to different sides of the table from trial to trial, changing their perspective. The high-probability locations were stable on the tabletop but variable relative to the viewer. When participants were informed of the high-probability locations, search was faster when the target was in those locations, demonstrating probability cuing. However, in the absence of explicit instructions and awareness, participants failed to acquire an attentional bias toward the high-probability locations even when the search items were displayed over an invariant natural scene. Additional experiments showed that locomotion did not interfere with incidental learning, but the lack of a consistent perspective prevented participants from acquiring probability cuing incidentally. We conclude that spatial biases toward target-rich locations are directed by two mechanisms: incidental learning and goal-driven attention. Incidental learning codes attended locations in a viewer-centered reference frame and is not updated with viewer movement. Goal-driven attention, in contrast, can be deployed to prioritize target-rich regions defined in an environment-centered reference frame.

  2. Searching for the right word: Hybrid visual and memory search for words

    PubMed Central

    Boettcher, Sage E. P.; Wolfe, Jeremy M.

    2016-01-01

    In “Hybrid Search” (Wolfe, 2012) observers search through visual space for any of multiple targets held in memory. With photorealistic objects as stimuli, response times (RTs) increase linearly with the visual set size and logarithmically with memory set size even when over 100 items are committed to memory. It is well established that pictures of objects are particularly easy to memorize (Brady, Konkle, Alvarez, & Oliva, 2008). Would hybrid search performance be similar if the targets were words or phrases where word order can be important and where the processes of memorization might be different? In Experiment One, observers memorized 2, 4, 8, or 16 words in 4 different blocks. After passing a memory test, confirming memorization of the list, observers searched for these words in visual displays containing 2 to 16 words. Replicating Wolfe (2012), RTs increased linearly with the visual set size and logarithmically with the length of the word list. The word lists of Experiment One were random. In Experiment Two, words were drawn from phrases that observers reported knowing by heart (e.g., “London Bridge is falling down”). Observers were asked to provide four phrases ranging in length from 2 words to a phrase of no less than 20 words (range 21–86). Words longer than 2 characters from the phrase constituted the target list. Distractor words were matched for length and frequency. Even with these strongly ordered lists, results again replicated the curvilinear function of memory set size seen in hybrid search. One might expect serial position effects, perhaps reducing RTs for the first (primacy) and/or last (recency) members of a list (Atkinson & Shiffrin, 1968; Murdock, 1962). Surprisingly, we showed no reliable effects of word order. Thus, in “London Bridge is falling down”, “London” and “down” are found no faster than “falling”. PMID:25788035
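
    Stated as an equation (a paraphrase of the reported pattern rather than a formula given in the abstract), hybrid-search response times grow linearly with the visual set size V and logarithmically with the memory set size M:

```latex
\mathrm{RT}(V, M) \;\approx\; a \;+\; b\,V \;+\; c\,\log_{2} M
```

    Here a is a baseline time, b the cost per item inspected in the display, and c the cost per doubling of the memorized word list; the parameter values themselves are not reported in the abstract.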

  3. How do interruptions impact nurses' visual scanning patterns when using barcode medication administration systems?

    PubMed

    He, Ze; Marquard, Jenna L; Henneman, Philip L

    2014-01-01

    While barcode medication administration (BCMA) systems have the potential to reduce medication errors, they may introduce errors, side effects, and hazards into the medication administration process. Studies of BCMA systems should therefore consider the interrelated nature of health information technology (IT) use and sociotechnical systems. We aimed to understand how the introduction of interruptions into the BCMA process impacts nurses' visual scanning patterns, a proxy for one component of cognitive processing. We used an eye tracker to record nurses' visual scanning patterns while administering a medication using BCMA. Nurses either performed the BCMA process in a controlled setting with no interruptions (n=25) or in a real clinical setting with interruptions (n=21). By comparing the visual scanning patterns between the two groups, we found that nurses in the interruptive environment identified less task-related information in a given period of time, and engaged in more information searching than information processing.

  4. How do Interruptions Impact Nurses’ Visual Scanning Patterns When Using Barcode Medication Administration Systems?

    PubMed Central

    He, Ze; Marquard, Jenna L.; Henneman, Philip L.

    2014-01-01

    While barcode medication administration (BCMA) systems have the potential to reduce medication errors, they may introduce errors, side effects, and hazards into the medication administration process. Studies of BCMA systems should therefore consider the interrelated nature of health information technology (IT) use and sociotechnical systems. We aimed to understand how the introduction of interruptions into the BCMA process impacts nurses’ visual scanning patterns, a proxy for one component of cognitive processing. We used an eye tracker to record nurses’ visual scanning patterns while administering a medication using BCMA. Nurses either performed the BCMA process in a controlled setting with no interruptions (n=25) or in a real clinical setting with interruptions (n=21). By comparing the visual scanning patterns between the two groups, we found that nurses in the interruptive environment identified less task-related information in a given period of time, and engaged in more information searching than information processing. PMID:25954449

  5. Supporting the Process of Exploring and Interpreting Space–Time Multivariate Patterns: The Visual Inquiry Toolkit

    PubMed Central

    Chen, Jin; MacEachren, Alan M.; Guo, Diansheng

    2009-01-01

    While many data sets carry geographic and temporal references, our ability to analyze these datasets lags behind our ability to collect them because of the challenges posed by both data complexity and tool scalability issues. This study develops a visual analytics approach that leverages human expertise with visual, computational, and cartographic methods to support the application of visual analytics to relatively large spatio-temporal, multivariate data sets. We develop and apply a variety of methods for data clustering, pattern searching, information visualization, and synthesis. By combining both human and machine strengths, this approach has a better chance to discover novel, relevant, and potentially useful information that is difficult to detect by any of the methods used in isolation. We demonstrate the effectiveness of the approach by applying the Visual Inquiry Toolkit we developed to analyze a data set containing geographically referenced, time-varying and multivariate data for U.S. technology industries. PMID:19960096

  6. Distinct anatomy for visual search and bisection: a neuroimaging study

    PubMed Central

    Revill, Kathleen Pirog; Karnath, Hans-Otto; Rorden, Christopher

    2011-01-01

    Individuals with spatial neglect following brain injury often show biased performance on landmark bisection tasks (judging if a single item is transected at its midpoint) and search tasks (where they seek target(s) from an array of items). Interestingly, it appears that bisection deficits dissociate from other measures of neglect (including search tasks), and neglect patients with bisection deficits typically have more posterior injury than those without these symptoms. While previous studies in healthy adults have examined each of these tasks independently, our aim was to directly contrast brain activity between these two tasks. Our design used displays that were interpreted as landmark bisection stimuli in some blocks of trials and as search arrays on other trials. Therefore, we used a design where low-level perceptual and motor responses were identical across tasks. Both tasks generated significant activity in bilateral midfusiform gyrus, largely right lateralized activity in the posterior parietal cortex, left lateralized activity in the left motor cortex (consistent with right handed response) and generally right lateralized insular activation. Several brain areas showed task-selective activations when the two tasks were directly compared. Specifically, the superior parietal cortex was selectively activated during the landmark task. On the other hand, the search task caused stronger bilateral activation in the anterior insula, along with midfusiform gyrus, medial superior frontal areas, thalamus and right putamen. This work demonstrates that healthy adults show an anatomical dissociation for visual search and bisection behavior similar to that reported in neurological patients, and provides coordinates for future brain stimulation studies. PMID:21586329

  7. Searching for life motion signals. Visual search asymmetry in local but not global biological-motion processing.

    PubMed

    Wang, Li; Zhang, Kan; He, Sheng; Jiang, Yi

    2010-08-01

    The visual search paradigm has been widely used to study the mechanisms underlying visual attention, and search asymmetry provides a source of insight into preattentive visual features. In the current study, we tested visual search with biological-motion stimuli that were spatially scrambled or that represented feet only and found that observers were more efficient in searching for an upright target among inverted distractors than in searching for an inverted target among upright distractors. This suggests that local biological-motion signals can act as a basic preattentive feature for the human visual system. The search asymmetry disappeared when the global configuration in biological motion was kept intact, which indicates that the attentional effects arising from biological features (e.g., local motion signals) and global novelty (e.g., inverted human figure) can interact and modulate visual search. Our findings provide strong evidence that local biological motion can be processed independently of global configuration and shed new light on the mechanisms of visual search asymmetry.

  8. "Hot" Facilitation of "Cool" Processing: Emotional Distraction Can Enhance Priming of Visual Search

    ERIC Educational Resources Information Center

    Kristjansson, Arni; Oladottir, Berglind; Most, Steven B.

    2013-01-01

    Emotional stimuli often capture attention and disrupt effortful cognitive processing. However, cognitive processes vary in the degree to which they require effort. We investigated the impact of emotional pictures on visual search and on automatic priming of search. Observers performed visual search after task-irrelevant neutral or emotionally…

  9. Electroencephalogram assessment of mental fatigue in visual search.

    PubMed

    Fan, Xiaoli; Zhou, Qianxiang; Liu, Zhongqi; Xie, Fang

    2015-01-01

    Mental fatigue is considered a contributing factor in numerous road accidents and various medical conditions, and efficiency and performance can be impaired during fatigue; determining how to evaluate mental fatigue is therefore important. In the present study, ten subjects performed a long-term visual search task while their electroencephalogram was recorded, and self-assessment and reaction time (RT) were combined to verify that mental fatigue had been induced and were also used as confirmatory tests for the proposed measures. The changes in relative energy in four wavebands (δ, θ, α, and β), four ratio formulas [(α+θ)/β, α/β, (α+θ)/(α+β), and θ/β], and Shannon's entropy (SE) were compared and analyzed between the beginning and end of the task. The results showed a significant increase in alpha activity in the frontal, central, posterior temporal, parietal, and occipital lobes, and a decrease in beta activity in the pre-frontal, inferior frontal, posterior temporal, and occipital lobes. The ratio formulas clearly increased in all of these brain regions except the temporal region, where only α/β changed appreciably after the 60-min visual search task. SE significantly increased in the posterior temporal, parietal, and occipital lobes. These results identify potential indicators for detecting and evaluating mental fatigue, which can be applied in the future development of fatigue countermeasures.
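
    Once per-channel band powers are estimated, the ratio formulas and Shannon's entropy are one-liners. The sketch below, which assumes a single-channel signal, a 250 Hz sampling rate, and SciPy's Welch estimator, is illustrative only and is not the authors' pipeline.

```python
import numpy as np
from scipy.signal import welch

BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def fatigue_indices(eeg, fs=250.0):
    """Relative band powers, the four ratio indices, and Shannon entropy for one channel."""
    freqs, psd = welch(eeg, fs=fs, nperseg=int(4 * fs))
    power = {name: psd[(freqs >= lo) & (freqs < hi)].sum() for name, (lo, hi) in BANDS.items()}
    total = sum(power.values())
    rel = {name: p / total for name, p in power.items()}
    t, a, b = (rel[k] for k in ("theta", "alpha", "beta"))
    ratios = {
        "(alpha+theta)/beta": (a + t) / b,
        "alpha/beta": a / b,
        "(alpha+theta)/(alpha+beta)": (a + t) / (a + b),
        "theta/beta": t / b,
    }
    # Shannon entropy over the normalized PSD within 1-30 Hz
    p = psd[(freqs >= 1) & (freqs < 30)]
    p = p / p.sum()
    entropy = -np.sum(p * np.log2(p + 1e-12))
    return rel, ratios, entropy

# Example with synthetic data: 60 s of noise sampled at 250 Hz
rel, ratios, se = fatigue_indices(np.random.randn(60 * 250))
```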

  10. Visual Search in ASD: Instructed versus Spontaneous Local and Global Processing

    ERIC Educational Resources Information Center

    Van der Hallen, Ruth; Evers, Kris; Boets, Bart; Steyaert, Jean; Noens, Ilse; Wagemans, Johan

    2016-01-01

    Visual search has been used extensively to investigate differences in mid-level visual processing between individuals with ASD and TD individuals. The current study employed two visual search paradigms with Gaborized stimuli to assess the impact of task distractors (Experiment 1) and task instruction (Experiment 2) on local-global visual…

  11. Retinotopically specific reorganization of visual cortex for tactile pattern recognition

    PubMed Central

    Cheung, Sing-Hang; Fang, Fang; He, Sheng; Legge, Gordon E.

    2009-01-01

    Although previous studies have shown that Braille reading and other tactile-discrimination tasks activate the visual cortex of blind and sighted people [1–5], it is not known whether this kind of cross-modal reorganization is influenced by retinotopic organization. We have addressed this question by studying S, a visually impaired adult with the rare ability to read print visually and Braille by touch. S had normal visual development until age six years, and thereafter severe acuity reduction due to corneal opacification, but no evidence of visual-field loss. Functional magnetic resonance imaging (fMRI) revealed that, in S’s early visual areas, tactile information processing activated what would be the foveal representation for normally-sighted individuals, and visual information processing activated what would be the peripheral representation. Control experiments showed that this activation pattern was not due to visual imagery. S’s high-level visual areas which correspond to shape- and object-selective areas in normally-sighted individuals were activated by both visual and tactile stimuli. The retinotopically specific reorganization in early visual areas suggests an efficient redistribution of neural resources in the visual cortex. PMID:19361999

  12. Object integration requires attention: Visual search for Kanizsa figures in parietal extinction.

    PubMed

    Gögler, Nadine; Finke, Kathrin; Keller, Ingo; Müller, Hermann J; Conci, Markus

    2016-11-01

    The contribution of selective attention to object integration is a topic of debate: integration of parts into coherent wholes, such as in Kanizsa figures, is thought to arise either from pre-attentive, automatic coding processes or from higher-order processes involving selective attention. Previous studies have attempted to examine the role of selective attention in object integration either by employing visual search paradigms or by studying patients with unilateral deficits in selective attention. Here, we combined these two approaches to investigate object integration in visual search in a group of five patients with left-sided parietal extinction. Our search paradigm was designed to assess the effect of left- and right-grouped nontargets on detecting a Kanizsa target square. The results revealed comparable reaction time (RT) performance in patients and controls when they were presented with displays consisting of a single to-be-grouped item that had to be classified as target vs. nontarget. However, when display size increased to two items, patients showed an extinction-specific pattern of enhanced RT costs for nontargets that induced a partial shape grouping on the right, i.e., in the attended hemifield (relative to the ungrouped baseline). Together, these findings demonstrate a competitive advantage for right-grouped objects, which in turn indicates that in parietal extinction, attentional competition between objects particularly limits integration processes in the contralesional, i.e., left hemifield. These findings imply a crucial contribution of selective attentional resources to visual object integration.

  13. Patterns of strong coupling for LHC searches

    NASA Astrophysics Data System (ADS)

    Liu, Da; Pomarol, Alex; Rattazzi, Riccardo; Riva, Francesco

    2016-11-01

    Even though the Standard Model (SM) is weakly coupled at the Fermi scale, a new strong dynamics involving its degrees of freedom may conceivably lurk at slightly higher energies, in the multi TeV range. Approximate symmetries provide a structurally robust context where, within the low energy description, the dimensionless SM couplings are weak, while the new strong dynamics manifests itself exclusively through higher-derivative interactions. We present an exhaustive classification of such scenarios in the form of effective field theories, paying special attention to new classes of models where the strong dynamics involves, along with the Higgs boson, the SM gauge bosons and/or the fermions. The IR softness of the new dynamics suppresses its effects at LEP energies, but deviations are in principle detectable at the LHC, even at energies below the threshold for production of new states. We believe our construction provides the so far unique structurally robust context where to motivate several LHC searches in Higgs physics, diboson production, or W W scattering. Perhaps surprisingly, the interplay between weak coupling, strong coupling and derivatives, which is controlled by symmetries, can override the naive expansion in operator dimension, providing instances where dimension-8 dominates dimension-6, well within the domain of validity of the low energy effective theory. This result reveals the limitations of an analysis that is both ambitiously general and restricted to dimension-6 operators.

  14. Visual cluster analysis and pattern recognition methods

    DOEpatents

    Osbourn, Gordon Cecil; Martinez, Rubel Francisco

    2001-01-01

    A method of clustering using a novel template to define a region of influence. Using neighboring approximation methods, computation times can be significantly reduced. The template and method are applicable to, and improve, pattern recognition techniques.

  15. Visualizing Motion Patterns in Acupuncture Manipulation.

    PubMed

    Lee, Ye-Seul; Jung, Won-Mo; Lee, In-Seon; Lee, Hyangsook; Park, Hi-Joon; Chae, Younbyoung

    2016-07-16

    Acupuncture manipulation varies widely among practitioners in clinical settings, and it is difficult to teach novice students how to perform acupuncture manipulation techniques skillfully. The Acupuncture Manipulation Education System (AMES) is an open source software system designed to enhance acupuncture manipulation skills using visual feedback. Using a phantom acupoint and motion sensor, our method for acupuncture manipulation training provides visual feedback regarding the actual movement of the student's acupuncture manipulation in addition to the optimal or intended movement, regardless of whether the manipulation skill is lifting, thrusting, or rotating. Our results show that students could enhance their manipulation skills by training using this method. This video shows the process of manufacturing phantom acupoints and discusses several issues that may require the attention of individuals interested in creating phantom acupoints or operating this system.

  16. Visual-search observers for SPECT simulations with clinical backgrounds

    NASA Astrophysics Data System (ADS)

    Gifford, Howard C.

    2016-03-01

    The purpose of this work was to test the ability of visual-search (VS) model observers to predict the lesion-detection performance of human observers with hybrid SPECT images. These images consist of clinical backgrounds with simulated abnormalities. The application of existing scanning model observers to hybrid images is complicated by the need for extensive statistical information, whereas VS models based on separate search and analysis processes may operate with reduced knowledge. A localization ROC (LROC) study involved the detection and localization of solitary pulmonary nodules in Tc-99m lung images. The study was aimed at optimizing the number of iterations and the postfiltering of four rescaled block-iterative reconstruction strategies. These strategies implemented different combinations of attenuation correction, scatter correction, and detector resolution correction. For a VS observer in this study, the search and analysis processes were guided by a single set of base morphological features derived from knowledge of the lesion profile. One base set used difference-of-Gaussian channels while a second base set implemented spatial derivatives in combination with the Burgess eye filter. A feature-adaptive VS observer selected features of interest for a given image set on the basis of training-set performance. A comparison of the feature-adaptive observer results against previously acquired human-observer data is presented.

  17. Perceptual similarity in visual search for multiple targets.

    PubMed

    Gorbunova, Elena S

    2017-02-01

    Visual search for multiple targets can cause errors called subsequent search misses (SSM) - a decrease in accuracy at detecting a second target after a first target has been found. One possible explanation of SSM errors is perceptual set: after the first target has been found, subjects become biased toward perceptually similar targets, making them more likely to find targets that resemble the first and less likely to find targets that are perceptually dissimilar. This study investigated the role of perceptual similarity in SSM errors. The search array in each trial consisted of 20 stimuli (ellipses and crosses, black and white, small and big, oriented horizontally and vertically) and could contain one, two, or no targets. When two targets were present, they could share two, three, or four features (in the last case the targets were identical). The error rate decreased as the similarity between the targets increased. These results support the role of perceptual similarity in SSM errors and have implications for perceptual set theory.

  18. Early visual symptom patterns in inherited retinal dystrophies.

    PubMed

    Prokofyeva, Elena; Troeger, Eric; Wilke, Robert; Zrenner, Eberhart

    2011-01-01

    The present retrospective study compared initial visual symptom patterns in inherited retinal dystrophies (IRD) on the basis of records of 544 patients diagnosed with a wide variety of IRD at the Tuebingen University Eye Hospital from 2005 to 2008. Age at first onset of symptoms was noted, and the following clinical data were analyzed: visual acuity (VA), night vision disturbances, photophobia, onset of visual field defects, best corrected VA, and types of visual field defects. Median age at visual symptom onset was defined with 25th and 75th percentiles and compared in 15 IRD types. The main trends in VA changes in retinitis pigmentosa and cone-rod dystrophies were identified. This study was the first to combine disease history and clinical data analysis in such a wide variety of IRD. It showed that patterns of initial symptoms in IRD can provide extra clues for early differential diagnosis and inclusion of IRD patients in clinical trials.

  19. Visual Object Pattern Separation Varies in Older Adults

    ERIC Educational Resources Information Center

    Holden, Heather M.; Toner, Chelsea; Pirogovsky, Eva; Kirwan, C. Brock; Gilbert, Paul E.

    2013-01-01

    Young and nondemented older adults completed a visual object continuous recognition memory task in which some stimuli (lures) were similar but not identical to previously presented objects. The lures were hypothesized to result in increased interference and increased pattern separation demand. To examine variability in object pattern separation…

  20. Sequential pattern data mining and visualization

    DOEpatents

    Wong, Pak Chung; Jurrus, Elizabeth R.; Cowley, Wendy E.; Foote, Harlan P.; Thomas, James J.

    2009-05-26

    One or more processors (22) are operated to extract a number of different event identifiers therefrom. These processors (22) are further operable to determine a number of display locations, each representative of one of the different identifiers and a corresponding time. The display locations are grouped into sets, each corresponding to a different one of several event sequences (330a, 330b, 330c, 330d, 330e). An output is generated corresponding to a visualization (320) of the event sequences (330a, 330b, 330c, 330d, 330e).

  1. Sequential pattern data mining and visualization

    DOEpatents

    Wong, Pak Chung [Richland, WA; Jurrus, Elizabeth R [Kennewick, WA; Cowley, Wendy E [Benton City, WA; Foote, Harlan P [Richland, WA; Thomas, James J [Richland, WA

    2011-12-06

    One or more processors (22) are operated to extract a number of different event identifiers therefrom. These processors (22) are further operable to determine a number of display locations, each representative of one of the different identifiers and a corresponding time. The display locations are grouped into sets, each corresponding to a different one of several event sequences (330a, 330b, 330c, 330d, 330e). An output is generated corresponding to a visualization (320) of the event sequences (330a, 330b, 330c, 330d, 330e).

  2. Searching for the right word: Hybrid visual and memory search for words.

    PubMed

    Boettcher, Sage E P; Wolfe, Jeremy M

    2015-05-01

    In "hybrid search" (Wolfe Psychological Science, 23(7), 698-703, 2012), observers search through visual space for any of multiple targets held in memory. With photorealistic objects as the stimuli, response times (RTs) increase linearly with the visual set size and logarithmically with the memory set size, even when over 100 items are committed to memory. It is well-established that pictures of objects are particularly easy to memorize (Brady, Konkle, Alvarez, & Oliva Proceedings of the National Academy of Sciences, 105, 14325-14329, 2008). Would hybrid-search performance be similar if the targets were words or phrases, in which word order can be important, so that the processes of memorization might be different? In Experiment 1, observers memorized 2, 4, 8, or 16 words in four different blocks. After passing a memory test, confirming their memorization of the list, the observers searched for these words in visual displays containing two to 16 words. Replicating Wolfe (Psychological Science, 23(7), 698-703, 2012), the RTs increased linearly with the visual set size and logarithmically with the length of the word list. The word lists of Experiment 1 were random. In Experiment 2, words were drawn from phrases that observers reported knowing by heart (e.g., "London Bridge is falling down"). Observers were asked to provide four phrases, ranging in length from two words to no less than 20 words (range 21-86). All words longer than two characters from the phrase, constituted the target list. Distractor words were matched for length and frequency. Even with these strongly ordered lists, the results again replicated the curvilinear function of memory set size seen in hybrid search. One might expect to find serial position effects, perhaps reducing the RTs for the first (primacy) and/or the last (recency) members of a list (Atkinson & Shiffrin, 1968; Murdock Journal of Experimental Psychology, 64, 482-488, 1962). Surprisingly, we showed no reliable effects of word order

  3. Object-based auditory facilitation of visual search for pictures and words with frequent and rare targets

    PubMed Central

    Iordanescu, Lucica; Grabowecky, Marcia; Suzuki, Satoru

    2010-01-01

    Auditory and visual processes demonstrably enhance each other based on spatial and temporal coincidence. Our recent results on visual search have shown that auditory signals also enhance visual salience of specific objects based on multimodal experience. For example, we tend to see an object (e.g., a cat) and simultaneously hear its characteristic sound (e.g., “meow”), to name an object when we see it, and to vocalize a word when we read it, but we do not tend to see a word (e.g., cat) and simultaneously hear the characteristic sound (e.g., “meow”) of the named object. If auditory-visual enhancements occur based on this pattern of experiential associations, playing a characteristic sound (e.g., “meow”) should facilitate visual search for the corresponding object (e.g., an image of a cat), hearing a name should facilitate visual search for both the corresponding object and corresponding word, but playing a characteristic sound should not facilitate visual search for the name of the corresponding object. Our present and prior results together confirmed these experiential-association predictions. We also recently showed that the underlying object-based auditory-visual interactions occur rapidly (within 220 ms) and guide initial saccades towards target objects. If object-based auditory-visual enhancements are automatic and persistent, an interesting application would be to use characteristic sounds to facilitate visual search when targets are rare, such as during baggage screening. Our participants searched for a gun among other objects when a gun was presented on only 10% of the trials. The search time was speeded when a gun sound was played on every trial (primarily on gun-absent trials); importantly, playing gun sounds facilitated both gun-present and gun-absent responses, suggesting that object-based auditory-visual enhancements persistently increase the detectability of guns rather than simply biasing gun-present responses. Thus, object-based auditory-visual

  4. Task Specificity and the Influence of Memory on Visual Search: Comment on Vo and Wolfe (2012)

    ERIC Educational Resources Information Center

    Hollingworth, Andrew

    2012-01-01

    Recent results from Vo and Wolfe (2012b) suggest that the application of memory to visual search may be task specific: Previous experience searching for an object facilitated later search for that object, but object information acquired during a different task did not appear to transfer to search. The latter inference depended on evidence that a…

  5. Electrophysiological measurement of information flow during visual search.

    PubMed

    Cosman, Joshua D; Arita, Jason T; Ianni, Julianna D; Woodman, Geoffrey F

    2016-04-01

    The temporal relationship between different stages of cognitive processing is long debated. This debate is ongoing, primarily because it is often difficult to measure the time course of multiple cognitive processes simultaneously. We employed a manipulation that allowed us to isolate ERP components related to perceptual processing, working memory, and response preparation, and then examined the temporal relationship between these components while observers performed a visual search task. We found that, when response speed and accuracy were equally stressed, our index of perceptual processing ended before both the transfer of information into working memory and response preparation began. However, when we stressed speed over accuracy, response preparation began before the completion of perceptual processing or transfer of information into working memory on trials with the fastest reaction times. These findings show that individuals can control the flow of information transmission between stages, either waiting for perceptual processing to be completed before preparing a response or configuring these stages to overlap in time.

  6. Functional Connectivity Patterns of Visual Cortex Reflect its Anatomical Organization.

    PubMed

    Genç, Erhan; Schölvinck, Marieke Louise; Bergmann, Johanna; Singer, Wolf; Kohler, Axel

    2016-09-01

    The brain is continuously active, even without external input or task demands. This so-called resting-state activity exhibits a highly specific spatio-temporal organization. However, how exactly these activity patterns map onto the anatomical and functional architecture of the brain is still unclear. We addressed this question in the human visual cortex. We determined the representation of the visual field in visual cortical areas of 44 subjects using fMRI and examined resting-state correlations between these areas along the visual hierarchy, their dorsal and ventral segments, and between subregions representing foveal versus peripheral parts of the visual field. We found that retinotopically corresponding regions, particularly those representing peripheral visual fields, exhibit strong correlations. V1 displayed strong internal correlations between its dorsal and ventral segments and the highest correlation with LGN compared with other visual areas. In contrast, V2 and V3 showed weaker correlations with LGN and stronger between-area correlations, as well as with V4 and hMT+. Interhemispheric correlations between homologous areas were especially strong. These correlation patterns were robust over time and only marginally altered under task conditions. These results indicate that resting-state fMRI activity closely reflects the anatomical organization of the visual cortex both with respect to retinotopy and hierarchy.

  7. Association and dissociation between detection and discrimination of objects of expertise: Evidence from visual search.

    PubMed

    Golan, Tal; Bentin, Shlomo; DeGutis, Joseph M; Robertson, Lynn C; Harel, Assaf

    2014-02-01

    Expertise in face recognition is characterized by high proficiency in distinguishing between individual faces. However, faces also enjoy an advantage at the early stage of basic-level detection, as demonstrated by efficient visual search for faces among nonface objects. In the present study, we asked (1) whether the face advantage in detection is a unique signature of face expertise, or whether it generalizes to other objects of expertise, and (2) whether expertise in face detection is intrinsically linked to expertise in face individuation. We compared how groups with varying degrees of object and face expertise (typical adults, developmental prosopagnosics [DP], and car experts) search for objects within and outside their domains of expertise (faces, cars, airplanes, and butterflies) among a variable set of object distractors. Across all three groups, search efficiency (indexed by reaction time slopes) was higher for faces and airplanes than for cars and butterflies. Notably, the search slope for car targets was considerably shallower in the car experts than in nonexperts. Although the mean face slope was slightly steeper among the DPs than in the other two groups, most of the DPs' search slopes were well within the normative range. This pattern of results suggests that expertise in object detection is indeed associated with expertise at the subordinate level, that it is not specific to faces, and that the two types of expertise are distinct facilities. We discuss the potential role of experience in bridging between low-level discriminative features and high-level naturalistic categories.

  8. The effect of spectrally selective filters on visual search performance.

    PubMed

    Chisum, G T; Sheehy, J B; Morway, P E; Askew, G K

    1987-05-01

    The effect of five spectrally selective filters on the performance of an acuity-dependent visual search task was evaluated. The filters were: A) a neutral density filter (control condition); B) a 5200A green interference filter; C) a 3215-250 red filter; D) a neodymium visor; and E) a holographic visor. The observers were presented with 5 blocks of 10 slides per filter. Each slide projected a 6 degrees X 6 degrees field of 900 letter O's--each 10' of arc--which contained a single Landolt C. The observers were required to find the C and indicate the position of the opening in the C. The opening in the C subtended 2.64' corresponding to an acuity of 0.38. Response time, error rate, accommodative accuracy, and the number and duration of fixations were recorded for each slide presentation. The results demonstrated that filter type had no effect on any of the response measures. During the first three trial blocks, the observers appeared to optimize their search strategies, after which they began to revert to their initial performance levels. However, this effect was not supported statistically.

  9. Image pattern recognition supporting interactive analysis and graphical visualization

    NASA Technical Reports Server (NTRS)

    Coggins, James M.

    1992-01-01

    Image Pattern Recognition attempts to infer properties of the world from image data. Such capabilities are crucial for making measurements from satellite or telescope images related to Earth and space science problems. Such measurements can be the required product itself, or the measurements can be used as input to a computer graphics system for visualization purposes. At present, the field of image pattern recognition lacks a unified scientific structure for developing and evaluating image pattern recognition applications. The overall goal of this project is to begin developing such a structure. This report summarizes results of a 3-year research effort in image pattern recognition addressing the following three principal aims: (1) to create a software foundation for the research and identify image pattern recognition problems in Earth and space science; (2) to develop image measurement operations based on Artificial Visual Systems; and (3) to develop multiscale image descriptions for use in interactive image analysis.

  10. Learning optimal features for visual pattern recognition

    NASA Astrophysics Data System (ADS)

    Labusch, Kai; Siewert, Udo; Martinetz, Thomas; Barth, Erhardt

    2007-02-01

    The optimal coding hypothesis proposes that the human visual system has adapted to the statistical properties of the environment by the use of relatively simple optimality criteria. We here (i) discuss how the properties of different models of image coding, i.e., sparseness, decorrelation, and statistical independence, are related to each other, (ii) propose to evaluate the different models by verifiable performance measures, and (iii) analyse the classification performance on images of handwritten digits (MNIST database). We first employ the SPARSENET algorithm (Olshausen, 1998) to derive a local filter basis (on 13 × 13 pixel windows). We then filter the images in the database (28 × 28 pixel images of digits) and reduce the dimensionality of the resulting feature space by selecting the locally maximal filter responses. We then train a support vector machine on a training set to classify the digits and report results obtained on a separate test set. Currently, the best state-of-the-art result on the MNIST database has an error rate of 0.4%. This result, however, has been obtained by using explicit knowledge that is specific to the data (an elastic distortion model for digits). We here obtain an error rate of 0.55%, which is second best but does not use explicit data-specific knowledge. In particular, it outperforms by far all methods that do not use data-specific knowledge.
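
    The pipeline described (learn a local sparse filter basis, filter the digit images, pool the locally maximal responses, and classify with a support vector machine) can be approximated with off-the-shelf tools. The sketch below substitutes scikit-learn's dictionary learning for SPARSENET and uses a small subsample with coarse pooling, so it will not reproduce the reported 0.55% error rate; the patch sizes, component counts, and pooling stride are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import fetch_openml
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d
from sklearn.svm import SVC

# Load a small subset of MNIST (full runs take considerably longer).
X, y = fetch_openml("mnist_784", version=1, return_X_y=True, as_frame=False)
X = X[:5000].reshape(-1, 28, 28) / 255.0
y = y[:5000]

# 1) Learn a local filter basis on 13x13 patches (a stand-in for SPARSENET).
patches = np.concatenate([extract_patches_2d(img, (13, 13), max_patches=5, random_state=0)
                          for img in X[:500]])
dico = MiniBatchDictionaryLearning(n_components=64, alpha=1.0, random_state=0)
dico.fit(patches.reshape(len(patches), -1))

# 2) Represent each image by the maximal absolute response of each filter
#    over a coarse grid of patch locations (crude "locally maximal response" pooling).
def encode(img):
    locs = extract_patches_2d(img, (13, 13))[::16]          # subsample patch locations
    responses = locs.reshape(len(locs), -1) @ dico.components_.T
    return np.abs(responses).max(axis=0)

features = np.array([encode(img) for img in X])

# 3) Train and evaluate an SVM classifier on the pooled filter responses.
clf = SVC(kernel="rbf").fit(features[:4000], y[:4000])
print("held-out accuracy:", clf.score(features[4000:], y[4000:]))
```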

  11. Recovery of Visual Search following Moderate to Severe Traumatic Brain Injury

    PubMed Central

    Schmitter-Edgecombe, Maureen; Robertson, Kayela

    2015-01-01

    Introduction Deficits in attentional abilities can significantly impact rehabilitation and recovery from traumatic brain injury (TBI). This study investigated the nature and recovery of pre-attentive (parallel) and attentive (serial) visual search abilities after TBI. Methods Participants were 40 individuals with moderate to severe TBI who were tested following emergence from post-traumatic amnesia and approximately 8 months post-injury, as well as 40 age and education matched controls. Pre-attentive (automatic) and attentive (controlled) visual search situations were created by manipulating the saliency of the target item amongst distractor items in visual displays. The relationship between pre-attentive and attentive visual search rates and follow-up community integration was also explored. Results The results revealed intact parallel (automatic) processing skills in the TBI group both post-acutely and at follow-up. In contrast, when attentional demands on visual search were increased by reducing the saliency of the target, the TBI group demonstrated poorer performances compared to the control group both post-acutely and 8 months post-injury. Neither pre-attentive nor attentive visual search slope values correlated with follow-up community integration. Conclusions These results suggest that utilizing intact pre-attentive visual search skills during rehabilitation may help to reduce high mental workload situations, thereby improving the rehabilitation process. For example, making commonly used objects more salient in the environment should increase reliance on more automatic visual search processes and reduce visual search time for individuals with TBI. PMID:25671675

  12. MotionFlow: Visual Abstraction and Aggregation of Sequential Patterns in Human Motion Tracking Data.

    PubMed

    Jang, Sujin; Elmqvist, Niklas; Ramani, Karthik

    2016-01-01

    Pattern analysis of human motions, which is useful in many research areas, requires understanding and comparison of different styles of motion patterns. However, working with human motion tracking data to support such analysis poses great challenges. In this paper, we propose MotionFlow, a visual analytics system that provides an effective overview of various motion patterns based on an interactive flow visualization. This visualization formulates a motion sequence as transitions between static poses, and aggregates these sequences into a tree diagram to construct a set of motion patterns. The system also allows the users to directly reflect the context of data and their perception of pose similarities in generating representative pose states. We provide local and global controls over the partition-based clustering process. To support the users in organizing unstructured motion data into pattern groups, we designed a set of interactions that enables searching for similar motion sequences from the data, detailed exploration of data subsets, and creating and modifying the group of motion patterns. To evaluate the usability of MotionFlow, we conducted a user study with six researchers with expertise in gesture-based interaction design. They used MotionFlow to explore and organize unstructured motion tracking data. Results show that the researchers were able to easily learn how to use MotionFlow, and the system effectively supported their pattern analysis activities, including leveraging their perception and domain knowledge.

  13. Neural signatures of adaptive post-error adjustments in visual search.

    PubMed

    Steinhauser, Robert; Maier, Martin E; Steinhauser, Marco

    2017-02-22

    Errors in speeded choice tasks can lead to post-error adjustments both on the behavioral and on the neural level. There is an ongoing debate whether such adjustments result from adaptive processes that serve to optimize performance or whether they reflect interference from error monitoring or attentional orientation. The present study aimed at identifying adaptive adjustments in a two-stage visual search task, in which participants had to select and subsequently identify a target stimulus presented to the left or right visual hemifield. Target selection and identification can be measured by two distinct event-related potentials, the N2pc and the SPCN. Using a decoder analysis based on multivariate pattern analysis, we were able to isolate the processing stages related to error sources and post-error adjustments. Whereas errors were linked to deviations in the N2pc and the SPCN, only for the N2pc we identified a post-error adjustment, which exhibits key features of source-specific adaptivity. While errors were associated with an increased N2pc, post-error adjustments consisted in an N2pc decrease. We interpret this as an adaptive adjustment of target selection to prevent errors due to disproportionate processing of the task-irrelevant target location. Our study thus provides evidence for adaptive post-error adjustments in visual search.

  14. Is There a Limit to the Superiority of Individuals with ASD in Visual Search?

    ERIC Educational Resources Information Center

    Hessels, Roy S.; Hooge, Ignace T. C.; Snijders, Tineke M.; Kemner, Chantal

    2014-01-01

    Superiority in visual search for individuals diagnosed with autism spectrum disorder (ASD) is a well-reported finding. We administered two visual search tasks to individuals with ASD and matched controls. One showed no difference between the groups, and one did show the expected superior performance for individuals with ASD. These results offer an…

  15. The role of object categories in hybrid visual and memory search.

    PubMed

    Cunningham, Corbin A; Wolfe, Jeremy M

    2014-08-01

    In hybrid search, observers search for any of several possible targets in a visual display containing distracting items and, perhaps, a target. Wolfe (2012) found that response times (RTs) in such tasks increased linearly with increases in the number of items in the display. However, RT increased linearly with the log of the number of items in the memory set. In earlier work, all items in the memory set were unique instances (e.g., this apple in this pose). Typical real-world tasks involve more broadly defined sets of stimuli (e.g., any "apple" or, perhaps, "fruit"). The present experiments show how sets or categories of targets are handled in joint visual and memory search. In Experiment 1, searching for a digit among letters was not like searching for targets from a 10-item memory set, though searching for targets from an N-item memory set of arbitrary alphanumeric characters was like searching for targets from an N-item memory set of arbitrary objects. In Experiment 2, observers searched for any instance of N sets or categories held in memory. This hybrid search was harder than search for specific objects. However, memory search remained logarithmic. Experiment 3 illustrates the interaction of visual guidance and memory search when a subset of visual stimuli are drawn from a target category. Furthermore, we outline a conceptual model, supported by our results, defining the core components that would be necessary to support such categorical hybrid searches.

  16. The effect of search condition and advertising type on visual attention to Internet advertising.

    PubMed

    Kim, Gho; Lee, Jang-Han

    2011-05-01

    This research was conducted to examine the level of consumers' visual attention to Internet advertising. It was predicted that consumers' search type would influence visual attention to advertising. Specifically, it was predicted that more attention to advertising would be attracted in the exploratory search condition than in the goal-directed search condition. It was also predicted that there would be a difference in visual attention depending on the advertisement type (advertising type: text vs. pictorial advertising). An eye tracker was used for measurement. Results revealed that search condition and advertising type influenced advertising effectiveness.

  17. Recognizing patterns of visual field loss using unsupervised machine learning

    NASA Astrophysics Data System (ADS)

    Yousefi, Siamak; Goldbaum, Michael H.; Zangwill, Linda M.; Medeiros, Felipe A.; Bowd, Christopher

    2014-03-01

    Glaucoma is a potentially blinding optic neuropathy that results in a decrease in visual sensitivity. Visual field abnormalities (decreased visual sensitivity on psychophysical tests) are the primary means of glaucoma diagnosis. One form of visual field testing is Frequency Doubling Technology (FDT), which tests sensitivity at 52 points within the visual field. Like other psychophysical tests used in clinical practice, FDT results yield specific patterns of defect indicative of the disease. We used a Gaussian mixture model fit with expectation maximization (GEM; EM is used to estimate the model parameters) to automatically separate FDT data into clusters of normal and abnormal eyes. Principal component analysis (PCA) was used to decompose each cluster into different axes (patterns). FDT measurements were obtained from 1,190 eyes with normal FDT results and 786 eyes with abnormal (i.e., glaucomatous) FDT results, recruited from a university-based, longitudinal, multi-center clinical study on glaucoma. The GEM input was the 52-point FDT threshold sensitivities for all eyes. The optimal GEM model separated the FDT fields into 3 clusters. Cluster 1 contained 94% normal fields (94% specificity), and clusters 2 and 3 combined contained 77% abnormal fields (77% sensitivity). For clusters 1, 2, and 3 the optimal numbers of PCA-identified axes were 2, 2, and 5, respectively. GEM with PCA successfully separated FDT fields from healthy and glaucoma eyes and identified familiar glaucomatous patterns of loss.
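
    The two-stage analysis (an EM-fitted Gaussian mixture over the 52-point sensitivity vectors, then PCA within each cluster to extract defect patterns) maps directly onto standard library calls. A minimal sketch with placeholder data is below; the array shapes and component counts mirror the abstract, everything else is an assumption.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

# fdt: (n_eyes, 52) array of FDT threshold sensitivities (placeholder data here)
rng = np.random.default_rng(0)
fdt = rng.normal(30.0, 5.0, size=(1976, 52))

# 1) Fit a Gaussian mixture by EM; 3 clusters was the optimum reported in the study.
gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=0).fit(fdt)
labels = gmm.predict(fdt)

# 2) Decompose each cluster into its principal axes ("patterns of defect").
for k in range(3):
    cluster = fdt[labels == k]
    pca = PCA(n_components=min(5, len(cluster) - 1)).fit(cluster)
    print(f"cluster {k}: {len(cluster)} eyes, "
          f"top axes explain {pca.explained_variance_ratio_.sum():.2f} of the variance")
```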

  18. Electrophysiological indices of target and distractor processing in visual search.

    PubMed

    Hickey, Clayton; Di Lollo, Vincent; McDonald, John J

    2009-04-01

    Attentional selection of a target presented among distractors can be indexed with an event-related potential (ERP) component known as the N2pc. Theoretical interpretation of the N2pc has suggested that it reflects a fundamental mechanism of attention that shelters the cortical representation of targets by suppressing neural activity stemming from distractors. Results from fields other than human electrophysiology, however, suggest that attention does not act solely through distractor suppression; rather, it modulates the processing of both target and distractors. We conducted four ERP experiments designed to investigate whether the N2pc reflects multiple attentional mechanisms. Our goal was to reconcile ostensibly conflicting outcomes obtained in electrophysiological studies of attention with those obtained using other methodologies. Participants viewed visual search arrays containing one target and one distractor. In Experiments 1 through 3, the distractor was isoluminant with the background, and therefore, did not elicit early lateralized ERP activity. This work revealed a novel contralateral ERP component that appears to reflect direct suppression of the cortical representation of the distractor. We accordingly name this component the distractor positivity (P(D)). In Experiment 4, an ERP component associated with target processing was additionally isolated. We refer to this component as the target negativity (N(T)). We believe that the N2pc reflects the summation of the P(D) and N(T), and that these discrete components may have been confounded in earlier electrophysiological studies. Overall, this study demonstrates that attention acts on both target and distractor representations, and that this can be indexed in the visual ERP.

  19. Fractal Analysis of Radiologists Visual Scanning Pattern in Screening Mammography

    SciTech Connect

    Alamudun, Folami T; Yoon, Hong-Jun; Hudson, Kathy; Morin-Ducote, Garnetta; Tourassi, Georgia

    2015-01-01

    Several investigators have examined radiologists' visual scanning patterns with respect to features such as total time examining a case, time to initially hit true lesions, number of hits, etc. The purpose of this study was to examine the complexity of radiologists' visual scanning patterns when viewing 4-view mammographic cases, as they typically do in clinical practice. Gaze data were collected from 10 readers (3 breast imaging experts and 7 radiology residents) while reviewing 100 screening mammograms (24 normal, 26 benign, 50 malignant). The radiologists' scanpaths across the 4 mammographic views were mapped to a single 2-D image plane. Then, fractal analysis was applied to the derived scanpaths using the box counting method. For each case, the complexity of each radiologist's scanpath was estimated using fractal dimension. The association between gaze complexity, case pathology, case density, and radiologist experience was evaluated using a 3-factor fixed-effects ANOVA. The ANOVA showed that case pathology, breast density, and experience level are all independent predictors of visual scanning pattern complexity. Visual scanning patterns are significantly different for benign and malignant cases than for normal cases, as well as when breast parenchyma density changes.
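
    A minimal sketch of the box-counting estimate of fractal dimension for a 2-D scanpath is given below; the gaze coordinates are a synthetic random walk rather than the study's mapped 4-view data, and the box sizes are arbitrary choices.

```python
# Sketch of a box-counting fractal dimension estimate for a 2-D gaze scanpath.
# Gaze points are synthetic; a real analysis would use the mapped 4-view coordinates.
import numpy as np

def box_count_dimension(points, box_sizes):
    """Estimate fractal dimension by counting occupied boxes at each scale."""
    points = np.asarray(points, dtype=float)
    points = (points - points.min(axis=0)) / np.ptp(points, axis=0)  # normalize to [0,1]^2
    counts = []
    for s in box_sizes:
        boxes = np.floor(points / s).astype(int)
        counts.append(len({tuple(b) for b in boxes}))
    # Slope of log(count) vs log(1/size) approximates the box-counting dimension.
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(box_sizes)), np.log(counts), 1)
    return slope

rng = np.random.default_rng(1)
scanpath = np.cumsum(rng.normal(size=(2000, 2)), axis=0)   # random-walk "scanpath"
sizes = [1/4, 1/8, 1/16, 1/32, 1/64]
print("estimated fractal dimension:", round(box_count_dimension(scanpath, sizes), 2))
```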

  20. Visual Object Pattern Separation Deficits in Nondemented Older Adults

    ERIC Educational Resources Information Center

    Toner, Chelsea K.; Pirogovsky, Eva; Kirwan, C. Brock; Gilbert, Paul E.

    2009-01-01

    Young and nondemented older adults were tested on a continuous recognition memory task requiring visual pattern separation. During the task, some objects were repeated across trials and some objects, referred to as lures, were presented that were similar to previously presented objects. The lures resulted in increased interference and an increased…

  1. Using Pattern Search Methods for Surface Structure Determinationof Nanomaterials

    SciTech Connect

    Zhao, Zhengji; Meza, Juan; Van Hove, Michel

    2006-06-09

    Atomic scale surface structure plays an important role in describing many properties of materials, especially in the case of nanomaterials. One of the most effective techniques for surface structure determination is low-energy electron diffraction (LEED), which can be used in conjunction with optimization to fit simulated LEED intensities to experimental data. This optimization problem has a number of characteristics that make it challenging: it has many local minima, the optimization variables can be either continuous or categorical, the objective function can be discontinuous, there are no exact analytic derivatives (and no derivatives at all for categorical variables), and function evaluations are expensive. In this study, we show how to apply a particular class of optimization methods known as pattern search methods to address these challenges. These methods do not explicitly use derivatives, and are particularly appropriate when categorical variables are present, an important feature that has not been addressed in previous LEED studies. We have found that pattern search methods can produce excellent results, compared to previously used methods, both in terms of performance and locating optimal results.
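
    As a concrete illustration of the derivative-free poll-and-contract idea behind pattern search (not the authors' LEED code, which also handles categorical variables and an expensive simulated objective), here is a minimal compass-search sketch on a smooth test function.

```python
# Minimal compass-style pattern search: poll along coordinate directions and shrink
# the step when no improvement is found. Illustrative only; the LEED objective is far
# more expensive and includes categorical variables handled by specialized polls.
import numpy as np

def pattern_search(f, x0, step=1.0, tol=1e-6, max_evals=100000):
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    n = len(x)
    evals = 0
    while step > tol and evals < max_evals:
        improved = False
        for d in np.vstack([np.eye(n), -np.eye(n)]):    # poll set: +/- unit vectors
            trial = x + step * d
            f_trial = f(trial)
            evals += 1
            if f_trial < fx:                            # accept first improvement
                x, fx, improved = trial, f_trial, True
                break
        if not improved:
            step *= 0.5                                 # unsuccessful poll: contract
    return x, fx

rosenbrock = lambda v: (1 - v[0])**2 + 100 * (v[1] - v[0]**2)**2
x_best, f_best = pattern_search(rosenbrock, [-1.5, 2.0])
print("best point:", x_best, "objective:", f_best)
```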

  2. Augmenting Visual Search Performance with Transcranial Direct Current Stimulation (tDCS)

    DTIC Science & Technology

    2015-03-01

    [Report front matter only; no abstract recovered. AFRL-RH-WP-TR-2015-0013, interim report covering 7 Oct 2013 - 27 Feb 2015, author Justin Nelson. Cited reference fragment: Klingner, J., Tversky, B., & Hanrahan, P., "Effects of visual and verbal presentation on cognitive load in vigilance, memory, and..."]

  3. High or Low Target Prevalence Increases the Dual-Target Cost in Visual Search

    ERIC Educational Resources Information Center

    Menneer, Tamaryn; Donnelly, Nick; Godwin, Hayward J.; Cave, Kyle R.

    2010-01-01

    Previous studies have demonstrated a dual-target cost in visual search. In the current study, the relationship between search for one and search for two targets was investigated to examine the effects of target prevalence and practice. Color-shape conjunction stimuli were used with response time, accuracy and signal detection measures. Performance…

  4. The effects of visual search efficiency on object-based attention.

    PubMed

    Greenberg, Adam S; Rosen, Maya; Cutrone, Elizabeth; Behrmann, Marlene

    2015-07-01

    The attentional prioritization hypothesis of object-based attention (Shomstein & Yantis in Perception & Psychophysics, 64, 41-51, 2002) suggests a two-stage selection process comprising an automatic spatial gradient and flexible strategic (prioritization) selection. The combined attentional priorities of these two stages of object-based selection determine the order in which participants will search the display for the presence of a target. The strategic process has often been likened to a prioritized visual search. By modifying the double-rectangle cueing paradigm (Egly, Driver, & Rafal in Journal of Experimental Psychology: General, 123, 161-177, 1994) and placing it in the context of a larger-scale visual search, we examined how the prioritization search is affected by search efficiency. By probing both targets located on the cued object and targets external to the cued object, we found that the attentional priority surrounding a selected object is strongly modulated by search mode. However, the ordering of the prioritization search is unaffected by search mode. The data also provide evidence that standard spatial visual search and object-based prioritization search may rely on distinct mechanisms. These results provide insight into the interactions between the mode of visual search and object-based selection, and help define the modulatory consequences of search efficiency for object-based attention.

  5. Searching for Signs, Symbols, and Icons: Effects of Time of Day, Visual Complexity, and Grouping

    ERIC Educational Resources Information Center

    McDougall, Sine; Tyrer, Victoria; Folkard, Simon

    2006-01-01

    Searching for icons, symbols, or signs is an integral part of tasks involving computer or radar displays, head-up displays in aircraft, or attending to road traffic signs. Icons therefore need to be designed to optimize search times, taking into account the factors likely to slow down visual search. Three factors likely to adversely affect visual…

  6. Transformation of an uncertain video search pipeline to a sketch-based visual analytics loop.

    PubMed

    Legg, Philip A; Chung, David H S; Parry, Matthew L; Bown, Rhodri; Jones, Mark W; Griffiths, Iwan W; Chen, Min

    2013-12-01

    Traditional sketch-based image or video search systems rely on machine learning concepts as their core technology. However, in many applications, machine learning alone is impractical since videos may not be semantically annotated sufficiently, there may be a lack of suitable training data, and the search requirements of the user may frequently change for different tasks. In this work, we develop a visual analytics system that overcomes the shortcomings of the traditional approach. We make use of a sketch-based interface to enable users to specify search requirements in a flexible manner without depending on semantic annotation. We employ active machine learning to train different analytical models for different types of search requirements. We use visualization to facilitate knowledge discovery at the different stages of visual analytics. This includes visualizing the parameter space of the trained model, visualizing the search space to support interactive browsing, visualizing candidate search results to support rapid interaction for active learning while minimizing the need to watch videos, and visualizing aggregated information about the search results. We demonstrate the system for searching spatiotemporal attributes from sports video to identify key instances of team and player performance.

  7. Electrophysiological measurement of information flow during visual search

    PubMed Central

    Cosman, Joshua D.; Arita, Jason T.; Ianni, Julianna D.; Woodman, Geoffrey F.

    2016-01-01

    The temporal relationship between different stages of cognitive processing is long-debated. This debate is ongoing, primarily because it is often difficult to measure the time course of multiple cognitive processes simultaneously. We employed a manipulation that allowed us to isolate ERP components related to perceptual processing, working memory, and response preparation, and then examined the temporal relationship between these components while observers performed a visual search task. We found that when response speed and accuracy were equally stressed, our index of perceptual processing ended before both the transfer of information into working memory and response preparation began. However, when we stressed speed over accuracy, response preparation began before the completion of perceptual processing or the transfer of information into working memory on trials with the fastest reaction times. These findings show that individuals can control the flow of information transmission between stages, either waiting for perceptual processing to be completed before preparing a response or configuring these stages to overlap in time. PMID:26669285

  8. Visual Search in Typically Developing Toddlers and Toddlers with Fragile X or Williams Syndrome

    ERIC Educational Resources Information Center

    Scerif, Gaia; Cornish, Kim; Wilding, John; Driver, Jon; Karmiloff-Smith, Annette

    2004-01-01

    Visual selective attention is the ability to attend to relevant visual information and ignore irrelevant stimuli. Little is known about its typical and atypical development in early childhood. Experiment 1 investigates typically developing toddlers' visual search for multiple targets on a touch-screen. Time to hit a target, distance between…

  9. Does Expectation of Abnormality Affect the Search Pattern of Radiologists When Looking for Pulmonary Nodules?

    PubMed

    Littlefair, Stephen; Brennan, Patrick; Reed, Warren; Mello-Thoms, Claudia

    2017-02-01

    This experiment investigated whether there might be an effect on the visual search strategy of radiologists during image interpretation of the same adult chest radiographs when given different clinical information. Each of 17 experienced radiologists was asked to interpret a set of 57 (10 abnormal) posteroanterior chest images to identify the presence of pulmonary lesions using differing clinical information (leading to unknown, low and high expectations of prevalence). Eye position metrics (search time, dwell time and time to first fixation) were compared for normal and abnormal images, as well as between conditions. For all images, there was a significantly longer search time at high prevalence expectation compared to low prevalence expectation (W = 75.19, P < 0.0001). Mann-Whitney analysis of the abnormal images demonstrated that the dwell time on correctly identified lesions was significantly shorter at low prevalence expectation compared to both unknown (U = 364.5, P = 0.02) and high prevalence expectation (U = 397.0, P = 0.0002). Visual search patterns of radiologists appear to be affected by changing a priori information where such information fosters an expectation of abnormality.
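
    The kind of between-condition comparison reported above can be reproduced in outline with scipy's Mann-Whitney U test; the dwell-time values below are synthetic and purely illustrative, not the study's data.

```python
# Illustrative Mann-Whitney U comparison of lesion dwell times between two
# prevalence-expectation conditions (synthetic data, not the study's values).
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(2)
dwell_low = rng.gamma(shape=2.0, scale=0.4, size=40)    # seconds, low expectation
dwell_high = rng.gamma(shape=2.0, scale=0.7, size=40)   # seconds, high expectation

u_stat, p_value = mannwhitneyu(dwell_low, dwell_high, alternative="two-sided")
print(f"U = {u_stat:.1f}, p = {p_value:.4f}")
```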

  10. Bicycle accidents and drivers' visual search at left and right turns.

    PubMed

    Summala, H; Pasanen, E; Räsänen, M; Sievänen, J

    1996-03-01

    The accident data base of the City of Helsinki shows that when drivers cross a cycle path as they enter a non-signalized intersection, the clearly dominant type of car-cycle crashes is that in which a cyclist comes from the right and the driver is turning right, in marked contrast to the cases with drivers turning left (Pasanen 1992; City of Helsinki, Traffic Planning Department, Report L4). This study first tested an explanation that drivers turning right simply focus their attention on the cars coming from the left (those coming from the right posing no threat to them) and fail to see the cyclist from the right early enough. Drivers' scanning behavior was studied at two T-intersections. Two well-hidden video cameras were used, one to measure the head movements of the approaching drivers and the other one to measure speed and distance from the cycle crossroad. The results supported the hypothesis: the drivers turning right scanned the right leg of the T-intersection less frequently and later than those turning left. Thus, it appears that drivers develop a visual scanning strategy which concentrates on detection of more frequent and major dangers but ignores and may even mask visual information on less frequent dangers. The second part of the study evaluated different countermeasures, including speed humps, in terms of drivers' visual search behavior. The results suggested that speed-reducing countermeasures changed drivers' visual search patterns in favor of the cyclists coming from the right, presumably at least in part due to the fact that drivers were simply provided with more time to focus on each direction.

  11. Visual-auditory integration for visual search: a behavioral study in barn owls.

    PubMed

    Hazan, Yael; Kra, Yonatan; Yarin, Inna; Wagner, Hermann; Gutfreund, Yoram

    2015-01-01

    Barn owls are nocturnal predators that rely on both vision and hearing for survival. The optic tectum of barn owls, a midbrain structure involved in selective attention, has been used as a model for studying visual-auditory integration at the neuronal level. However, behavioral data on visual-auditory integration in barn owls are lacking. The goal of this study was to examine if the integration of visual and auditory signals contributes to the process of guiding attention toward salient stimuli. We attached miniature wireless video cameras on barn owls' heads (OwlCam) to track their target of gaze. We first provide evidence that the area centralis (a retinal area with a maximal density of photoreceptors) is used as a functional fovea in barn owls. Thus, by mapping the projection of the area centralis on the OwlCam's video frame, it is possible to extract the target of gaze. For the experiment, owls were positioned on a high perch and four food items were scattered in a large arena on the floor. In addition, a hidden loudspeaker was positioned in the arena. The positions of the food items and speaker were changed every session. Video sequences from the OwlCam were saved for offline analysis while the owls spontaneously scanned the room and the food items with abrupt gaze shifts (head saccades). From time to time during the experiment, a brief sound was emitted from the speaker. The fixation points immediately following the sounds were extracted and the distances between the gaze position and the nearest items and loudspeaker were measured. The head saccades were rarely toward the location of the sound source but to salient visual features in the room, such as the door knob or the food items. However, among the food items, the one closest to the loudspeaker had the highest probability of attracting a gaze shift. This result supports the notion that auditory signals are integrated with visual information for the selection of the next visual search target.

  12. Retinal waves coordinate patterned activity throughout the developing visual system

    PubMed Central

    Ackman, James B.; Burbridge, Timothy J.; Crair, Michael C.

    2014-01-01

    The morphologic and functional development of the vertebrate nervous system is initially governed by genetic factors and subsequently refined by neuronal activity. However, fundamental features of the nervous system emerge before sensory experience is possible. Thus, activity-dependent development occurring before the onset of experience must be driven by spontaneous activity, but the origin and nature of activity in vivo remains largely untested. Here we use optical methods to demonstrate in live neonatal mice that waves of spontaneous retinal activity are present and propagate throughout the entire visual system before eye opening. This patterned activity encompassed the visual field, relied on cholinergic neurotransmission, preferentially initiated in the binocular retina, and exhibited spatiotemporal correlations between the two hemispheres. Retinal waves were the primary source of activity in the midbrain and primary visual cortex, but only modulated ongoing activity in secondary visual areas. Thus, spontaneous retinal activity is transmitted through the entire visual system and carries patterned information capable of guiding the activity-dependent development of complex intra- and inter-hemispheric circuits before the onset of vision. PMID:23060192

  13. Retinal waves coordinate patterned activity throughout the developing visual system.

    PubMed

    Ackman, James B; Burbridge, Timothy J; Crair, Michael C

    2012-10-11

    The morphological and functional development of the vertebrate nervous system is initially governed by genetic factors and subsequently refined by neuronal activity. However, fundamental features of the nervous system emerge before sensory experience is possible. Thus, activity-dependent development occurring before the onset of experience must be driven by spontaneous activity, but the origin and nature of activity in vivo remains largely untested. Here we use optical methods to show in live neonatal mice that waves of spontaneous retinal activity are present and propagate throughout the entire visual system before eye opening. This patterned activity encompassed the visual field, relied on cholinergic neurotransmission, preferentially initiated in the binocular retina and exhibited spatiotemporal correlations between the two hemispheres. Retinal waves were the primary source of activity in the midbrain and primary visual cortex, but only modulated ongoing activity in secondary visual areas. Thus, spontaneous retinal activity is transmitted through the entire visual system and carries patterned information capable of guiding the activity-dependent development of complex intra- and inter-hemispheric circuits before the onset of vision.

  14. Laterality patterns and visual-motor coordination of children.

    PubMed

    Iteya, M; Gabbard, C

    1996-08-01

    This study examined the association between eye-hand and eye-foot laterality patterns, described as congruent or cross-lateral, and visual-motor coordination skill (target throwing and kicking) in 606 4- to 6-yr.-olds. Speculation derived from contemporary reports of hand preference and motor coordination provided the hypothesis that persons exhibiting congruent patterns of eye and limb laterality, such as right eye and right hand or right eye and right foot, would perform better than peers who exhibited other laterality patterns. To the contrary, this study yielded no significant differences in motor performance between groups with different patterns of preference. In view of past studies and present results, additional inquiry seems warranted before any consensus regarding the association between laterality and motor coordination can be established.

  15. The colorful brain: visualization of EEG background patterns.

    PubMed

    van Putten, Michel J A M

    2008-04-01

    This article presents a method to transform routine clinical EEG recordings to an alternative visual domain. The method is intended to support the classic visual interpretation of the EEG background pattern and to facilitate communication about relevant EEG characteristics. In addition, it provides various quantitative features. The EEG features used in the transformation include color-coded time-frequency representations of two novel symmetry measures and a synchronization measure, based on a nearest-neighbor coherence estimate. This triplet captures three highly relevant aspects of the dynamics of the EEG background pattern, which correlate strongly with various neurologic conditions. In particular, it quantifies and visualizes the spatiotemporal distribution of the EEG power in the anteroposterior and lateral directions, and the short-distance coherence. The potential clinical use is illustrated by application of the proposed technique to various normal and abnormal EEGs, including seizure activity and the transition to sleep. The proposed transformation visualizes various essential elements of EEG background patterns. Quantitative analysis of clinical EEG recordings and transformation to alternative domains assist the reading of the EEG and contribute to a more objective interpretation.
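
    Two of the ingredients named above, a time-frequency representation of a channel and a coherence estimate between neighboring channels, can be sketched with scipy.signal. The signals, sampling rate, and channel labels below are assumptions, and the article's actual symmetry and synchronization measures are more elaborate.

```python
# Sketch of two building blocks named above: a time-frequency representation of a
# channel and a coherence estimate between neighboring channels. Signals are
# synthetic; channel labels and parameters are placeholders.
import numpy as np
from scipy import signal

fs = 250.0                                    # sampling rate in Hz (assumption)
t = np.arange(0, 30, 1 / fs)
rng = np.random.default_rng(3)
alpha = np.sin(2 * np.pi * 10 * t)            # shared 10 Hz "alpha" component
ch_o1 = alpha + 0.5 * rng.normal(size=t.size)
ch_o2 = 0.8 * alpha + 0.5 * rng.normal(size=t.size)

# Time-frequency power for one channel (basis for a color-coded display).
freqs, times, Sxx = signal.spectrogram(ch_o1, fs=fs, nperseg=int(2 * fs))

# Nearest-neighbor coherence between the two "adjacent" channels.
f_coh, coh = signal.coherence(ch_o1, ch_o2, fs=fs, nperseg=int(2 * fs))
print("coherence near 10 Hz:", round(coh[np.argmin(np.abs(f_coh - 10))], 2))
```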

  16. Visualizing frequent patterns in large multivariate time series

    NASA Astrophysics Data System (ADS)

    Hao, M.; Marwah, M.; Janetzko, H.; Sharma, R.; Keim, D. A.; Dayal, U.; Patnaik, D.; Ramakrishnan, N.

    2011-01-01

    The detection of previously unknown, frequently occurring patterns in time series, often called motifs, has been recognized as an important task. However, it is difficult to discover and visualize these motifs as their numbers increase, especially in large multivariate time series. To find frequent motifs, we use several temporal data mining and event encoding techniques to cluster and convert a multivariate time series to a sequence of events. Then we quantify the efficiency of the discovered motifs by linking them with a performance metric. To visualize frequent patterns in a large time series with potentially hundreds of nested motifs on a single display, we introduce three novel visual analytics methods: (1) motif layout, using colored rectangles for visualizing the occurrences and hierarchical relationships of motifs in a multivariate time series, (2) motif distortion, for enlarging or shrinking motifs as appropriate for easy analysis and (3) motif merging, to combine a number of identical adjacent motif instances without cluttering the display. Analysts can interactively optimize the degree of distortion and merging to get the best possible view. A specific motif (e.g., the most efficient or least efficient motif) can be quickly detected from a large time series for further investigation. We have applied these methods to two real-world data sets: data center cooling and oil well production. The results provide important new insights into the recurring patterns.
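
    A hedged sketch of the event-encoding idea follows: each time step of a multivariate series is discretized into a symbol, and frequent fixed-length symbol subsequences are counted as motif candidates. The binning scheme, window length, and counting rule are illustrative assumptions, not the paper's actual pipeline.

```python
# Sketch of motif discovery by event encoding: discretize each time step into a
# symbol, then count frequent fixed-length symbol subsequences. Thresholds and
# window length are illustrative.
import numpy as np
from collections import Counter

rng = np.random.default_rng(4)
series = np.cumsum(rng.normal(size=(500, 3)), axis=0)      # multivariate time series

# Event encoding: quantile-bin each variable, then combine the bins into one symbol.
bins = np.quantile(series, [0.33, 0.66], axis=0)
codes = (series > bins[0]).astype(int) + (series > bins[1]).astype(int)
symbols = ["".join(map(str, row)) for row in codes]

# Count sliding windows of length w; frequent windows are motif candidates.
w = 5
windows = Counter(tuple(symbols[i:i + w]) for i in range(len(symbols) - w + 1))
for motif, count in windows.most_common(3):
    print(count, "x", "-".join(motif))
```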

  17. Visual search in typically developing toddlers and toddlers with Fragile X or Williams syndrome.

    PubMed

    Scerif, Gaia; Cornish, Kim; Wilding, John; Driver, Jon; Karmiloff-Smith, Annette

    2004-02-01

    Visual selective attention is the ability to attend to relevant visual information and ignore irrelevant stimuli. Little is known about its typical and atypical development in early childhood. Experiment 1 investigates typically developing toddlers' visual search for multiple targets on a touch-screen. Time to hit a target, distance between successively touched items, accuracy and error types revealed changes in 2- and 3-year-olds' vulnerability to manipulations of the search display. Experiment 2 examined search performance by toddlers with Fragile X syndrome (FXS) or Williams syndrome (WS). Both of these groups produced mean time and distance per touch equivalent to typically developing toddlers matched by chronological or mental age, but both produced a larger number of errors. Toddlers with WS confused distractors with targets more than the other groups, while toddlers with FXS perseverated on previously found targets. These findings provide information on how visual search typically develops in toddlers, and reveal distinct search deficits for atypically developing toddlers.

  18. Generalized Pattern Search Algorithm for Peptide Structure Prediction

    PubMed Central

    Nicosia, Giuseppe; Stracquadanio, Giovanni

    2008-01-01

    Finding the near-native structure of a protein is one of the most important open problems in structural biology and biological physics. The problem becomes dramatically more difficult when a given protein has no regular secondary structure or it does not show a fold similar to structures already known. This situation occurs frequently when we need to predict the tertiary structure of small molecules, called peptides. In this research work, we propose a new ab initio algorithm, the generalized pattern search algorithm, based on the well-known class of Search-and-Poll algorithms. We performed an extensive set of simulations over a well-known set of 44 peptides to investigate the robustness and reliability of the proposed algorithm, and we compared the peptide conformation with a state-of-the-art algorithm for peptide structure prediction known as PEPstr. In particular, we tested the algorithm on the instances proposed by the originators of PEPstr, to validate the proposed algorithm; the experimental results confirm that the generalized pattern search algorithm outperforms PEPstr by 21.17% in terms of average root-mean-square deviation of the Cα atoms (Cα RMSD). PMID:18487293
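
    Cα RMSD after optimal superposition is a standard score for comparing a predicted backbone with a reference structure; a minimal Kabsch-based sketch is given below, with random coordinates standing in for real Cα traces. The paper's evaluation protocol may differ in details such as residue selection.

```python
# Minimal Calpha RMSD after Kabsch superposition; random coordinates stand in for
# predicted and reference peptide backbones.
import numpy as np

def kabsch_rmsd(P, Q):
    """RMSD between two (N, 3) coordinate sets after optimal rotation."""
    P = P - P.mean(axis=0)
    Q = Q - Q.mean(axis=0)
    U, S, Vt = np.linalg.svd(P.T @ Q)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # avoid improper rotation
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    diff = P @ R.T - Q
    return np.sqrt((diff ** 2).sum() / len(P))

rng = np.random.default_rng(5)
reference = rng.normal(size=(20, 3))                 # 20 "Calpha" atoms
model = reference + 0.5 * rng.normal(size=(20, 3))   # perturbed prediction
print("Calpha RMSD:", round(kabsch_rmsd(model, reference), 2))
```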

  19. Plans, Patterns, and Move Categories Guiding a Highly Selective Search

    NASA Astrophysics Data System (ADS)

    Trippen, Gerhard

    In this paper we present our ideas for an Arimaa-playing program (also called a bot) that uses plans and pattern matching to guide a highly selective search. We restrict move generation to moves in certain move categories to reduce the number of moves considered by the bot significantly. Arimaa is a modern board game that can be played with a standard Chess set. However, the rules of the game are not at all like those of Chess. Furthermore, Arimaa was designed to be as simple and intuitive as possible for humans, yet challenging for computers. While all established Arimaa bots use alpha-beta search with a variety of pruning techniques and other heuristics ending in an extensive positional leaf node evaluation, our new bot, Rat, starts with a positional evaluation of the current position. Based on features found in the current position - supported by pattern matching using a directed position graph - our bot Rat decides which of a given set of plans to follow. The plan then dictates what types of moves can be chosen. This is another major difference from bots that generate "all" possible moves for a particular position. Rat is only allowed to generate moves that belong to certain categories. Leaf nodes are evaluated only by a straightforward material evaluation to help avoid moves that lose material. This highly selective search looks, on average, at only 5 moves out of 5,000 to over 40,000 possible moves in a middle game position.

  20. The role of prediction in perception: Evidence from interrupted visual search.

    PubMed

    Mereu, Stefania; Zacks, Jeffrey M; Kurby, Christopher A; Lleras, Alejandro

    2014-08-01

    Recent studies of rapid resumption (an observer's ability to quickly resume a visual search after an interruption) suggest that predictions underlie visual perception. Previous studies showed that when the search display changes unpredictably after the interruption, rapid resumption disappears. This conclusion is at odds with our everyday experience, where the visual system seems to be quite efficient despite continuous changes of the visual scene; however, in the real world, changes can typically be anticipated based on previous knowledge. The present study aimed to evaluate whether changes to the visual display can be incorporated into the perceptual hypotheses, if observers are allowed to anticipate such changes. Results strongly suggest that an interrupted visual search can be rapidly resumed even when information in the display has changed after the interruption, so long as participants not only can anticipate the changes but also are aware that such changes might occur.

  1. The Role of Prediction In Perception: Evidence From Interrupted Visual Search

    PubMed Central

    Mereu, Stefania; Zacks, Jeffrey M.; Kurby, Christopher A.; Lleras, Alejandro

    2014-01-01

    Recent studies of rapid resumption—an observer’s ability to quickly resume a visual search after an interruption—suggest that predictions underlie visual perception. Previous studies showed that when the search display changes unpredictably after the interruption, rapid resumption disappears. This conclusion is at odds with our everyday experience, where the visual system seems to be quite efficient despite continuous changes of the visual scene; however, in the real world, changes can typically be anticipated based on previous knowledge. The present study aimed to evaluate whether changes to the visual display can be incorporated into the perceptual hypotheses, if observers are allowed to anticipate such changes. Results strongly suggest that an interrupted visual search can be rapidly resumed even when information in the display has changed after the interruption, so long as participants not only can anticipate them, but also are aware that such changes might occur. PMID:24820440

  2. Patterns in the sky: Natural visualization of aircraft flow fields

    NASA Technical Reports Server (NTRS)

    Campbell, James F.; Chambers, Joseph R.

    1994-01-01

    The objective of the current publication is to present the collection of flight photographs to illustrate the types of flow patterns that were visualized and to present qualitative correlations with computational and wind tunnel results. Initially in section 2, the condensation process is discussed, including a review of relative humidity, vapor pressure, and factors which determine the presence of visible condensate. Next, outputs from computer code calculations are postprocessed by using water-vapor relationships to determine if computed values of relative humidity in the local flow field correlate with the qualitative features of the in-flight condensation patterns. The photographs are then presented in section 3 by flow type and subsequently in section 4 by aircraft type to demonstrate the variety of condensed flow fields that was visualized for a wide range of aircraft and flight maneuvers.

  3. Timing of speech and display affects the linguistic mediation of visual search.

    PubMed

    Chiu, Eric M; Spivey, Michael J

    2014-01-01

    Recent studies have shown that, instead of a dichotomy between parallel and serial search strategies, in many instances we see a combination of both strategies utilized. Consequently, computational models and theoretical accounts of visual search processing have evolved from traditional serial-parallel descriptions to a continuum from 'efficient' to 'inefficient' search. One of the findings, consistent with this blurring of the serial-parallel distinction, is that concurrent spoken linguistic input influences the efficiency of visual search. In our first experiment we replicate those findings using a between-subjects design. Next, we utilize a localist attractor network to simulate the results from the first experiment, and then employ the network to make quantitative predictions about the influence of subtle timing differences of real-time language processing on visual search. These model predictions are then tested and confirmed in our second experiment. The results provide further evidence toward understanding linguistically mediated influences on real-time visual search processing and support an interactive processing account of visual search and language comprehension.

  4. Generalized pattern search algorithms with adaptive precision function evaluations

    SciTech Connect

    Polak, Elijah; Wetter, Michael

    2003-05-14

    In the literature on generalized pattern search algorithms, convergence to a stationary point of a once continuously differentiable cost function is established under the assumption that the cost function can be evaluated exactly. However, there is a large class of engineering problems where the numerical evaluation of the cost function involves the solution of systems of differential algebraic equations. Since the termination criteria of the numerical solvers often depend on the design parameters, computer code for solving these systems usually defines a numerical approximation to the cost function that is discontinuous with respect to the design parameters. Standard generalized pattern search algorithms have been applied heuristically to such problems, but no convergence properties have been stated. In this paper we extend a class of generalized pattern search algorithms to a form that uses adaptive precision approximations to the cost function. These numerical approximations need not define a continuous function. Our algorithms can be used for solving linearly constrained problems with cost functions that are at least locally Lipschitz continuous. Assuming that the cost function is smooth, we prove that our algorithms converge to a stationary point. Under the weaker assumption that the cost function is only locally Lipschitz continuous, we show that our algorithms converge to points at which the Clarke generalized directional derivatives are nonnegative in predefined directions. An important feature of our adaptive precision scheme is the use of coarse approximations in the early iterations, with the approximation precision controlled by a test. Such an approach leads to substantial time savings in minimizing computationally expensive functions.
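
    The coupling of poll success to approximation precision can be sketched as follows; the cost model, the refinement rule (tying the precision parameter to the mesh size), and all constants are illustrative assumptions, not the algorithm or the test proposed in the paper.

```python
# Sketch of the adaptive-precision idea: start with a coarse, cheap approximation of
# the cost function and tighten the precision only when a poll at the current step
# size fails. The refinement rule below is an assumption, not the paper's test.
import numpy as np

def noisy_cost(x, eps):
    """Approximate cost: exact quadratic plus an error bounded by eps."""
    x = np.asarray(x, dtype=float)
    exact = float(np.sum((x - 1.0) ** 2))
    return exact + eps * np.sin(40.0 * float(np.sum(x)))  # stand-in for solver error

def adaptive_precision_search(x0, step=1.0, eps=0.1, tol=1e-5, max_polls=100000):
    x = np.asarray(x0, dtype=float)
    n = len(x)
    polls = 0
    while step > tol and polls < max_polls:
        polls += 1
        fx = noisy_cost(x, eps)
        candidates = [x + step * d for d in np.vstack([np.eye(n), -np.eye(n)])]
        best = min(candidates, key=lambda p: noisy_cost(p, eps))
        if noisy_cost(best, eps) < fx:
            x = best                      # successful poll: keep the coarse model
        else:
            step *= 0.5                   # unsuccessful poll: contract the mesh...
            eps = min(eps, step)          # ...and tighten the approximation precision
    return x

print("minimizer estimate:", adaptive_precision_search([3.0, -2.0]))
```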

  5. Characterization of Visual Scanning Patterns in Air Traffic Control

    PubMed Central

    McClung, Sarah N.; Kang, Ziho

    2016-01-01

    Characterization of air traffic controllers' (ATCs') visual scanning strategies is a challenging issue due to the dynamic movement of multiple aircraft and increasing complexity of scanpaths (order of eye fixations and saccades) over time. Additionally, terminologies and methods are lacking to accurately characterize the eye tracking data into simplified visual scanning strategies linguistically expressed by ATCs. As an intermediate step to automate the characterization classification process, we (1) defined and developed new concepts to systematically filter complex visual scanpaths into simpler and more manageable forms and (2) developed procedures to map visual scanpaths with linguistic inputs to reduce the human judgement bias during interrater agreement. The developed concepts and procedures were applied to investigating the visual scanpaths of expert ATCs using scenarios with different aircraft congestion levels. Furthermore, oculomotor trends were analyzed to identify the influence of aircraft congestion on scan time and number of comparisons among aircraft. The findings show that (1) the scanpaths filtered at the highest intensity led to more consistent mapping with the ATCs' linguistic inputs, (2) the pattern classification occurrences differed between scenarios, and (3) increasing aircraft congestion caused increased scan times and aircraft pairwise comparisons. The results provide a foundation for better characterizing complex scanpaths in a dynamic task and automating the analysis process. PMID:27239190

  6. Characterization of Visual Scanning Patterns in Air Traffic Control.

    PubMed

    McClung, Sarah N; Kang, Ziho

    2016-01-01

    Characterization of air traffic controllers' (ATCs') visual scanning strategies is a challenging issue due to the dynamic movement of multiple aircraft and increasing complexity of scanpaths (order of eye fixations and saccades) over time. Additionally, terminologies and methods are lacking to accurately characterize the eye tracking data into simplified visual scanning strategies linguistically expressed by ATCs. As an intermediate step to automate the characterization classification process, we (1) defined and developed new concepts to systematically filter complex visual scanpaths into simpler and more manageable forms and (2) developed procedures to map visual scanpaths with linguistic inputs to reduce the human judgement bias during interrater agreement. The developed concepts and procedures were applied to investigating the visual scanpaths of expert ATCs using scenarios with different aircraft congestion levels. Furthermore, oculomotor trends were analyzed to identify the influence of aircraft congestion on scan time and number of comparisons among aircraft. The findings show that (1) the scanpaths filtered at the highest intensity led to more consistent mapping with the ATCs' linguistic inputs, (2) the pattern classification occurrences differed between scenarios, and (3) increasing aircraft congestion caused increased scan times and aircraft pairwise comparisons. The results provide a foundation for better characterizing complex scanpaths in a dynamic task and automating the analysis process.

  7. The effects of task difficulty on visual search strategy in virtual 3D displays

    PubMed Central

    Pomplun, Marc; Garaas, Tyler W.; Carrasco, Marisa

    2013-01-01

    Analyzing the factors that determine our choice of visual search strategy may shed light on visual behavior in everyday situations. Previous results suggest that increasing task difficulty leads to more systematic search paths. Here we analyze observers' eye movements in an “easy” conjunction search task and a “difficult” shape search task to study visual search strategies in stereoscopic search displays with virtual depth induced by binocular disparity. Standard eye-movement variables, such as fixation duration and initial saccade latency, as well as new measures proposed here, such as saccadic step size, relative saccadic selectivity, and x−y target distance, revealed systematic effects on search dynamics in the horizontal-vertical plane throughout the search process. We found that in the “easy” task, observers start with the processing of display items in the display center immediately after stimulus onset and subsequently move their gaze outwards, guided by extrafoveally perceived stimulus color. In contrast, the “difficult” task induced an initial gaze shift to the upper-left display corner, followed by a systematic left-right and top-down search process. The only consistent depth effect was a trend of initial saccades in the easy task with smallest displays to the items closest to the observer. The results demonstrate the utility of eye-movement analysis for understanding search strategies and provide a first step toward studying search strategies in actual 3D scenarios. PMID:23986539
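
    Two of the proposed measures, saccadic step size and x-y target distance, can be computed from a fixation sequence roughly as sketched below; the exact definitions used in the study (filtering, units, and handling of the depth dimension) may differ, and the data are synthetic.

```python
# Sketch of two of the measures named above, computed from a fixation sequence.
# The study's exact definitions may differ; data here are synthetic.
import numpy as np

rng = np.random.default_rng(6)
fixations = np.cumsum(rng.normal(scale=2.0, size=(30, 2)), axis=0)  # x, y in degrees
target_xy = np.array([5.0, -3.0])

# Saccadic step size: Euclidean distance between consecutive fixations.
step_sizes = np.linalg.norm(np.diff(fixations, axis=0), axis=1)

# x-y target distance: distance of each fixation from the target location.
target_distance = np.linalg.norm(fixations - target_xy, axis=1)

print("mean saccadic step size (deg):", round(step_sizes.mean(), 2))
print("final x-y target distance (deg):", round(target_distance[-1], 2))
```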

  8. From Salience to Saccades: Multiple-Alternative Gated Stochastic Accumulator Model of Visual Search

    PubMed Central

    Purcell, Braden A.; Schall, Jeffrey D.; Logan, Gordon D.; Palmeri, Thomas J.

    2012-01-01

    We describe a stochastic accumulator model demonstrating that visual search performance can be understood as a gated feedforward cascade from a salience map to multiple competing accumulators. The model quantitatively accounts for behavior and predicts neural dynamics of macaque monkeys performing visual search for a target stimulus among different numbers of distractors. The salience accumulated in the model is equated with the spike trains recorded from visually responsive neurons in the frontal eye field. Accumulated variability in the firing rates of these neurons explains choice probabilities and the distributions of correct and error response times with search arrays of different set sizes if the accumulators are mutually inhibitory. The dynamics of the stochastic accumulators quantitatively predict the activity of presaccadic movement neurons that initiate eye movements if gating inhibition prevents accumulation before the representation of stimulus salience emerges. Adjustments in the level of gating inhibition can control trade-offs in speed and accuracy that optimize visual search performance. PMID:22399766
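
    A minimal simulation of this model class, mutually inhibitory accumulators driven by noisy salience inputs with gating inhibition that input must exceed before accumulation begins, is sketched below. All parameter values and the specific update rule are illustrative assumptions, not the fitted model.

```python
# Minimal simulation of a gated, mutually inhibitory accumulator race driven by
# noisy salience inputs. Parameters and the update rule are illustrative only.
import numpy as np

def race_trial(drifts, gate=0.3, beta=0.2, threshold=20.0, noise=1.0,
               dt=1.0, max_t=2000, rng=None):
    rng = rng or np.random.default_rng()
    n = len(drifts)
    a = np.zeros(n)                                    # accumulator states
    for t in range(1, max_t + 1):
        # Gating inhibition: only input exceeding the gate drives accumulation.
        inp = np.maximum(drifts + noise * rng.normal(size=n) - gate, 0.0)
        a += dt * (inp - beta * (a.sum() - a))         # input minus lateral inhibition
        a = np.maximum(a, 0.0)                         # floor at zero
        if a.max() >= threshold:
            return int(np.argmax(a)), t                # choice and RT (in steps)
    return -1, max_t                                   # no decision reached

rng = np.random.default_rng(7)
drifts = np.array([0.6, 0.4, 0.4, 0.4])                # target gets the largest salience
results = [race_trial(drifts, rng=rng) for _ in range(500)]
accuracy = np.mean([c == 0 for c, _ in results])
mean_rt = np.mean([t for _, t in results])
print(f"p(correct) = {accuracy:.2f}, mean RT = {mean_rt:.0f} steps")
```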

  9. How visual edge features influence cuttlefish camouflage patterning.

    PubMed

    Chiao, Chuan-Chin; Ulmer, Kimberly M; Siemann, Liese A; Buresch, Kendra C; Chubb, Charles; Hanlon, Roger T

    2013-05-03

    Rapid adaptive camouflage is the primary defense of soft-bodied cuttlefish. Previous studies have shown that cuttlefish body patterns are strongly influenced by visual edges in the substrate. The aim of the present study was to examine how cuttlefish body patterning is differentially controlled by various aspects of edges, including contrast polarity, contrast strength, and the presence or absence of "line terminators" introduced into a pattern when continuous edges are fragmented. Spatially high- and low-pass filtered white or black disks, as well as isolated, continuous and fragmented edges varying in contrast, were used to assess activation of cuttlefish skin components. Although disks of both contrast polarities evoked relatively weak disruptive body patterns, black disks activated different skin components than white disks, and high-frequency information alone sufficed to drive the responses to white disks whereas high- and low-frequency information were both required to drive responses to black disks. Strikingly, high-contrast edge fragments evoked substantially stronger body pattern responses than low-contrast edge fragments, whereas the body pattern responses evoked by high-contrast continuous edges were no stronger than those produced by low-contrast edges. This suggests that line terminators vs. continuous edges influence expression of disruptive body pattern components via different mechanisms that are controlled by contrast in different ways.

  10. Spontaneous pattern formation and pinning in the visual cortex

    NASA Astrophysics Data System (ADS)

    Baker, Tanya I.

    Bifurcation theory and perturbation theory can be combined with a knowledge of the underlying circuitry of the visual cortex to produce an elegant story explaining the phenomenon of visual hallucinations. A key insight is the application of an important set of ideas concerning spontaneous pattern formation introduced by Turing in 1952. The basic mechanism is a diffusion driven linear instability favoring a particular wavelength that determines the size of the ensuing stripe or spot periodicity of the emerging spatial pattern. Competition between short range excitation and longer range inhibition in the connectivity profile of cortical neurons provides the difference in diffusion length scales necessary for the Turing mechanism to occur and has been proven by Ermentrout and Cowan to be sufficient to explain the generation of a subset of reported geometric hallucinations. Incorporating further details of the cortical circuitry, namely that neurons are also weakly connected to other neurons sharing a particular stimulus orientation or spatial frequency preference at even longer ranges and the resulting shift-twist symmetry of the neuronal connectivity, improves the story. We expand this approach in order to be able to include the tuned responses of cortical neurons to additional visual stimulus features such as motion, color and disparity. We apply a study of nonlinear dynamics similar to the analysis of wave propagation in a crystalline lattice to demonstrate how a spatial pattern formed through the Turing instability can be pinned to the geometric layout of various feature preferences. The perturbation analysis is analogous to solving the Schrödinger equation in a weak periodic potential. Competition between the local isotropic connections which produce patterns of activity via the Turing mechanism and the weaker patchy lateral connections that depend on a neuron's particular set of feature preferences create long wavelength effects analogous to commensurate
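
    The diffusion-driven linear instability at the heart of the Turing mechanism can be demonstrated directly on a two-component activator-inhibitor system: stable without diffusion, it becomes unstable to a band of spatial wavenumbers once the inhibitor diffuses sufficiently faster, selecting a preferred wavelength. The Jacobian and diffusion constants below are illustrative choices, not parameters of the cortical model.

```python
# Linear Turing-instability demo: a two-component activator-inhibitor system that is
# stable without diffusion becomes unstable to a band of spatial wavenumbers when the
# inhibitor diffuses faster, selecting a preferred pattern wavelength.
import numpy as np

J = np.array([[1.0, -1.0],       # local kinetics: self-exciting activator,
              [3.0, -2.0]])      # inhibitor driven by the activator, self-decaying
D = np.diag([1.0, 10.0])         # inhibitor diffuses 10x faster (long-range inhibition)

def growth_rate(k):
    """Largest real part of the eigenvalues of J - k^2 D at wavenumber k."""
    return np.linalg.eigvals(J - k**2 * D).real.max()

ks = np.linspace(0.0, 2.0, 2001)
rates = np.array([growth_rate(k) for k in ks])

print("stable without diffusion:", growth_rate(0.0) < 0)     # homogeneous state is stable
k_star = ks[np.argmax(rates)]
print(f"fastest-growing wavenumber k* = {k_star:.2f}, "
      f"preferred wavelength = {2 * np.pi / k_star:.1f} (arbitrary units)")
```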

  11. Augmenting Visual Search Performance with Transcranial Direct Current Stimulation (tDCS)

    DTIC Science & Technology

    2015-09-28

    [Report front matter and abstract fragment only. The study applied transcranial direct current stimulation (tDCS) over the left frontal eye field (LFEF) region of the scalp to improve cognitive performance; participants received anodal and... (text truncated). Keywords: noninvasive brain stimulation, cognitive enhancement, visual search cognitive support, human performance, augmentation, ISR. Justin M. Nelson, R. Andy... (Military Psychology).]

  12. The price of information: Increased inspection costs reduce the confirmation bias in visual search.

    PubMed

    Rajsic, Jason; Wilson, Daryl E; Pratt, Jay

    2017-01-31

    In visual search, there is a confirmation bias such that attention is biased towards stimuli that match a target template, which has been attributed to covert costs of updating the templates that guide search [Rajsic, Wilson, & Pratt, 2015. Confirmation bias in visual search. Journal of Experimental Psychology: Human Perception and Performance. Advance online publication. doi: 10.1037/xhp0000090 ]. In order to provide direct evidence for this speculation, the present study increased the cost of inspections in search by using gaze- and mouse-contingent searches, which restrict the manner in which information in search displays can be accrued, and incur additional motor costs (in the case of mouse-contingent searches). In a fourth experiment, we rhythmically mask elements in the search display to induce temporal inspection costs. Our results indicated that confirmation bias is indeed attenuated when inspection costs are increased. We conclude that confirmation bias results from the low-cost strategy of matching information to a single, concrete visual template, and that more sophisticated guidance strategies will be used when sufficiently beneficial. This demonstrates that search guidance itself comes at a cost, and that the form of guidance adopted in a given search depends on a comparison between guidance costs and the expected benefits of their implementation.

  13. Scanners and drillers: Characterizing expert visual search through volumetric images

    PubMed Central

    Drew, Trafton; Vo, Melissa Le-Hoa; Olwal, Alex; Jacobson, Francine; Seltzer, Steven E.; Wolfe, Jeremy M.

    2013-01-01

    Modern imaging methods like computed tomography (CT) generate 3-D volumes of image data. How do radiologists search through such images? Are certain strategies more efficient? Although there is a large literature devoted to understanding search in 2-D, relatively little is known about search in volumetric space. In recent years, with the ever-increasing popularity of volumetric medical imaging, this question has taken on increased importance as we try to understand, and ultimately reduce, errors in diagnostic radiology. In the current study, we asked 24 radiologists to search chest CTs for lung nodules that could indicate lung cancer. To search, radiologists scrolled up and down through a “stack” of 2-D chest CT “slices.” At each moment, we tracked eye movements in the 2-D image plane and coregistered eye position with the current slice. We used these data to create a 3-D representation of the eye movements through the image volume. Radiologists tended to follow one of two dominant search strategies: “drilling” and “scanning.” Drillers restrict eye movements to a small region of the lung while quickly scrolling through depth. Scanners move more slowly through depth and search an entire level of the lung before moving on to the next level in depth. Driller performance was superior to the scanners on a variety of metrics, including lung nodule detection rate, percentage of the lung covered, and the percentage of search errors where a nodule was never fixated. PMID:23922445
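
    A hedged sketch of how the two strategies might be separated from coregistered gaze data is given below: drillers cover a small in-plane region while scrolling quickly through depth, scanners the reverse. The feature definitions, thresholds, and synthetic data are assumptions, not the study's classification procedure.

```python
# Sketch of separating "drillers" from "scanners" in coregistered gaze data:
# drillers cover a small in-plane area while moving quickly through depth, scanners
# the reverse. Feature definitions and the threshold are assumptions.
import numpy as np

def strategy_features(xy, slice_idx, duration_s):
    """In-plane spread (quadrant coverage) and depth scroll rate for one reader."""
    quadrant = (xy[:, 0] > 0.5).astype(int) * 2 + (xy[:, 1] > 0.5).astype(int)
    coverage = len(set(quadrant)) / 4.0                           # quadrants visited
    scroll_rate = np.abs(np.diff(slice_idx)).sum() / duration_s   # slices per second
    return coverage, scroll_rate

rng = np.random.default_rng(8)
# Synthetic "driller": gaze confined to one quadrant, fast scrolling through slices.
driller_xy = rng.uniform(0.0, 0.4, size=(300, 2))
driller_slices = np.repeat(np.arange(100), 3)
# Synthetic "scanner": gaze spread across the image, slow scrolling.
scanner_xy = rng.uniform(0.0, 1.0, size=(300, 2))
scanner_slices = np.sort(rng.integers(0, 20, size=300))

for name, xy, sl in [("driller", driller_xy, driller_slices),
                     ("scanner", scanner_xy, scanner_slices)]:
    cov, rate = strategy_features(xy, sl, duration_s=60.0)
    label = "driller" if rate > 1.0 and cov < 0.5 else "scanner"
    print(f"{name}: coverage={cov:.2f}, scroll={rate:.1f}/s -> classified {label}")
```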

  14. Response Selection Modulates Visual Search within and across Dimensions

    ERIC Educational Resources Information Center

    Mortier, Karen; Theeuwes, Jan; Starreveld, Peter

    2005-01-01

    In feature search tasks, uncertainty about the dimension on which targets differ from the nontargets hampers search performance relative to a situation in which this dimension is known in advance. Typically, these cross-dimensional costs are associated with less efficient guidance of attention to the target. In the present study, participants…

  15. Scanners and drillers: characterizing expert visual search through volumetric images.

    PubMed

    Drew, Trafton; Vo, Melissa Le-Hoa; Olwal, Alex; Jacobson, Francine; Seltzer, Steven E; Wolfe, Jeremy M

    2013-08-06

    Modern imaging methods like computed tomography (CT) generate 3-D volumes of image data. How do radiologists search through such images? Are certain strategies more efficient? Although there is a large literature devoted to understanding search in 2-D, relatively little is known about search in volumetric space. In recent years, with the ever-increasing popularity of volumetric medical imaging, this question has taken on increased importance as we try to understand, and ultimately reduce, errors in diagnostic radiology. In the current study, we asked 24 radiologists to search chest CTs for lung nodules that could indicate lung cancer. To search, radiologists scrolled up and down through a "stack" of 2-D chest CT "slices." At each moment, we tracked eye movements in the 2-D image plane and coregistered eye position with the current slice. We used these data to create a 3-D representation of the eye movements through the image volume. Radiologists tended to follow one of two dominant search strategies: "drilling" and "scanning." Drillers restrict eye movements to a small region of the lung while quickly scrolling through depth. Scanners move more slowly through depth and search an entire level of the lung before moving on to the next level in depth. Driller performance was superior to the scanners on a variety of metrics, including lung nodule detection rate, percentage of the lung covered, and the percentage of search errors where a nodule was never fixated.

  16. Human visual search behaviour is far from ideal.

    PubMed

    Nowakowska, Anna; Clarke, Alasdair D F; Hunt, Amelia R

    2017-02-22

    Evolutionary pressures have made foraging behaviours highly efficient in many species. Eye movements during search present a useful instance of foraging behaviour in humans. We tested the efficiency of eye movements during search using homogeneous and heterogeneous arrays of line segments. The search target is visible in the periphery on the homogeneous array, but requires central vision to be detected on the heterogeneous array. For a compound search array that is heterogeneous on one side and homogeneous on the other, eye movements should be directed only to the heterogeneous side. Instead, participants made many fixations on the homogeneous side. By comparing search of compound arrays to an estimate of search performance based on uniform arrays, we isolate two contributions to search inefficiency. First, participants make superfluous fixations, sacrificing speed for a perceived (but not actual) gain in response certainty. Second, participants fixate the homogeneous side even more frequently than predicted by inefficient search of uniform arrays, suggesting they also fail to direct fixations to locations that yield the most new information.

  17. Visualization of oxygen distribution patterns caused by coral and algae.

    PubMed

    Haas, Andreas F; Gregg, Allison K; Smith, Jennifer E; Abieri, Maria L; Hatay, Mark; Rohwer, Forest

    2013-01-01

    Planar optodes were used to visualize oxygen distribution patterns associated with a coral reef associated green algae (Chaetomorpha sp.) and a hermatypic coral (Favia sp.) separately, as standalone organisms, and placed in close proximity mimicking coral-algal interactions. Oxygen patterns were assessed in light and dark conditions and under varying flow regimes. The images show discrete high oxygen concentration regions above the organisms during lighted periods and low oxygen in the dark. Size and orientation of these areas were dependent on flow regime. For corals and algae in close proximity the 2D optodes show areas of extremely low oxygen concentration at the interaction interfaces under both dark (18.4 ± 7.7 µmol O2 L⁻¹) and daylight (97.9 ± 27.5 µmol O2 L⁻¹) conditions. These images present the first two-dimensional visualization of oxygen gradients generated by benthic reef algae and corals under varying flow conditions and provide a 2D depiction of previously observed hypoxic zones at coral algae interfaces. This approach allows for visualization of locally confined, distinctive alterations of oxygen concentrations facilitated by benthic organisms and provides compelling evidence for hypoxic conditions at coral-algae interaction zones.

  18. What are the shapes of response time distributions in visual search?

    PubMed

    Palmer, Evan M; Horowitz, Todd S; Torralba, Antonio; Wolfe, Jeremy M

    2011-02-01

    Many visual search experiments measure response time (RT) as their primary dependent variable. Analyses typically focus on mean (or median) RT. However, given enough data, the RT distribution can be a rich source of information. For this paper, we collected about 500 trials per cell per observer for both target-present and target-absent displays in each of three classic search tasks: feature search, with the target defined by color; conjunction search, with the target defined by both color and orientation; and spatial configuration search for a 2 among distractor 5s. This large data set allows us to characterize the RT distributions in detail. We present the raw RT distributions and fit several psychologically motivated functions (ex-Gaussian, ex-Wald, Gamma, and Weibull) to the data. We analyze and interpret parameter trends from these four functions within the context of theories of visual search.
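
    Fitting several of the named distributions to RT data can be sketched with scipy.stats: the ex-Gaussian corresponds to scipy's exponnorm, while Gamma and Weibull are fit here with the location fixed at zero (an assumption). The simulated RTs and the log-likelihood comparison are illustrative, not the paper's analysis.

```python
# Sketch of fitting three of the named RT distributions (ex-Gaussian, Gamma, Weibull)
# to simulated response times by maximum likelihood and comparing log-likelihoods.
import numpy as np
from scipy import stats

rng = np.random.default_rng(10)
# Simulated RTs (seconds): Gaussian component plus exponential tail (ex-Gaussian-like).
rts = rng.normal(0.45, 0.05, size=2000) + rng.exponential(0.15, size=2000)

candidates = {
    "ex-Gaussian": stats.exponnorm,   # scipy's parameterization of the ex-Gaussian
    "Gamma": stats.gamma,
    "Weibull": stats.weibull_min,
}
for name, dist in candidates.items():
    # Fix the location at zero for Gamma and Weibull (illustrative choice).
    params = dist.fit(rts) if name == "ex-Gaussian" else dist.fit(rts, floc=0)
    loglik = np.sum(dist.logpdf(rts, *params))
    print(f"{name:12s} log-likelihood = {loglik:.1f}")
```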

  19. What are the Shapes of Response Time Distributions in Visual Search?

    PubMed Central

    Palmer, Evan M.; Horowitz, Todd S.; Torralba, Antonio; Wolfe, Jeremy M.

    2011-01-01

    Many visual search experiments measure reaction time (RT) as their primary dependent variable. Analyses typically focus on mean (or median) RT. However, given enough data, the RT distribution can be a rich source of information. For this paper, we collected about 500 trials per cell per observer for both target-present and target-absent displays in each of three classic search tasks: feature search, with the target defined by color; conjunction search, with the target defined by both color and orientation; and spatial configuration search for a 2 among distractor 5s. This large data set allows us to characterize the RT distributions in detail. We present the raw RT distributions and fit several psychologically motivated functions (ex-Gaussian, ex-Wald, Gamma, and Weibull) to the data. We analyze and interpret parameter trends from these four functions within the context of theories of visual search. PMID:21090905

  20. When do individuals with autism spectrum disorder show superiority in visual search?

    PubMed

    Shirama, Aya; Kato, Nobumasa; Kashino, Makio

    2016-11-29

    Although superior visual search skills have been repeatedly reported for individuals with autism spectrum disorder, the underlying mechanisms remain controversial. To specify the locus where individuals with autism spectrum disorder excel in visual search, we compared the performance of adults with autism spectrum disorder and healthy controls in briefly presented search tasks, where the search display was replaced by a noise mask at a stimulus-mask asynchrony of 160 ms to interfere with a serial search process while leaving bottom-up visual processing intact. We found that participants with autism spectrum disorder show faster overall reaction times regardless of the number of stimuli and the presence of a target, with higher accuracy than controls, in a luminance and shape conjunction search task as well as a hard feature search task where the target feature information was ineffective in prioritizing likely target stimuli. In addition, the analysis of target eccentricity illustrated that the autism spectrum disorder group has better target discriminability regardless of target eccentricity, suggesting that the autism spectrum disorder advantage does not derive from a reduced crowding effect, which is known to be enhanced with increasing retinal eccentricity. The findings suggest that individuals with autism spectrum disorder excel in non-search processes, especially in the simultaneous discrimination of multiple visual stimuli.

  1. Cortical Dynamics of Contextually Cued Attentive Visual Learning and Search: Spatial and Object Evidence Accumulation

    ERIC Educational Resources Information Center

    Huang, Tsung-Ren; Grossberg, Stephen

    2010-01-01

    How do humans use target-predictive contextual information to facilitate visual search? How are consistently paired scenic objects and positions learned and used to more efficiently guide search in familiar scenes? For example, humans can learn that a certain combination of objects may define a context for a kitchen and trigger a more efficient…

  2. Central and Peripheral Vision Loss Differentially Affects Contextual Cueing in Visual Search

    ERIC Educational Resources Information Center

    Geringswald, Franziska; Pollmann, Stefan

    2015-01-01

    Visual search for targets in repeated displays is more efficient than search for the same targets in random distractor layouts. Previous work has shown that this contextual cueing is severely impaired under central vision loss. Here, we investigated whether central vision loss, simulated with gaze-contingent displays, prevents the incidental…

  3. Long-Term Priming of Visual Search Prevails against the Passage of Time and Counteracting Instructions

    ERIC Educational Resources Information Center

    Kruijne, Wouter; Meeter, Martijn

    2016-01-01

    Studies on "intertrial priming" have shown that in visual search experiments, the preceding trial automatically affects search performance: facilitating it when the target features repeat and giving rise to switch costs when they change--so-called (short-term) intertrial priming. These effects also occur at longer time scales: When 1 of…

  4. CiteRivers: Visual Analytics of Citation Patterns.

    PubMed

    Heimerl, Florian; Han, Qi; Koch, Steffen; Ertl, Thomas

    2016-01-01

    The exploration and analysis of scientific literature collections is an important task for effective knowledge management. Past interest in such document sets has spurred the development of numerous visualization approaches for their interactive analysis. They either focus on the textual content of publications, or on document metadata including authors and citations. Previously presented approaches for citation analysis aim primarily at the visualization of the structure of citation networks and their exploration. We extend the state-of-the-art by presenting an approach for the interactive visual analysis of the contents of scientific documents, and combine it with a new and flexible technique to analyze their citations. This technique facilitates user-steered aggregation of citations which are linked to the content of the citing publications using a highly interactive visualization approach. Through enriching the approach with additional interactive views of other important aspects of the data, we support the exploration of the dataset over time and enable users to analyze citation patterns, spot trends, and track long-term developments. We demonstrate the strengths of our approach through a use case and discuss it based on expert user feedback.

  5. Visual Search Performance in the Autism Spectrum II: The Radial Frequency Search Task with Additional Segmentation Cues

    ERIC Educational Resources Information Center

    Almeida, Renita A.; Dickinson, J. Edwin; Maybery, Murray T.; Badcock, Johanna C.; Badcock, David R.

    2010-01-01

    The Embedded Figures Test (EFT) requires detecting a shape within a complex background and individuals with autism or high Autism-spectrum Quotient (AQ) scores are faster and more accurate on this task than controls. This research aimed to uncover the visual processes producing this difference. Previously we developed a search task using radial…

  6. The Effects of Presentation Method and Information Density on Visual Search Ability and Working Memory Load

    ERIC Educational Resources Information Center

    Chang, Ting-Wen; Kinshuk; Chen, Nian-Shing; Yu, Pao-Ta

    2012-01-01

    This study investigates the effects of successive and simultaneous information presentation methods on learner's visual search ability and working memory load for different information densities. Since the processing of information in the brain depends on the capacity of visual short-term memory (VSTM), the limited information processing capacity…

  7. Hand Movement Deviations in a Visual Search Task with Cross Modal Cuing

    ERIC Educational Resources Information Center

    Aslan, Asli; Aslan, Hurol

    2007-01-01

    The purpose of this study is to demonstrate the cross-modal effects of an auditory organization on a visual search task and to investigate the influence of the level of detail in instructions describing or hinting at the associations between auditory stimuli and the possible locations of a visual target. In addition to measuring the participants'…

  8. Detection of Emotional Faces: Salient Physical Features Guide Effective Visual Search

    ERIC Educational Resources Information Center

    Calvo, Manuel G.; Nummenmaa, Lauri

    2008-01-01

    In this study, the authors investigated how salient visual features capture attention and facilitate detection of emotional facial expressions. In a visual search task, a target emotional face (happy, disgusted, fearful, angry, sad, or surprised) was presented in an array of neutral faces. Faster detection of happy and, to a lesser extent,…

  9. Auditory, tactile, and multisensory cues facilitate search for dynamic visual stimuli.

    PubMed

    Ngo, Mary Kim; Spence, Charles

    2010-08-01

    Presenting an auditory or tactile cue in temporal synchrony with a change in the color of a visual target can facilitate participants' visual search performance. In the present study, we compared the magnitude of unimodal auditory, vibrotactile, and bimodal (i.e., multisensory) cuing benefits when the nonvisual cues were presented in temporal synchrony with the changing of the target's color (Experiments 1 and 2). The target (a horizontal or vertical line segment) was presented among a number of distractors (tilted line segments) that also changed color at various times. In Experiments 3 and 4, the cues were also made spatially informative with regard to the location of the visual target. The unimodal and bimodal cues gave rise to an equivalent (significant) facilitation of participants' visual search performance relative to a no-cue baseline condition. Making the unimodal auditory and vibrotactile cues spatially informative produced further performance improvements (on validly cued trials), as compared with cues that were spatially uninformative or otherwise spatially invalid. A final experiment was conducted in order to determine whether cue location (close to versus far from the visual display) would influence participants' visual search performance. Auditory cues presented close to the visual search display were found to produce significantly better performance than cues presented over headphones. Taken together, these results have implications for the design of nonvisual and multisensory warning signals used in complex visual displays.

  10. The Role of Target-Distractor Relationships in Guiding Attention and the Eyes in Visual Search

    ERIC Educational Resources Information Center

    Becker, Stefanie I.

    2010-01-01

    Current models of visual search assume that visual attention can be guided by tuning attention toward specific feature values (e.g., particular size, color) or by inhibiting the features of the irrelevant nontargets. The present study demonstrates that attention and eye movements can also be guided by a relational specification of how the target…

  11. Pigeons show efficient visual search by category: effects of typicality and practice.

    PubMed

    Ohkita, Midori; Jitsumori, Masako

    2012-11-01

    Three experiments investigated category search in pigeons, using an artificial category created by morphing of human faces. Four pigeons were trained to search for category members among nonmembers, with each target item consisting of an item-specific component and a common component diagnostic of the category. Experiment 1 found that search was more efficient with homogeneous than heterogeneous distractors. In Experiment 2, the pigeons successfully searched for target exemplars having novel item-specific components. Practice including these items enabled the pigeons to efficiently search for the highly familiar members. The efficient search transferred immediately to more typical novel exemplars in Experiment 3. With further practice, the pigeons eventually developed efficient search for individual less typical exemplars. Results are discussed in the context of visual search theories and automatic processing of individual exemplars.

  12. Use of an augmented-vision device for visual search by patients with tunnel vision

    PubMed Central

    Luo, Gang; Peli, Eli

    2006-01-01

    Purpose To study the effect of an augmented-vision device that superimposes minified contour images over natural vision on visual search performance of patients with tunnel vision. Methods Twelve subjects with tunnel vision searched for targets presented outside their visual fields (VF) on a blank background under three cue conditions (with contour cues provided by the device, with auditory cues, and without cues). Three subjects (VF: 8° to 11° wide) carried out the search over a 90°×74° area, and nine subjects (VF: 7° to 16° wide) over a 66°×52° area. Eye and head movements were recorded for performance analyses that included directness of search path, search time, and gaze speed. Results Directness of the search path was greatly and significantly improved when the contour or auditory cues were provided in both the larger and smaller area search. When using the device, a significant reduction in search time (28%-74%) was demonstrated by all 3 subjects in the larger area search and by subjects with VF wider than 10° in the smaller area search (average 22%). Directness and the gaze speed accounted for 90% of the variability of search time. Conclusions While performance improvement with the device for the larger search area was obvious, whether it was helpful for the smaller search area depended on VF and gaze speed. As improvement in directness was demonstrated, increased gaze speed, which could result from further training and adaptation to the device, might enable patients with small VFs to benefit from the device for visual search tasks. PMID:16936136
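
    The "directness of search path" measure reported above is not defined in the abstract; one plausible (assumed) formulation is the straight-line distance from the starting gaze position to the target divided by the total length of the gaze path, so that a value of 1 means a perfectly direct search. The sketch below implements that assumed definition on a hypothetical scanpath.

      # Assumed directness metric: ideal straight-line distance / actual path length.
      import numpy as np

      def directness(gaze_xy, target_xy):
          """gaze_xy: (N, 2) gaze positions (deg); target_xy: (2,) target position."""
          gaze_xy = np.asarray(gaze_xy, dtype=float)
          path_len = np.sum(np.linalg.norm(np.diff(gaze_xy, axis=0), axis=1))
          ideal = np.linalg.norm(np.asarray(target_xy, dtype=float) - gaze_xy[0])
          return ideal / path_len if path_len > 0 else np.nan

      # A detour roughly doubles the path, so directness comes out near 0.5.
      scanpath = [(0, 0), (10, 8), (5, -3), (20, 0)]
      print(round(directness(scanpath, (20, 0)), 2))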

  13. A Computational Model of Active Vision for Visual Search in Human-Computer Interaction

    DTIC Science & Technology

    2010-08-01

    The model addresses the four questions of active vision, including "When do the eyes move?" (modeling fixation...), and is evaluated against data from two experiments: a mixed density search task and a CVC (consonant-vowel-consonant) search task. The mixed density experiment (Halverson & Hornof, 2004b) investigated the effects of varying the visual density of elements in a structured layout. The CVC search experiment (Hornof, 2004...

  14. Parametric Modeling of Visual Search Efficiency in Real Scenes

    PubMed Central

    Zhang, Xing; Li, Qingquan; Zou, Qin; Fang, Zhixiang; Zhou, Baoding

    2015-01-01

    How should the efficiency of searching for real objects in real scenes be measured? Traditionally, when searching for artificial targets, e.g., letters or rectangles, among distractors, efficiency is measured by a reaction time (RT) × Set Size function. However, it is not clear whether the set size of real scenes is as effective a parameter for measuring search efficiency as the set size of artificial scenes. The present study investigated search efficiency in real scenes based on a combination of low-level features, e.g., visible size and target-flanker separation factors, and high-level features, e.g., category effect and target template. Visible size refers to the pixel number of visible parts of an object in a scene, whereas separation is defined as the sum of the flank distances from a target to the nearest distractors. During the experiment, observers searched for targets in various urban scenes, using pictures as the target templates. The results indicated that the effect of the set size in real scenes decreased according to the variances of other factors, e.g., visible size and separation. Increasing visible size and separation factors increased search efficiency. Based on these results, an RT × Visible Size × Separation function was proposed. These results suggest that the proposed function is a practicable predictor of search efficiency in real scenes. PMID:26030908
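
    To make the proposed RT x Visible Size x Separation function concrete, the sketch below fits a simple linear version of it by ordinary least squares. The data, units, and coefficient values are fabricated for illustration; only the direction of the slopes (larger visible size and separation giving shorter RTs) follows the abstract.

      # Least-squares fit of RT = b0 + b1 * visible_size + b2 * separation (toy data).
      import numpy as np

      rng = np.random.default_rng(1)
      n = 200
      visible_size = rng.uniform(200, 5000, n)   # visible pixels of each target
      separation = rng.uniform(20, 400, n)       # summed target-flanker distances
      rt = 2.5 - 2e-4 * visible_size - 3e-3 * separation + rng.normal(0, 0.2, n)

      X = np.column_stack([np.ones(n), visible_size, separation])
      beta, *_ = np.linalg.lstsq(X, rt, rcond=None)
      print("intercept, visible-size slope, separation slope:", np.round(beta, 5))
      # Negative slopes reproduce the reported pattern: bigger, better-separated
      # targets are found faster, i.e., search is more efficient.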

  15. Dynamic Modulation of Local Population Activity by Rhythm Phase in Human Occipital Cortex During a Visual Search Task

    PubMed Central

    Miller, Kai J.; Hermes, Dora; Honey, Christopher J.; Sharma, Mohit; Rao, Rajesh P. N.; den Nijs, Marcel; Fetz, Eberhard E.; Sejnowski, Terrence J.; Hebb, Adam O.; Ojemann, Jeffrey G.; Makeig, Scott; Leuthardt, Eric C.

    2010-01-01

    Brain rhythms are more than just passive phenomena in visual cortex. For the first time, we show that the physiology underlying brain rhythms actively suppresses and releases cortical areas on a second-to-second basis during visual processing. Furthermore, their influence is specific at the scale of individual gyri. We quantified the interaction between broadband spectral change and brain rhythms on a second-to-second basis in electrocorticographic (ECoG) measurement of brain surface potentials in five human subjects during a visual search task. Comparison of visual search epochs with a blank screen baseline revealed changes in the raw potential, the amplitude of rhythmic activity, and in the decoupled broadband spectral amplitude. We present new methods to characterize the intensity and preferred phase of coupling between broadband power and band-limited rhythms, and to estimate the magnitude of rhythm-to-broadband modulation on a trial-by-trial basis. These tools revealed numerous coupling motifs between the phase of low-frequency (δ, θ, α, β, and γ band) rhythms and the amplitude of broadband spectral change. In the θ and β ranges, the coupling of phase to broadband change is dynamic during visual processing, decreasing in some occipital areas and increasing in others, in a gyrally specific pattern. Finally, we demonstrate that the rhythms interact with one another across frequency ranges, and across cortical sites. PMID:21119778
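
    The coupling analysis above pairs the phase of a low-frequency rhythm with the amplitude of broadband activity. A generic phase-amplitude coupling estimate (the mean-vector modulation index, computed via the Hilbert transform) is sketched below on simulated data; the sampling rate, frequency bands, and estimator are assumptions and not necessarily the authors' exact methods.

      # Generic phase-amplitude coupling sketch (mean-vector modulation index).
      import numpy as np
      from scipy.signal import butter, sosfiltfilt, hilbert

      fs = 1000.0                                   # Hz (assumed)
      t = np.arange(0, 10, 1 / fs)
      rng = np.random.default_rng(2)
      theta = np.sin(2 * np.pi * 6 * t)             # 6 Hz rhythm
      noise = rng.normal(0, 1, t.size)
      signal = theta + (1 + 0.5 * theta) * noise    # broadband power tied to theta

      def bandpass(x, lo, hi, fs, order=4):
          sos = butter(order, [lo, hi], btype="band", fs=fs, output="sos")
          return sosfiltfilt(sos, x)

      phase = np.angle(hilbert(bandpass(signal, 4, 8, fs)))     # theta phase
      amp = np.abs(hilbert(bandpass(signal, 80, 150, fs)))      # high-freq amplitude

      # Magnitude = coupling strength; angle = preferred phase of the rhythm.
      mv = np.mean(amp * np.exp(1j * phase))
      print(f"coupling strength: {abs(mv):.3f}, preferred phase: {np.angle(mv):.2f} rad")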

  16. Adaptive two-scale edge detection for visual pattern processing

    NASA Astrophysics Data System (ADS)

    Rahman, Zia-Ur; Jobson, Daniel J.; Woodell, Glenn A.

    2009-09-01

    Adaptive methods are defined and experimentally studied for a two-scale edge detection process that mimics human visual perception of edges and is inspired by the parvocellular (P) and magnocellular (M) physiological subsystems of natural vision. This two-channel processing consists of a high spatial acuity/coarse contrast channel (P) and a coarse acuity/fine contrast (M) channel. We perform edge detection after a very strong nonlinear image enhancement that uses smart Retinex image processing. Two conditions that arise from this enhancement demand adaptiveness in edge detection. These conditions are the presence of random noise further exacerbated by the enhancement process and the equally random occurrence of dense textural visual information. We examine how to best deal with both phenomena with an automatic adaptive computation that treats both high noise and dense textures as too much information and gracefully shifts from small-scale to medium-scale edge pattern priorities. This shift is accomplished by using different edge-enhancement schemes that correspond with the P- and M-channels of the human visual system. We also examine the case of adapting to a third image condition, namely too little visual information, and automatically adjust edge-detection sensitivities when sparse feature information is encountered. When this methodology is applied to a sequence of images of the same scene but with varying exposures and lighting conditions, this edge-detection process produces pattern constancy that is very useful for several imaging applications that rely on image classification in variable imaging conditions.
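
    As a simplified illustration of the two-scale idea, the sketch below computes fine- and coarse-scale edge maps and switches to the coarser map wherever the fine-scale map is locally too dense (treating heavy noise or texture as "too much information"). The thresholds and scales are arbitrary assumptions, and the Retinex enhancement stage described above is omitted.

      # Simplified adaptive two-scale edge detection (illustrative only).
      import numpy as np
      from scipy import ndimage

      def edge_map(img, sigma):
          smoothed = ndimage.gaussian_filter(img, sigma)
          gx = ndimage.sobel(smoothed, axis=1)
          gy = ndimage.sobel(smoothed, axis=0)
          return np.hypot(gx, gy)

      def adaptive_two_scale(img, fine=1.0, coarse=3.0, density_thresh=0.25, win=15):
          e_fine, e_coarse = edge_map(img, fine), edge_map(img, coarse)
          strong_fine = e_fine > e_fine.mean() + e_fine.std()
          # Local fraction of fine-scale edge pixels = local "information density".
          density = ndimage.uniform_filter(strong_fine.astype(float), size=win)
          # Dense fine-scale structure -> fall back to the medium-scale map.
          return np.where(density > density_thresh, e_coarse, e_fine)

      img = np.random.default_rng(3).random((128, 128))   # stand-in image
      print(adaptive_two_scale(img).shape)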

  17. Performance of visual search tasks from various types of contour information.

    PubMed

    Itan, Liron; Yitzhaky, Yitzhak

    2013-03-01

    A recently proposed visual aid for patients with a restricted visual field (tunnel vision) combines a see-through head-mounted display and a simultaneous minified contour view of the wide-field image of the environment. Such a widening of the effective visual field is helpful for tasks such as visual search, mobility, and orientation. The sufficiency of image contours for performing everyday visual tasks is of major importance for this application, as well as for other applications, and for basic understanding of human vision. This research aims to examine and compare the use of different types of automatically created contours, and contour representations, for practical everyday visual operations using commonly observed images. The visual operations include visual searching for items, such as cutlery, housewares, etc. Considering different recognition levels, identification of an object is distinguished from mere detection (when the object is not necessarily identified). Some nonconventional visual-based contour representations were developed for this purpose. Experiments were performed with normal-vision subjects by superposing contours of the wide field of the scene over a narrow field (see-through) background. From the results, it appears that about 85% success is obtained in identifying the searched-for object when the best contour versions are employed. Pilot experiments with video simulations are reported at the end of the paper.

  18. Computational assessment of visual search strategies in volumetric medical images

    PubMed Central

    Wen, Gezheng; Aizenman, Avigael; Drew, Trafton; Wolfe, Jeremy M.; Haygood, Tamara Miner; Markey, Mia K.

    2016-01-01

    Abstract. When searching through volumetric images [e.g., computed tomography (CT)], radiologists appear to use two different search strategies: “drilling” (restrict eye movements to a small region of the image while quickly scrolling through slices), or “scanning” (search over large areas at a given depth before moving on to the next slice). To computationally identify the type of image information that is used in these two strategies, 23 naïve observers were instructed with either “drilling” or “scanning” when searching for target T’s in 20 volumes of faux lung CTs. We computed saliency maps using both classical two-dimensional (2-D) saliency, and a three-dimensional (3-D) dynamic saliency that captures the characteristics of scrolling through slices. Comparing observers’ gaze distributions with the saliency maps showed that search strategy alters the type of saliency that attracts fixations. Drillers’ fixations aligned better with dynamic saliency and scanners with 2-D saliency. The computed saliency was greater for detected targets than for missed targets. Similar results were observed in data from 19 radiologists who searched five stacks of clinical chest CTs for lung nodules. Dynamic saliency may be superior to the 2-D saliency for detecting targets embedded in volumetric images, and thus “drilling” may be more efficient than “scanning.” PMID:26759815
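
    One common way to compare gaze with a saliency map, used here only as an illustration of the general approach rather than the paper's exact analysis, is normalized scanpath saliency: z-score the map and average it at fixated locations, so values well above zero mean that fixations landed on salient regions.

      # Normalized scanpath saliency (NSS): mean z-scored saliency at fixations.
      import numpy as np

      def nss(saliency_map, fixations_rc):
          """saliency_map: 2-D array; fixations_rc: iterable of (row, col) fixations."""
          s = (saliency_map - saliency_map.mean()) / saliency_map.std()
          rows, cols = zip(*fixations_rc)
          return float(np.mean(s[list(rows), list(cols)]))

      rng = np.random.default_rng(4)
      smap = rng.random((480, 640))
      smap[200:240, 300:360] += 2.0                  # hypothetical salient region
      print("NSS on the salient region:", round(nss(smap, [(210, 310), (225, 345)]), 2))
      print("NSS elsewhere            :", round(nss(smap, [(50, 50), (400, 600)]), 2))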

  19. Out of sight, out of mind: Matching bias underlies confirmatory visual search.

    PubMed

    Rajsic, Jason; Taylor, J Eric T; Pratt, Jay

    2017-02-01

    Confirmation bias has recently been reported in visual search, where observers who were given a perceptual rule to test (e.g., "Is the p on a red circle?") preferentially searched stimuli that could confirm the rule (Rajsic, Wilson, & Pratt, Journal of Experimental Psychology: Human Perception and Performance, 41(5), 1353-1364, 2015). In this study, we compared the ability of concrete and abstract visual templates to guide attention using the visual confirmation bias. Experiment 1 showed that confirmatory search tendencies do not result from simple low-level priming, as they occurred when color templates were verbally communicated. Experiment 2 showed that confirmation bias did not occur when targets needed to be reported as possessing or not possessing the absence of a feature (i.e., reporting whether a target was on a nonred circle). Experiment 3 showed that confirmatory search also did not occur when search prompts referred to a set of visually heterogeneous features (i.e., reporting whether a target was on a colorful circle, regardless of the color). Together, these results show that the confirmation bias likely results from a matching heuristic, such that visual codes involved in representing the search goal prioritize stimuli possessing these features.

  20. Multisensory brand search: How the meaning of sounds guides consumers' visual attention.

    PubMed

    Knoeferle, Klemens M; Knoeferle, Pia; Velasco, Carlos; Spence, Charles

    2016-06-01

    Building on models of crossmodal attention, the present research proposes that brand search is inherently multisensory, in that the consumers' visual search for a specific brand can be facilitated by semantically related stimuli that are presented in another sensory modality. A series of 5 experiments demonstrates that the presentation of spatially nonpredictive auditory stimuli associated with products (e.g., usage sounds or product-related jingles) can crossmodally facilitate consumers' visual search for, and selection of, products. Eye-tracking data (Experiment 2) revealed that the crossmodal effect of auditory cues on visual search manifested itself not only in RTs, but also in the earliest stages of visual attentional processing, thus suggesting that the semantic information embedded within sounds can modulate the perceptual saliency of the target products' visual representations. Crossmodal facilitation was even observed for newly learnt associations between unfamiliar brands and sonic logos, implicating multisensory short-term learning in establishing audiovisual semantic associations. The facilitation effect was stronger when searching complex rather than simple visual displays, thus suggesting a modulatory role of perceptual load.

  1. Effects of targets embedded within words in a visual search task.

    PubMed

    Grabbe, Jeremy W

    2014-01-01

    Visual search performance can be negatively affected when both targets and distracters share a dimension relevant to the task. This study examined if visual search performance would be influenced by distracters that affect a dimension irrelevant from the task. In Experiment 1 within the letter string of a letter search task, target letters were embedded within a word. Experiment 2 compared targets embedded in words to targets embedded in nonwords. Experiment 3 compared targets embedded in words to a condition in which a word was present in a letter string, but the target letter, although in the letter string, was not embedded within the word. The results showed that visual search performance was negatively affected when a target appeared within a high frequency word. These results suggest that the interaction and effectiveness of distracters is not merely dependent upon common features of the target and distracters, but can be affected by word frequency (a dimension not related to the task demands).

  2. Playing shooter and driving videogames improves top-down guidance in visual search.

    PubMed

    Wu, Sijing; Spence, Ian

    2013-05-01

    Playing action videogames is known to improve visual spatial attention and related skills. Here, we showed that playing action videogames also improves classic visual search, as well as the ability to locate targets in a dual search that mimics certain aspects of an action videogame. In Experiment 1A, first-person shooter (FPS) videogame players were faster than nonplayers in both feature search and conjunction search, and in Experiment 1B, they were faster and more accurate in a peripheral search and identification task while simultaneously performing a central search. In Experiment 2, we showed that 10 h of play could improve the performance of nonplayers on each of these tasks. Three different genres of videogames were used for training: two action games and a 3-D puzzle game. Participants who played an action game (either an FPS or a driving game) achieved greater gains on all search tasks than did those who trained using the puzzle game. Feature searches were faster after playing an action videogame, suggesting that players developed a better target template to guide search in a top-down manner. The results of the dual search suggest that, in addition to enhancing the ability to divide attention, playing an action game improves the top-down guidance of attention to possible target locations. The results have practical implications for the development of training tools to improve perceptual and cognitive skills.

  3. Spatial partitions systematize visual search and enhance target memory.

    PubMed

    Solman, Grayden J F; Kingstone, Alan

    2017-02-01

    Humans are remarkably capable of finding desired objects in the world, despite the scale and complexity of naturalistic environments. Broadly, this ability is supported by an interplay between exploratory search and guidance from episodic memory for previously observed target locations. Here we examined how the environment itself may influence this interplay. In particular, we examined how partitions in the environment, like buildings, rooms, and furniture, can impact memory during repeated search. We report that the presence of partitions in a display, independent of item configuration, reliably improves episodic memory for item locations. Repeated search through partitioned displays was faster overall and was characterized by more rapid ballistic orienting in later repetitions. Explicit recall was also both faster and more accurate when displays were partitioned. Finally, we found that search paths were more regular and systematic when displays were partitioned. Given the ubiquity of partitions in real-world environments, these results provide important insights into the mechanisms of naturalistic search and its relation to memory.

  4. Performance in a Visual Search Task Uniquely Predicts Reading Abilities in Third-Grade Hong Kong Chinese Children

    ERIC Educational Resources Information Center

    Liu, Duo; Chen, Xi; Chung, Kevin K. H.

    2015-01-01

    This study examined the relation between the performance in a visual search task and reading ability in 92 third-grade Hong Kong Chinese children. The visual search task, which is considered a measure of visual-spatial attention, accounted for unique variance in Chinese character reading after controlling for age, nonverbal intelligence,…

  5. Theta burst stimulation improves overt visual search in spatial neglect independently of attentional load.

    PubMed

    Cazzoli, Dario; Rosenthal, Clive R; Kennard, Christopher; Zito, Giuseppe A; Hopfner, Simone; Müri, René M; Nyffeler, Thomas

    2015-12-01

    Visual neglect is considerably exacerbated by increases in visual attentional load. These detrimental effects of attentional load are hypothesised to be dependent on an interplay between dysfunctional inter-hemispheric inhibitory dynamics and load-related modulation of activity in cortical areas such as the posterior parietal cortex (PPC). Continuous Theta Burst Stimulation (cTBS) over the contralesional PPC reduces neglect severity. It is unknown, however, whether such positive effects also operate in the presence of the detrimental effects of heightened attentional load. Here, we examined the effects of cTBS on neglect severity in overt visual search (i.e., with eye movements), as a function of high and low visual attentional load conditions. Performance was assessed on the basis of target detection rates and eye movements, in a computerised visual search task and in two paper-pencil tasks. cTBS significantly ameliorated target detection performance, independently of attentional load. These ameliorative effects were significantly larger in the high than the low load condition, thereby equating target detection across both conditions. Eye movement analyses revealed that the improvements were mediated by a redeployment of visual fixations to the contralesional visual field. These findings represent a substantive advance, because cTBS led to an unprecedented amelioration of overt search efficiency that was independent of visual attentional load.

  6. Using visual analytics model for pattern matching in surveillance data

    NASA Astrophysics Data System (ADS)

    Habibi, Mohammad S.

    2013-03-01

    In a persistent surveillance system, a huge amount of data is collected continuously and significant details are labeled for future reference. In this paper a method to summarize video data by identifying events based on this tagged information is explained, leading to a concise description of behavior within a section of extended recordings. An efficient retrieval of various events thus becomes the foundation for determining a pattern in surveillance system observations, both in its extended and fragmented versions. The patterns, consisting of spatiotemporal semantic contents, are extracted and classified by application of video data mining on the generated ontology, and can be matched based on the analyst's interest and the rules set forth for decision making. The extraction and classification method proposed in this paper uses query by example for retrieving similar events containing relevant features, and is carried out by data aggregation. Since structured data forms the majority of surveillance information, this visual analytics model employs a KD-Tree approach to group patterns in variant space and time, thus making it convenient to identify and match any abnormal burst of pattern detected in a surveillance video. Several experimental videos were presented to viewers to analyze independently, and the results were compared with those obtained in this paper to demonstrate the efficiency and effectiveness of the proposed technique.
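
    The KD-Tree step above amounts to nearest-neighbour retrieval in a feature space of tagged events. A minimal query-by-example sketch is shown below; the four-dimensional event encoding (position, time, burst size) is a hypothetical stand-in for the spatiotemporal semantic features described in the paper.

      # Query-by-example retrieval of similar events with a k-d tree.
      import numpy as np
      from scipy.spatial import cKDTree

      rng = np.random.default_rng(5)
      # Each row: normalized (x, y, time-of-day, activity-burst size) of a tagged event.
      events = rng.random((10_000, 4))
      tree = cKDTree(events)

      query = np.array([0.2, 0.8, 0.5, 0.9])     # example event of interest
      dist, idx = tree.query(query, k=5)         # five most similar stored events
      print("nearest event indices:", idx)
      print("distances            :", np.round(dist, 3))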

  7. Dynamic Analysis and Pattern Visualization of Forest Fires

    PubMed Central

    Lopes, António M.; Tenreiro Machado, J. A.

    2014-01-01

    This paper analyses forest fires from the perspective of dynamical systems. Forest fires exhibit complex correlations in size, space and time, revealing features often present in complex systems, such as the absence of a characteristic length-scale, or the emergence of long range correlations and persistent memory. This study addresses a public domain forest fires catalogue, containing information on events in Portugal during the period from 1980 to 2012. The data are analysed on an annual basis, modelling the occurrences as sequences of Dirac impulses with amplitude proportional to the burnt area. First, we consider mutual information to correlate annual patterns. We use visualization trees, generated by hierarchical clustering algorithms, in order to compare and to extract relationships among the data. Second, we adopt the Multidimensional Scaling (MDS) visualization tool. MDS generates maps where each object corresponds to a point. Objects that are perceived to be similar to each other are placed on the map forming clusters. The results are analysed in order to extract relationships among the data and to identify forest fire patterns. PMID:25137393

  8. Dynamic analysis and pattern visualization of forest fires.

    PubMed

    Lopes, António M; Tenreiro Machado, J A

    2014-01-01

    This paper analyses forest fires from the perspective of dynamical systems. Forest fires exhibit complex correlations in size, space and time, revealing features often present in complex systems, such as the absence of a characteristic length-scale, or the emergence of long range correlations and persistent memory. This study addresses a public domain forest fires catalogue, containing information on events in Portugal during the period from 1980 to 2012. The data are analysed on an annual basis, modelling the occurrences as sequences of Dirac impulses with amplitude proportional to the burnt area. First, we consider mutual information to correlate annual patterns. We use visualization trees, generated by hierarchical clustering algorithms, in order to compare and to extract relationships among the data. Second, we adopt the Multidimensional Scaling (MDS) visualization tool. MDS generates maps where each object corresponds to a point. Objects that are perceived to be similar to each other are placed on the map forming clusters. The results are analysed in order to extract relationships among the data and to identify forest fire patterns.
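
    The two visualization steps described above, hierarchical clustering of annual patterns and an MDS map built from pairwise dissimilarities, can be sketched as follows. The distance matrix here is random; in the study it is derived from mutual information between annual burnt-area sequences.

      # Hierarchical clustering and MDS from a pairwise distance matrix (toy data).
      import numpy as np
      from scipy.cluster.hierarchy import linkage, dendrogram
      from scipy.spatial.distance import squareform
      from sklearn.manifold import MDS

      years = [str(y) for y in range(1980, 2013)]
      rng = np.random.default_rng(6)
      d = rng.random((len(years), len(years)))
      dist = (d + d.T) / 2                     # symmetric placeholder distances
      np.fill_diagonal(dist, 0.0)

      # Visualization tree (dendrogram) over the years.
      tree = linkage(squareform(dist), method="average")
      print(dendrogram(tree, labels=years, no_plot=True)["ivl"][:5])   # leaf order

      # 2-D MDS map: years perceived as similar end up close together.
      coords = MDS(n_components=2, dissimilarity="precomputed",
                   random_state=0).fit_transform(dist)
      print(coords.shape)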

  9. Distractor Dwelling, Skipping, and Revisiting Determine Target Absent Performance in Difficult Visual Search

    PubMed Central

    Horstmann, Gernot; Herwig, Arvid; Becker, Stefanie I.

    2016-01-01

    Some targets in visual search are more difficult to find than others. In particular, a target that is similar to the distractors is more difficult to find than a target that is dissimilar to the distractors. Efficiency differences between easy and difficult searches are manifest not only in target-present trials but also in target-absent trials. In fact, even physically identical displays are searched through with different efficiency depending on the searched-for target. Here, we monitored eye movements in search for a target similar to the distractors (difficult search) versus a target dissimilar to the distractors (easy search). We aimed to examine three hypotheses concerning the causes of differential search efficiencies in target-absent trials: (a) distractor dwelling, (b) distractor skipping, and (c) distractor revisiting. Reaction times increased with target similarity, which is consistent with existing theories and replicates earlier results. Eye movement data indicated guidance in target trials, even though search was very slow. Dwelling, skipping, and revisiting contributed to low search efficiency in difficult search, with dwelling being the strongest factor. It is argued that differences in dwell time account for a large amount of total search time differences. PMID:27574510

  10. Distractor Dwelling, Skipping, and Revisiting Determine Target Absent Performance in Difficult Visual Search.

    PubMed

    Horstmann, Gernot; Herwig, Arvid; Becker, Stefanie I

    2016-01-01

    Some targets in visual search are more difficult to find than others. In particular, a target that is similar to the distractors is more difficult to find than a target that is dissimilar to the distractors. Efficiency differences between easy and difficult searches are manifest not only in target-present trials but also in target-absent trials. In fact, even physically identical displays are searched through with different efficiency depending on the searched-for target. Here, we monitored eye movements in search for a target similar to the distractors (difficult search) versus a target dissimilar to the distractors (easy search). We aimed to examine three hypotheses concerning the causes of differential search efficiencies in target-absent trials: (a) distractor dwelling, (b) distractor skipping, and (c) distractor revisiting. Reaction times increased with target similarity, which is consistent with existing theories and replicates earlier results. Eye movement data indicated guidance in target trials, even though search was very slow. Dwelling, skipping, and revisiting contributed to low search efficiency in difficult search, with dwelling being the strongest factor. It is argued that differences in dwell time account for a large amount of total search time differences.
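
    The three factors above (dwelling, skipping, revisiting) can be tallied once each fixation has been assigned to a display item. The sketch below assumes such an item-labelled fixation sequence; the data format is an assumption, not the authors' processing pipeline.

      # Tally dwell time, skipped items, and revisited items from labelled fixations.
      from collections import Counter

      def dwell_skip_revisit(fixated_items, durations, all_items):
          dwell, visits = Counter(), Counter()
          prev = object()
          for item, dur in zip(fixated_items, durations):
              if item is not None:
                  dwell[item] += dur
                  if item != prev:          # a new visit begins when gaze arrives
                      visits[item] += 1
              prev = item
          skipped = [i for i in all_items if visits[i] == 0]
          revisited = [i for i in all_items if visits[i] > 1]
          return dict(dwell), skipped, revisited

      items = ["d1", "d2", "d3", "d4"]
      seq = ["d1", "d1", "d3", "d2", "d3"]      # d4 is skipped, d3 is revisited
      durs = [180, 220, 250, 200, 240]          # fixation durations in ms
      print(dwell_skip_revisit(seq, durs, items))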

  11. Gene prediction by pattern recognition and homology search

    SciTech Connect

    Xu, Y.; Uberbacher, E.C.

    1996-05-01

    This paper presents an algorithm for combining pattern recognition-based exon prediction and database homology search in gene model construction. The goal is to use homologous genes or partial genes existing in the database as reference models while constructing (multiple) gene models from exon candidates predicted by pattern recognition methods. A unified framework for gene modeling is used for genes ranging from situations with strong homology to no homology in the database. To maximally use the homology information available, the algorithm applies homology on three levels: (1) exon candidate evaluation, (2) gene-segment construction with a reference model, and (3) (complete) gene modeling. Preliminary testing has been done on the algorithm. Test results show that (a) perfect gene modeling can be expected when the initial exon predictions are reasonably good and a strong homology exists in the database; (b) homology (not necessarily strong) in general helps improve the accuracy of gene modeling; (c) multiple gene modeling becomes feasible when homology exists in the database for the involved genes.
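
    The gene-model construction step can be thought of as choosing a best-scoring set of compatible (non-overlapping, ordered) exon candidates, where each candidate's score combines its pattern-recognition score with any homology bonus. The sketch below shows a textbook weighted-interval dynamic program for that selection step only; it is an illustration of the idea, not the algorithm described in the paper.

      # Pick a maximum-score set of non-overlapping exon candidates (toy example).
      def best_gene_model(candidates):
          """candidates: list of (start, end, combined_score) exon candidates."""
          cands = sorted(candidates, key=lambda c: c[1])        # sort by end position
          best = [0.0] * (len(cands) + 1)
          choice = [None] * (len(cands) + 1)
          for i, (start, end, score) in enumerate(cands, start=1):
              # Index of the last earlier candidate ending before this one starts.
              j = max((k for k in range(i - 1, 0, -1) if cands[k - 1][1] < start),
                      default=0)
              if best[j] + score > best[i - 1]:
                  best[i], choice[i] = best[j] + score, (j, i)
              else:
                  best[i], choice[i] = best[i - 1], None
          model, i = [], len(cands)
          while i > 0:                                          # trace back the choices
              if choice[i] is None:
                  i -= 1
              else:
                  j, k = choice[i]
                  model.append(cands[k - 1])
                  i = j
          return best[-1], list(reversed(model))

      exons = [(100, 250, 3.0), (200, 400, 2.5), (420, 600, 4.0), (580, 700, 1.0)]
      print(best_gene_model(exons))   # -> (7.0, [(100, 250, 3.0), (420, 600, 4.0)])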

  12. Supplementary eye field during visual search: Salience, cognitive control, and performance monitoring

    PubMed Central

    Purcell, Braden A.; Weigand, Pauline K.; Schall, Jeffrey D.

    2012-01-01

    How supplementary eye field (SEF) contributes to visual search is unknown. Inputs from cortical and subcortical structures known to represent visual salience suggest that SEF may serve as an additional node in this network. This hypothesis was tested by recording action potentials and local field potentials (LFP) in two monkeys performing an efficient pop-out visual search task. Target selection modulation, tuning width, and response magnitude of spikes and LFP in SEF were compared with those in frontal eye field. Surprisingly, only ~2% of SEF neurons and ~8% of SEF LFP sites selected the location of the search target. The absence of salience in SEF may be due to an absence of appropriate visual afferents, which suggests that these inputs are a necessary anatomical feature of areas representing salience. We also tested whether SEF contributes to overcoming the automatic tendency to respond to a primed color when the target identity switches during priming of pop-out. Very few SEF neurons or LFP sites modulated in association with performance deficits following target switches. However, a subset of SEF neurons and LFP exhibited strong modulation following erroneous saccades to a distractor. Altogether, these results suggest that SEF plays a limited role in controlling ongoing visual search behavior, but may play a larger role in monitoring search performance. PMID:22836261

  13. Mapping the Color Space of Saccadic Selectivity in Visual Search

    ERIC Educational Resources Information Center

    Xu, Yun; Higgins, Emily C.; Xiao, Mei; Pomplun, Marc

    2007-01-01

    Color coding is used to guide attention in computer displays for such critical tasks as baggage screening or air traffic control. It has been shown that a display object attracts more attention if its color is more similar to the color for which one is searching. However, what does "similar" precisely mean? Can we predict the amount of attention…

  14. Searching the Visual Arts: An Analysis of Online Information Access.

    ERIC Educational Resources Information Center

    Brady, Darlene; Serban, William

    1981-01-01

    A search for stained glass bibliographic information using DIALINDEX identified 57 DIALOG files from a variety of subject categories and 646 citations as relevant. Files include applied science, biological sciences, chemistry, engineering, environment/pollution, people, business research, and public affairs. Eleven figures illustrate the search…

  15. Visualizing Document Classification: A Search Aid for the Digital Library.

    ERIC Educational Resources Information Center

    Lieu, Yew-Huey; Dantzig, Paul; Sachs, Martin; Corey, James T.; Hinnebusch, Mark T.; Damashek, Marc; Cohen, Jonathan

    2000-01-01

    Discusses access to digital libraries on the World Wide Web via Web browsers and describes the design of a language-independent document classification system to help users of the Florida Center for Library Automation analyze search query results. Highlights include similarity scores, clustering, graphical representation of document similarity,…

  16. The involvement of central attention in visual search is determined by task demands.

    PubMed

    Han, Suk Won

    2017-04-01

    Attention, the mechanism by which a subset of sensory inputs is prioritized over others, operates at multiple processing stages. Specifically, attention enhances weak sensory signals at the perceptual stage, while it serves to select appropriate responses or consolidate sensory representations into short-term memory at the central stage. This study investigated the independence and interaction between perceptual and central attention. To do so, I used a dual-task paradigm, pairing a four-alternative choice task with a visual search task. The results showed that central attention for response selection was engaged in perceptual processing for visual search when the number of search items increased, thereby increasing the demand for serial allocation of focal attention. By contrast, central attention and perceptual attention remained independent as long as the demand for serial shifting of focal attention remained constant; decreasing stimulus contrast or increasing the set size of a parallel search did not evoke the involvement of central attention in visual search. These results suggest that the nature of the concurrent visual search process plays a crucial role in the functional interaction between two different types of attention.

  17. Searching for Meaning: Visual Culture from an Anthropological Perspective

    ERIC Educational Resources Information Center

    Stokrocki, Mary

    2006-01-01

    In this article, the author discusses the importance of Viktor Lowenfeld's influence on her research, describes visual anthropology, gives examples of her research, and examines the implications of this type of research for teachers. The author regards Lowenfeld's (1952/1939) early work with children in Austria as a form of participant observation…

  18. The Mechanisms Underlying the ASD Advantage in Visual Search

    ERIC Educational Resources Information Center

    Kaldy, Zsuzsa; Giserman, Ivy; Carter, Alice S.; Blaser, Erik

    2016-01-01

    A number of studies have demonstrated that individuals with autism spectrum disorders (ASDs) are faster or more successful than typically developing control participants at various visual-attentional tasks (for reviews, see Dakin and Frith in "Neuron" 48:497-507, 2005; Simmons et al. in "Vis Res" 49:2705-2739, 2009). This…

  19. Dual-Target Cost in Visual Search for Multiple Unfamiliar Faces.

    PubMed

    Mestry, Natalie; Menneer, Tamaryn; Cave, Kyle R; Godwin, Hayward J; Donnelly, Nick

    2017-04-03

    The efficiency of visual search for one (single-target) and either of two (dual-target) unfamiliar faces was explored to understand the manifestations of capacity and guidance limitations in face search. The visual similarity of distractor faces to target faces was manipulated using morphing (Experiments 1 and 2) and multidimensional scaling (Experiment 3). A dual-target cost was found in all experiments, evidenced by slower and less accurate search in dual- than single-target conditions. The dual-target cost was unequal across the targets, with performance being maintained on one target and reduced on the other, which we label "preferred" and "non-preferred" respectively. We calculated the capacity for each target face and show reduced capacity for representing the non-preferred target face. However, results show that the capacity for the non-preferred target can be increased when the dual-target condition is conducted after participants complete the single-target conditions. Analyses of eye movements revealed evidence for weak guidance of fixations in single-target search, and when searching for the preferred target in dual-target search. Overall, the experiments show dual-target search for faces is capacity- and guidance-limited, leading to superior search for 1 face over the other in dual-target search. However, learning faces individually may improve capacity with the second face.

  20. Target-present guessing as a function of target prevalence and accumulated information in visual search.

    PubMed

    Peltier, Chad; Becker, Mark W

    2017-02-09

    Target prevalence influences visual search behavior. At low target prevalence, miss rates are high and false alarms are low, while the opposite is true at high prevalence. Several models of search aim to describe search behavior, one of which has been specifically intended to model search at varying prevalence levels. The multiple decision model (Wolfe & Van Wert, Current Biology, 20(2), 121-124, 2010) posits that all searches that end before the observer detects a target result in a target-absent response. However, researchers have found very high false alarms in high-prevalence searches, suggesting that prevalence rates may be used as a source of information to make "educated guesses" after search termination. Here, we further examine how prevalence level and knowledge gained during visual search influence guessing rates. We manipulate target prevalence and the amount of information that an observer accumulates about a search display prior to making a response to test whether these sources of evidence are used to inform target-present guess rates. We find that observers use both information about target prevalence rates and information about the proportion of the array inspected prior to making a response, allowing them to make an informed and statistically driven guess about the target's presence.
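
    The "educated guess" logic above has a simple probabilistic reading: if a target occurs with prevalence p and a fraction f of the display has been inspected without finding it (assuming anything inspected would have been detected), the probability that a target is nonetheless present is p(1 - f) / (p(1 - f) + 1 - p). The sketch below tabulates this toy model; it illustrates the abstract's reasoning rather than the authors' fitted model.

      # Toy model of prevalence- and evidence-informed target-present guessing.
      def p_present_given_no_find(prevalence, fraction_inspected):
          p, f = prevalence, fraction_inspected
          return p * (1 - f) / (p * (1 - f) + (1 - p))

      for p in (0.1, 0.5, 0.9):            # low, medium, high target prevalence
          for f in (0.25, 0.75):           # little vs. much of the array inspected
              print(f"prevalence={p:.1f}, inspected={f:.2f} -> "
                    f"P(present)={p_present_given_no_find(p, f):.2f}")
      # High prevalence combined with little accumulated information leaves the
      # probability high, which is when target-present guesses should be most likely.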

  1. Visual pattern recognition network: its training algorithm and its optoelectronic architecture

    NASA Astrophysics Data System (ADS)

    Wang, Ning; Liu, Liren

    1996-07-01

    A visual pattern recognition network and its training algorithm are proposed. The network is constructed of a one-layer morphology network and a two-layer modified Hamming net. This visual network can implement invariant pattern recognition with respect to image translation and size projection. After supervised learning takes place, the visual network extracts image features and classifies patterns much the same as living beings do. Moreover, we set up its optoelectronic architecture for real-time pattern recognition.

  2. Effects of display curvature, display zone, and task duration on legibility and visual fatigue during visual search task.

    PubMed

    Park, Sungryul; Choi, Donghee; Yi, Jihhyeon; Lee, Songil; Lee, Ja Eun; Choi, Byeonghwa; Lee, Seungbae; Kyung, Gyouhyung

    2017-04-01

    This study examined the effects of display curvature (400, 600, 1200 mm, and flat), display zone (5 zones), and task duration (15 and 30 min) on legibility and visual fatigue. Each participant completed two 15-min visual search task sets at each curvature setting. The 600-mm and 1200-mm settings yielded better results than the flat setting in terms of legibility and perceived visual fatigue. Relative to the corresponding centre zone, the outermost zones of the 1200-mm and flat settings showed a decrease of 8%-37% in legibility, whereas those of the flat setting showed an increase of 26%-45% in perceived visual fatigue. Across curvatures, legibility decreased by 2%-8%, whereas perceived visual fatigue increased by 22% during the second task set. The two task sets induced an increase of 102% in the eye complaint score and a decrease of 0.3 Hz in the critical fusion frequency, both of which indicated an increase in visual fatigue. In summary, a curvature of around 600 mm, central display zones, and frequent breaks are recommended to improve legibility and reduce visual fatigue.

  3. Color singleton pop-out does not always poop out: an alternative to visual search.

    PubMed

    Prinzmetal, William; Taylor, Nadia

    2006-08-01

    Folk psychology suggests that when an observer views a scene, a unique item will stand out and draw attention to itself. This belief stands in contrast to numerous studies in visual search that have found that a unique target item (e.g., a unique color) is not identified more quickly than a nonunique target. We hypothesized that this finding is the result of task demands of visual search, and that when the task does not involve visual search, uniqueness will pop out. We tested this hypothesis in a task in which observers were presented an array of letters and asked to respond aloud, as quickly as possible, with the identity of any one of the letters. The observers were significantly more likely to respond with a uniquely colored letter than would be expected by chance. In a task in which observers blurt out the first thing that they see, unique pop-out does not poop out.

  4. Conflicting effects of context in change detection and visual search: A dual process account.

    PubMed

    LaPointe, Mitchell R P; Milliken, Bruce

    2017-03-01

    Congruent contexts often facilitate performance in visual search and categorisation tasks using natural scenes. A congruent context is thought to contain predictive information about the types of objects likely to be encountered, as well as their location. However, in change detection tasks, changes embedded in congruent contexts often produce impaired performance relative to incongruent contexts. Using a stimulus set controlled for object perceptual salience, we compare performance across change detection and visual search tasks, as well as a hybrid of these 2 tasks. The results support a dual process account with opposing influences of context congruency on change detection and object identification processes, which contribute differentially to performance in visual search and change detection tasks.

  5. Putamen Activation Represents an Intrinsic Positive Prediction Error Signal for Visual Search in Repeated Configurations.

    PubMed

    Sommer, Susanne; Pollmann, Stefan

    2016-01-01

    We investigated fMRI responses to visual search targets appearing at locations that were predicted by the search context. Based on previous work in visual category learning we expected an intrinsic reward prediction error signal in the putamen whenever the target appeared at a location that was predicted with some degree of uncertainty. Comparing target appearance at locations predicted with 50% probability to either locations predicted with 100% probability or unpredicted locations, increased activation was observed in left posterior putamen and adjacent left posterior insula. Thus, our hypothesis of an intrinsic prediction error-like signal was confirmed. This extends the observation of intrinsic prediction error-like signals, driven by intrinsic rather than extrinsic reward, to memory-driven visual search.

  6. Putamen Activation Represents an Intrinsic Positive Prediction Error Signal for Visual Search in Repeated Configurations

    PubMed Central

    Sommer, Susanne; Pollmann, Stefan

    2016-01-01

    We investigated fMRI responses to visual search targets appearing at locations that were predicted by the search context. Based on previous work in visual category learning we expected an intrinsic reward prediction error signal in the putamen whenever the target appeared at a location that was predicted with some degree of uncertainty. Comparing target appearance at locations predicted with 50% probability to either locations predicted with 100% probability or unpredicted locations, increased activation was observed in left posterior putamen and adjacent left posterior insula. Thus, our hypothesis of an intrinsic prediction error-like signal was confirmed. This extends the observation of intrinsic prediction error-like signals, driven by intrinsic rather than extrinsic reward, to memory-driven visual search. PMID:27867436

  7. The Effect of Stress on Crossmodal Interference During Visual Search

    DTIC Science & Technology

    2006-11-01

    ...1997; Rees, Frith, & Lavie, 2001), few have examined the effects of stress on crossmodal attention. The purpose of the present experiment was to ... of both the high and low perceptual load. Conversely, if induced stress results in a narrowing of attention, then it is hypothesized that ... distracting visual information will not interfere under conditions of high or low load. At first blush, one might assume that a broadening of attention may...

  8. The Visual Hemifield Asymmetry in the Spatial Blink during Singleton Search and Feature Search

    ERIC Educational Resources Information Center

    Burnham, Bryan R.; Rozell, Cassandra A.; Kasper, Alex; Bianco, Nicole E.; Delliturri, Antony

    2011-01-01

    The present study examined a visual field asymmetry in the contingent capture of attention that was previously observed by Du and Abrams (2010). In our first experiment, color singleton distractors that matched the color of a to-be-detected target produced a stronger capture of attention when they appeared in the left visual hemifield than in the…

  9. Visual Servoing: A technology in search of an application

    SciTech Connect

    Feddema, J.T.

    1994-05-01

    Considerable research has been performed on Robotic Visual Servoing (RVS) over the past decade. Using real-time visual feedback, researchers have demonstrated that robotic systems can pick up moving parts, insert bolts, apply sealant, and guide vehicles. With the rapid improvements being made in computing and image processing hardware, one would expect that every robot manufacturer would have an RVS option by the end of the 1990s. So why aren't the Fanucs, ABBs, Adepts, and Motomans of the world investing heavily in RVS? I would suggest four reasons: cost, complexity, reliability, and lack of demand. Solutions to the first three are approaching the point where RVS could be commercially available; however, the lack of demand is keeping RVS from becoming a reality in the near future. A new set of applications is needed to focus near-term RVS development. These must be applications which currently do not have solutions. Once developed and working in one application area, the technology is more likely to quickly spread to other areas. DOE has several applications that are looking for technological solutions, such as agile weapons production, weapons disassembly, decontamination and dismantlement of nuclear facilities, and hazardous waste remediation. This paper will examine a few of these areas and suggest directions for application-driven visual servoing research.

  10. Temporal and peripheral extraction of contextual cues from scenes during visual search.

    PubMed

    Koehler, Kathryn; Eckstein, Miguel P

    2017-02-01

    Scene context is known to facilitate object recognition and guide visual search, but little work has focused on isolating image-based cues and evaluating their contributions to eye movement guidance and search performance. Here, we explore three types of contextual cues (a co-occurring object, the configuration of other objects, and the superordinate category of background elements) and assess their joint contributions to search performance in the framework of cue-combination and the temporal unfolding of their extraction. We also assess whether observers' ability to extract each contextual cue in the visual periphery is a bottleneck that determines the utilization and contribution of each cue to search guidance and decision accuracy. We find that during the first four fixations of a visual search task observers first utilize the configuration of objects for coarse eye movement guidance and later use co-occurring object information for finer guidance. In the absence of contextual cues, observers were suboptimally biased to report the target object as being absent. The presence of the co-occurring object was the only contextual cue that had a significant effect in reducing decision bias. The early influence of object-based cues on eye movements is corroborated by a clear demonstration of observers' ability to extract object cues up to 16° into the visual periphery. The joint contributions of the cues to decision search accuracy approximates that expected from the combination of statistically independent cues and optimal cue combination. Finally, the lack of utilization and contribution of the background-based contextual cue to search guidance cannot be explained by the availability of the contextual cue in the visual periphery; instead it is related to background cues providing the least inherent information about the precise location of the target in the scene.
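
    The comparison with "the combination of statistically independent cues" has a standard form: under independence, the log-likelihood ratios contributed by each cue simply add. The sketch below shows that combination rule with made-up evidence values for the three contextual cues; it is a generic illustration, not the paper's fitted model.

      # Optimal combination of independent cues via summed log-likelihood ratios.
      import numpy as np

      def combine_independent_cues(log_likelihood_ratios, prior_present=0.5):
          """Each entry: log[P(cue | target present) / P(cue | target absent)]."""
          log_odds = (np.log(prior_present / (1 - prior_present))
                      + np.sum(log_likelihood_ratios))
          return 1 / (1 + np.exp(-log_odds))       # posterior P(target present)

      # Hypothetical evidence: co-occurring object (strong), object configuration
      # (moderate), background category (weak).
      print(round(combine_independent_cues([1.2, 0.5, 0.1]), 3))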

  11. Looking and listening: A comparison of intertrial repetition effects in visual and auditory search tasks.

    PubMed

    Klein, Michael D; Stolz, Jennifer A

    2015-08-01

    Previous research shows that performance on pop-out search tasks is facilitated when the target and distractors repeat across trials compared to when they switch. This phenomenon has been shown for many different types of visual stimuli. We tested whether the effect would extend beyond visual stimuli to the auditory modality. Using a temporal search task that has previously been shown to elicit priming of pop-out with visual stimuli (Yashar & Lamy, Psychological Science, 21(2), 243-251, 2010), we showed that priming of pop-out does occur with auditory stimuli and has characteristics similar to those of an analogous visual task. These results suggest that either the same or similar mechanisms might underlie priming of pop-out in both modalities.

  12. Memory and visual search in naturalistic 2D and 3D environments

    PubMed Central

    Li, Chia-Ling; Aivar, M. Pilar; Kit, Dmitry M.; Tong, Matthew H.; Hayhoe, Mary M.

    2016-01-01

    The role of memory in guiding attention allocation in daily behaviors is not well understood. In experiments with two-dimensional (2D) images, there is mixed evidence about the importance of memory. Because the stimulus context in laboratory experiments and daily behaviors differs extensively, we investigated the role of memory in visual search, in both two-dimensional (2D) and three-dimensional (3D) environments. A 3D immersive virtual apartment composed of two rooms was created, and a parallel 2D visual search experiment composed of snapshots from the 3D environment was developed. Eye movements were tracked in both experiments. Repeated searches for geometric objects were performed to assess the role of spatial memory. Subsequently, subjects searched for realistic context objects to test for incidental learning. Our results show that subjects learned the room-target associations in 3D but less so in 2D. Gaze was increasingly restricted to relevant regions of the room with experience in both settings. Search for local contextual objects, however, was not facilitated by early experience. Incidental fixations to context objects do not necessarily benefit search performance. Together, these results demonstrate that memory for global aspects of the environment guides search by restricting allocation of attention to likely regions, whereas task relevance determines what is learned from the active search experience. Behaviors in 2D and 3D environments are comparable, although there is greater use of memory in 3D. PMID:27299769

  13. Faster than the speed of rejection: Object identification processes during visual search for multiple targets

    PubMed Central

    Godwin, Hayward J.; Walenchok, Stephen C.; Houpt, Joseph W.; Hout, Michael C.; Goldinger, Stephen D.

    2015-01-01

    When engaged in a visual search for two targets, participants are slower and less accurate in their responses, relative to their performance when searching for singular targets. Previous work on this “dual-target cost” has primarily focused on the breakdown of attention guidance when looking for two items. Here, we investigated how object identification processes are affected by dual-target search. Our goal was to chart the speed at which distractors could be rejected, in order to assess whether dual-target search impairs object identification. To do so, we examined the capacity coefficient, which measures the speed at which decisions can be made, and provides a baseline of parallel performance against which to compare. We found that participants could search at or above this baseline, suggesting that dual-target search does not impair object identification abilities. We also found substantial differences in performance when participants were asked to search for simple versus complex images. Somewhat paradoxically, participants were able to reject complex images more rapidly than simple images. We suggest that this reflects the greater number of features that can be used to identify complex images, a finding that has important consequences for understanding object identification in visual search more generally. PMID:25938253

  14. Disturbance of visual search by stimulating to posterior parietal cortex in the brain using transcranial magnetic stimulation

    NASA Astrophysics Data System (ADS)

    Iramina, Keiji; Ge, Sheng; Hyodo, Akira; Hayami, Takehito; Ueno, Shoogo

    2009-04-01

    In this study, we applied transcranial magnetic stimulation (TMS) to investigate the temporal aspects of the functional processing of visual attention. Although the right posterior parietal cortex (PPC) is known to play a role in certain visual search tasks, little is known about the timing of this area's involvement. Three visual search tasks of differing difficulty were carried out: the "easy feature task," the "hard feature task," and the "conjunction task." To investigate the temporal involvement of the PPC in visual search, we applied various stimulus onset asynchronies (SOAs) and measured visual search reaction times. Magnetic stimulation was applied to the right PPC or the left PPC with a figure-eight coil. The results show that reaction times in the hard feature task are longer than those in the easy feature task. At SOA = 150 ms, target-present reaction times increased significantly when TMS pulses were applied, compared with the no-TMS condition. We infer that the right PPC is involved in visual search at about 150 ms after visual stimulus presentation: magnetic stimulation of the right PPC disturbed visual search processing, whereas stimulation of the left PPC had no effect.

  15. Active sensing in the categorization of visual patterns

    PubMed Central

    Yang, Scott Cheng-Hsin; Lengyel, Máté; Wolpert, Daniel M

    2016-01-01

    Interpreting visual scenes typically requires us to accumulate information from multiple locations in a scene. Using a novel gaze-contingent paradigm in a visual categorization task, we show that participants' scan paths follow an active sensing strategy that incorporates information already acquired about the scene and knowledge of the statistical structure of patterns. Intriguingly, categorization performance was markedly improved when locations were revealed to participants by an optimal Bayesian active sensor algorithm. By using a combination of a Bayesian ideal observer and the active sensor algorithm, we estimate that a major portion of this apparent suboptimality of fixation locations arises from prior biases, perceptual noise and inaccuracies in eye movements, and that the central process of selecting fixation locations is around 70% efficient in our task. Our results suggest that participants select eye movements with the goal of maximizing information about abstract categories that require the integration of information from multiple locations. DOI: http://dx.doi.org/10.7554/eLife.12215.001 PMID:26880546
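
    To make the idea of a Bayesian active sensor concrete, the following sketch selects each fixation to maximize the expected reduction in uncertainty about a binary pattern category. It is not the authors' algorithm; the locations, likelihoods, and observation model are invented for illustration.

      # Illustrative sketch (not the paper's implementation): a Bayesian active
      # sensor that picks the next fixation maximizing expected information gain
      # about a binary pattern category. All values are invented.
      import numpy as np

      rng = np.random.default_rng(0)
      n_locations = 25
      # Hypothetical per-location probability that a revealed patch looks "bright"
      # under each of two pattern categories; more informative toward higher indices.
      p_bright = {"A": np.linspace(0.55, 0.90, n_locations),
                  "B": np.linspace(0.45, 0.10, n_locations)}

      def entropy(p):
          p = np.clip(p, 1e-12, 1 - 1e-12)
          return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

      def expected_gain(post_a, loc):
          """Expected reduction in category entropy from fixating location `loc`."""
          h_now = entropy(post_a)
          gain = 0.0
          for bright in (True, False):
              la = p_bright["A"][loc] if bright else 1 - p_bright["A"][loc]
              lb = p_bright["B"][loc] if bright else 1 - p_bright["B"][loc]
              p_obs = post_a * la + (1 - post_a) * lb
              gain += p_obs * (h_now - entropy(post_a * la / p_obs))
          return gain

      post_a, true_category, visited = 0.5, "A", set()
      for fixation in range(5):
          gains = [(-np.inf if loc in visited else expected_gain(post_a, loc))
                   for loc in range(n_locations)]
          loc = int(np.argmax(gains))            # fixate the most informative location
          visited.add(loc)
          bright = rng.random() < p_bright[true_category][loc]
          la = p_bright["A"][loc] if bright else 1 - p_bright["A"][loc]
          lb = p_bright["B"][loc] if bright else 1 - p_bright["B"][loc]
          post_a = post_a * la / (post_a * la + (1 - post_a) * lb)
          print(f"fixation {fixation + 1}: location {loc}, P(category A) = {post_a:.2f}")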

  16. Self-Induced Attentional Blink: A Cause of Errors in Multiple-Target Visual Search

    DTIC Science & Technology

    2012-08-15

    found that an attentional blink can underlie SOS errors. Summary: Visual search, looking for a target amongst distractors, is key to everyday... Participants completed a visual search task for target "T" shapes amongst distractor "L" shapes on a white background. Targets were either of high salience (57...65% black) or low salience (22–45%) while the majority of distractors were low salience (Figure 1A). There were 25 items (1.3° × 1.3°) in each

  17. Examining perceptual and conceptual set biases in multiple-target visual search.

    PubMed

    Biggs, Adam T; Adamo, Stephen H; Dowd, Emma Wu; Mitroff, Stephen R

    2015-04-01

    Visual search is a common practice conducted countless times every day, and one important aspect of visual search is that multiple targets can appear in a single search array. For example, an X-ray image of airport luggage could contain both a water bottle and a gun. Searchers are more likely to miss additional targets after locating a first target in multiple-target searches, which presents a potential problem: If airport security officers were to find a water bottle, would they then be more likely to miss a gun? One hypothetical cause of multiple-target search errors is that searchers become biased to detect additional targets that are similar to a found target, and therefore become less likely to find additional targets that are dissimilar to the first target. This particular hypothesis has received theoretical, but little empirical, support. In the present study, we tested the bounds of this idea by utilizing "big data" obtained from the mobile application Airport Scanner. Multiple-target search errors were substantially reduced when the two targets were identical, suggesting that the first-found target did indeed create biases during subsequent search. Further analyses delineated the nature of the biases, revealing both a perceptual set bias (i.e., a bias to find additional targets with features similar to those of the first-found target) and a conceptual set bias (i.e., a bias to find additional targets with a conceptual relationship to the first-found target). These biases are discussed in terms of the implications for visual-search theories and applications for professional visual searchers.

  18. Ontology-Driven Search and Triage: Design of a Web-Based Visual Interface for MEDLINE

    PubMed Central

    2017-01-01

    Background Diverse users need to search health and medical literature to satisfy open-ended goals such as making evidence-based decisions and updating their knowledge. However, doing so is challenging due to at least two major difficulties: (1) articulating information needs using accurate vocabulary and (2) dealing with large document sets returned from searches. Common search interfaces such as PubMed do not provide adequate support for exploratory search tasks. Objective Our objective was to improve support for exploratory search tasks by combining two strategies in the design of an interactive visual interface by (1) using a formal ontology to help users build domain-specific knowledge and vocabulary and (2) providing multi-stage triaging support to help mitigate the information overload problem. Methods We developed a Web-based tool, Ontology-Driven Visual Search and Triage Interface for MEDLINE (OVERT-MED), to test our design ideas. We implemented a custom searchable index of MEDLINE, which comprises approximately 25 million document citations. We chose a popular biomedical ontology, the Human Phenotype Ontology (HPO), to test our solution to the vocabulary problem. We implemented multistage triaging support in OVERT-MED, with the aid of interactive visualization techniques, to help users deal with large document sets returned from searches. Results Formative evaluation suggests that the design features in OVERT-MED are helpful in addressing the two major difficulties described above. Using a formal ontology seems to help users articulate their information needs with more accurate vocabulary. In addition, multistage triaging combined with interactive visualizations shows promise in mitigating the information overload problem. Conclusions Our strategies appear to be valuable in addressing the two major problems in exploratory search. Although we tested OVERT-MED with a particular ontology and document collection, we anticipate that our strategies can be

  19. Visual Iconic Patterns of Instant Messaging: Steps Towards Understanding Visual Conversations

    NASA Astrophysics Data System (ADS)

    Bays, Hillary

    An Instant Messaging (IM) conversation is a dynamic communication register made up of text, images, animation and sound played out on a screen with potentially several parallel conversations and activities all within a physical environment. This article first examines how best to capture this unique gestalt using in situ recording techniques (video, screen capture, XML logs) which highlight the micro-phenomenal level of the exchange and the macro-social level of the interaction. Of particular interest are smileys, first as cultural artifacts in CMC in general and then as linguistic markers. A brief taxonomy of these markers is proposed in an attempt to clarify the frequency and patterns of their use. Then, focus is placed on their importance as perceptual cues which facilitate communication, while also serving as emotive and emphatic functional markers. We try to demonstrate that the use of smileys and animation is not arbitrary but an organized, structured interactional practice. Finally, we discuss how the study of visual markers in IM could inform the study of other visual conversation codes, such as sign languages, which also have co-produced, physical behavior, suggesting the possibility of a visual phonology.

  20. Transition of target-location signaling in activity of macaque lateral intraparietal neurons during delayed-response visual search.

    PubMed

    Nishida, Satoshi; Tanaka, Tomohiro; Ogawa, Tadashi

    2014-09-15

    Neurons in the lateral intraparietal area (LIP) are involved in signaling the location of behaviorally relevant objects during visual discrimination and working memory maintenance. Although previous studies have examined these cognitive processes separately, they often appear as inseparable sequential processes in real-life situations. Little is known about how the neural representation of the target location is altered when both cognitive processes are continuously required for executing a task. We investigated this issue by recording single-unit activity from LIP of monkeys performing a delayed-response visual search task in which they were required to discriminate the target from distractors in the stimulus period, remember the location at which the extinguished target had been presented in the delay period, and make a saccade to that location in the response period. Target-location signaling was assessed using response modulations contingent on whether the target location was inside or opposite the receptive field. Although the population-averaged response modulation was consistent and changed only slightly during a trial, the across-neuron pattern of response modulations showed a marked and abrupt change around 170 ms after stimulus offset due to concurrent changes in the response modulations of a subset of LIP neurons, which manifested heterogeneous patterns of activity changes during the task. Our findings suggest that target-location signaling by the across-neuron pattern of LIP activity discretely changes after the stimulus disappearance under conditions that continuously require visual discrimination and working memory to perform a single behavioral task.

  1. Reward association facilitates distractor suppression in human visual search.

    PubMed

    Gong, Mengyuan; Yang, Feitong; Li, Sheng

    2016-04-01

    Although valuable objects are attractive in nature, people often encounter situations where they would prefer to avoid such distraction while focusing on the task goal. Contrary to the typical effect of attentional capture by a reward-associated item, we provide evidence for a facilitation effect derived from the active suppression of a high reward-associated stimulus when cuing its identity as distractor before the display of search arrays. Selection of the target is shown to be significantly faster when the distractors were in high reward-associated colour than those in low reward-associated or non-rewarded colours. This behavioural reward effect was associated with two neural signatures before the onset of the search display: the increased frontal theta oscillation and the strengthened top-down modulation from frontal to anterior temporal regions. The former suggests an enhanced working memory representation for the reward-associated stimulus and the increased need for cognitive control to override Pavlovian bias, whereas the latter indicates that the boost of inhibitory control is realized through a frontal top-down mechanism. These results suggest a mechanism in which the enhanced working memory representation of a reward-associated feature is integrated with task demands to modify attentional priority during active distractor suppression and benefit behavioural performance.

  2. Emotional priming of pop-out in visual search.

    PubMed

    Lamy, Dominique; Amunts, Liana; Bar-Haim, Yair

    2008-04-01

    When searching for a discrepant target along a simple dimension such as color or shape, repetition of the target feature substantially speeds search, an effect known as feature priming of pop-out (V. Maljkovic and K. Nakayama, 1994). The authors present the first report of emotional priming of pop-out. Participants had to detect the face displaying a discrepant expression of emotion in an array of four face photographs. On each trial, the target when present was either a neutral face among emotional faces (angry in Experiment 1 or happy in Experiment 2), or an emotional face among neutral faces. Target detection was faster when the target displayed the same emotion on successive trials. This effect occurred for angry and for happy faces, not for neutral faces. It was completely abolished when faces were inverted instead of upright, suggesting that emotional categories rather than physical feature properties drive emotional priming of pop-out. The implications of the present findings for theoretical accounts of intertrial priming and for the face-in-the-crowd phenomenon are discussed.

  3. How do magnitude and frequency of monetary reward guide visual search?

    PubMed

    Won, Bo-Yeong; Leber, Andrew B

    2016-07-01

    How does reward guide spatial attention during visual search? In the present study, we examine whether and how two types of reward information-magnitude and frequency-guide search behavior. Observers were asked to find a target among distractors in a search display to earn points. We manipulated multiple levels of value across the search display quadrants in two ways: For reward magnitude, targets appeared equally often in each quadrant, and the value of each quadrant was determined by the average points earned per target; for reward frequency, we varied how often the target appeared in each quadrant but held the average points earned per target constant across the quadrants. In Experiment 1, we found that observers were highly sensitive to the reward frequency information, and prioritized their search accordingly, whereas we did not find much prioritization based on magnitude information. In Experiment 2, we found that magnitude information for a nonspatial feature (color) could bias search performance, showing that the relative insensitivity to magnitude information during visual search is not generalized across all types of information. In Experiment 3, we replicated the negligible use of spatial magnitude information even when we used limited-exposure displays to incentivize the expression of learning. In Experiment 4, we found participants used the spatial magnitude information during a modified choice task-but again not during search. Taken together, these findings suggest that the visual search apparatus does not equally exploit all potential sources of spatial value information; instead, it favors spatial reward frequency information over spatial reward magnitude information.

  4. Failures of perception in the low-prevalence effect: Evidence from active and passive visual search.

    PubMed

    Hout, Michael C; Walenchok, Stephen C; Goldinger, Stephen D; Wolfe, Jeremy M

    2015-08-01

    In visual search, rare targets are missed disproportionately often. This low-prevalence effect (LPE) is a robust problem with demonstrable societal consequences. What is the source of the LPE? Is it a perceptual bias against rare targets or a later process, such as premature search termination or motor response errors? In 4 experiments, we examined the LPE using standard visual search (with eye tracking) and 2 variants of rapid serial visual presentation (RSVP) in which observers made present/absent decisions after sequences ended. In all experiments, observers looked for 2 target categories (teddy bear and butterfly) simultaneously. To minimize simple motor errors, caused by repetitive absent responses, we held overall target prevalence at 50%, with 1 low-prevalence and 1 high-prevalence target type. Across conditions, observers either searched for targets among other real-world objects or searched for specific bears or butterflies among within-category distractors. We report 4 main results: (a) In standard search, high-prevalence targets were found more quickly and accurately than low-prevalence targets. (b) The LPE persisted in RSVP search, even though observers never terminated search on their own. (c) Eye-tracking analyses showed that high-prevalence targets elicited better attentional guidance and faster perceptual decisions. And (d) even when observers looked directly at low-prevalence targets, they often (12%-34% of trials) failed to detect them. These results strongly argue that low-prevalence misses represent failures of perception when early search termination or motor errors are controlled.

  5. Transcranial magnetic stimulation reveals attentional feedback to area V1 during serial visual search.

    PubMed

    Dugué, Laura; Marque, Philippe; VanRullen, Rufin

    2011-01-01

    Visual search tasks have been used to understand how, where and when attention influences visual processing. Current theories suggest the involvement of a high-level "saliency map" that selects a candidate location to focus attentional resources. For a parallel (or "pop-out") task, the first chosen location is systematically the target, but for a serial (or "difficult") task, the system may cycle on a few distractors before finally focusing on the target. This implies that attentional effects upon early visual areas, involving feedback from higher areas, should be visible at longer latencies during serial search. A previous study from Juan & Walsh (2003) had used Transcranial Magnetic Stimulation (TMS) to support this conclusion; however, only a few post-stimulus delays were compared, and no control TMS location was used. Here we applied TMS double-pulses (sub-threshold) to induce a transient inhibition of area V1 at every post-stimulus delay between 100 ms and 500 ms (50 ms steps). The search array was presented either at the location affected by the TMS pulses (previously identified by applying several pulses at supra-threshold intensity to induce phosphene perception), or in the opposite hemifield, which served as a retinotopically-defined control location. Two search tasks were used: a parallel (+ among Ls) and a serial one (T among Ls). TMS specifically impaired the serial, but not the parallel search. We highlight an involvement of V1 in serial search 300 ms after the onset; conversely, V1 did not contribute to parallel search at delays beyond 100 ms. This study supports the idea that serial search differs from parallel search by the presence of additional cycles of a select-and-focus iterative loop between V1 and higher-level areas.

  6. Neural mechanisms of surround attenuation and distractor competition in visual search.

    PubMed

    Boehler, Carsten N; Tsotsos, John K; Schoenfeld, Mircea A; Heinze, Hans-Jochen; Hopf, Jens-Max

    2011-04-06

    Visual attention biases processing in the visual system by amplifying relevant or attenuating irrelevant sensory input. A potential signature of the latter operation, referred to as surround attenuation, has recently been identified in the electromagnetic brain response of human observers performing visual search. It was found that a zone of attenuated cortical excitability surrounds the target when the search required increased spatial resolution for item discrimination. Here we address the obvious hypothesis that surround attenuation serves distractor suppression in the vicinity of the target where interference from irrelevant search items is maximal. To test this hypothesis, surround attenuation was assessed under conditions when the target was presented in isolation versus when it was surrounded by distractors. Surprisingly, substantial and indistinguishable surround attenuation was seen under both conditions, indicating that it reflects an attentional operation independent of the presence of distractors. Adding distractors in the target's surround, however, increased the amplitude of the N2pc, an evoked response known to index distractor competition in visual search. Moreover, adding distractors led to a topographical change of source activity underlying the N2pc toward earlier extrastriate areas. In contrast, the topography of reduced source activity due to surround attenuation remained unaltered with and without distractors in the target's surround. We conclude that surround attenuation is not a direct consequence of the attenuation of distractors in visual search and that it dissociates from attentional operations reflected by the N2pc. A theoretical framework is proposed that links both operations in a common model of top-down attentional selection in visual cortex.

  7. Decision processes in visual search as a function of target prevalence.

    PubMed

    Peltier, Chad; Becker, Mark W

    2016-09-01

    The probability of missing a target increases in low target prevalence search tasks. Wolfe and Van Wert (2010) propose 2 causes of this effect: reducing the quitting threshold, and conservatively shifting the decision making criterion used to evaluate each item. Reducing the quitting threshold predicts that target absent responses will be made without fully inspecting the display, increasing misses due to never inspecting the target (selection errors). The shift in decision criterion increases the likelihood of failing to recognize an inspected target (identification errors). Though there is robust evidence that target prevalence rates shift quitting thresholds, the proposed shift in decision making criterion has little support. In Experiment 1 we eye-tracked participants during searches of high, medium, and low prevalence. Eye movements were used to classify misses as selection or identification errors. Identification errors increased as prevalence decreased, supporting the claim that decision criterion becomes more conservative as prevalence decreases. In addition, as prevalence decreased, the dwell time on targets increased while dwell times on distractors decreased. We propose that the effect of prevalence on decision making for individual items is best modeled as a shift in criterion in a drift diffusion model, rather than signal detection, as drift diffusion accounts for this pattern of decision times. In Experiment 2 we replicate these findings while presenting stimuli in a rapid serial visual presentation (RSVP) stream. Experiments 1 and 2 were consistent with the conclusion that prevalence rate influences the item-by-item decision criterion, and are consistent with a drift diffusion model of this decision process.
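
    A minimal sketch of the drift-diffusion account described above follows: each inspected item accumulates noisy evidence toward a "target" or "distractor" boundary, and a more conservative criterion (modeled here, for simplicity, as a starting-point shift toward the distractor boundary) produces both more identification misses and the reported pattern of longer target dwell times and shorter distractor dwell times. All parameter values are invented.

      # Minimal drift-diffusion sketch (not the authors' model); parameters invented.
      import numpy as np

      rng = np.random.default_rng(1)

      def diffusion_trial(drift, start, bound=1.0, dt=0.002, noise=1.0, max_t=4.0):
          """Accumulate evidence until a bound is hit; returns (said_target, decision_time)."""
          x, t = start, 0.0
          while abs(x) < bound and t < max_t:
              x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
              t += dt
          return x >= bound, t

      def simulate(start_bias, n=500):
          # Targets drift toward the upper ("target") bound, distractors toward the lower.
          targets = [diffusion_trial(+1.5, start_bias) for _ in range(n)]
          distractors = [diffusion_trial(-1.5, start_bias) for _ in range(n)]
          hit_rate = np.mean([said_target for said_target, _ in targets])
          target_dwell = np.mean([t for _, t in targets])
          distractor_dwell = np.mean([t for said_target, t in distractors if not said_target])
          return hit_rate, target_dwell, distractor_dwell

      for label, bias in [("neutral criterion (high prevalence)", 0.0),
                          ("conservative criterion (low prevalence)", -0.3)]:
          hit, t_dwell, d_dwell = simulate(bias)
          print(f"{label}: hit rate {hit:.2f}, "
                f"target dwell {t_dwell:.2f} s, distractor dwell {d_dwell:.2f} s")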

  8. Posterior α EEG Dynamics Dissociate Current from Future Goals in Working Memory-Guided Visual Search.

    PubMed

    de Vries, Ingmar E J; van Driel, Joram; Olivers, Christian N L

    2017-02-08

    Current models of visual search assume that search is guided by an active visual working memory representation of what we are currently looking for. This attentional template for currently relevant stimuli can be dissociated from accessory memory representations that are only needed prospectively, for a future task, and that should be prevented from guiding current attention. However, it remains unclear what electrophysiological mechanisms dissociate currently relevant (serving upcoming selection) from prospectively relevant memories (serving future selection). We measured EEG of 20 human subjects while they performed two consecutive visual search tasks. Before the search tasks, a cue instructed observers which item to look for first (current template) and which second (prospective template). During the delay leading up to the first search display, we found clear suppression of α band (8-14 Hz) activity in regions contralateral to remembered items, comprising both local power and interregional phase synchronization within a posterior parietal network. Importantly, these lateralization effects were stronger when the memory item was currently relevant (i.e., for the first search) compared with when it was prospectively relevant (i.e., for the second search), consistent with current templates being prioritized over future templates. In contrast, event-related potential analysis revealed that the contralateral delay activity was similar for all conditions, suggesting no difference in storage. Together, these findings support the idea that posterior α oscillations represent a state of increased processing or excitability in task-relevant cortical regions, and reflect enhanced cortical prioritization of memory representations that serve as a current selection filter.SIGNIFICANCE STATEMENT Our days are filled with looking for relevant objects while ignoring irrelevant visual information. Such visual search activity is thought to be driven by current goals activated in

  9. Posterior α EEG Dynamics Dissociate Current from Future Goals in Working Memory-Guided Visual Search

    PubMed Central

    2017-01-01

    Current models of visual search assume that search is guided by an active visual working memory representation of what we are currently looking for. This attentional template for currently relevant stimuli can be dissociated from accessory memory representations that are only needed prospectively, for a future task, and that should be prevented from guiding current attention. However, it remains unclear what electrophysiological mechanisms dissociate currently relevant (serving upcoming selection) from prospectively relevant memories (serving future selection). We measured EEG of 20 human subjects while they performed two consecutive visual search tasks. Before the search tasks, a cue instructed observers which item to look for first (current template) and which second (prospective template). During the delay leading up to the first search display, we found clear suppression of α band (8–14 Hz) activity in regions contralateral to remembered items, comprising both local power and interregional phase synchronization within a posterior parietal network. Importantly, these lateralization effects were stronger when the memory item was currently relevant (i.e., for the first search) compared with when it was prospectively relevant (i.e., for the second search), consistent with current templates being prioritized over future templates. In contrast, event-related potential analysis revealed that the contralateral delay activity was similar for all conditions, suggesting no difference in storage. Together, these findings support the idea that posterior α oscillations represent a state of increased processing or excitability in task-relevant cortical regions, and reflect enhanced cortical prioritization of memory representations that serve as a current selection filter. SIGNIFICANCE STATEMENT Our days are filled with looking for relevant objects while ignoring irrelevant visual information. Such visual search activity is thought to be driven by current goals activated

  10. Earthdata Search: Methods for Improving Data Discovery, Visualization, and Access

    NASA Astrophysics Data System (ADS)

    Quinn, P.; Pilone, D.; Crouch, M.; Siarto, J.; Sun, B.

    2015-12-01

    In a landscape of heterogeneous data from diverse sources and disciplines, producing useful tools poses a significant challenge. NASA's Earthdata Search application tackles this challenge, enabling discovery and inter-comparison of data across the wide array of scientific disciplines that use NASA Earth observation data. During this talk, we will give a brief overview of the application, and then share our approach for understanding and satisfying the needs of users from several disparate scientific communities. Our approach involves:
    - Gathering fine-grained metrics to understand user behavior
    - Using metrics to quantify user success
    - Combining metrics, feedback, and user research to understand user needs
    - Applying professional design toward addressing user needs
    - Using metrics and A/B testing to evaluate the viability of changes
    - Providing enhanced features for services to promote adoption
    - Encouraging good metadata quality and soliciting feedback for metadata issues
    - Open sourcing the application and its components to allow it to serve more users

  11. Faster target selection in preview visual search depends on luminance onsets: behavioral and electrophysiological evidence.

    PubMed

    Kiss, Monika; Eimer, Martin

    2011-08-01

    To investigate how target detection in visual search is modulated when a subset of distractors is presented in advance (preview search), we measured search performance and the N2pc component as an electrophysiological marker of attentional target selection. Targets defined by a color/shape conjunction were detected faster and the N2pc emerged earlier in preview search relative to a condition in which all items were presented simultaneously. Behavioral and electrophysiological preview benefits disappeared when stimuli were equiluminant with their background, in spite of the fact that targets were feature singletons among the new items in preview search. The results demonstrate that previewing distractors expedites the spatial selection of targets at early sensory-perceptual stages, and that these preview benefits depend on rapid attentional capture by luminance onsets.

  12. Display format and highlight validity effects on search performance using complex visual displays

    NASA Technical Reports Server (NTRS)

    Donner, Kimberly A.; Mckay, Tim; O'Brien, Kevin M.; Rudisill, Marianne

    1991-01-01

    Display format and highlight validity were shown to affect visual display search performance; however, these studies were conducted on small, artificial displays of alphanumeric stimuli. A study manipulating these variables was conducted using realistic, complex Space Shuttle information displays. A 2x2x3 within-subjects analysis of variance found that search times were faster for items in reformatted displays than for current displays. The significant format by highlight validity interaction showed that there was little difference in response time between current and reformatted displays when valid highlighting was applied; however, under the no-highlight or invalid-highlight conditions, search times were faster with reformatted displays. Benefits of highlighting and reformatting displays to enhance search and the necessity to consider highlight validity and format characteristics in tandem for predicting search performance are discussed.
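
    For readers unfamiliar with this kind of design, the sketch below shows how a simplified within-subjects ANOVA over display format and highlight validity might be run on simulated data. The column names, factor levels, and effect sizes are hypothetical, and the third factor of the original 2x2x3 design is omitted.

      # Hypothetical sketch of a within-subjects ANOVA on made-up search times.
      import numpy as np
      import pandas as pd
      from statsmodels.stats.anova import AnovaRM

      rng = np.random.default_rng(2)
      rows = []
      for subject in range(12):
          for fmt in ("current", "reformatted"):
              for validity in ("valid", "invalid", "none"):
                  rt = 1.7 + rng.normal(0, 0.15)      # seconds; arbitrary baseline
                  if validity != "valid":
                      rt += 0.2                        # cost of missing/invalid highlights
                      if fmt == "current":
                          rt += 0.4                    # reformatting helps most here
                  rows.append({"subject": subject, "format": fmt,
                               "validity": validity, "search_time": rt})

      df = pd.DataFrame(rows)
      result = AnovaRM(df, depvar="search_time", subject="subject",
                       within=["format", "validity"]).fit()
      print(result)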

  13. Contrasting vertical and horizontal representations of affect in emotional visual search.

    PubMed

    Damjanovic, Ljubica; Santiago, Julio

    2016-02-01

    Independent lines of evidence suggest that the representation of emotional evaluation recruits both vertical and horizontal spatial mappings. These two spatial mappings differ in their experiential origins and their productivity, and available data suggest that they differ in their saliency. Yet, no study has so far compared their relative strength in an attentional orienting reaction time task that affords the simultaneous manifestation of both types of mapping. Here, we investigated this question using a visual search task with emotional faces. We presented angry and happy face targets and neutral distracter faces in top, bottom, left, and right locations on the computer screen. Conceptual congruency effects were observed along the vertical dimension supporting the 'up = good' metaphor, but not along the horizontal dimension. This asymmetrical processing pattern was observed when faces were presented in a cropped (Experiment 1) and whole (Experiment 2) format. These findings suggest that the 'up = good' metaphor is more salient and readily activated than the 'right = good' metaphor, and that the former outcompetes the latter when the task context affords the simultaneous activation of both mappings.

  14. A visual search examination of attentional biases among individuals with high and low drive for thinness.

    PubMed

    Janelle, C M; Hausenblas, H A; Fallon, E A; Gardner, R E

    2003-06-01

    The purpose of this study was to examine attentional biases through visual search patterns of 40 females with high (high-risk for eating disorders) or low (low-risk for eating disorders) levels of drive for thinness and body dissatisfaction while viewing slides depicting ectomorphic, mesomorphic, and endomorphic female body shapes. Participants were outfitted in an eye tracking system, which was used to collect gaze behavior data while viewing the slides. Fixation frequency and duration to five body locations were analyzed through the use of ASL EYENAL software. For the mesomorphic, ectomorphic, and endomorphic slides, the low-risk group looked significantly more often at the leg region than the high-risk group. The low-risk group also gazed significantly longer at the leg region than the high-risk group when viewing the mesomorphic and ectomorphic slides. For the endomorphic slides, the low-risk group focused significantly longer on the midsection than did the high-risk group. The findings suggest avoidance behaviors among the high-risk group that are reflected in their locus of attention, and indicate that negative affect among high-risk individuals may be induced by selective attention to particular environmental cues. An integrative theoretical account emanating from cognitive, social, and behaviorist approaches to understanding attentional biases in body disturbance is used to explain the findings.

  15. Epistemic Beliefs, Online Search Strategies, and Behavioral Patterns While Exploring Socioscientific Issues

    NASA Astrophysics Data System (ADS)

    Hsu, Chung-Yuan; Tsai, Meng-Jung; Hou, Huei-Tse; Tsai, Chin-Chung

    2014-06-01

    Online information searching tasks are usually implemented in a technology-enhanced science curriculum or merged in an inquiry-based science curriculum. The purpose of this study was to examine the role students' different levels of scientific epistemic beliefs (SEBs) play in their online information searching strategies and behaviors. Based on the measurement of an SEB survey, 42 undergraduate and graduate students in Taiwan were recruited from a pool of 240 students and were divided into sophisticated and naïve SEB groups. The students' self-perceived online searching strategies were evaluated by the Online Information Searching Strategies Inventory, and their search behaviors were recorded by screen-capture videos. A sequential analysis was further used to analyze the students' searching behavioral patterns. The results showed that those students with more sophisticated SEBs tended to employ more advanced online searching strategies and to demonstrate a more metacognitive searching pattern.

  16. A Globally Convergent Augmented Lagrangian Pattern Search Algorithm for Optimization with General Constraints and Simple Bounds

    NASA Technical Reports Server (NTRS)

    Lewis, Robert Michael; Torczon, Virginia

    1998-01-01

    We give a pattern search adaptation of an augmented Lagrangian method due to Conn, Gould, and Toint. The algorithm proceeds by successive bound constrained minimization of an augmented Lagrangian. In the pattern search adaptation we solve this subproblem approximately using a bound constrained pattern search method. The stopping criterion proposed by Conn, Gould, and Toint for the solution of this subproblem requires explicit knowledge of derivatives. Such information is presumed absent in pattern search methods; however, we show how we can replace this with a stopping criterion based on the pattern size in a way that preserves the convergence properties of the original algorithm. In this way we proceed by successive, inexact, bound constrained minimization without knowing exactly how inexact the minimization is. So far as we know, this is the first provably convergent direct search method for general nonlinear programming.
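
    The following is a simplified sketch of the general scheme described above, not the provably convergent algorithm of the paper: each outer iteration approximately minimizes an augmented Lagrangian over simple bounds with a derivative-free coordinate pattern search, and the inner stopping rule is based on the pattern size rather than derivatives. The test problem and update constants are placeholders.

      # Simplified augmented-Lagrangian pattern-search sketch (illustrative only).
      import numpy as np

      def pattern_search(f, x, lo, hi, step, tol):
          """Bound-constrained coordinate pattern search; contracts the pattern until step < tol."""
          fx = f(x)
          while step >= tol:
              improved = False
              for i in range(len(x)):
                  for d in (+step, -step):
                      trial = x.copy()
                      trial[i] = np.clip(trial[i] + d, lo[i], hi[i])
                      ft = f(trial)
                      if ft < fx:
                          x, fx, improved = trial, ft, True
              if not improved:
                  step *= 0.5             # unsuccessful poll: shrink the pattern
          return x

      # Toy problem: minimize f(x) subject to c(x) = 0 and bounds 0 <= x <= 2.
      f = lambda x: (x[0] - 1.5) ** 2 + (x[1] - 1.5) ** 2
      c = lambda x: x[0] + x[1] - 2.0      # single equality constraint

      x = np.array([0.5, 0.5])
      lo, hi = np.zeros(2), np.full(2, 2.0)
      lam, mu = 0.0, 1.0                   # multiplier estimate and penalty parameter

      for outer in range(10):
          aug = lambda x: f(x) + lam * c(x) + 0.5 * mu * c(x) ** 2
          # Inner tolerance (a pattern-size proxy for the derivative-based test) tightens over time.
          x = pattern_search(aug, x, lo, hi, step=0.25, tol=1e-3 / (outer + 1))
          lam += mu * c(x)                 # first-order multiplier update
          if abs(c(x)) > 1e-4:
              mu *= 4.0                    # still infeasible: increase the penalty
      print("solution:", x, "constraint violation:", c(x))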

  17. What Are the Shapes of Response Time Distributions in Visual Search?

    ERIC Educational Resources Information Center

    Palmer, Evan M.; Horowitz, Todd S.; Torralba, Antonio; Wolfe, Jeremy M.

    2011-01-01

    Many visual search experiments measure response time (RT) as their primary dependent variable. Analyses typically focus on mean (or median) RT. However, given enough data, the RT distribution can be a rich source of information. For this paper, we collected about 500 trials per cell per observer for both target-present and target-absent displays…

  18. Visual Search and Emotion: How Children with Autism Spectrum Disorders Scan Emotional Scenes

    ERIC Educational Resources Information Center

    Maccari, Lisa; Pasini, Augusto; Caroli, Emanuela; Rosa, Caterina; Marotta, Andrea; Martella, Diana; Fuentes, Luis J.; Casagrande, Maria

    2014-01-01

    This study assessed visual search abilities, tested through the flicker task, in children diagnosed with autism spectrum disorders (ASDs). Twenty-two children diagnosed with ASD and 22 matched typically developing (TD) children were told to detect changes in objects of central interest or objects of marginal interest (MI) embedded in either…

  19. Implicit short- and long-term memory direct our gaze in visual search.

    PubMed

    Kruijne, Wouter; Meeter, Martijn

    2016-04-01

    Visual attention is strongly affected by the past: both by recent experience and by long-term regularities in the environment that are encoded in and retrieved from memory. In visual search, intertrial repetition of targets causes speeded response times (short-term priming). Similarly, targets that are presented more often than others may facilitate search, even long after it is no longer present (long-term priming). In this study, we investigate whether such short-term priming and long-term priming depend on dissociable mechanisms. By recording eye movements while participants searched for one of two conjunction targets, we explored at what stages of visual search different forms of priming manifest. We found both long- and short-term priming effects. Long-term priming persisted long after the bias was present, and was again found even in participants who were unaware of a color bias. Short- and long-term priming affected the same stage of the task; both biased eye movements towards targets with the primed color, already starting with the first eye movement. Neither form of priming affected the response phase of a trial, but response repetition did. The results strongly suggest that both long- and short-term memory can implicitly modulate feedforward visual processing.

  20. The Development of Visual Search in Infancy: Attention to Faces versus Salience

    ERIC Educational Resources Information Center

    Kwon, Mee-Kyoung; Setoodehnia, Mielle; Baek, Jongsoo; Luck, Steven J.; Oakes, Lisa M.

    2016-01-01

    Four experiments examined how faces compete with physically salient stimuli for the control of attention in 4-, 6-, and 8-month-old infants (N = 117 total). Three computational models were used to quantify physical salience. We presented infants with visual search arrays containing a face and familiar object(s), such as shoes and flowers. Six- and…

  1. Visual search performance is predicted by both prestimulus and poststimulus electrical brain activity

    PubMed Central

    van den Berg, Berry; Appelbaum, Lawrence G.; Clark, Kait; Lorist, Monicque M.; Woldorff, Marty G.

    2016-01-01

    An individual’s performance on cognitive and perceptual tasks varies considerably across time and circumstances. We investigated neural mechanisms underlying such performance variability using regression-based analyses to examine trial-by-trial relationships between response times (RTs) and different facets of electrical brain activity. Thirteen participants trained five days on a color-popout visual-search task, with EEG recorded on days one and five. The task was to find a color-popout target ellipse in a briefly presented array of ellipses and discriminate its orientation. Later within a session, better preparatory attention (reflected by less prestimulus Alpha-band oscillatory activity) and better poststimulus early visual responses (reflected by larger sensory N1 waves) correlated with faster RTs. However, N1 amplitudes decreased by half throughout each session, suggesting adoption of a more efficient search strategy within a session. Additionally, fast RTs were preceded by earlier and larger lateralized N2pc waves, reflecting faster and stronger attentional orienting to the targets. Finally, SPCN waves associated with target-orientation discrimination were smaller for fast RTs in the first but not the fifth session, suggesting optimization with practice. Collectively, these results delineate variations in visual search processes that change over an experimental session, while also pointing to cortical mechanisms underlying performance in visual search. PMID:27901053
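
    The trial-by-trial, regression-based analysis described above can be illustrated with simulated data (not the study's EEG): single-trial response times are regressed on prestimulus alpha power and poststimulus N1 amplitude.

      # Illustrative regression on simulated single-trial data; values are invented.
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(3)
      n_trials = 400
      alpha_power = rng.normal(0, 1, n_trials)     # z-scored prestimulus alpha power
      n1_amplitude = rng.normal(0, 1, n_trials)    # z-scored N1 amplitude

      # Hypothetical generative relationship: more alpha -> slower, larger N1 -> faster.
      rt = 550 + 20 * alpha_power - 15 * n1_amplitude + rng.normal(0, 40, n_trials)

      X = sm.add_constant(np.column_stack([alpha_power, n1_amplitude]))
      model = sm.OLS(rt, X).fit()
      print(model.summary(xname=["intercept", "alpha_power", "n1_amplitude"]))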

  2. Visual Search Asymmetries within Color-Coded and Intensity-Coded Displays

    ERIC Educational Resources Information Center

    Yamani, Yusuke; McCarley, Jason S.

    2010-01-01

    Color and intensity coding provide perceptual cues to segregate categories of objects within a visual display, allowing operators to search more efficiently for needed information. Even within a perceptually distinct subset of display elements, however, it may often be useful to prioritize items representing urgent or task-critical information.…

  3. Can a short nap and bright light function as implicit learning and visual search enhancers?

    PubMed

    Kaida, Kosuke; Takeda, Yuji; Tsuzuki, Kazuyo

    2012-01-01

    The present study examined effects of a short nap (20 min) and/or bright light (2000 lux) on visual search and implicit learning in a contextual cueing task. Fifteen participants performed a contextual cueing task twice a day (1200-1330 h and 1430-1600 h) and scored subjective sleepiness before and after a short afternoon nap or a break period. Participants served in a total of four experimental conditions (control, short nap, bright light and short nap with bright light). During the second task, bright light treatment (BLT) was applied in two of the four conditions. Participants performed both tasks in a dimly lit environment except during the light treatment. Results showed that a short nap reduced subjective sleepiness and improved visual search time, but it did not affect implicit learning. Bright light reduced subjective sleepiness. A short nap in the afternoon could be a countermeasure against sleepiness and an enhancer for visual search. Practitioner Summary: The study examined effects of a short afternoon nap (20 min) and/or bright light (2000 lux) on visual search and implicit learning. A short nap is a powerful countermeasure against sleepiness compared to bright light exposure in the afternoon.

  4. Low Target Prevalence Is a Stubborn Source of Errors in Visual Search Tasks

    ERIC Educational Resources Information Center

    Wolfe, Jeremy M.; Horowitz, Todd S.; Van Wert, Michael J.; Kenner, Naomi M.; Place, Skyler S.; Kibbi, Nour

    2007-01-01

    In visual search tasks, observers look for targets in displays containing distractors. Likelihood that targets will be missed varies with target prevalence, the frequency with which targets are presented across trials. Miss error rates are much higher at low target prevalence (1%-2%) than at high prevalence (50%). Unfortunately, low prevalence is…

  5. Visual Search for Object Orientation Can Be Modulated by Canonical Orientation

    ERIC Educational Resources Information Center

    Ballaz, Cecile; Boutsen, Luc; Peyrin, Carole; Humphreys, Glyn W.; Marendaz, Christian

    2005-01-01

    The authors studied the influence of canonical orientation on visual search for object orientation. Displays consisted of pictures of animals whose axis of elongation was either vertical or tilted in their canonical orientation. Target orientation could be either congruent or incongruent with the object's canonical orientation. In Experiment 1,…

  6. Temporal Binding and Segmentation in Visual Search: A Computational Neuroscience Analysis.

    PubMed

    Mavritsaki, Eirini; Humphreys, Glyn

    2016-10-01

    Human visual search operates not only over space but also over time, as old items remain in the visual field and new items appear. Preview search (where one set of distractors appears before the onset of a second set) has been used as a paradigm to study search over time and space [Watson, D. G., & Humphreys, G. W. Visual marking: Prioritizing selection for new objects by top-down attentional inhibition of old objects. Psychological Review, 104, 90-122, 1997], with participants showing efficient search when old distractors can be ignored and new targets prioritized. The benefits of preview search are lost, however, if a temporal gap is introduced between a first presentation of the old items and the re-presentation of all the items in the search display [Kunar, M. A., Humphreys, G. W., & Smith, K. J. History matters: The preview benefit in search is not onset capture. Psychological Science, 14, 181-185, 2003a], consistent with the old items being bound by temporal onset to the new stimuli. This effect of temporal binding can be eliminated if the old items reappear briefly before the new items, indicating also a role for the memory of the old items. Here we simulate these effects of temporal coding in search using the spiking search over time and space model [Mavritsaki, E., Heinke, D., Allen, H., Deco, G., & Humphreys, G. W. Bridging the gap between physiology and behavior: Evidence from the sSoTS model of human visual attention. Psychological Review, 118, 3-41, 2011]. We show that a form of temporal binding by new onsets has to be introduced to the model to simulate the effects of a temporal gap, but that effects of the memory of the old item can stem from continued neural suppression across a temporal gap. We also show that the model can capture the effects of brain lesion on preview search under the different temporal conditions. The study provides a proof-of-principle analysis that neural suppression and temporal binding can be sufficient to account for human

  7. Electrophysiological evidence that top-down knowledge controls working memory processing for subsequent visual search.

    PubMed

    Kawashima, Tomoya; Matsumoto, Eriko

    2016-03-23

    Items in working memory guide visual attention toward a memory-matching object. Recent studies have shown that when searching for an object this attentional guidance can be modulated by knowing the probability that the target will match an item in working memory. Here, we recorded the P3 and contralateral delay activity to investigate how top-down knowledge controls the processing of working memory items. Participants performed a memory task (recognition only) and a memory-or-search task (recognition or visual search), in both of which they were asked to maintain two colored, oriented bars in working memory. For visual search, we manipulated the probability that the target had the same color as the memorized items (0, 50, or 100%). Participants knew the probabilities before the task. Target detection in the 100% match condition was faster than that in the 50% match condition, indicating that participants used their knowledge of the probabilities. We found that the P3 amplitude in the 100% condition was larger than in other conditions and that contralateral delay activity amplitude did not vary across conditions. These results suggest that more attention was allocated to the memory items when observers knew in advance that their color would likely match a target. This led to better search performance despite using qualitatively equal working memory representations.

  8. Differential roles of the dorsal prefrontal and posterior parietal cortices in visual search: a TMS study

    PubMed Central

    Yan, Yulong; Wei, Rizhen; Zhang, Qian; Jin, Zhenlan; Li, Ling

    2016-01-01

    Although previous studies have shown that fronto-parietal attentional networks play a crucial role in bottom-up and top-down processes, the relative contribution of the frontal and parietal cortices to these processes remains elusive. Here we used transcranial magnetic stimulation (TMS) to interfere with the activity of the right dorsal prefrontal cortex (DLPFC) or the right posterior parietal cortex (PPC), immediately prior to the onset of the visual search display. Participants searched for a target defined by color and orientation in a "pop-out" or a "search" condition. Repetitive TMS was applied to either the right DLPFC or the right PPC on different days. Performance was evaluated at baseline (no TMS), during TMS, and after TMS (Post-session). RTs were prolonged when TMS was applied over the DLPFC in the search condition, but not in the pop-out condition, relative to the baseline session. In comparison, TMS over the PPC prolonged RTs in the pop-out condition, and when the target appeared in the left visual field for the search condition. Taken together these findings provide evidence for a differential role of DLPFC and PPC in visual search, indicating that DLPFC has a specific involvement in the "search" condition, while PPC is mainly involved in detecting "pop-out" targets. PMID:27452715

  9. Visual attention in a complex search task differs between honeybees and bumblebees.

    PubMed

    Morawetz, Linde; Spaethe, Johannes

    2012-07-15

    Mechanisms of spatial attention are used when the amount of gathered information exceeds processing capacity. Such mechanisms have been proposed in bees, but have not yet been experimentally demonstrated. We provide evidence that selective attention influences the foraging performance of two social bee species, the honeybee Apis mellifera and the bumblebee Bombus terrestris. Visual search tasks, originally developed for application in human psychology, were adapted for behavioural experiments on bees. We examined the impact of distracting visual information on search performance, which we measured as error rate and decision time. We found that bumblebees were significantly less affected by distracting objects than honeybees. Based on the results, we conclude that the search mechanism in honeybees is serial like, whereas in bumblebees it shows the characteristics of a restricted parallel-like search. Furthermore, the bees differed in their strategy to solve the speed-accuracy trade-off. Whereas bumblebees displayed slow but correct decision-making, honeybees exhibited fast and inaccurate decision-making. We propose two neuronal mechanisms of visual information processing that account for the different responses between honeybees and bumblebees, and we correlate species-specific features of the search behaviour to differences in habitat and life history.

  10. Is There a Weekly Pattern for Health Searches on Wikipedia and Is the Pattern Unique to Health Topics?

    PubMed Central

    Lau, Annie YS; Wynn, Rolf

    2015-01-01

    Background Online health information–seeking behaviors have been reported to be more common at the beginning of the workweek. This behavior pattern has been interpreted as a kind of “healthy new start” or “fresh start” due to regrets or attempts to compensate for unhealthy behavior or poor choices made during the weekend. However, the observations regarding the most common health information–seeking day were based only on the analyses of users’ behaviors with websites on health or on online health-related searches. We wanted to confirm if this pattern could be found in searches of Wikipedia on health-related topics and also if this search pattern was unique to health-related topics or if it could represent a more general pattern of online information searching—which could be of relevance even beyond the health sector. Objective The aim was to examine the degree to which the search pattern described previously was specific to health-related information seeking or whether similar patterns could be found in other types of information-seeking behavior. Methods We extracted the number of searches performed on Wikipedia in the Norwegian language for 911 days for the most common sexually transmitted diseases (chlamydia, gonorrhea, herpes, human immunodeficiency virus [HIV], and acquired immune deficiency syndrome [AIDS]), other health-related topics (influenza, diabetes, and menopause), and 2 nonhealth-related topics (footballer Lionel Messi and pop singer Justin Bieber). The search dates were classified according to the day of the week and ANOVA tests were used to compare the average number of hits per day of the week. Results The ANOVA tests showed that the sexually transmitted disease queries had their highest peaks on Tuesdays (P<.001) and the fewest searches on Saturdays. The other health topics also showed a weekly pattern, with the highest peaks early in the week and lower numbers on Saturdays (P<.001). Footballer Lionel Messi had the highest mean
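
    As an illustration of the day-of-week comparison described here, the sketch below runs a one-way ANOVA on simulated daily page-view counts grouped by weekday; the counts and the Tuesday peak / Saturday dip are made up to mirror the reported pattern, not taken from the study's data.

      # Hypothetical day-of-week ANOVA on simulated page-view counts.
      import numpy as np
      from scipy.stats import f_oneway

      rng = np.random.default_rng(4)
      weekdays = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]
      # Roughly 130 observations per weekday (911 days / 7), with an invented
      # Tuesday peak and Saturday dip.
      means = {"Mon": 105, "Tue": 120, "Wed": 110, "Thu": 105,
               "Fri": 100, "Sat": 80, "Sun": 95}
      groups = [rng.poisson(means[day], size=130) for day in weekdays]

      f_stat, p_value = f_oneway(*groups)
      print(f"F = {f_stat:.1f}, p = {p_value:.3g}")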

  11. Dimension-specific signal modulation in visual search: evidence from inter-stimulus surround suppression.

    PubMed

    Chan, Louis K H; Hayward, William G

    2012-04-18

    A fundamental task for the visual system is to determine where to attend next. In general, attention is guided by visual saliency. Computational models suggest that saliency values are estimated through an iterative process in which each visual item suppresses each other item's saliency, especially for those with close proximity. To investigate this proposal, we tested the effect of two salient distractors on visual search for a size target. While fixing the target-to-distractor distance, we manipulated the distance between two distractors. If two salient distractors suppressed each other when they were close together, they should interfere with search less; this was exactly what we found. However, we observed such a distance effect only for distractors of the same dimension (e.g., both defined in color) but not for those of different dimensions (e.g., one defined in color and the other in shape), displaying specificity to a perceptual dimension. Therefore, we conclude that saliency in visual search is calculated through a surround suppression process that occurs at a dimension-specific level.

  12. I can see what you are saying: Auditory labels reduce visual search times.

    PubMed

    Cho, Kit W

    2016-10-01

    The present study explored the self-directed-speech effect, the finding that relative to silent reading of a label (e.g., DOG), saying it aloud reduces visual search reaction times (RTs) for locating a target picture among distractors. Experiment 1 examined whether this effect is due to a confound in the differences in the number of cues in self-directed speech (two) vs. silent reading (one) and tested whether self-articulation is required for the effect. The results showed that self-articulation is not required and that merely hearing the auditory label reduces visual search RTs relative to silent reading. This finding also rules out the number of cues confound. Experiment 2 examined whether hearing an auditory label activates more prototypical features of the label's referent and whether the auditory-label benefit is moderated by the target's imagery concordance (the degree to which the target picture matches the mental picture that is activated by a written label for the target). When the target imagery concordance was high, RTs following the presentation of a high prototypicality picture or auditory cue were comparable and shorter than RTs following a visual label or low prototypicality picture cue. However, when the target imagery concordance was low, RTs following an auditory cue were shorter than the comparable RTs following the picture cues and visual-label cue. The results suggest that an auditory label activates both prototypical and atypical features of a concept and can facilitate visual search RTs even when compared to picture primes.

  13. Modeling cognitive effects on visual search for targets in cluttered backgrounds

    NASA Astrophysics Data System (ADS)

    Snorrason, Magnus; Ruda, Harald; Hoffman, James

    1998-07-01

    To understand how a human operator performs visual search in complex scenes, it is necessary to take into account top-down cognitive biases in addition to bottom-up visual saliency effects. We constructed a model to elucidate the relationship between saliency and cognitive effects in the domain of visual search for distant targets in photo-realistic images of cluttered scenes. In this domain, detecting targets is difficult and requires high visual acuity. Sufficient acuity is only available near the fixation point, i.e. in the fovea. Hence, the choice of fixation points is the most important determinant of whether targets get detected. We developed a model that predicts the 2D distribution of fixation probabilities directly from an image. Fixation probabilities were computed as a function of local contrast (saliency effect) and proximity to the horizon (cognitive effect: distant targets are more likely to be found close to the horizon). For validation, the model's predictions were compared to ensemble statistics of subjects' actual fixation locations, collected with an eye-tracker. The model's predictions correlated well with the observed data. Disabling the horizon-proximity functionality of the model significantly degraded prediction accuracy, demonstrating that cognitive effects must be accounted for when modeling visual search.
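
    The following is a toy sketch of how a fixation-probability map of the kind described above might combine a bottom-up local-contrast term with a top-down horizon-proximity weight; the specific contrast measure, Gaussian weighting, and parameters are illustrative assumptions, not the authors' model.

```python
# Toy sketch: combine local contrast (bottom-up saliency) with horizon proximity
# (top-down bias) into a normalized fixation-probability map. The weighting used
# here is illustrative only, not the model described in the abstract.
import numpy as np
from scipy.ndimage import uniform_filter

def fixation_probability(image, horizon_row, sigma_rows=40.0):
    img = image.astype(float)

    # Local contrast: local standard deviation in a small neighborhood.
    mean = uniform_filter(img, size=9)
    mean_sq = uniform_filter(img ** 2, size=9)
    contrast = np.sqrt(np.maximum(mean_sq - mean ** 2, 0.0))

    # Top-down bias: weight rows by their distance to the assumed horizon line.
    rows = np.arange(img.shape[0])[:, None]
    horizon_weight = np.exp(-0.5 * ((rows - horizon_row) / sigma_rows) ** 2)

    saliency = contrast * horizon_weight
    return saliency / saliency.sum()          # normalize to a probability map

# Example with random pixels and an assumed horizon at row 120.
p_map = fixation_probability(np.random.rand(256, 384), horizon_row=120)
```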

  14. The Importance of the Eye Area in Face Identification Abilities and Visual Search Strategies in Persons with Asperger Syndrome

    ERIC Educational Resources Information Center

    Falkmer, Marita; Larsson, Matilda; Bjallmark, Anna; Falkmer, Torbjorn

    2010-01-01

    Partly claimed to explain social difficulties observed in people with Asperger syndrome, face identification and visual search strategies become important. Previous research findings are, however, disparate. In order to explore face identification abilities and visual search strategies, with special focus on the importance of the eye area, 24…

  15. Active visual search in non-stationary scenes: coping with temporal variability and uncertainty

    NASA Astrophysics Data System (ADS)

    Ušćumlić, Marija; Blankertz, Benjamin

    2016-02-01

    Objective. State-of-the-art experiments for studying neural processes underlying visual cognition often constrain sensory inputs (e.g., static images) and our behavior (e.g., fixed eye-gaze, long eye fixations), isolating or simplifying the interaction of neural processes. Motivated by the non-stationarity of our natural visual environment, we investigated the electroencephalography (EEG) correlates of visual recognition while participants overtly performed visual search in non-stationary scenes. We hypothesized that visual effects (such as those typically used in human-computer interfaces) may increase temporal uncertainty (with reference to fixation onset) of cognition-related EEG activity in an active search task and therefore require novel techniques for single-trial detection. Approach. We addressed fixation-related EEG activity in an active search task with respect to stimulus-appearance styles and dynamics. Alongside popping-up stimuli, our experimental study embraces two composite appearance styles based on fading-in, enlarging, and motion effects. Additionally, we explored whether the knowledge obtained in the pop-up experimental setting can be exploited to boost the EEG-based intention-decoding performance when facing transitional changes of visual content. Main results. The results confirmed our initial hypothesis that the dynamic of visual content can increase temporal uncertainty of the cognition-related EEG activity in active search with respect to fixation onset. This temporal uncertainty challenges the pivotal aim to keep the decoding performance constant irrespective of visual effects. Importantly, the proposed approach for EEG decoding based on knowledge transfer between the different experimental settings gave a promising performance. Significance. Our study demonstrates that the non-stationarity of visual scenes is an important factor in the evolution of cognitive processes, as well as in the dynamic of ocular behavior (i.e., dwell time and

  16. Visual height intolerance and acrophobia: clinical characteristics and comorbidity patterns.

    PubMed

    Kapfhammer, Hans-Peter; Huppert, Doreen; Grill, Eva; Fitz, Werner; Brandt, Thomas

    2015-08-01

    The purpose of this study was to estimate the general population lifetime and point prevalence of visual height intolerance (vHI) and acrophobia, to define their clinical characteristics, and to determine their anxious and depressive comorbidities. A case-control study was conducted within a German population-based cross-sectional telephone survey. A representative sample of 2,012 individuals aged 14 and above was selected. Defined neurological conditions (migraine, Menière's disease, motion sickness), symptom pattern, age of first manifestation, precipitating height stimuli, course of illness, psychosocial impairment, and comorbidity patterns (anxiety conditions, depressive disorders according to DSM-IV-TR) for vHI and acrophobia were assessed. The lifetime prevalence of vHI was 28.5% (women 32.4%, men 24.5%). Initial attacks occurred predominantly (36%) in the second decade. A rapid generalization to other height stimuli and a chronic course of illness with at least moderate impairment were observed. A total of 22.5% of individuals with vHI experienced the intensity of panic attacks. The lifetime prevalence of acrophobia was 6.4% (women 8.6%, men 4.1%), and point prevalence was 2.0% (women 2.8%, men 1.1%). vHI, and even more so acrophobia, was associated with high rates of comorbid anxious and depressive conditions. Migraine was both a significant predictor of later acrophobia and a significant consequence of previous acrophobia. vHI affects nearly a third of the general population; in more than 20% of these persons, vHI occasionally develops into panic attacks, and in 6.4% it escalates to acrophobia. Symptoms and degree of social impairment form a continuum of mild to seriously distressing conditions in susceptible subjects.

  17. Visual search for faces by race: a cross-race study.

    PubMed

    Sun, Gang; Song, Luping; Bentin, Shlomo; Yang, Yanjie; Zhao, Lun

    2013-08-30

    Using a single averaged face of each race, a previous study indicated that the detection of one other-race face among an own-race background was faster than vice versa (Levin, 1996, 2000). However, employing a variable mapping of face pictures, one recent report found preferential detection of own-race over other-race faces (Lipp et al., 2009). Using a well-controlled design and a heterogeneous set of real face images, in the present study we explored visual search for own- and other-race faces in Chinese and Caucasian participants. Across both groups, the search for a face of one race among other-race faces was serial and self-terminating. In Chinese participants, search was consistently faster for other-race than own-race faces, irrespective of whether the faces were upright or upside down; however, this search asymmetry was not evident in Caucasian participants. These characteristics suggest that the race of a face is not a basic visual feature, and that in Chinese participants the faster search for other-race than own-race faces also reflects perceptual factors. The possible mechanism underlying other-race search effects is discussed.

  18. Working Memory Capacity Predicts Selection and Identification Errors in Visual Search.

    PubMed

    Peltier, Chad; Becker, Mark W

    2016-11-17

    As public safety relies on the ability of professionals, such as radiologists and baggage screeners, to detect rare targets, it could be useful to identify predictors of visual search performance. Schwark, Sandry, and Dolgov found that working memory capacity (WMC) predicts hit rate and reaction time in low prevalence searches. This link was attributed to higher WMC individuals exhibiting a higher quitting threshold and increasing the probability of finding the target before terminating search in low prevalence search. These conclusions were limited based on the methods; without eye tracking, the researchers could not differentiate between an increase in accuracy due to fewer identification errors (failing to identify a fixated target), selection errors (failing to fixate a target), or a combination of both. Here, we measure WMC and correlate it with reaction time and accuracy in a visual search task. We replicate the finding that WMC predicts reaction time and hit rate. However, our analysis shows that it does so through both a reduction in selection and identification errors. The correlation between WMC and selection errors is attributable to increased quitting thresholds in those with high WMC. The correlation between WMC and identification errors is less clear, though potentially attributable to increased item inspection times in those with higher WMC. In addition, unlike Schwark and coworkers, we find that these WMC effects are fairly consistent across prevalence rates rather than being specific to low-prevalence searches.

  19. Neural correlates of visual search in patients with hereditary retinal dystrophies.

    PubMed

    Plank, Tina; Frolo, Jozef; Farzana, Fatima; Brandl-Rühle, Sabine; Renner, Agnes B; Greenlee, Mark W

    2013-10-01

    In patients with central visual field scotomata a large part of visual cortex is not adequately stimulated. We investigated evidence for possible upregulation in cortical responses in 22 patients (8 females, 14 males; mean age 41.5 years, range 12-65 years) with central visual field loss due to hereditary retinal dystrophies (Stargardt's disease, other forms of hereditary macular dystrophies and cone-rod dystrophy) and compared their results to those of 22 age-matched controls (11 females, 11 males; mean age, 42.4 years, range, 13-70 years). Using functional magnetic resonance imaging (fMRI) we recorded differences in behavioral and BOLD signal distribution in retinotopic mapping and visual search tasks. Patients with an established preferred retinal locus (PRL) exhibited significantly higher activation in early visual cortex during the visual search task, especially on trials when the target stimuli fell in the vicinity of the PRL. Compared with those with less stable fixation, patients with stable eccentric fixation at the PRL exhibited greater performance levels and more brain activation.

  20. Pattern Visual Evoked Potential Changes in Diabetic Patients without Retinopathy

    PubMed Central

    Sungur, Gulten; Yakin, Mehmet; Unlu, Nurten; Balta, Oyku Bezen; Ornek, Firdevs

    2017-01-01

    Purpose. To assess different check sizes of the pattern visual evoked potential (PVEP) in diabetic patients without retinopathy according to HbA1c levels and diabetes duration. Methods. Fifty-eight eligible patients with type 2 diabetes mellitus and 26 age- and sex-matched healthy controls were included in the study. Only the right eye of each patient was analyzed. All of the patients underwent a comprehensive ophthalmic examination, and the PVEPs were recorded. Results. There was a statistically significant difference in P100 latency at the 1-degree check size and in N135 latency at the 2-degree check size between controls and patient groups with different HbA1c levels. There were statistically significant, positive, and weak correlations between diabetes duration and P100 latency at the 7-minute and 15-minute check sizes and N135 latency at the 15-minute check size. Conclusions. In diabetic patients without retinopathy, P100 latency was prolonged only at the 1-degree check size and N135 latency only at the 2-degree check size. There were statistically significant correlations between diabetes duration and P100 and N135 latencies at different check sizes. PMID:28392940

  1. How Temporal and Spatial Aspects of Presenting Visualizations Affect Learning about Locomotion Patterns

    ERIC Educational Resources Information Center

    Imhof, Birgit; Scheiter, Katharina; Edelmann, Jorg; Gerjets, Peter

    2012-01-01

    Two studies investigated the effectiveness of dynamic and static visualizations for a perceptual learning task (locomotion pattern classification). In Study 1, seventy-five students viewed either dynamic, static-sequential, or static-simultaneous visualizations. For tasks of intermediate difficulty, dynamic visualizations led to better…

  2. Attention to Quantitative and Configural Properties of Abstract Visual Patterns by Children and Adults.

    ERIC Educational Resources Information Center

    Mendelson, Morton J.

    1984-01-01

    Students in grades two, four, six, and college sorted abstract visual patterns that varied both in amount of contour and in type of visual organization (unstructured, simple symmetries, multiple symmetries, and rotational). Results suggested that children attend to both amount of contour and visual organization, but that attention to visual…

  3. Convex hull test of the linear separability hypothesis in visual search.

    PubMed

    Bauer, B; Jolicoeur, P; Cowan, W B

    1999-08-01

    Visual search for a colour target in distractors of two other colours is dramatically affected by the configuration of the colours in CIE (x, y) space. To a first approximation, search is difficult when a target's chromaticity falls directly between (i.e. is not linearly separable from) two distractor chromaticities, otherwise search is easy (D'Zmura [1991, Vision Research, 31, 951-966]; Bauer, Jolicoeur, & Cowan [1996a, Vision Research, 36, 1439-1466]; Bauer, Jolicoeur, & Cowan [1996b, Perception, 25, 1282-1294]). In this paper, we demonstrate that the linear separability effect transcends the two distractor case. Placing a target colour inside the convex hull defined by a set of distractors hindered search performance compared with a target placed outside the convex hull. This is true whether the target was linearly separable in chromaticity only (Experiments 1 and 2), or in a combination of luminance and chromaticity (Experiments 3 and 4).

  4. Incidental Learning Speeds Visual Search by Lowering Response Thresholds, Not by Improving Efficiency: Evidence from Eye Movements

    ERIC Educational Resources Information Center

    Hout, Michael C.; Goldinger, Stephen D.

    2012-01-01

    When observers search for a target object, they incidentally learn the identities and locations of "background" objects in the same display. This learning can facilitate search performance, eliciting faster reaction times for repeated displays. Despite these findings, visual search has been successfully modeled using architectures that maintain no…

  5. The effect of cerebral asymmetries and eye scanning on pseudoneglect for a visual search task.

    PubMed

    Nicholls, Michael E R; Hobson, Amelia; Petty, Joanne; Churches, Owen; Thomas, Nicole A

    2017-02-01

    Pseudoneglect is the tendency for the general population to over-attend to the left. While pseudoneglect is classically demonstrated using line bisection, it also occurs for visual search. The current study explored the influence of eye movements and functional cerebral asymmetry on asymmetries for visual search. In Experiment 1, 24 participants carried out a conjunction search for a target within a rectangular array. A leftward advantage for detecting targets was observed when the eyes were free to move, but not when they were restricted by short exposure durations. In Experiment 2, the effect of functional cerebral asymmetry was explored by comparing 20 right-handers and 19 left-handers. Results showed a stronger leftward bias for the right-handers, consistent with a mechanism related to cerebral asymmetry. In Experiment 3, an eye-tracker directly controlled eye movements in 25 participants. A leftward advantage emerged when the eyes were still, but not when they were free to move. Experiments 1 and 3 produced contradictory results in relation to eye movements, which may be related to task-related demands. On balance, the data suggest that asymmetries in visual search can occur in the absence of eye movements and that they are related to right hemisphere specialisation for spatial attention.

  6. Visual ability and searching behavior of adult Laricobius nigrinus, a hemlock woolly adelgid predator.

    PubMed

    Mausel, D L; Salom, S M; Kok, L T

    2011-01-01

    Very little is known about the searching behavior and sensory cues that Laricobius spp. (Coleoptera: Derodontidae) predators use to locate suitable habitats and prey, which limits our ability to collect and monitor them for classical biological control of adelgids (Hemiptera: Adelgidae). The aim of this study was to examine the visual ability and the searching behavior of newly emerged L. nigrinus Fender, a host-specific predator of the hemlock woolly adelgid, Adelges tsugae Annand (Hemiptera: Phylloxeroidea: Adelgidae). In a laboratory bioassay, individual adults attempting to locate an uninfested eastern hemlock seedling under either light or dark conditions were observed in an arena. In another bioassay, individual adults searching for prey on hemlock seedlings (infested or uninfested) were continuously video-recorded. Beetles located and began climbing the seedling stem in light significantly more than in dark, indicating that vision is an important sensory modality. Our primary finding was that searching behavior of L. nigrinus, as in most species, was related to food abundance. Beetles did not fly in the presence of high A. tsugae densities and flew when A. tsugae was absent, which agrees with observed aggregations of beetles on heavily infested trees in the field. At close range of prey, slow crawling and frequent turning suggest the use of non-visual cues such as olfaction and contact chemoreception. Based on the beetles' visual ability to locate tree stems and their climbing behavior, a bole trap may be an effective collection and monitoring tool.

  7. Prevalence learning and decision making in a visual search task: an equivalent ideal observer approach

    NASA Astrophysics Data System (ADS)

    He, Xin; Samuelson, Frank; Zeng, Rongping; Sahiner, Berkman

    2015-03-01

    Research studies have observed an influence of target prevalence on observer performance for visual search tasks. The goal of this work is to develop models for prevalence effects on visual search. In a recent study by Wolfe et al., a large-scale observer study was conducted to understand the effects of varying target prevalence on visual search. In particular, a total of 12 observers were recruited to perform 1000 trials of simulated baggage search as target prevalence varied sinusoidally from high to low and back to high. We attempted to model observers' behavior in prevalence learning and decision making. We modeled the observer as an equivalent ideal observer (EIO) with a prior belief about the signal prevalence. The use of the EIO allows the application of ideal observer mathematics to characterize real observers' performance reading real-life images. For every new image, the observer updates the belief about prevalence and adjusts his or her decision threshold according to utility theory. The model results agree well with the experimental results from the Wolfe study. The proposed models allow theoretical insights into observer behavior in learning prevalence and adjusting decision thresholds.
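
    A schematic sketch of the kind of model the abstract describes: a Beta belief over target prevalence that is updated after every image, with a likelihood-ratio decision criterion set from the current prevalence estimate and an assumed payoff structure. The prior, utilities, and update rule here are assumptions, not the published model.

```python
# Schematic sketch (not the published model): an observer keeps a Beta belief over
# target prevalence, updates it after each image, and sets a likelihood-ratio
# decision criterion from the current prevalence estimate and an assumed payoff matrix.
class PrevalenceLearner:
    def __init__(self, alpha=1.0, beta=1.0,
                 utility_tp=1.0, utility_tn=1.0, cost_fp=1.0, cost_fn=1.0):
        self.alpha, self.beta = alpha, beta              # Beta prior parameters
        self.u_tp, self.u_tn = utility_tp, utility_tn
        self.c_fp, self.c_fn = cost_fp, cost_fn

    @property
    def prevalence(self):
        return self.alpha / (self.alpha + self.beta)     # posterior mean belief

    def criterion(self):
        # Ideal-observer likelihood-ratio criterion: rarer targets imply a stricter criterion.
        p = self.prevalence
        return ((1 - p) / p) * ((self.u_tn + self.c_fp) / (self.u_tp + self.c_fn))

    def decide(self, likelihood_ratio):
        return likelihood_ratio >= self.criterion()      # respond "target present"

    def update(self, target_was_present):
        # After feedback on the trial, update the prevalence belief.
        if target_was_present:
            self.alpha += 1
        else:
            self.beta += 1
```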

  8. Visual search in hunting archerfish shares all hallmarks of human performance.

    PubMed

    Rischawy, Ingo; Schuster, Stefan

    2013-08-15

    Archerfish are renowned for shooting down aerial prey with water jets, but nothing is known about how they spot prey items in their richly structured mangrove habitats. We trained archerfish to stably assign the categories 'target' and 'background' to objects solely on the basis of non-motion cues. Unlike many other hunters, archerfish are able to discriminate a target from its background in the complete absence of either self-motion or relative motion parallax cues and without using stored information about the structure of the background. This allowed us to perform matched tests to compare the ways fish and humans scan stationary visual scenes. In humans, visual search is seen as a doorway to cortical mechanisms of how attention is allocated. Fish lack a cortex and we therefore wondered whether archerfish would differ from humans in how they scan a stationary visual scene. Our matched tests failed to disclose any differences in the dependence of response time distributions, a most sensitive indicator of the search mechanism, on number and complexity of background objects. Median and range of response times depended linearly on the number of background objects and the corresponding effective processing time per item increased similarly - approximately fourfold - in both humans and fish when the task was harder. Archerfish, like humans, also systematically scanned the scenery, starting with the closest object. Taken together, benchmark visual search tasks failed to disclose any difference between archerfish - who lack a cortex - and humans.
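
    For readers unfamiliar with how the "effective processing time per item" is obtained, a straight-line fit of median response time against the number of background objects gives that value as the slope; the numbers below are placeholders, not the archerfish data.

```python
# Illustrative: estimate the per-item processing time as the slope of median
# response time against the number of background objects. Values are placeholders.
import numpy as np

set_sizes = np.array([1, 2, 4, 8, 16])               # number of background objects
median_rt_ms = np.array([420, 450, 510, 640, 890])   # hypothetical median response times

slope_ms_per_item, intercept_ms = np.polyfit(set_sizes, median_rt_ms, deg=1)
print(f"effective processing time per item: {slope_ms_per_item:.1f} ms")
```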

  9. Direction of Auditory Pitch-Change Influences Visual Search for Slope From Graphs.

    PubMed

    Parrott, Stacey; Guzman-Martinez, Emmanuel; Orte, Laura; Grabowecky, Marcia; Huntington, Mark D; Suzuki, Satoru

    2015-01-01

    Linear trend (slope) is important information conveyed by graphs. We investigated how sounds influenced slope detection in a visual search paradigm. Four bar graphs or scatter plots were presented on each trial. Participants looked for a positive-slope or a negative-slope target (in blocked trials), and responded to targets in a go or no-go fashion. For example, in a positive-slope-target block, the target graph displayed a positive slope while other graphs displayed negative slopes (a go trial), or all graphs displayed negative slopes (a no-go trial). When an ascending or descending sound was presented concurrently, ascending sounds slowed detection of negative-slope targets whereas descending sounds slowed detection of positive-slope targets. The sounds had no effect when they immediately preceded the visual search displays, suggesting that the results were due to crossmodal interaction rather than priming. The sounds also had no effect when targets were words describing slopes, such as "positive," "negative," "increasing," or "decreasing," suggesting that the results were unlikely due to semantic-level interactions. Manipulations of spatiotemporal similarity between sounds and graphs had little effect. These results suggest that ascending and descending sounds influence visual search for slope based on a general association between the direction of auditory pitch-change and visual linear trend.

  10. Priming of Visual Search Facilitates Attention Shifts: Evidence From Object-Substitution Masking.

    PubMed

    Kristjánsson, Árni

    2016-03-01

    Priming of visual search strongly affects visual function: it releases items from crowding, and during free choice, primed targets are chosen over unprimed ones. Two accounts of priming have been proposed: attentional facilitation of primed features and postperceptual episodic memory retrieval that involves mapping responses to visual events. Here, well-known masking effects were used to assess the two accounts. Object-substitution masking has been considered to reflect attentional processing: it does not occur when a target is precued and is strengthened when distractors are present. Conversely, metacontrast masking has been connected to lower-level processing where attention exerts little effect. If priming facilitates attention shifts, it should mitigate object-substitution masking, while lower-level masking might not be similarly influenced. Observers searched for an odd-colored target among distractors. Unpredictably (on 20% of trials), object-substitution masks or metacontrast masks appeared around the target. Object-substitution masking was strongly mitigated for primed target colors, while metacontrast masking was mostly unaffected. This argues against episodic retrieval accounts of priming, placing the priming locus firmly within the realm of attentional processing. The results suggest that priming of visual search facilitates attention shifts to the target, which allows better spatiotemporal resolution that overcomes object-substitution masking.

  11. Effects of Individual Health Topic Familiarity on Activity Patterns During Health Information Searches

    PubMed Central

    Moriyama, Koichi; Fukui, Ken–ichi; Numao, Masayuki

    2015-01-01

    Background Non-medical professionals (consumers) are increasingly using the Internet to support their health information needs. However, the cognitive effort required to perform health information searches is affected by the consumer’s familiarity with health topics. Consumers may have different levels of familiarity with individual health topics. This variation in familiarity may cause misunderstandings because the information presented by search engines may not be understood correctly by the consumers. Objective As a first step toward the improvement of the health information search process, we aimed to examine the effects of health topic familiarity on health information search behaviors by identifying the common search activity patterns exhibited by groups of consumers with different levels of familiarity. Methods Each participant completed a health terminology familiarity questionnaire and health information search tasks. The responses to the familiarity questionnaire were used to grade the familiarity of participants with predefined health topics. The search task data were transcribed into a sequence of search activities using a coding scheme. A computational model was constructed from the sequence data using a Markov chain model to identify the common search patterns in each familiarity group. Results Forty participants were classified into L1 (not familiar), L2 (somewhat familiar), and L3 (familiar) groups based on their questionnaire responses. They had different levels of familiarity with four health topics. The video data obtained from all of the participants were transcribed into 4595 search activities (mean 28.7, SD 23.27 per session). The most frequent search activities and transitions in all the familiarity groups were related to evaluations of the relevancy of selected web pages in the retrieval results. However, the next most frequent transitions differed in each group and a chi-squared test confirmed this finding (P<.001). Next, according to the
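
    A minimal sketch of the modeling step described above: estimating a first-order Markov transition matrix from coded sequences of search activities. The activity codes are invented for illustration and do not reproduce the study's coding scheme.

```python
# Minimal sketch: estimate a first-order Markov transition matrix from coded
# search-activity sequences. The activity codes below are made up for illustration.
from collections import defaultdict

sequences = [
    ["query", "scan_results", "open_page", "evaluate_page", "scan_results"],
    ["query", "scan_results", "evaluate_page", "query"],
]

# Count transitions between consecutive activities.
counts = defaultdict(lambda: defaultdict(int))
for seq in sequences:
    for current, nxt in zip(seq, seq[1:]):
        counts[current][nxt] += 1

# Normalize counts into transition probabilities P(next | current).
transition = {
    state: {nxt: n / sum(nexts.values()) for nxt, n in nexts.items()}
    for state, nexts in counts.items()
}
print(transition["scan_results"])
```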

  12. Visualizing a High Recall Search Strategy Output for Undergraduates in an Exploration Stage of Researching a Term Paper.

    ERIC Educational Resources Information Center

    Cole, Charles; Mandelblatt, Bertie; Stevenson, John

    2002-01-01

    Discusses high recall search strategies for undergraduates and how to overcome information overload that results. Highlights include word-based versus visual-based schemes; five summarization and visualization schemes for presenting information retrieval citation output; and results of a study that recommend visualization schemes geared toward…

  13. Context-dependent interactions of left posterior inferior frontal gyrus in a local visual search task unrelated to language.

    PubMed

    Manjaly, Zina M; Marshall, John C; Stephan, Klaas E; Gurd, Jennifer M; Zilles, Karl; Fink, Gereon R

    2005-01-01

    The Embedded Figures Task (EFT) involves search for a target hidden in a complex geometric pattern. Even though the EFT is designed to probe local visual search functions, not language-related processes, neuropsychological studies have demonstrated a strong association between aphasia and impairment on this task. A potential explanation for this relationship was offered by a recent functional MRI study (Manjaly et al., 2003), which demonstrated that a part of the left posterior inferior frontal gyrus (pIFG), overlapping with Broca's region, is crucially involved in the execution of the EFT. This result suggested that pIFG, an area strongly associated with language-related functions, is also part of a network subserving cognitive functions unrelated to language. In this study, we tested this conjecture by analysing the data of Manjaly et al. for context-dependent functional interactions of the pIFG during execution of the EFT. The results showed that during EFT, compared to a similar visual matching task with minimal local search components, pIFG changed its interactions with areas commonly involved in visuospatial processing: Increased contributions to neural activity in left posterior parietal cortex, cerebellar vermis, and extrastriate areas bilaterally, as well as decreased contributions to bilateral temporo-parietal cortex, posterior cingulate cortex, and left dorsal premotor cortex were found. These findings demonstrate that left pIFG can be involved in nonlanguage processes. More generally, however, they provide a concrete example of the notion that there is no general one-to-one mapping between cognitive functions and the activations of individual areas. Instead, it is the spatiotemporal pattern of functional interactions between areas that is linked to a particular cognitive context.

  14. Integrating space and time in visual search: how the preview benefit is modulated by stereoscopic depth.

    PubMed

    Dent, Kevin; Braithwaite, Jason J; He, Xun; Humphreys, Glyn W

    2012-07-15

    We examined visual search for letters that were distributed across both 3 dimensional space, and time. In Experiment 1, when participants had foreknowledge of the depth plane and time interval where targets could appear, search was more efficient if the items could be segmented either by depth or by time (with a 1000 ms preview), and there were increased benefits when the two cues (depth and time) were combined. In Experiments 2 and 3 the target depth plane was always unknown to the participant. In this case, depth cues alone did not facilitate search, though they continued to increase the preview benefit. In Experiment 4 new items in preview search could fall at the same depth as preview items or a new depth. There was a substantial cost to search if the target appeared at a previewed depth. Experiment 5 showed that this cost remained even when participants knew the target would appear at the old depth on 75% of trials. The results indicate that spatial (depth) and temporal cues combine to enhance visual segmentation and selection, and this is accomplished by inhibition of distractors in irrelevant depth planes.

  15. Adding a Visualization Feature to Web Search Engines: It’s Time

    SciTech Connect

    Wong, Pak C.

    2008-11-11

    Since the first world wide web (WWW) search engine quietly entered our lives in 1994, the “information need” behind web searching has rapidly grown into a multi-billion dollar business that dominates the internet landscape, drives e-commerce traffic, propels the global economy, and affects the lives of the whole human race. Today’s search engines are faster, smarter, and more powerful than those released just a few years ago. With the vast investment pouring into research and development by leading web technology providers and the intense emotion behind corporate slogans such as “win the web” or “take back the web,” I can’t help but ask: why are we still using the very same “text-only” interface that was used 13 years ago to browse our search engine results pages (SERPs)? Why has the SERP interface technology lagged so far behind in the web evolution when the corresponding search technology has advanced so rapidly? In this article I explore some current SERP interface issues, suggest a simple but practical visual-based interface design approach, and argue why a visual approach can be a strong candidate for tomorrow’s SERP interface.

  16. Searching for Truth: Internet Search Patterns as a Method of Investigating Online Responses to a Russian Illicit Drug Policy Debate

    PubMed Central

    Gillespie, James A; Quinn, Casey

    2012-01-01

    Background This is a methodological study investigating the online responses to a national debate over an important health and social problem in Russia. Russia is the largest Internet market in Europe, exceeding Germany in the absolute number of users. However, Russia is unusual in that the main search provider is not Google, but Yandex. Objective This study had two main objectives. First, to validate Yandex search patterns against those provided by Google, and second, to test this method's adequacy for investigating online interest in a 2010 national debate over Russian illicit drug policy. We hoped to learn what search patterns and specific search terms could reveal about the relative importance and geographic distribution of interest in this debate. Methods A national drug debate, centering on the anti-drug campaigner Egor Bychkov, was one of the main Russian domestic news events of 2010. Public interest in this episode was accompanied by increased Internet search. First, we measured the search patterns for 13 search terms related to the Bychkov episode and concurrent domestic events by extracting data from Google Insights for Search (GIFS) and Yandex WordStat (YaW). We computed Spearman rank correlations between the GIFS and YaW search data series. Second, we coded all 420 primary posts from Bychkov's personal blog between March 2010 and March 2012 to identify the main themes. Third, we compared GIFS and Yandex policies concerning the public release of search volume data. Finally, we established the relationship between salient drug issues and the Bychkov episode. Results We found a consistent pattern of strong to moderate positive correlations between Google and Yandex for the terms “Egor Bychkov” (rs = 0.88, P < .001), “Bychkov” (rs = 0.78, P < .001), and “Khimki” (rs = 0.92, P < .001). Peak search volumes for the Bychkov episode were comparable to other prominent domestic political events during 2010. Monthly search counts were 146,689 for “Bychkov” and
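
    The validation step described above reduces to a Spearman rank correlation between two aligned search-volume series; a small sketch with placeholder numbers (not the study's data) follows.

```python
# Sketch of the validation step: Spearman rank correlation between aligned
# Google and Yandex search-volume series for the same term. Placeholder data.
from scipy.stats import spearmanr

google_weekly = [12, 18, 95, 60, 33, 21, 15, 11]   # hypothetical weekly volumes
yandex_weekly = [10, 22, 88, 71, 30, 25, 13, 12]

rho, p_value = spearmanr(google_weekly, yandex_weekly)
print(f"r_s = {rho:.2f}, P = {p_value:.3f}")
```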

  17. Is Reading Impairment Associated with Enhanced Holistic Processing in Comparative Visual Search?

    PubMed

    Wang, Jiahui; Schneps, Matthew H; Antonenko, Pavlo D; Chen, Chen; Pomplun, Marc

    2016-11-01

    This study explores the proposition that individuals with dyslexia develop enhanced peripheral vision to process visual-spatial information holistically. Participants included 18 individuals diagnosed with dyslexia and 18 who were not. The experiment used a comparative visual search design consisting of two blocks of 72 trials. Each trial presented two halves of the display, each comprising three kinds of shapes in three colours, to be compared side by side. Participants performed a conjunctive search to ascertain whether the two halves were identical. In the first block, participants were given no instruction regarding the visual-spatial processing strategy they were to employ. In the second block, participants were instructed to use a holistic processing strategy: to defocus their attention and perform the comparison by examining the whole screen at once. The results did not support the hypothesis associating dyslexia with talents for holistic visual processing. Using the holistic processing strategy, both groups scored lower in accuracy and reacted faster compared with the first block. Impaired readers consistently reacted more slowly and did not exhibit enhanced accuracy. Given the extant evidence of strengths for holistic visual processing in impaired readers, these findings are important because they suggest such strengths may be task dependent. Copyright © 2016 John Wiley & Sons, Ltd.

  18. Visual search in school-aged children with unilateral brain lesions.

    PubMed

    Netelenbos, J Bernard; Van Rooij, Louise

    2004-05-01

    In this preliminary study, visual search for targets within and beyond the initial field of view was investigated in seven school-aged children (five females, two males; mean age at testing 8 years 10 months, SD 1 year 3 months; range 6 to 10 years) with various acquired, postnatal, focal brain injuries (haematoma, haemorrhage, meningioma, neuroblastoma, and cerebral abscess) in anterior or posterior sites of the left or right hemisphere, and seven control children (matched for age and sex) were also studied. All participants attended mainstream primary schools. The children with lesions underwent surgery after diagnosis (mean age at diagnosis 5 years 4 months, SD 2 years 7 months). Group results indicated that for the overall scores on three psychometric tests of visuospatial and fine motor abilities (Southern California Figure Ground Perception Test, Visual Organization Test, and Visual-Motor Integration Test), no difference between the children with left and right lesions was present. However, children with lesions in the right hemisphere, and not in the left hemisphere, took significantly more time than the controls to locate visual targets presented within and beyond the field of view. Examination of individual data suggested that, in accordance with brain imaging research, right-sided anterior cerebral lesions sustained in early childhood might have an enduring detrimental effect on voluntary visual search performance during development. This persistent effect of early brain injury might imply that developmental plasticity of the brain does not apply to certain specific functions of particular areas of the right hemisphere.

  19. Visual search for emotional expressions: Effect of stimulus set on anger and happiness superiority.

    PubMed

    Savage, Ruth A; Becker, Stefanie I; Lipp, Ottmar V

    2016-01-01

    Prior reports of preferential detection of emotional expressions in visual search have yielded inconsistent results, even for face stimuli that avoid obvious expression-related perceptual confounds. The current study investigated inconsistent reports of anger and happiness superiority effects using face stimuli drawn from the same database. Experiment 1 excluded procedural differences as a potential factor, replicating a happiness superiority effect in a procedure that previously yielded an anger superiority effect. Experiments 2a and 2b confirmed that image colour or poser gender did not account for prior inconsistent findings. Experiments 3a and 3b identified stimulus set as the critical variable, revealing happiness or anger superiority effects for two partially overlapping sets of face stimuli. The current results highlight the critical role of stimulus selection for the observation of happiness or anger superiority effects in visual search even for face stimuli that avoid obvious expression related perceptual confounds and are drawn from a single database.

  20. The evaluation of display symbology - A chronometric study of visual search. [on cathode ray tubes

    NASA Technical Reports Server (NTRS)

    Remington, R.; Williams, D.

    1984-01-01

    Three single-target visual search tasks were used to evaluate a set of CRT symbols for a helicopter traffic display. The search tasks were representative of the kinds of information extraction required in practice, and reaction time was used to measure the efficiency with which symbols could be located and identified. The results show that familiar numeric symbols were responded to more quickly than graphic symbols. The addition of modifier symbols such as a nearby flashing dot or surrounding square had a greater disruptive effect on the graphic symbols than the alphanumeric characters. The results suggest that a symbol set is like a list that must be learned. Factors that affect the time to respond to items in a list, such as familiarity and visual discriminability, and the division of list items into categories, also affect the time to identify symbols.

  1. On the selection and evaluation of visual display symbology Factors influencing search and identification times

    NASA Technical Reports Server (NTRS)

    Remington, Roger; Williams, Douglas

    1986-01-01

    Three single-target visual search tasks were used to evaluate a set of cathode-ray tube (CRT) symbols for a helicopter situation display. The search tasks were representative of the information extraction required in practice, and reaction time was used to measure the efficiency with which symbols could be located and identified. Familiar numeric symbols were responded to more quickly than graphic symbols. The addition of modifier symbols, such as a nearby flashing dot or surrounding square, had a greater disruptive effect on the graphic symbols than did the numeric characters. The results suggest that a symbol set is, in some respects, like a list that must be learned. Factors that affect the time to identify items in a memory task, such as familiarity and visual discriminability, also affect the time to identify symbols. This analogy has broad implications for the design of symbol sets. An attempt was made to model information access with this class of display.

  2. Evaluation of a dichromatic color-appearance simulation by a visual search task

    NASA Astrophysics Data System (ADS)

    Sunaga, Shoji; Ogura, Tomomi; Seno, Takeharu

    2013-03-01

    We used a visual search task to investigate the validity of the dichromatic simulation model proposed by Brettel et al. Although the dichromatic simulation could qualitatively predict reaction times for color-defective observers, the reaction times of color-defective observers tended to be longer than those of the trichromatic observers in Experiment 1. In Experiment 2, we showed that a reduction of the excitation purity of the simulated colors provides a good prediction. Further, we propose an adaptive dichromatic simulation model based on the color differences between a simulated target color and simulated distractor colors, in order to obtain a better quantitative prediction of reaction times in the visual search task for color-defective observers.

  3. Compliance instead of flexibility? On age-related differences in cognitive control during visual search.

    PubMed

    Mertes, Christine; Wascher, Edmund; Schneider, Daniel

    2017-02-11

    The effect of healthy aging on cognitive control of irrelevant visual information was investigated by using event-related potentials. Participants performed a spatial cuing task where an irrelevant color cue that was either contingent (color search) or noncontingent (shape search) on the attentional set was presented before a target with different stimulus-onset asynchronies. In the contingent condition, attentional capture appeared independent of age and persisted over the stimulus-onset asynchronies but was markedly pronounced for elderly people. Accordingly, event-related potential analyses revealed that both older and younger adults initially selected the irrelevant cue when it was contingent on the attentional set and transferred spatial cue information into working memory. However, only younger adults revealed inhibitory mechanisms to compensate for attentional capture. It is proposed that this age-related lack of reactive inhibition leads to stickiness in visual processing whenever information is contingent on the attentional set, unveiling older adults' "Achilles' heel" in cognitive control.

  4. Visual Search in the Detection of Retinal Injury: A Feasibility Study

    DTIC Science & Technology

    2013-04-01

    Report AFRL-RH-FS-TR-2013-0019; authors: Thomas Kuyk (TASC, Inc.) and Lei Liu; work unit manager: Leon N. McLin, Jr., 711 HPW/RHDO.

  5. Improvement in Visual Search with Practice: Mapping Learning-Related Changes in Neurocognitive Stages of Processing

    PubMed Central

    Clark, Kait; Appelbaum, L. Gregory; van den Berg, Berry; Mitroff, Stephen R.

    2015-01-01

    Practice can improve performance on visual search tasks; the neural mechanisms underlying such improvements, however, are not clear. Response time typically shortens with practice, but which components of the stimulus–response processing chain facilitate this behavioral change? Improved search performance could result from enhancements in various cognitive processing stages, including (1) sensory processing, (2) attentional allocation, (3) target discrimination, (4) motor-response preparation, and/or (5) response execution. We measured event-related potentials (ERPs) as human participants completed a five-day visual-search protocol in which they reported the orientation of a color popout target within an array of ellipses. We assessed changes in behavioral performance and in ERP components associated with various stages of processing. After practice, response time decreased in all participants (while accuracy remained consistent), and electrophysiological measures revealed modulation of several ERP components. First, amplitudes of the early sensory-evoked N1 component at 150 ms increased bilaterally, indicating enhanced visual sensory processing of the array. Second, the negative-polarity posterior–contralateral component (N2pc, 170–250 ms) was earlier and larger, demonstrating enhanced attentional orienting. Third, the amplitude of the sustained posterior contralateral negativity component (SPCN, 300–400 ms) decreased, indicating facilitated target discrimination. Finally, faster motor-response preparation and execution were observed after practice, as indicated by latency changes in both the stimulus-locked and response-locked lateralized readiness potentials (LRPs). These electrophysiological results delineate the functional plasticity in key mechanisms underlying visual search with high temporal resolution and illustrate how practice influences various cognitive and neural processing stages leading to enhanced behavioral performance. PMID:25834059

  6. Gender Differences in Patterns of Searching the Web

    ERIC Educational Resources Information Center

    Roy, Marguerite; Chi, Michelene T. H.

    2003-01-01

    There has been a national call for increased use of computers and technology in schools. Currently, however, little is known about how students use and learn from these technologies. This study explores how eighth-grade students use the Web to search for, browse, and find information in response to a specific prompt (how mosquitoes find their…

  7. White matter hyperintensities are associated with visual search behavior independent of generalized slowing in aging

    PubMed Central

    Lockhart, Samuel N.; Roach, Alexandra E.; Luck, Steven J.; Geng, Joy; Beckett, Laurel; Carmichael, Owen; DeCarli, Charles

    2014-01-01

    A fundamental controversy is whether cognitive decline with advancing age can be entirely explained by decreased processing speed, or whether specific neural changes can elicit cognitive decline, independent of slowing. These hypotheses are anchored by studies of healthy older individuals where age is presumed the sole influence. Unfortunately, advancing age is also associated with asymptomatic brain white matter injury. We hypothesized that differences in white matter injury extent, manifest by MRI white matter hyperintensities (WMH), mediate differences in visual attentional control in healthy aging, beyond processing speed differences. We tested young and cognitively healthy older adults on search tasks indexing speed and attentional control. Increasing age was associated with generally slowed performance. WMH was also associated with slowed search times independent of processing speed differences. Consistent with evidence attributing reduced network connectivity to WMH, these results conclusively demonstrate that clinically silent white matter injury contributes to slower search performance indicative of compromised cognitive control, independent of generalized slowing of processing speed. PMID:24183716
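
    One plausible way to read "independent of generalized slowing" is a regression of search time on WMH burden with a processing-speed covariate included; the sketch below uses simulated data and ordinary least squares as assumptions, not the paper's exact analysis.

```python
# Sketch (assumed analysis, not the paper's exact model): does WMH burden predict
# search time after a processing-speed covariate is included in the regression?
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 60
processing_speed = rng.normal(0, 1, n)   # e.g., a simple speeded-task score
wmh_volume = rng.normal(0, 1, n)         # white matter hyperintensity burden
search_time = 600 + 40 * processing_speed + 25 * wmh_volume + rng.normal(0, 30, n)

# Fit search time on both predictors; the WMH coefficient tests the independent effect.
X = sm.add_constant(np.column_stack([processing_speed, wmh_volume]))
model = sm.OLS(search_time, X).fit()
print(model.summary())
```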

  8. The Development of Visual Search in Infants and Very Young Children.

    ERIC Educational Resources Information Center

    Gerhardstein, Peter; Rovee-Collier, Carolyn

    2002-01-01

    Trained 1- to 3-year-olds to touch a video screen displaying a unique target and appearing among varying numbers of distracters; correct responses triggered a sound and four animated objects on the screen. Found that children's reaction time patterns resembled those from adults in corresponding search tasks, suggesting that basic perceptual…

  9. Training shortens search times in children with visual impairment accompanied by nystagmus

    PubMed Central

    Huurneman, Bianca; Boonstra, F. Nienke

    2014-01-01

    Perceptual learning (PL) can improve near visual acuity (NVA) in 4–9 year old children with visual impairment (VI). However, the mechanisms underlying improved NVA are unknown. The present study compares feature search and oculomotor measures in 4–9 year old children with VI accompanied by nystagmus (VI+nys [n = 33]) and children with normal vision (NV [n = 29]). Children in the VI+nys group were divided into three training groups: an experimental PL group, a control PL group, and a magnifier group. They were seen before (baseline) and after 6 weeks of training. Children with NV were only seen at baseline. The feature search task entailed finding a target E among distractor E's (pointing right) with element spacing varied in four steps: 0.04°, 0.5°, 1°, and 2°. At baseline, children with VI+nys showed longer search times, shorter fixation durations, and larger saccade amplitudes than children with NV. After training, all training groups showed shorter search times. Only the experimental PL group showed prolonged fixation durations after training at 0.5° and 2° spacing (P = .033 and .021, respectively). Prolonged fixation duration was associated with reduced crowding and improved crowded NVA. One of the mechanisms underlying improved crowded NVA after PL in children with VI+nys seems to be prolonged fixation duration. PMID:25309473

  10. Visual search and emotion: how children with autism spectrum disorders scan emotional scenes.

    PubMed

    Maccari, Lisa; Pasini, Augusto; Caroli, Emanuela; Rosa, Caterina; Marotta, Andrea; Martella, Diana; Fuentes, Luis J; Casagrande, Maria

    2014-11-01

    This study assessed visual search abilities, tested through the flicker task, in children diagnosed with autism spectrum disorders (ASDs). Twenty-two children diagnosed with ASD and 22 matched typically developing (TD) children were told to detect changes in objects of central interest or objects of marginal interest (MI) embedded in either emotion-laden (positive or negative) or neutral real-world pictures. The results showed that emotion-laden pictures equally interfered with performance of both ASD and TD children, slowing down reaction times compared with neutral pictures. Children with ASD were faster than TD children, particularly in detecting changes in MI objects, the most difficult condition. However, their performance was less accurate than performance of TD children just when the pictures were negative. These findings suggest that children with ASD have better visual search abilities than TD children only when the search is particularly difficult and requires strong serial search strategies. The emotional-social impairment that is usually considered as a typical feature of ASD seems to be limited to processing of negative emotional information.

  11. Visual search for conjunctions of physical and numerical size shows that they are processed independently.

    PubMed

    Sobel, Kenith V; Puri, Amrita M; Faulkenberry, Thomas J; Dague, Taylor D

    2017-03-01

    The size congruity effect refers to the interaction between numerical magnitude and physical digit size in a symbolic comparison task. Though this effect is well established in the typical 2-item scenario, the mechanisms at the root of the interference remain unclear. Two competing explanations have emerged in the literature: an early interaction model and a late interaction model. In the present study, we used visual conjunction search to test competing predictions from these 2 models. Participants searched for targets that were defined by a conjunction of physical and numerical size. Some distractors shared the target's physical size, and the remaining distractors shared the target's numerical size. We held the total number of search items fixed and manipulated the ratio of the 2 distractor set sizes. The results from 3 experiments converge on the conclusion that numerical magnitude is not a guiding feature for visual search, and that physical and numerical magnitude are processed independently, which supports a late interaction model of the size congruity effect.

  12. Visual search and coordination changes in response to video and point-light demonstrations without KR.

    PubMed

    Horn, R R; Williams, A M; Scott, M A; Hodges, N J

    2005-07-01

    The authors examined the observational learning of 24 participants whom they constrained to use the model by removing intrinsic visual knowledge of results (KR). Matched participants assigned to video (VID), point-light (PL), and no-model (CON) groups performed a soccer-chipping task in which vision was occluded at ball contact. Pre- and posttests were interspersed with alternating periods of demonstration and acquisition. The authors assessed delayed retention 2-3 days later. In support of the visual perception perspective, the participants who observed the models showed immediate and enduring changes to more closely imitate the model's relative motion. While observing the demonstration, the PL group participants were more selective in their visual search than were the VID group participants but did not perform more accurately or learn more.

  13. More efficient rejection of happy than of angry face distractors in visual search.

    PubMed

    Horstmann, Gernot; Scharlau, Ingrid; Ansorge, Ulrich

    2006-12-01

    In the present study, we examined whether the detection advantage for negative-face targets in crowds of positive-face distractors over positive-face targets in crowds of negative faces can be explained by differentially efficient distractor rejection. Search Condition A demonstrated more efficient distractor rejection with negative-face targets in positive-face crowds than vice versa. Search Condition B showed that target identity alone is not sufficient to account for this effect, because there was no difference in processing efficiency for positive- and negative-face targets within neutral crowds. Search Condition C showed differentially efficient processing with neutral-face targets among positive- or negative-face distractors. These results were obtained with both a within-participants (Experiment 1) and a between-participants (Experiment 2) design. The pattern of results is consistent with the assumption that efficient rejection of positive (more homogeneous) distractors is an important determinant of performance in search among (face) distractors.

  14. Identical and Reverse Visual Pattern Recognition in Deaf Children.

    ERIC Educational Resources Information Center

    Bragman, Ruth; Hardy, Robert

    1979-01-01

    Describes a study that investigated the development of pattern recognition and pattern reversal in 20 deaf children aged six through eight and its relation to age of exposure to a gestural symbol system. (Author/DS)

  15. Optimization of boiling water reactor control rod patterns using linear search

    SciTech Connect

    Kiguchi, T.; Doi, K.; Fikuzaki, T.; Frogner, B.; Lin, C.; Long, A.B.

    1984-10-01

    A computer program for searching for the optimal control rod pattern has been developed. The program finds a control rod pattern whose resulting power distribution is optimal in the sense that it is closest to the desired power distribution while satisfying all operational constraints. The search procedure iterates over two steps: (1) sensitivity analyses of local power and thermal margins, using a three-dimensional reactor simulator, to build a simplified prediction model; and (2) a linear search for the optimal control rod pattern with that simplified model. The optimal control rod pattern is found along the direction in which the performance index gradient is steepest. The program has been verified to find the optimal control rod pattern in simulations using operational data from the Oyster Creek Reactor.
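
    A minimal sketch of a gradient-directed linear search of this kind is given below; the linearized power model, sensitivity matrix, performance index, and constraint handling are illustrative assumptions, not the actual program.

```python
import numpy as np

# Illustrative sketch only: a linearized model predicts the nodal power
# distribution as p(x) = p0 + S @ (x - x0), where x holds control rod positions
# and S is a sensitivity matrix assumed to come from a 3-D simulator run.
def linear_search_rod_pattern(x0, p0, S, p_desired, x_min, x_max, n_steps=50):
    """Line search along the steepest-descent direction of
    J(x) = ||p(x) - p_desired||^2, clipping to operational limits."""
    predict = lambda x: p0 + S @ (x - x0)
    J = lambda x: float(np.sum((predict(x) - p_desired) ** 2))

    # Gradient of J at x0 under the linear model: 2 * S^T (p(x0) - p_desired).
    grad = 2.0 * S.T @ (predict(x0) - p_desired)
    direction = -grad / (np.linalg.norm(grad) + 1e-12)

    best_x, best_J = x0, J(x0)
    for step in np.linspace(0.0, 1.0, n_steps):
        x = np.clip(x0 + step * direction, x_min, x_max)  # crude constraint handling
        if J(x) < best_J:
            best_x, best_J = x, J(x)
    return best_x, best_J

# Toy example with made-up numbers.
rng = np.random.default_rng(0)
S = rng.normal(size=(10, 4))       # sensitivities of 10 power nodes to 4 rod groups
x0, p0 = np.zeros(4), np.ones(10)  # current rod pattern and predicted power
p_desired = np.full(10, 0.95)      # desired power distribution
print(linear_search_rod_pattern(x0, p0, S, p_desired, -1.0, 1.0))
```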

  16. A Pattern Search Filter Method for Nonlinear Programming Without Derivatives

    DTIC Science & Technology

    2003-06-12

    Optimality conditions for a differentiable function can be stated in terms of the cone generated by the convex hull of a set S. Corollary 5.10 gives conditions under which the limit point of a refining sequence satisfies optimality conditions for problem (1.1). The algorithm retains the usual division into a global SEARCH step and a local POLL step, and it is shown to identify limit points at which these optimality conditions hold.

  17. Dissociation of visual and auditory pattern discrimination functions within the cat's temporal cortex.

    PubMed

    Cornwell, P; Nudo, R J; Straussfogel, D; Lomber, S G; Payne, B R

    1998-08-01

    In ablation-behavior experiments performed in adult cats, a double dissociation was demonstrated between ventral posterior suprasylvian cortex (vPS) and temporo-insular cortex (TI) lesions on complex visual and auditory tasks. Lesions of the vPS cortex resulted in deficits at visual pattern discrimination, but not at a difficult auditory discrimination. By contrast, TI lesions resulted in profound deficits at discriminating complex sounds, but not at discriminating visual patterns. This pattern of dissociation of deficits in cats parallels the dissociation of deficits after inferior temporal versus superior temporal lesions in monkeys and humans.

  18. A Comparison of the Visual Attention Patterns of People with Aphasia and Adults without Neurological Conditions for Camera-Engaged and Task-Engaged Visual Scenes

    ERIC Educational Resources Information Center

    Thiessen, Amber; Beukelman, David; Hux, Karen; Longenecker, Maria

    2016-01-01

    Purpose: The purpose of the study was to compare the visual attention patterns of adults with aphasia and adults without neurological conditions when viewing visual scenes with 2 types of engagement. Method: Eye-tracking technology was used to measure the visual attention patterns of 10 adults with aphasia and 10 adults without neurological…

  19. The impact of clinical indications on visual search behaviour in skeletal radiographs

    NASA Astrophysics Data System (ADS)

    Rutledge, A.; McEntee, M. F.; Rainford, L.; O'Grady, M.; McCarthy, K.; Butler, M. L.

    2011-03-01

    The hazards associated with ionizing radiation have been documented in the literature and therefore justifying the need for X-ray examinations has come to the forefront of the radiation safety debate in recent years [1]. International legislation states that the referrer is responsible for the provision of sufficient clinical information to enable the justification of the medical exposure. Clinical indications are a set of systematically developed statements to assist in accurate diagnosis and appropriate patient management [2]. In this study, the impact of clinical indications upon fracture detection for musculoskeletal radiographs is analyzed. A group of radiographers (n=6) interpreted musculoskeletal radiology cases (n=33) with and without clinical indications. Radiographic images were selected to represent common trauma presentations of extremities and pelvis. Detection of the fracture was measured using ROC methodology. An eyetracking device was employed to record radiographers' search behavior by analysing distinct fixation points and search patterns, resulting in a greater level of insight and understanding into the influence of clinical indications on observers' interpretation of radiographs. The influence of clinical information on fracture detection and search patterns was assessed. Findings of this study demonstrate that the inclusion of clinical indications influences search behavior. Differences in eye tracking parameters were also noted. This study also attempts to uncover fundamental observer search strategies and behavior with and without clinical indications, thus providing a greater understanding and insight into the image interpretation process. Results of this study suggest that the availability of adequate clinical data should be emphasized for interpreting trauma radiographs.

  20. Identification of the ideal clutter metric to predict time dependence of human visual search

    NASA Astrophysics Data System (ADS)

    Cartier, Joan F.; Hsu, David H.

    1995-05-01

    The Army Night Vision and Electronic Sensors Directorate (NVESD) has recently performed a human perception experiment in which eye tracker measurements were made on trained military observers searching for targets in infrared images. These data offered an important opportunity to evaluate a new technique for search modeling. Following the approach taken by Jeff Nicoll, this model treats search as a random walk in which observers are in one of two states until they quit: they are either examining a point of interest or wandering around looking for one. When wandering they skip rapidly from point to point. When examining they move more slowly, reflecting the fact that target discrimination requires additional thought processes. In this paper we simulate the random walk, using a clutter metric to assign relative attractiveness to points of interest within the image which are competing for the observer's attention. The NVESD data indicate that a number of standard clutter metrics are good estimators of the apportionment of observers' time between wandering and examining. Conversely, the apportionment of observer time spent wandering and examining could be used to reverse-engineer the ideal clutter metric that would best describe the behavior of the group of observers. It may be possible to use this technique to design the optimal clutter metric to predict performance of visual search.
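
    A toy simulation of the two-state model is sketched below; the attractiveness values, dwell times, and examine probability are assumptions for illustration, not parameters from the NVESD experiment.

```python
import random

def simulate_search(attractiveness, target_index, p_examine=0.4,
                    wander_time=0.3, examine_time=1.2, max_time=60.0, seed=0):
    """Two-state random-walk search: 'wander' between points of interest chosen
    with probability proportional to attractiveness; occasionally stop to
    'examine' one, and quit when the target is examined or time runs out."""
    rng = random.Random(seed)
    total = sum(attractiveness)
    weights = [a / total for a in attractiveness]
    t = wandering_t = examining_t = 0.0
    while t < max_time:
        poi = rng.choices(range(len(weights)), weights=weights, k=1)[0]
        t += wander_time
        wandering_t += wander_time
        if rng.random() < p_examine:       # stop to examine this point of interest
            t += examine_time
            examining_t += examine_time
            if poi == target_index:        # target found while examining
                return t, wandering_t, examining_t, True
    return t, wandering_t, examining_t, False

# Toy scene: five points of interest; the target (index 4) is the least salient,
# so cluttered competitors soak up much of the observer's time.
t, tw, te, found = simulate_search([5.0, 3.0, 2.0, 2.0, 1.0], target_index=4)
print(f"found={found}, total={t:.1f}s, wandering={tw:.1f}s, examining={te:.1f}s")
```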

  1. Ideal and visual-search observers: accounting for anatomical noise in search tasks with planar nuclear imaging

    NASA Astrophysics Data System (ADS)

    Sen, Anando; Gifford, Howard C.

    2015-03-01

    Model observers have frequently been used for hardware optimization of imaging systems. For model observers to reliably mimic human performance it is important to account for the sources of variation in the images. Detection-localization tasks are complicated by anatomical noise present in the images. Several scanning observers have been proposed for such tasks. The most popular of these, the channelized Hotelling observer (CHO), incorporates anatomical variations through covariance matrices. We propose the visual-search (VS) observer as an alternative to the CHO to account for anatomical noise. The VS observer is a two-step process that first identifies suspicious tumor candidates and then performs a detailed analysis on them. The identification of suspicious candidates (search) implicitly accounts for anatomical noise. In this study we present a comparison of these two observers with human observers. The application considered is collimator optimization for planar nuclear imaging. Both observers show similar trends in performance, with the VS observer slightly closer to human performance.
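
    A minimal sketch of the channelized Hotelling observer computation mentioned above is given below, using simulated image vectors; the random channels, toy signal, and image statistics are placeholders (practical studies typically use, e.g., Gabor or difference-of-Gaussian channels and realistic backgrounds).

```python
import numpy as np

rng = np.random.default_rng(1)
n_pix, n_chan, n_train = 64 * 64, 10, 200

U = rng.normal(size=(n_pix, n_chan))              # channel matrix (placeholder)
absent = rng.normal(size=(n_train, n_pix))        # signal-absent training images
signal = np.zeros(n_pix); signal[:50] = 0.5       # toy signal profile
present = rng.normal(size=(n_train, n_pix)) + signal

v_absent, v_present = absent @ U, present @ U     # channel outputs
K = 0.5 * (np.cov(v_absent.T) + np.cov(v_present.T))           # pooled covariance
w = np.linalg.solve(K, v_present.mean(0) - v_absent.mean(0))   # Hotelling template

def cho_statistic(image):
    """Scalar rating; larger values favour 'signal present'."""
    return float((image @ U) @ w)

print(cho_statistic(rng.normal(size=n_pix)))           # a signal-absent test image
print(cho_statistic(rng.normal(size=n_pix) + signal))  # a signal-present test image
```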

  2. Visual-textual joint relevance learning for tag-based social image search.

    PubMed

    Gao, Yue; Wang, Meng; Zha, Zheng-Jun; Shen, Jialie; Li, Xuelong; Wu, Xindong

    2013-01-01

    Due to the popularity of social media websites, extensive research efforts have been dedicated to tag-based social image search. Both visual information and tags have been investigated in the research field. However, most existing methods use tags and visual characteristics either separately or sequentially in order to estimate the relevance of images. In this paper, we propose an approach that simultaneously utilizes both visual and textual information to estimate the relevance of user tagged images. The relevance estimation is determined with a hypergraph learning approach. In this method, a social image hypergraph is constructed, where vertices represent images and hyperedges represent visual or textual terms. Learning is achieved with the use of a set of pseudo-positive images, where the weights of hyperedges are updated throughout the learning process. In this way, the impact of different tags and visual words can be automatically modulated. Comparative results of the experiments conducted on a dataset including 370+ images are presented, which demonstrate the effectiveness of the proposed approach.
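
    A simplified sketch of hypergraph-based relevance propagation in this spirit is given below, following the standard hypergraph learning formulation; the incidence matrix, hyperedge weights, and pseudo-positive labels are toy values, and the joint updating of hyperedge weights during learning is omitted.

```python
import numpy as np

# Vertices are images; hyperedges group images that share a tag or a visual word.
H = np.array([[1, 0, 1],      # image 0 belongs to hyperedges 0 and 2
              [1, 1, 0],
              [0, 1, 1],
              [0, 1, 0],
              [1, 0, 0]], dtype=float)     # 5 images x 3 hyperedges
w = np.ones(3)                             # hyperedge weights (learned in the paper)
y = np.array([1.0, 1.0, 0.0, 0.0, 0.0])    # pseudo-positive images as initial labels

Dv = np.diag(H @ w)                        # vertex degrees
De = np.diag(H.sum(axis=0))                # hyperedge degrees
Dv_isqrt = np.diag(1.0 / np.sqrt(np.diag(Dv)))
Theta = Dv_isqrt @ H @ np.diag(w) @ np.linalg.inv(De) @ H.T @ Dv_isqrt

alpha = 0.9                                # smoothness vs. fit to the pseudo-labels
f = (1 - alpha) * np.linalg.solve(np.eye(5) - alpha * Theta, y)
print(np.argsort(-f))                      # images ranked by estimated relevance
```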

  3. Searching for patterns in remote sensing image databases using neural networks

    NASA Technical Reports Server (NTRS)

    Paola, Justin D.; Schowengerdt, Robert A.

    1995-01-01

    We have investigated a method, based on a successful neural network multispectral image classification system, of searching for single patterns in remote sensing databases. While defining the pattern to search for and the feature to be used for that search (spectral, spatial, temporal, etc.) is challenging, a more difficult task is selecting competing patterns to train against the desired pattern. Schemes for competing pattern selection, including random selection and human interpreted selection, are discussed in the context of an example detection of dense urban areas in Landsat Thematic Mapper imagery. When applying the search to multiple images, a simple normalization method can alleviate the problem of inconsistent image calibration. Another potential problem, that of highly compressed data, was found to have a minimal effect on the ability to detect the desired pattern. The neural network algorithm has been implemented using the PVM (Parallel Virtual Machine) library and nearly-optimal speedups have been obtained that help alleviate the long process of searching through imagery.
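
    The core idea of training a detector for one desired pattern against competing patterns can be sketched as follows; the synthetic spectral values, the random choice of competing pixels, and the simple logistic unit are stand-ins for the multispectral neural network described in the record.

```python
import numpy as np

rng = np.random.default_rng(2)
n_bands = 6
desired = rng.normal(loc=0.7, scale=0.1, size=(300, n_bands))    # e.g., dense urban
competing = rng.normal(loc=0.4, scale=0.2, size=(300, n_bands))  # random background

X = np.vstack([desired, competing])
y = np.concatenate([np.ones(300), np.zeros(300)])
mu, sd = X.mean(0), X.std(0)               # simple per-band normalization
X = (X - mu) / sd

w, b, lr = np.zeros(n_bands), 0.0, 0.1
for _ in range(500):                       # plain gradient descent on logistic loss
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= lr * X.T @ (p - y) / len(y)
    b -= lr * np.mean(p - y)

# Score a new pixel: probability that it shows the desired pattern.
new_pixel = (rng.normal(loc=0.7, scale=0.1, size=n_bands) - mu) / sd
print(1.0 / (1.0 + np.exp(-(new_pixel @ w + b))))
```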

  4. Paying Attention: Being a Naturalist and Searching for Patterns.

    ERIC Educational Resources Information Center

    Weisberg, Saul

    1996-01-01

    Discusses the importance of recognizing patterns in nature to help understand the interactions of living and non-living things. Cautions the student not to lose sight of the details when studying the big picture. Encourages development of the ability to identify local species. Suggests two activities to strengthen observation skills and to help in…

  5. User-assisted visual search and tracking across distributed multi-camera networks

    NASA Astrophysics Data System (ADS)

    Raja, Yogesh; Gong, Shaogang; Xiang, Tao

    2011-11-01

    Human CCTV operators face several challenges in their task which can lead to missed events, people or associations, including: (a) data overload in large distributed multi-camera environments; (b) short attention span; (c) limited knowledge of what to look for; and (d) lack of access to non-visual contextual intelligence to aid search. Developing a system to aid human operators and alleviate such burdens requires addressing the problem of automatic re-identification of people across disjoint camera views, a matching task made difficult by factors such as lighting, viewpoint and pose changes and for which absolute scoring approaches are not best suited. Accordingly, we describe a distributed multi-camera tracking (MCT) system to visually aid human operators in associating people and objects effectively over multiple disjoint camera views in a large public space. The system comprises three key novel components: (1) relative measures of ranking rather than absolute scoring to learn the best features for matching; (2) multi-camera behaviour profiling as higher-level knowledge to reduce the search space and increase the chance of finding correct matches; and (3) human-assisted data mining to interactively guide search and in the process recover missing detections and discover previously unknown associations. We provide an extensive evaluation of the greater effectiveness of the system as compared to existing approaches on industry-standard i-LIDS multi-camera data.

  6. The effects of visual realism on search tasks in mixed reality simulation.

    PubMed

    Lee, Cha; Rincon, Gustavo A; Meyer, Greg; Höllerer, Tobias; Bowman, Doug A

    2013-04-01

    In this paper, we investigate the validity of Mixed Reality (MR) Simulation by conducting an experiment studying the effects of the visual realism of the simulated environment on various search tasks in Augmented Reality (AR). MR Simulation is a practical approach to conducting controlled and repeatable user experiments in MR, including AR. This approach uses a high-fidelity Virtual Reality (VR) display system to simulate a wide range of equal or lower fidelity displays from the MR continuum, for the express purpose of conducting user experiments. For the experiment, we created three virtual models of a real-world location, each with a different perceived level of visual realism. We designed and executed an AR experiment using the real-world location and repeated the experiment within VR using the three virtual models we created. The experiment looked into how fast users could search for both physical and virtual information that was present in the scene. Our experiment demonstrates the usefulness of MR Simulation and provides early evidence for the validity of MR Simulation with respect to AR search tasks performed in immersive VR.

  7. Neurocognitive Pattern Analysis of Auditory and Visual Information.

    DTIC Science & Technology

    1986-02-15

    Keywords: time-varying processes; Wigner distribution; statistical pattern recognition; spatial deconvolution; current source density; source localization; event-related waveforms. [Abstract not available; the record's table of contents lists a section on Neurocognitive Pattern (NCP) Analysis (overview and current procedures) and figures showing isopotential maps from the common average reference and from current source density.]

  8. RF antenna-pattern visual aids for field use

    NASA Technical Reports Server (NTRS)

    Williams, J. H.

    1973-01-01

    A series of polar-coordinate plots is made of the antenna pattern, one sheet for each vertical plane at a given azimuth position. After all polar plots are drawn, they are labeled according to their azimuth positions, and the transparencies are then stiffened with regular wire, cardboard, or molded plastic.

  9. Use Patterns of Visual Cues in Computer-Mediated Communication

    ERIC Educational Resources Information Center

    Bolliger, Doris U.

    2009-01-01

    Communication in the virtual environment can be challenging for participants because it lacks physical presence and nonverbal elements. Participants may have difficulties expressing their intentions and emotions in a primarily text-based course. Therefore, the use of visual communication elements such as pictographic and typographic marks can be…

  10. Fixation and saliency during search of natural scenes: the case of visual agnosia.

    PubMed

    Foulsham, Tom; Barton, Jason J S; Kingstone, Alan; Dewhurst, Richard; Underwood, Geoffrey

    2009-07-01

    Models of eye movement control in natural scenes often distinguish between stimulus-driven processes (which guide the eyes to visually salient regions) and those based on task and object knowledge (which depend on expectations or identification of objects and scene gist). In the present investigation, the eye movements of a patient with visual agnosia were recorded while she searched for objects within photographs of natural scenes and compared to those made by students and age-matched controls. Agnosia is assumed to disrupt the top-down knowledge available in this task, and so may increase the reliance on bottom-up cues. The patient's deficit in object recognition was seen in poor search performance and inefficient scanning. The low-level saliency of target objects had an effect on responses in visual agnosia, and the most salient region in the scene was more likely to be fixated by the patient than by controls. An analysis of model-predicted saliency at fixation locations indicated a closer match between fixations and low-level saliency in agnosia than in controls. These findings are discussed in relation to saliency-map models and the balance between high and low-level factors in eye guidance.

  11. Exploration on Building of Visualization Platform to Innovate Business Operation Pattern of Supply Chain Finance

    NASA Astrophysics Data System (ADS)

    He, Xiangjun; Tang, Lingyun

    Supply Chain Finance, as a new financing pattern, has attracted wide attention from scholars at home and abroad since its emergence. This paper describes the author's understanding of supply chain finance, classifies its business patterns in China from different perspectives, analyzes the existing problems and deficiencies of those patterns, and finally puts forward the notion of building a visualization platform to innovate the business operation patterns and risk control modes of domestic supply chain finance.

  12. Visual Signals Vertically Extend the Perceptual Span in Searching a Text: A Gaze-Contingent Window Study

    ERIC Educational Resources Information Center

    Cauchard, Fabrice; Eyrolle, Helene; Cellier, Jean-Marie; Hyona, Jukka

    2010-01-01

    This study investigated the effect of visual signals on perceptual span in text search and the kinds of signal information that facilitate the search. Participants were asked to find answers to specific questions in chapter-length texts in either a normal or a window condition, where the text disappeared beyond a vertical 3 degrees gaze-contingent…

  13. Decoding Visual Location From Neural Patterns in the Auditory Cortex of the Congenitally Deaf.

    PubMed

    Almeida, Jorge; He, Dongjun; Chen, Quanjing; Mahon, Bradford Z; Zhang, Fan; Gonçalves, Óscar F; Fang, Fang; Bi, Yanchao

    2015-11-01

    Sensory cortices of individuals who are congenitally deprived of a sense can exhibit considerable plasticity and be recruited to process information from the senses that remain intact. Here, we explored whether the auditory cortex of congenitally deaf individuals represents visual field location of a stimulus, a dimension that is represented in early visual areas. We used functional MRI to measure neural activity in auditory and visual cortices of congenitally deaf and hearing humans while they observed stimuli typically used for mapping visual field preferences in visual cortex. We found that the location of a visual stimulus can be successfully decoded from the patterns of neural activity in auditory cortex of congenitally deaf but not hearing individuals. This is particularly true for locations within the horizontal plane and within peripheral vision. These data show that the representations stored within neuroplastically changed auditory cortex can align with dimensions that are typically represented in visual cortex.

  14. Decoding Visual Location From Neural Patterns in the Auditory Cortex of the Congenitally Deaf

    PubMed Central

    Almeida, Jorge; He, Dongjun; Chen, Quanjing; Mahon, Bradford Z.; Zhang, Fan; Gonçalves, Óscar F.; Fang, Fang; Bi, Yanchao

    2016-01-01

    Sensory cortices of individuals who are congenitally deprived of a sense can exhibit considerable plasticity and be recruited to process information from the senses that remain intact. Here, we explored whether the auditory cortex of congenitally deaf individuals represents visual field location of a stimulus—a dimension that is represented in early visual areas. We used functional MRI to measure neural activity in auditory and visual cortices of congenitally deaf and hearing humans while they observed stimuli typically used for mapping visual field preferences in visual cortex. We found that the location of a visual stimulus can be successfully decoded from the patterns of neural activity in auditory cortex of congenitally deaf but not hearing individuals. This is particularly true for locations within the horizontal plane and within peripheral vision. These data show that the representations stored within neuroplastically changed auditory cortex can align with dimensions that are typically represented in visual cortex. PMID:26423461

  15. Disruptive coloration in cuttlefish: a visual perception mechanism that regulates ontogenetic adjustment of skin patterning.

    PubMed

    Barbosa, Alexandra; Mäthger, Lydia M; Chubb, Charles; Florio, Christopher; Chiao, Chuan-Chin; Hanlon, Roger T

    2007-04-01

    Among the changeable camouflage patterns of cuttlefish, disruptive patterning is shown in response to certain features of light objects in the visual background. However, whether animals show disruptive patterns is dependent not only on object size but also on their body size. Here, we tested whether cuttlefish (Sepia officinalis) are able to match their disruptive body patterning with increasing size of background objects as they grow from hatchling to adult size (0.7 to 19.6 cm mantle length; factor of 28). Specifically, do cuttlefish have a single 'visual sampling rule' that scales accurately during ontogeny? For each of seven size classes of cuttlefish, we created black and white checkerboards whose check sizes corresponded to 4, 12, 40, 120, 400 and 1200% of the area of the cuttlefish's White square, which is a neurophysiologically controlled component of the skin. Disruptive body patterns were evoked when, regardless of animal size, the check size measured either 40 or 120% of the area of the cuttlefish's White square, thus demonstrating a remarkable ontogenetic conformity to a single visual sampling rule. Cuttlefish have no known visual feedback loop with which to adjust their skin patterns. Since the area of a cuttlefish's White square skin component is a function of body size, our results indicate that cuttlefish are solving a visual scaling problem of camouflage presumably without visual confirmation of the size of their own skin component.

  16. Beam angle optimization for intensity-modulated radiation therapy using a guided pattern search method

    NASA Astrophysics Data System (ADS)

    Rocha, Humberto; Dias, Joana M.; Ferreira, Brígida C.; Lopes, Maria C.

    2013-05-01

    Generally, the inverse planning of radiation therapy consists mainly of the fluence optimization. The beam angle optimization (BAO) in intensity-modulated radiation therapy (IMRT) consists of selecting appropriate radiation incidence directions and may influence the quality of the IMRT plans, both enhancing organ sparing and improving tumor coverage. However, in clinical practice, most of the time, beam directions continue to be manually selected by the treatment planner without objective and rigorous criteria. The goal of this paper is to introduce a novel approach that uses beam’s-eye-view dose ray tracing metrics within a pattern search method framework in the optimization of the highly non-convex BAO problem. Pattern search methods are derivative-free optimization methods that require few function evaluations to progress and converge and have the ability to better avoid local entrapment. The pattern search method framework is composed of a search step and a poll step at each iteration. The poll step performs a local search in a mesh neighborhood and ensures the convergence to a local minimizer or stationary point. The search step provides the flexibility for a global search since it allows searches away from the neighborhood of the current iterate. Beam’s-eye-view dose metrics assign a score to each radiation beam direction and can be used within the pattern search framework, furnishing a priori knowledge of the problem so that directions with larger dosimetric scores are tested first. A set of clinical cases of head-and-neck tumors treated at the Portuguese Institute of Oncology of Coimbra is used to discuss the potential of this approach in the optimization of the BAO problem.
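
    A bare-bones pattern search in this spirit is sketched below: the poll step evaluates trial points on the current mesh along a positive spanning set of directions, and a user-supplied prior score (standing in for the beam's-eye-view dose metrics) determines which directions are tried first. The objective, the scoring rule, and all parameters are illustrative assumptions, not the paper's BAO formulation.

```python
import numpy as np

def pattern_search(f, x0, score=None, mesh=1.0, mesh_min=1e-3, max_iter=200):
    """Derivative-free pattern search: poll coordinate directions on the current
    mesh, trying higher-scoring directions first; halve the mesh when no trial
    point improves the objective."""
    x, fx = np.asarray(x0, dtype=float), f(x0)
    dirs = np.vstack([np.eye(len(x)), -np.eye(len(x))])   # positive spanning set
    for _ in range(max_iter):
        order = range(len(dirs))
        if score is not None:                             # a priori ranking of trials
            order = sorted(order, key=lambda i: -score(x + mesh * dirs[i]))
        improved = False
        for i in order:                                   # poll step
            trial = x + mesh * dirs[i]
            f_trial = f(trial)
            if f_trial < fx:
                x, fx, improved = trial, f_trial, True
                break
        if not improved:
            mesh *= 0.5                                   # refine the mesh
            if mesh < mesh_min:
                break
    return x, fx

# Toy non-convex objective standing in for the treatment-plan quality measure.
f = lambda v: np.sin(3 * v[0]) + (v[0] - 1.0) ** 2 + (v[1] + 0.5) ** 2
print(pattern_search(f, [2.0, 2.0]))
```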

  17. Basic visual function and cortical thickness patterns in posterior cortical atrophy.

    PubMed

    Lehmann, Manja; Barnes, Josephine; Ridgway, Gerard R; Wattam-Bell, John; Warrington, Elizabeth K; Fox, Nick C; Crutch, Sebastian J

    2011-09-01

    Posterior cortical atrophy (PCA) is characterized by a progressive decline in higher-visual object and space processing, but the extent to which these deficits are underpinned by basic visual impairments is unknown. This study aimed to assess basic and higher-order visual deficits in 21 PCA patients. Basic visual skills including form detection and discrimination, color discrimination, motion coherence, and point localization were measured, and associations and dissociations between specific basic visual functions and measures of higher-order object and space perception were identified. All participants showed impairment in at least one aspect of basic visual processing. However, a number of dissociations between basic visual skills indicated a heterogeneous pattern of visual impairment among the PCA patients. Furthermore, basic visual impairments were associated with particular higher-order object and space perception deficits, but not with nonvisual parietal tasks, suggesting the specific involvement of visual networks in PCA. Cortical thickness analysis revealed trends toward lower cortical thickness in occipitotemporal (ventral) and occipitoparietal (dorsal) regions in patients with visuoperceptual and visuospatial deficits, respectively. However, there was also considerable overlap in their patterns of cortical thinning. These findings suggest that different presentations of PCA represent points in a continuum of phenotypical variation.

  18. Visual cluster analysis and pattern recognition template and methods

    SciTech Connect

    Osbourn, G.C.; Martinez, R.F.

    1993-12-31

    This invention comprises a method of clustering that uses a novel template to define a region of influence. Using neighboring approximation methods, computation times can be significantly reduced. The template and method are applicable to, and improve, pattern recognition techniques.

  19. Visual cluster analysis and pattern recognition template and methods

    DOEpatents

    Osbourn, G.C.; Martinez, R.F.

    1999-05-04

    A method of clustering using a novel template to define a region of influence is disclosed. Using neighboring approximation methods, computation times can be significantly reduced. The template and method are applicable and improve pattern recognition techniques. 30 figs.

  20. Visual cluster analysis and pattern recognition template and methods

    DOEpatents

    Osbourn, Gordon Cecil; Martinez, Rubel Francisco

    1999-01-01

    A method of clustering using a novel template to define a region of influence. Using neighboring approximation methods, computation times can be significantly reduced. The template and method are applicable and improve pattern recognition techniques.

  1. Improved antenna pattern recorder provides visual display of RF power

    NASA Technical Reports Server (NTRS)

    Lipin, R., Jr.

    1970-01-01

    Antenna pattern recording system has a discretionary signal level monitor which senses a specified minimum level occurring between sampling intervals. This enables RF power and percent coverage to be calculated more accurately.

  2. Gaze in Visual Search Is Guided More Efficiently by Positive Cues than by Negative Cues

    PubMed Central

    Kohlbecher, Stefan; Einhäuser, Wolfgang; Schneider, Erich

    2015-01-01

    Visual search can be accelerated when properties of the target are known. Such knowledge allows the searcher to direct attention to items sharing these properties. Recent work indicates that information about properties of non-targets (i.e., negative cues) can also guide search. In the present study, we examine whether negative cues lead to different search behavior compared to positive cues. We asked observers to search for a target defined by a certain shape singleton (broken line among solid lines). Each line was embedded in a colored disk. In “positive cue” blocks, participants were informed about possible colors of the target item. In “negative cue” blocks, the participants were informed about colors that could not contain the target. Search displays were designed such that with both the positive and negative cues, the same number of items could potentially contain the broken line (“relevant items”). Thus, both cues were equally informative. We measured response times and eye movements. Participants exhibited longer response times when provided with negative cues compared to positive cues. Although negative cues did guide the eyes to relevant items, there were marked differences in eye movements. Negative cues resulted in smaller proportions of fixations on relevant items, longer duration of fixations and in higher rates of fixations per item as compared to positive cues. The effectiveness of both cue types, as measured by fixations on relevant items, increased over the course of each search. In sum, a negative color cue can guide attention to relevant items, but it is less efficient than a positive cue of the same informational value. PMID:26717307

  3. Hypothesis Support Mechanism for Mid-Level Visual Pattern Recognition

    NASA Technical Reports Server (NTRS)

    Amador, Jose J (Inventor)

    2007-01-01

    A method of mid-level pattern recognition provides for a pose invariant Hough Transform by parametrizing pairs of points in a pattern with respect to at least two reference points, thereby providing a parameter table that is scale- or rotation-invariant. A corresponding inverse transform may be applied to test hypothesized matches in an image and a distance transform utilized to quantify the level of match.
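
    One plausible reading of the parameter-table idea (not necessarily the patented formulation) is sketched below: each pair of pattern points is encoded relative to two reference points by a distance ratio and an angle difference, quantities unchanged by translation, rotation, and uniform scaling, and hypothesis support is the fraction of a test pattern's entries found in the model's table.

```python
import math
from collections import defaultdict
from itertools import combinations

def invariants(points):
    """Scale- and rotation-invariant entries for every pair of points, measured
    relative to a reference pair (here, assumed to be the two most distant points)."""
    ref_a, ref_b = max(combinations(points, 2),
                       key=lambda pair: math.dist(pair[0], pair[1]))
    base = math.dist(ref_a, ref_b)
    base_ang = math.atan2(ref_b[1] - ref_a[1], ref_b[0] - ref_a[0])
    feats = []
    for p, q in combinations(points, 2):
        d = math.dist(p, q) / base                             # scale-invariant
        ang = math.atan2(q[1] - p[1], q[0] - p[0]) - base_ang  # rotation-invariant
        feats.append((round(d, 2), round(math.cos(ang), 2)))   # quantized entry
    return feats

def build_table(model_points):
    table = defaultdict(int)
    for feat in invariants(model_points):
        table[feat] += 1
    return table

def hypothesis_support(table, test_points):
    """Fraction of the test pattern's invariants that hit the parameter table."""
    feats = invariants(test_points)
    return sum(1 for feat in feats if feat in table) / len(feats)

model = [(0, 0), (2, 0), (1, 2), (3, 1)]
transformed = [(-2 * y, 2 * x) for x, y in model]             # rotate 90 degrees, scale x2
print(hypothesis_support(build_table(model), transformed))    # close to 1.0
```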

  4. Clarifying the role of pattern separation in schizophrenia: the role of recognition and visual discrimination deficits.

    PubMed

    Martinelli, Cristina; Shergill, Sukhwinder S

    2015-08-01

    Patients with schizophrenia show marked memory deficits that have a negative impact on their functioning and life quality. Recent models suggest that such deficits might be attributable to defective pattern separation (PS), a hippocampal-based computation involved in the differentiation of overlapping stimuli and their mnemonic representations. One previous study on the topic concluded in favour of pattern separation impairments in the illness. However, this study did not clarify whether more elementary recognition and/or visual discrimination deficits could explain observed group differences. To address this limitation we investigated pattern separation in 22 schizophrenic patients and 24 healthy controls with the use of a task requiring individuals to classify stimuli as repetitions, novel or similar compared to a previous familiarisation phase. In addition, we employed a visual discrimination task involving perceptual similarity judgments on the same images. Results revealed impaired performance in the patient group, both on a baseline measure of pattern separation and on an index of pattern separation rigidity. However, further analyses demonstrated that such differences could be fully explained by recognition and visual discrimination deficits. Our findings suggest that pattern separation in schizophrenia is predicated on earlier recognition and visual discrimination problems. Furthermore, we demonstrate that future studies on pattern separation should include appropriate measures of recognition and visual discrimination performance for the correct interpretation of their findings.

  5. Effects of set-size and lateral masking in visual search.

    PubMed

    Põder, Endel

    2004-01-01

    In the present research, the roles of lateral masking and central processing limitations in visual search were studied. Two search conditions were used: (1) target differed from distractors by presence/absence of a simple feature; (2) target differed by relative position of the same components only. The number of displayed stimuli (set-size) and the distance between neighbouring stimuli were varied as independently as possible in order to measure the effect of both. The effect of distance between stimuli (lateral masking) was found to be similar in both conditions. The effect of set-size was much larger for relative position stimuli. The results support the view that perception of relative position of stimulus components is limited mainly by the capacity of central processing.

  6. Low target prevalence is a stubborn source of errors in visual search tasks

    PubMed Central

    Wolfe, Jeremy M.; Horowitz, Todd S.; Van Wert, Michael J.; Kenner, Naomi M.; Place, Skyler S.; Kibbi, Nour

    2009-01-01

    In visual search tasks, observers look for targets in displays containing distractors. Likelihood that targets will be missed varies with target prevalence, the frequency with which targets are presented across trials. Miss error rates are much higher at low target prevalence (1–2%) than at high prevalence (50%). Unfortunately, low prevalence is characteristic of important search tasks like airport security and medical screening where miss errors are dangerous. A series of experiments show this prevalence effect is very robust. In signal detection terms, the prevalence effect can be explained as a criterion shift and not a change in sensitivity. Several efforts to induce observers to adopt a better criterion fail. However, a regime of brief retraining periods with high prevalence and full feedback allows observers to hold a good criterion during periods of low prevalence with no feedback. PMID:17999575
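
    The criterion-shift account can be illustrated with a standard equal-variance signal-detection sketch: holding d' fixed and letting the criterion move to the value that maximizes expected accuracy at each prevalence reproduces a large rise in miss rate at low prevalence. The d' value and the accuracy-maximizing criterion rule are illustrative assumptions, not parameters estimated in the study.

```python
import math

def phi(x):                                    # standard normal CDF
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def rates(d_prime, criterion):
    hit = 1.0 - phi(criterion - d_prime)       # P(respond "target" | target present)
    false_alarm = 1.0 - phi(criterion)         # P(respond "target" | target absent)
    return hit, false_alarm

d_prime = 2.0                                  # sensitivity held constant
for prevalence in (0.50, 0.10, 0.02):
    # Criterion that maximizes proportion correct at this prevalence.
    criterion = d_prime / 2.0 + math.log((1.0 - prevalence) / prevalence) / d_prime
    hit, fa = rates(d_prime, criterion)
    print(f"prevalence={prevalence:.2f}  criterion={criterion:+.2f}  "
          f"miss rate={1.0 - hit:.2f}  false alarms={fa:.3f}")
```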

  7. Effects of search efficiency on surround suppression during visual selection in frontal eye field.

    PubMed

    Schall, Jeffrey D; Sato, Takashi R; Thompson, Kirk G; Vaughn, Amanda A; Juan, Chi-Hung

    2004-06-01

    Previous research has shown that visually responsive neurons in the frontal eye field of macaque monkeys select the target for a saccade during efficient, pop-out visual search through suppression of the representation of the nontarget distractors. For a fraction of these neurons, the magnitude of this distractor suppression varied with the proximity of the target to the receptive field, exhibiting more suppression of the distractor representation when the target was nearby than when the target was distant. The purpose of this study was to determine whether the variation of distractor suppression related to target proximity varied with target-distractor feature similarity. The effect of target proximity on distractor suppression did not vary with target-distractor similarity and therefore may be an endogenous property of the selection process.

  8. The guidance of spatial attention during visual search for color combinations and color configurations.

    PubMed

    Berggren, Nick; Eimer, Martin

    2016-09-01

    Representations of target-defining features (attentional templates) guide the selection of target objects in visual search. We used behavioral and electrophysiological measures to investigate how such search templates control the allocation of attention in search tasks where targets are defined by the combination of 2 colors or by a specific spatial configuration of these colors. Target displays were preceded by spatially uninformative cue displays that contained items in 1 or both target-defining colors. Experiments 1 and 2 demonstrated that, during search for color combinations, attention is initially allocated independently and in parallel to all objects with target-matching colors, but is then rapidly withdrawn from objects that only have 1 of the 2 target colors. In Experiment 3, targets were defined by a particular spatial configuration of 2 colors, and could be accompanied by nontarget objects with a different configuration of the same colors. Attentional guidance processes were unable to distinguish between these 2 types of objects. Both attracted attention equally when they appeared in a cue display, and both received parallel focal-attentional processing and were encoded into working memory when they were presented in the same target display. Results demonstrate that attention can be guided simultaneously by multiple features from the same dimension, but that these guidance processes have no access to the spatial-configural properties of target objects. They suggest that attentional templates do not represent target objects in an integrated pictorial fashion, but contain separate representations of target-defining features.

  9. Tracking target and distractor processing in fixed-feature visual search: evidence from human electrophysiology.

    PubMed

    Jannati, Ali; Gaspar, John M; McDonald, John J

    2013-12-01

    Salient distractors delay visual search for less salient targets in additional-singleton tasks, even when the features of the stimuli are fixed across trials. According to the salience-driven selection hypothesis, this delay is due to an initial attentional deployment to the distractor. Recent event-related potential (ERP) studies found no evidence for salience-driven selection in fixed-feature search, but the methods employed were not optimized to isolate distractor ERP components such as the N2pc and distractor positivity (PD; indices of selection and suppression, respectively). Here, we isolated target and distractor ERPs in two fixed-feature search experiments. Participants searched for a shape singleton in the presence of a more-salient color singleton (Experiment 1) or for a color singleton in the presence of a less-salient shape singleton (Experiment 2). The salient distractor did not elicit an N2pc, but it did elicit a PD on fast-response trials. Furthermore, distractors had no effect on the timing of the target N2pc. These results indicate that (a) the distractor was prevented from engaging the attentional mechanism associated with N2pc, (b) the distractor did not interrupt the deployment of attention to the target, and (c) competition for attention can be resolved by suppressing locations of irrelevant items on a salience-based priority map.

  10. Hippocampal gamma-band Synchrony and pupillary responses index memory during visual search.

    PubMed

    Montefusco-Siegmund, Rodrigo; Leonard, Timothy K; Hoffman, Kari L

    2017-04-01

    Memory for scenes is supported by the hippocampus, among other interconnected structures, but the neural mechanisms related to this process are not well understood. To assess the role of the hippocampus in memory-guided scene search, we recorded local field potentials and multiunit activity from the hippocampus of macaques as they performed goal-directed search tasks using natural scenes. We additionally measured pupil size during scene presentation, which in humans is modulated by recognition memory. We found that both pupil dilation and search efficiency accompanied scene repetition, thereby indicating memory for scenes. Neural correlates included a brief increase in hippocampal multiunit activity and a sustained synchronization of unit activity to gamma band oscillations (50-70 Hz). The repetition effects on hippocampal gamma synchronization occurred when pupils were most dilated, suggesting an interaction between aroused, attentive processing and hippocampal correlates of recognition memory. These results suggest that the hippocampus may support memory-guided visual search through enhanced local gamma synchrony. © 2016 Wiley Periodicals, Inc.

  11. Geometrical computations explain projection patterns of long-range horizontal connections in visual cortex.

    PubMed

    Ben-Shahar, Ohad; Zucker, Steven

    2004-03-01

    Neurons in primary visual cortex respond selectively to oriented stimuli such as edges and lines. The long-range horizontal connections between them are thought to facilitate contour integration. While many physiological and psychophysical findings suggest that collinear or association field models of good continuation dictate particular projection patterns of horizontal connections to guide this integration process, significant evidence of interactions inconsistent with these hypotheses is accumulating. We first show that natural random variations around the collinear and association field models cannot account for these inconsistencies, a fact that motivates the search for more principled explanations. We then develop a model of long-range projection fields that formalizes good continuation based on differential geometry. The analysis implicates curvature(s) in a fundamental way, and the resulting model explains both consistent data and apparent outliers. It quantitatively predicts the (typically ignored) spread in projection distribution, its nonmonotonic variance, and the differences found among individual neurons. Surprisingly, and for the first time, this model also indicates that texture (and shading) continuation can serve as alternative and complementary functional explanations to contour integration. Because current anatomical data support both (curve and texture) integration models equally and because both are important computationally, new testable predictions are derived to allow their differentiation and identification.

  12. Change They Can't Find: Change Blindness in Chimpanzees during a Visual Search Task

    PubMed Central

    2015-01-01

    Although considerable advances have been made in the study of change blindness in humans, research regarding change blindness in nonhuman animals has been rare thus far. Indeed, we do not know whether chimpanzees, our closest evolutionary relatives, experience difficulty detecting changes in a stimulus when presentations are separated by blank displays. This study demonstrated that chimpanzees showed severe difficulties in detecting changes in a flicker-type visual search task, and these results are discussed in relation to the adaptive significance of change detection (e.g. the relationship between change blindness and vigilance behaviour).

  13. Mottle camouflage patterns in cuttlefish: quantitative characterization and visual background stimuli that evoke them.

    PubMed

    Chiao, Chuan-Chin; Chubb, Charles; Buresch, Kendra C; Barbosa, Alexandra; Allen, Justine J; Mäthger, Lydia M; Hanlon, Roger T

    2010-01-15

    Cuttlefish and other cephalopods achieve dynamic background matching with two general classes of body patterns: uniform (or uniformly stippled) patterns and mottle patterns. Both pattern types have been described chiefly by the size scale and contrast of their skin components. Mottle body patterns in cephalopods have been characterized previously as small-to-moderate-scale light and dark skin patches (i.e. mottles) distributed somewhat evenly across the body surface. Here we move beyond this commonly accepted qualitative description by quantitatively measuring the scale and contrast of mottled skin components and relating these statistics to specific visual background stimuli (psychophysics approach) that evoke this type of background-matching pattern. Cuttlefish were tested on artificial and natural substrates to experimentally determine some primary visual background cues that evoke mottle patterns. Randomly distributed small-scale light and dark objects (or with some repetition of small-scale shapes/sizes) on a lighter substrate with moderate contrast are essential visual cues to elicit mottle camouflage patterns in cuttlefish. Lowering the mean luminance of the substrate without changing its spatial properties can modulate the mottle pattern toward disruptive patterns, which are of larger scale, different shape and higher contrast. Backgrounds throughout nature consist of a continuous range of spatial scales; backgrounds with medium-sized light/dark patches of moderate contrast are those in which cuttlefish Mottle patterns appear to be the most frequently observed.

  14. The interplay of attention and consciousness in visual search, attentional blink and working memory consolidation.

    PubMed

    Raffone, Antonino; Srinivasan, Narayanan; van Leeuwen, Cees

    2014-05-05

    Despite the acknowledged relationship between consciousness and attention, theories of the two have mostly been developed separately. Moreover, these theories have independently attempted to explain phenomena in which both are likely to interact, such as the attentional blink (AB) and working memory (WM) consolidation. Here, we make an effort to bridge the gap between, on the one hand, a theory of consciousness based on the notion of global workspace (GW) and, on the other, a synthesis of theories of visual attention. We offer a theory of attention and consciousness (TAC) that provides a unified neurocognitive account of several phenomena associated with visual search, AB and WM consolidation. TAC assumes multiple processing stages between early visual representation and conscious access, and extends the dynamics of the global neuronal workspace model to a visual attentional workspace (VAW). The VAW is controlled by executive routers, higher-order representations of executive operations in the GW, without the need for explicit saliency or priority maps. TAC leads to newly proposed mechanisms for illusory conjunctions, AB, inattentional blindness and WM capacity, and suggests neural correlates of phenomenal consciousness. Finally, the theory reconciles the all-or-none and graded perspectives on conscious representation.

  15. Accelerating object detection via a visual-feature-directed search cascade: algorithm and field programmable gate array implementation

    NASA Astrophysics Data System (ADS)

    Kyrkou, Christos; Theocharides, Theocharis

    2016-07-01

    Object detection is a major step in several computer vision applications and a requirement for most smart camera systems. Recent advances in hardware acceleration for real-time object detection feature extensive use of reconfigurable hardware [field programmable gate arrays (FPGAs)], and relevant research has produced quite fascinating results, in both the accuracy of the detection algorithms and the performance in terms of frames per second (fps) for use in embedded smart camera systems. Detecting objects in images, however, is a daunting task and often involves hardware-inefficient steps, both in terms of the datapath design and in terms of input/output and memory access patterns. We present how a visual-feature-directed search cascade, composed of motion detection, depth computation, and edge detection, can significantly reduce the data that need to be examined by the classification engine for the presence of an object of interest. Experimental results on a Spartan 6 FPGA platform for face detection indicate data search reduction of up to 95%, which enables the system to process up to 50 images of 1024×768 pixels per second with a significantly reduced number of false positives.
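
    The data-reduction idea can be sketched as a simple software cascade: cheap per-window tests (motion, depth, edge density) prune candidate windows so that only survivors reach the expensive classifier. The window representation, thresholds, and stage functions below are stand-ins, not the paper's FPGA design.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Window:
    x: int
    y: int
    motion: float   # fraction of changed pixels in the window
    depth: float    # estimated distance, in metres
    edges: float    # edge density

def cascade(windows: List[Window], stages: List[Callable[[Window], bool]],
            classify: Callable[[Window], bool]) -> List[Window]:
    """Run windows through the cheap stages first; classify only the survivors."""
    survivors = windows
    for stage in stages:
        survivors = [w for w in survivors if stage(w)]
    return [w for w in survivors if classify(w)]

# Toy stages: enough motion, plausible depth, enough edges.
stages = [
    lambda w: w.motion > 0.05,
    lambda w: 0.5 < w.depth < 5.0,
    lambda w: w.edges > 0.10,
]
expensive_classifier = lambda w: w.edges > 0.25 and w.motion > 0.20   # stand-in

windows = [Window(0, 0, 0.01, 2.0, 0.30), Window(16, 0, 0.30, 1.5, 0.40),
           Window(32, 0, 0.20, 9.0, 0.50), Window(48, 0, 0.25, 2.5, 0.05)]
print([(w.x, w.y) for w in cascade(windows, stages, expensive_classifier)])
```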

  16. Distinct, but top-down modulable color and positional priming mechanisms in visual pop-out search.

    PubMed

    Geyer, Thomas; Müller, Hermann J

    2009-03-01

    Three experiments examined reaction time (RT) performance in visual pop-out search. Search displays consisted of one color target and two distractors, which were presented at 24 possible locations on a circular ellipse. Experiment 1 showed that re-presentation of the target at a previous target location led to expedited RTs, whereas presentation of the target at a distractor location led to slowed RTs (relative to target presentation at a previous empty location). RTs were also faster when the color of the target was the same across consecutive trials, relative to a change of the target's color. This color priming was independent of the positional priming. Experiment 2 revealed larger positional facilitation, relative to Experiment 1, when position repetitions occurred more often than expected by chance; analogously, Experiment 3 revealed stronger color priming effects when target color repetitions were more likely. These position and color manipulations did not change the pattern of color (Experiment 2) and positional priming effects (Experiment 3). While these results support the independence of color and positional priming effects (e.g., Maljkovic and Nakayama in Percept Psychophys 58:977-991, 1996), they also show that these (largely 'automatic') effects are top-down modulable when target position and color are predictable (e.g., Müller et al. in Vis Cogn 11:577-602, 2004).

  17. Visualization of gunshot residue patterns on dark clothing.

    PubMed

    Atwater, Christina S; Durina, Marie E; Durina, John P; Blackledge, Robert D

    2006-09-01

    Determination of the muzzle-to-target distance is often a critical factor in criminal and civil investigations involving firearms. However, seeing and recording gunshot residue patterns can be difficult if the victim's clothing is dark and/or bloodstained. Trostle reported the use of infrared film for the detection of burn patterns. However, the results are visible only after the film is developed, and multiple exposures at different settings may be needed. The Video Spectral Comparator 2000 (Foster & Freeman Ltd., Evesham, Worcestershire, U.K.) is an imaging instrument routinely used by forensic document examiners. The question addressed here was whether the VSC 2000, at appropriate instrument settings and without the use of specialized film, could quickly, easily, and reliably provide instantaneous viewing, saving, and printing of gunshot residue patterns on dark and/or blood-soaked clothing. At muzzle-to-target distances of 6, 12, and 18 in., test fires were made into five different types of dark clothing using eight different handguns of different calibers. Gunshot residues were detected for all eight calibers, and powder burn patterns were seen on dark clothing at all three target distances for all calibers except 0.22 long rifle and 0.25 ACP. Bloodstains did not preclude the viewing of these patterns.

  18. Distinct Visual Evoked Potential Morphological Patterns for Apparent Motion Processing in School-Aged Children

    PubMed Central

    Campbell, Julia; Sharma, Anu

    2016-01-01

    Measures of visual cortical development in children demonstrate high variability and inconsistency throughout the literature. This is partly due to the specificity of the visual system in processing certain features. It may then be advantageous to activate multiple cortical pathways in order to observe maturation of coinciding networks. Visual stimuli eliciting the percept of apparent motion and shape change are designed to simultaneously activate both dorsal and ventral visual streams. However, research has shown that such stimuli also elicit variable visual evoked potential (VEP) morphology in children. The aim of this study was to describe developmental changes in VEPs, including morphological patterns, and underlying visual cortical generators, elicited by apparent motion and shape change in school-aged children. Forty-one typically developing children underwent high-density EEG recordings in response to a continuously morphing, radially modulated, circle-star grating. VEPs were then compared across the age groups of 5–7, 8–10, and 11–15 years according to latency and amplitude. Current density reconstructions (CDR) were performed on VEP data in order to observe activated cortical regions. It was found that two distinct VEP morphological patterns occurred in each age group. However, there were no major developmental differences between the age groups according to each pattern. CDR further demonstrated consistent visual generators across age and pattern. These results describe two novel VEP morphological patterns in typically developing children, but with similar underlying cortical sources. The importance of these morphological patterns is discussed in terms of future studies and the investigation of a relationship to visual cognitive performance. PMID:27445738

  19. Production and perception rules underlying visual patterns: effects of symmetry and hierarchy

    PubMed Central

    Westphal-Fitch, Gesche; Huber, Ludwig; Gómez, Juan Carlos; Fitch, W. Tecumseh

    2012-01-01

    Formal language theory has been extended to two-dimensional patterns, but little is known about two-dimensional pattern perception. We first examined spontaneous two-dimensional visual pattern production by humans, gathered using a novel touch screen approach. Both spontaneous creative production and subsequent aesthetic ratings show that humans prefer ordered, symmetrical patterns over random patterns. We then further explored pattern-parsing abilities in different human groups, and compared them with pigeons. We generated visual plane patterns based on rules varying in complexity. All human groups tested, including children and individuals diagnosed with autism spectrum disorder (ASD), were able to detect violations of all production rules tested. Our ASD participants detected pattern violations with the same speed and accuracy as matched controls. Children's ability to detect violations of a relatively complex rotational rule correlated with age, whereas their ability to detect violations of a simple translational rule did not. By contrast, even with extensive training, pigeons were unable to detect orientation-based structural violations, suggesting that, unlike humans, they did not learn the underlying structural rules. Visual two-dimensional patterns offer a promising new formally-grounded way to investigate pattern production and perception in general, widely applicable across species and age groups. PMID:22688636

  20. Production and perception rules underlying visual patterns: effects of symmetry and hierarchy.

    PubMed

    Westphal-Fitch, Gesche; Huber, Ludwig; Gómez, Juan Carlos; Fitch, W Tecumseh

    2012-07-19

    Formal language theory has been extended to two-dimensional patterns, but little is known about two-dimensional pattern perception. We first examined spontaneous two-dimensional visual pattern production by humans, gathered using a novel touch screen approach. Both spontaneous creative production and subsequent aesthetic ratings show that humans prefer ordered, symmetrical patterns over random patterns. We then further explored pattern-parsing abilities in different human groups, and compared them with pigeons. We generated visual plane patterns based on rules varying in complexity. All human groups tested, including children and individuals diagnosed with autism spectrum disorder (ASD), were able to detect violations of all production rules tested. Our ASD participants detected pattern violations with the same speed and accuracy as matched controls. Children's ability to detect violations of a relatively complex rotational rule correlated with age, whereas their ability to detect violations of a simple translational rule did not. By contrast, even with extensive training, pigeons were unable to detect orientation-based structural violations, suggesting that, unlike humans, they did not learn the underlying structural rules. Visual two-dimensional patterns offer a promising new formally-grounded way to investigate pattern production and perception in general, widely applicable across species and age groups.

  1. Light Propagation and Visual Patterns: Preinstruction Learners' Conceptions.

    ERIC Educational Resources Information Center

    Langley, Dorothy; And Others

    1997-01-01

    Investigates the conceptions and representations of light propagation, image formation, and sight typical to preinstruction learners (N=139). Findings indicate that preinstructional students display some familiarity with optical systems, light propagation, and illumination patterns and have not developed a consistent descriptive and explanatory…

  2. The NLP Swish Pattern: An Innovative Visualizing Technique.

    ERIC Educational Resources Information Center

    Masters, Betsy J.; And Others

    1991-01-01

    Describes swish pattern, one of many innovative therapeutic interventions that developers of neurolinguistic programing (NLP) have contributed to counseling profession. Presents brief overview of NLP followed by an explanation of the basic theory and expected outcomes of the swish. Presents description of the intervention process and case studies…

  3. Distinct eye movement patterns enhance dynamic visual acuity

    PubMed Central

    Palidis, Dimitrios J.; Wyder-Hodge, Pearson A.; Fooken, Jolande; Spering, Miriam

    2017-01-01

    Dynamic visual acuity (DVA) is the ability to resolve fine spatial detail in dynamic objects during head fixation, or in static objects during head or body rotation. This ability is important for many activities such as ball sports, and a close relation has been shown between DVA and sports expertise. DVA tasks involve eye movements, yet, it is unclear which aspects of eye movements contribute to successful performance. Here we examined the relation between DVA and the kinematics of smooth pursuit and saccadic eye movements in a cohort of 23 varsity baseball players. In a computerized dynamic-object DVA test, observers reported the location of the gap in a small Landolt-C ring moving at various speeds while eye movements were recorded. Smooth pursuit kinematics—eye latency, acceleration, velocity gain, position error—and the direction and amplitude of saccadic eye movements were linked to perceptual performance. Results reveal that distinct eye movement patterns—minimizing eye position error, tracking smoothly, and inhibiting reverse saccades—were related to dynamic visual acuity. The close link between eye movement quality and DVA performance has important implications for the development of perceptual training programs to improve DVA. PMID:28187157

  4. Looking back at the stare-in-the-crowd effect: staring eyes do not capture attention in visual search.

    PubMed

    Cooper, Robbie M; Law, Anna S; Langton, Stephen R H

    2013-05-17

    The stare-in-the-crowd effect refers to the finding that a visual search for a target of staring eyes among averted-eyes distracters is more efficient than the search for an averted-eyes target among staring distracters. This finding could indicate that staring eyes are prioritized in the processing of the search array so that attention is more likely to be directed to their location than to any other. However, visual search is a complex process, which depends not only upon the properties of the target, but also upon the similarity between the target of the search and the distractor items and between the distractor items themselves. Across five experiments, we show that the search asymmetry diagnostic of the stare-in-the-crowd effect is more likely to be the result of a failure to control for the similarity among distracting items between the two critical search conditions rather than any special attention-grabbing property of staring gazes. Our results suggest that, contrary to results reported in the literature, staring gazes are not prioritized by attention in visual search.

  5. Reaction times in visual search can be explained by a simple model of neural synchronization.

    PubMed

    Kazanovich, Yakov; Borisyuk, Roman

    2017-03-01

    We present an oscillatory neural network model that can account for reaction times in visual search experiments. The model consists of a central oscillator that represents the central executive of the attention system and a number of peripheral oscillators that represent objects in the display. The oscillators are described as generalized Kuramoto-type oscillators with adapted parameters. An object is considered as being included in the focus of attention if the oscillator associated with this object is in phase with the central oscillator. The probability for an object to be included in the focus of attention is determined by its saliency, which is described in formal terms as the strength of the connection from the peripheral oscillator to the central oscillator. Computer simulations show that the model can reproduce reaction times in visual search tasks of various complexities. The dependence of the reaction time on the number of items in the display is represented by linear functions of different steepness, which is in agreement with biological evidence.
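
    As a rough, illustrative sketch of this kind of model (not the authors' implementation; the coupling equations, parameter values, and the in-phase tolerance below are assumptions), a central phase oscillator can be coupled to peripheral oscillators whose connection strengths stand in for saliency, with an item counted as attended once its phase difference to the central oscillator becomes small:

```python
# Minimal sketch of a central/peripheral Kuramoto-style attention model.
# Illustrative only: the equations, parameters and the in-phase criterion are
# simplified assumptions, not the exact model of Kazanovich & Borisyuk.
import numpy as np

rng = np.random.default_rng(0)

n_items = 8
omega_c = 2.0                                 # central oscillator frequency
omega_p = rng.normal(2.0, 0.2, n_items)       # peripheral natural frequencies
saliency = rng.uniform(0.2, 1.0, n_items)     # peripheral -> central coupling strength
phi_c = 0.0
phi_p = rng.uniform(0.0, 2.0 * np.pi, n_items)

dt = 0.01
for _ in range(5000):
    # central oscillator is attracted to the saliency-weighted mean field
    phi_c += dt * (omega_c + np.sum(saliency * np.sin(phi_p - phi_c)) / n_items)
    # peripheral oscillators are attracted to the central oscillator
    phi_p += dt * (omega_p + 0.5 * np.sin(phi_c - phi_p))

# An item is treated as "in the focus of attention" if it ends up (nearly)
# in phase with the central oscillator.
phase_diff = np.abs(np.angle(np.exp(1j * (phi_p - phi_c))))
print("attended items:", np.where(phase_diff < 0.3)[0])
```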

  6. Long-term retention of skilled visual search following severe traumatic brain injury

    PubMed Central

    PAVAWALLA, SHITAL P.; SCHMITTER-EDGECOMBE, MAUREEN

    2007-01-01

    We examined the long-term retention of a learned automatic cognitive process in 17 severe TBI participants and 10 controls. Participants had initially received extensive consistent-mapping (CM) training (i.e., 3600 trials) in a semantic category visual search task (Schmitter-Edgecombe & Beglinger, 2001). Following CM training, TBI and control groups demonstrated dramatic performance improvements and the development of an automatic attention response (AAR), indicating task-specific and stimulus-specific skill learning. After a 5- or 10-month retention interval, participants in this study performed a New CM task and the originally trained CM task to assess for retention of task-specific and stimulus-specific visual search skills, respectively. No significant group differences were found in the level of retention for either skill type, indicating that individuals with severe TBI were able to retain the learned skills over a long-term retention interval at a level comparable to controls. Exploratory analyses revealed that TBI participants who returned at the 5-month retention interval showed nearly complete skill retention, and greater skill retention than TBI participants who returned at the 10-month interval, suggesting that “booster” or retraining sessions may be needed when a skill is not continuously in use. PMID:17064444

  7. Visual search and contextual cueing: differential effects in 10-year-old children and adults.

    PubMed

    Couperus, Jane W; Hunt, Ruskin H; Nelson, Charles A; Thomas, Kathleen M

    2011-02-01

    The development of contextual cueing specifically in relation to attention was examined in two experiments. Adult and 10-year-old participants completed a context cueing visual search task (Jiang & Chun, The Quarterly Journal of Experimental Psychology, 54A(4), 1105-1124, 2001) containing stimuli presented in an attended (e.g., red) and unattended (e.g., green) color. When the spatial configuration of stimuli in the attended and unattended color was invariant and consistently paired with the target location, adult reaction times improved, demonstrating learning. Learning also occurred if only the attended stimuli's configuration remained fixed. In contrast, while 10 year olds, like adults, showed incrementally slower reaction times as the number of attended stimuli increased, they did not show learning in the standard paradigm. However, they did show learning when the ratio of attended to unattended stimuli was high, irrespective of the total number of attended stimuli. Findings suggest children show efficient attentional guidance by color in visual search but differences in contextual cueing.

  8. Beneficial effects of the NMDA antagonist ketamine on decision processes in visual search.

    PubMed

    Shen, Kelly; Kalwarowsky, Sarah; Clarence, Wendy; Brunamonti, Emiliano; Paré, Martin

    2010-07-21

    The ability of sensory-motor circuits to integrate sensory evidence over time is thought to underlie the process of decision-making in perceptual discrimination. Recent work has suggested that the NMDA receptor contributes to mediating neural activity integration. To test this hypothesis, we trained three female rhesus monkeys (Macaca mulatta) to perform a visual search task, in which they had to make a saccadic eye movement to the location of a target stimulus presented among distracter stimuli of lower luminance. We manipulated NMDA-receptor function by administering an intramuscular injection of the noncompetitive NMDA antagonist ketamine and assessed visual search performance before and after manipulation. Ketamine was found to lengthen response latency in a dose-dependent fashion. Surprisingly, it was also observed that response accuracy was significantly improved when lower doses were administered. These findings suggest that NMDA receptors play a crucial role in the process of decision-making in perceptual discrimination. They also further support the idea that multiple neural representations compete with one another through mutual inhibition, which may explain the speed-accuracy trade-off rule that shapes discrimination behavior: lengthening integration time helps resolve small differences between choice alternatives, thereby improving accuracy.

  9. Autism spectrum disorder, but not amygdala lesions, impairs social attention in visual search.

    PubMed

    Wang, Shuo; Xu, Juan; Jiang, Ming; Zhao, Qi; Hurlemann, Rene; Adolphs, Ralph

    2014-10-01

    People with autism spectrum disorders (ASD) have pervasive impairments in social interactions, a diagnostic component that may have its roots in atypical social motivation and attention. One of the brain structures implicated in the social abnormalities seen in ASD is the amygdala. To further characterize the impairment of people with ASD in social attention, and to explore the possible role of the amygdala, we employed a series of visual search tasks with both social (faces and people with different postures, emotions, ages, and genders) and non-social stimuli (e.g., electronics, food, and utensils). We first conducted trial-wise analyses of fixation properties and elucidated visual search mechanisms. We found that an attentional mechanism of initial orientation could explain the detection advantage of non-social targets. We then zoomed into fixation-wise analyses. We defined target-relevant effects as the difference in the percentage of fixations that fell on target-congruent vs. target-incongruent items in the array. In Experiment 1, we tested 8 high-functioning adults with ASD, 3 adults with focal bilateral amygdala lesions, and 19 controls. Controls rapidly oriented to target-congruent items and showed a strong and sustained preference for fixating them. Strikingly, people with ASD oriented significantly less and more slowly to target-congruent items, an attentional deficit especially with social targets. By contrast, patients with amygdala lesions performed indistinguishably from controls. In Experiment 2, we recruited a different sample of 13 people with ASD and 8 healthy controls, and tested them on the same search arrays but with all array items equalized for low-level saliency. The results replicated those of Experiment 1. In Experiment 3, we recruited 13 people with ASD, 8 healthy controls, 3 amygdala lesion patients and another group of 11 controls and tested them on a simpler array. Here our group effect for ASD strongly diminished and all four subject

  10. Gene Expression Browser: large-scale and cross-experiment microarray data integration, management, search & visualization

    PubMed Central

    2010-01-01

    Background In the last decade, a large amount of microarray gene expression data has been accumulated in public repositories. Integrating and analyzing high-throughput gene expression data have become key activities for exploring gene functions, gene networks and biological pathways. Effectively utilizing these invaluable microarray data remains challenging due to a lack of powerful tools to integrate large-scale gene-expression information across diverse experiments and to search and visualize a large number of gene-expression data points. Results Gene Expression Browser is a microarray data integration, management and processing system with web-based search and visualization functions. An innovative method has been developed to define a treatment over a control for every microarray experiment to standardize and make microarray data from different experiments homogeneous. In the browser, data are pre-processed offline and the resulting data points are visualized online with a 2-layer dynamic web display. Users can view all treatments over control that affect the expression of a selected gene via Gene View, and view all genes that change in a selected treatment over control via treatment over control View. Users can also check the changes of expression profiles of a set of either the treatments over control or genes via Slide View. In addition, the relationships between genes and treatments over control are computed according to gene expression ratio and are shown as co-responsive genes and co-regulation treatments over control. Conclusion Gene Expression Browser is composed of a set of software tools, including a data extraction tool, a microarray data-management system, a data-annotation tool, a microarray data-processing pipeline, and a data search & visualization tool. The browser is deployed as a free public web service (http://www.ExpressionBrowser.com) that integrates 301 ATH1 gene microarray experiments from public data repositories (viz. the Gene

  11. Automated numerical simulation of biological pattern formation based on visual feedback simulation framework

    PubMed Central

    Sun, Mingzhu; Xu, Hui; Zeng, Xingjuan; Zhao, Xin

    2017-01-01

    Biological pattern formation gives rise to a wide variety of striking phenomena. Mathematical modeling using reaction-diffusion partial differential equation systems is employed to study the mechanism of pattern formation. However, model parameter selection is both difficult and time consuming. In this paper, a visual feedback simulation framework is proposed to calculate the parameters of a mathematical model automatically, based on the basic principle of feedback control. In the simulation framework, the simulation results are visualized, and the image features are extracted as the system feedback. Then, the unknown model parameters are obtained by comparing the image features of the simulation image and the target biological pattern. Considering two typical applications, the visual feedback simulation framework is applied to pattern formation simulations for vascular mesenchymal cells and lung development. In the simulation framework, the spot, stripe, and labyrinthine patterns of vascular mesenchymal cells, as well as the normal lung branching pattern and the branching pattern lacking side branching, are obtained in a finite number of iterations. The simulation results indicate that it is easy to achieve the simulation targets, especially when the simulation patterns are sensitive to the model parameters. Moreover, this simulation framework can be extended to other types of biological pattern formation. PMID:28225811
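
    The feedback principle can be illustrated with a toy loop (a sketch using assumed Gray-Scott dynamics and arbitrary parameter values, not the authors' framework): simulate the model, extract a simple image feature such as a spot count, compare it with a target value, and nudge one model parameter in proportion to the error:

```python
# Toy "visual feedback" loop for a reaction-diffusion (Gray-Scott) model:
# simulate, measure an image feature (spot count), and adjust one parameter
# toward a target feature value. Parameter values are illustrative assumptions
# and may need tuning; this is not the framework of Sun et al.
import numpy as np
from scipy.ndimage import label

def laplacian(Z):
    # five-point Laplacian with periodic boundaries
    return (np.roll(Z, 1, 0) + np.roll(Z, -1, 0) +
            np.roll(Z, 1, 1) + np.roll(Z, -1, 1) - 4.0 * Z)

def simulate(feed, kill=0.062, n=96, steps=4000):
    """Run a Gray-Scott simulation and return the V concentration field."""
    U = np.ones((n, n)); V = np.zeros((n, n))
    U[n//2-5:n//2+5, n//2-5:n//2+5] = 0.5       # small perturbation seeds patterns
    V[n//2-5:n//2+5, n//2-5:n//2+5] = 0.25
    for _ in range(steps):
        uvv = U * V * V
        U += 0.16 * laplacian(U) - uvv + feed * (1.0 - U)
        V += 0.08 * laplacian(V) + uvv - (feed + kill) * V
    return V

def spot_count(V):
    """Image feature used as feedback: number of connected high-V blobs."""
    _, count = label(V > 0.2)
    return count

target, feed = 20, 0.030
for it in range(8):                              # simple proportional feedback
    count = spot_count(simulate(feed))
    print(f"iter {it}: feed={feed:.4f}, spots={count}")
    if abs(count - target) <= 2:
        break
    feed += 0.0005 * (target - count)            # nudge the feed rate
```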

  12. The Dynamics of Visual Experience, an EEG Study of Subjective Pattern Formation

    PubMed Central

    Elliott, Mark A.; Twomey, Deirdre; Glennon, Mark

    2012-01-01

    Background Since the origin of psychological science a number of studies have reported visual pattern formation in the absence of either physiological stimulation or direct visual-spatial references. Subjective patterns range from simple phosphenes to complex patterns but are highly specific and reported reliably across studies. Methodology/Principal Findings Using independent-component analysis (ICA) we report a reduction in amplitude variance consistent with subjective-pattern formation in ventral posterior areas of the electroencephalogram (EEG). The EEG exhibits significantly increased power at delta/theta and gamma-frequencies (point and circle patterns) or a series of high-frequency harmonics of a delta oscillation (spiral patterns). Conclusions/Significance Subjective-pattern formation may be described in a way entirely consistent with identical pattern formation in fluids or granular flows. In this manner, we propose subjective-pattern structure to be represented within a spatio-temporal lattice of harmonic oscillations which bind topographically organized visual-neuronal assemblies by virtue of low frequency modulation. PMID:22292053

  13. Pattern drilling exploration: Optimum pattern types and hole spacings when searching for elliptical shaped targets

    USGS Publications Warehouse

    Drew, L.J.

    1979-01-01

    In this study the selection of the optimum type of drilling pattern to be used when exploring for elliptical shaped targets is examined. The rhombic pattern is optimal when the targets are known to have a preferred orientation. Situations can also be found where a rectangular pattern is as efficient as the rhombic pattern. A triangular or square drilling pattern should be used when the orientations of the targets are unknown. The way in which the optimum hole spacing varies as a function of (1) the cost of drilling, (2) the value of the targets, (3) the shape of the targets, and (4) the target occurrence probabilities was determined for several examples. Bayes' rule was used to show how target occurrence probabilities can be revised within a multistage pattern drilling scheme. © 1979 Plenum Publishing Corporation.
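
    The Bayes'-rule revision step mentioned above can be sketched as follows, with hypothetical numbers; in practice the per-stage detection probability would come from the pattern geometry, hole spacing, and assumed target shape:

```python
# Bayes'-rule revision of a target occurrence probability after drilling
# stages with no hits. Numbers are hypothetical; the detection probability
# p_hit_if_present would be derived from the drilling pattern and target
# geometry in an actual multistage scheme.
def revise_probability(prior, p_hit_if_present):
    """P(target present | no hit this stage) via Bayes' rule."""
    p_no_hit = prior * (1.0 - p_hit_if_present) + (1.0 - prior)
    return prior * (1.0 - p_hit_if_present) / p_no_hit

prob = 0.30                       # prior probability an elliptical target is present
for stage, p_hit in enumerate([0.6, 0.7, 0.8], start=1):
    prob = revise_probability(prob, p_hit)
    print(f"after stage {stage} (no hits): P(present) = {prob:.3f}")
```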

  14. 3D Display Calibration by Visual Pattern Analysis.

    PubMed

    Hwang, Hyoseok; Chang, Hyun Sung; Nam, Dongkyung; Kweon, In So

    2017-02-06

    Nearly all 3D displays need calibration for correct rendering. More often than not, the optical elements in a 3D display are misaligned from the designed parameter setting. As a result, the 3D effect does not perform as intended, and the observed images tend to be distorted. In this paper, we propose a novel display calibration method to fix the situation. In our method, a pattern image is displayed on the panel and a camera takes pictures of it twice, at different positions. Then, based on a quantitative model, we extract all display parameters (i.e., pitch, slanted angle, gap or thickness, offset) from the observed patterns in the captured images. For high accuracy and robustness, our method analyzes the patterns mostly in the frequency domain. We conduct two types of experiments for validation: one with optical simulation for quantitative results and the other with real-life displays for qualitative assessment. Experimental results demonstrate that our method is quite accurate, about half an order of magnitude more accurate than prior work; is efficient, requiring less than 2 s of computation; and is robust to noise, working well at SNRs as low as 6 dB.
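
    A much-simplified sketch of the frequency-domain idea (synthetic stripe pattern and invented parameters; the actual method recovers pitch, slanted angle, gap, and offset from two camera positions): the dominant 2D Fourier peak of a captured periodic pattern yields estimates of its pitch and slant.

```python
# Estimate the pitch and slant of a periodic pattern from its dominant 2D
# Fourier peak. A simplified, synthetic illustration of frequency-domain
# analysis, not the full multi-parameter calibration of Hwang et al.
import numpy as np

h, w = 512, 512
true_pitch, true_slant = 12.0, np.deg2rad(9.0)     # pixels, radians (assumed)
y, x = np.mgrid[0:h, 0:w]
coord = x * np.cos(true_slant) + y * np.sin(true_slant)
img = 0.5 + 0.5 * np.cos(2.0 * np.pi * coord / true_pitch)   # stand-in capture

# The strongest non-DC peak of the magnitude spectrum gives the pattern frequency.
spec = np.abs(np.fft.fftshift(np.fft.fft2(img - img.mean())))
spec[h // 2, w // 2] = 0.0                          # suppress residual DC
py, px = np.unravel_index(np.argmax(spec), spec.shape)
fy, fx = (py - h / 2) / h, (px - w / 2) / w         # cycles per pixel

print("estimated pitch (px):", 1.0 / np.hypot(fx, fy))
print("estimated slant (deg):", np.degrees(np.arctan2(fy, fx)) % 180.0)
```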

  15. Frontoparietal activation during visual conjunction search: Effects of bottom-up guidance and adult age.

    PubMed

    Madden, David J; Parks, Emily L; Tallman, Catherine W; Boylan, Maria A; Hoagey, David A; Cocjin, Sally B; Johnson, Micah A; Chou, Ying-Hui; Potter, Guy G; Chen, Nan-Kuei; Packard, Lauren E; Siciliano, Rachel E; Monge, Zachary A; Diaz, Michele T

    2017-04-01

    We conducted functional magnetic resonance imaging (fMRI) with a visual search paradigm to test the hypothesis that aging is associated with increased frontoparietal involvement in both target detection and bottom-up attentional guidance (featural salience). Participants were 68 healthy adults, distributed continuously across 19 to 78 years of age. Frontoparietal regions of interest (ROIs) were defined from resting-state scans obtained prior to task-related fMRI. The search target was defined by a conjunction of color and orientation. Each display contained one item that was larger than the others (i.e., a size singleton) but was not informative regarding target identity. Analyses of search reaction time (RT) indicated that bottom-up attentional guidance from the size singleton (when coincident with the target) was relatively constant as a function of age. Frontoparietal fMRI activation related to target detection was constant as a function of age, as was the reduction in activation associated with salient targets. However, for individuals 35 years of age and older, engagement of the left frontal eye field (FEF) in bottom-up guidance was more prominent than for younger individuals. Further, the age-related differences in left FEF activation were a consequence of decreasing resting-state functional connectivity in visual sensory regions. These findings indicate that age-related compensatory effects may be expressed in the relation between activation and behavior, rather than in the magnitude of activation, and that relevant changes in the activation-RT relation may begin at a relatively early point in adulthood. Hum Brain Mapp 38:2128-2149, 2017. © 2017 Wiley Periodicals, Inc.

  16. Training eye movements for visual search in individuals with macular degeneration

    PubMed Central

    Janssen, Christian P.; Verghese, Preeti

    2016-01-01

    We report a method to train individuals with central field loss due to macular degeneration to improve the efficiency of visual search. Our method requires participants to make a same/different judgment on two simple silhouettes. One silhouette is presented in an area that falls within the binocular scotoma while they are fixating the center of the screen with their preferred retinal locus (PRL); the other silhouette is presented diametrically opposite within the intact visual field. Over the course of 480 trials (approximately 6 hr), we gradually reduced the amount of time that participants have to make a saccade and judge the similarity of stimuli. This requires that they direct their PRL first toward the stimulus that is initially hidden behind the scotoma. Results from nine participants show that all participants could complete the task faster with training without sacrificing accuracy on the same/different judgment task. Although a majority of participants were able to direct their PRL toward the initially hidden stimulus, the ability to do so varied between participants. Specifically, six of nine participants made faster saccades with training. A smaller set (four of nine) made accurate saccades inside or close to the target area and retained this strategy 2 to 3 months after training. Subjective reports suggest that training increased awareness of the scotoma location for some individuals. However, training did not transfer to a different visual search task. Nevertheless, our study suggests that increasing scotoma awareness and training participants to look toward their scotoma may help them acquire missing information. PMID:28027382

  17. iPixel: a visual content-based and semantic search engine for retrieving digitized mammograms by using collective intelligence.

    PubMed

    Alor-Hernández, Giner; Pérez-Gallardo, Yuliana; Posada-Gómez, Rubén; Cortes-Robles, Guillermo; Rodríguez-González, Alejandro; Aguilar-Laserre, Alberto A

    2012-09-01

    Nowadays, traditional search engines such as Google, Yahoo and Bing facilitate the retrieval of information in the format of images, but the results are not always useful for the users. This is mainly due to two problems: (1) the semantic keywords are not taken into consideration and (2) it is not always possible to establish a query using the image features. This issue has been covered in different domains in order to develop content-based image retrieval (CBIR) systems. The expert community has focussed their attention on the healthcare domain, where a lot of visual information for medical analysis is available. This paper provides a solution called iPixel Visual Search Engine, which involves semantics and content issues in order to search for digitized mammograms. iPixel offers the possibility of retrieving mammogram features using collective intelligence and implementing a CBIR algorithm. Our proposal compares not only features with similar semantic meaning, but also visual features. In this sense, the comparisons are made in different ways: by the number of regions per image, by maximum and minimum size of regions per image and by average intensity level of each region. iPixel Visual Search Engine supports the medical community in differential diagnoses related to the diseases of the breast. The iPixel Visual Search Engine has been validated by experts in the healthcare domain, such as radiologists, in addition to experts in digital image analysis.
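
    A sketch of the kind of region-based comparison described above (synthetic images and a naive threshold segmentation; not iPixel's algorithm): summarize each image by region count, minimum and maximum region size, and average region intensity, then rank gallery images by feature distance.

```python
# Toy region-based feature comparison: segment each image by thresholding,
# summarize regions by count, min/max size and mean intensity, and rank
# gallery images by Euclidean distance to the query's feature vector.
# Synthetic images and features chosen for illustration only.
import numpy as np
from scipy.ndimage import label

rng = np.random.default_rng(11)

def region_features(img, thresh=0.6):
    labeled, count = label(img > thresh)
    if count == 0:
        return np.array([0.0, 0.0, 0.0, 0.0])
    sizes = np.bincount(labeled.ravel())[1:]          # pixels per region
    mean_intensity = img[labeled > 0].mean()          # average intensity of regions
    return np.array([count, sizes.min(), sizes.max(), mean_intensity])

def make_image(n_blobs):
    img = rng.normal(0.3, 0.1, (64, 64))
    for _ in range(n_blobs):
        r, c = rng.integers(8, 56, 2)
        img[r-4:r+4, c-4:c+4] += 0.6                  # bright square "region"
    return np.clip(img, 0.0, 1.0)

query = make_image(3)
gallery = [make_image(k) for k in (1, 3, 6)]
qf = region_features(query)
dists = [np.linalg.norm(region_features(g) - qf) for g in gallery]
print("closest gallery image index:", int(np.argmin(dists)))
```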

  18. Neural correlates of age-related visual search decline: a combined ERP and sLORETA study.

    PubMed

    Lorenzo-López, Laura; Amenedo, Elena; Pascual-Marqui, Roberto D; Cadaveira, Fernando

    2008-06-01

    Differences in the neural systems underlying visual search processes for young (n=17, mean age 19.6±1.9) and older (n=22, mean age 68.5±6) subjects were investigated by combining the Event-Related Potential (ERP) technique with standardized Low-Resolution brain Electromagnetic Tomography (sLORETA) analyses. Behavioral results showed an increase in mean reaction times (RTs) and a reduction in hit rates with age. The ERPs were significantly different between young and older subjects at the P3 component, showing longer latencies and lower amplitudes in older subjects. These ERP results suggest an age-related decline in the intensity and speed of visual processing during visual search that implies a reduction in attentional resources with normal aging. The sLORETA data revealed a significantly reduced neural differentiation in older subjects, who recruited bilateral prefrontal regions in a nonselective manner for the different search arrays. Finally, sLORETA between-group comparisons revealed that, relative to young subjects, older subjects showed significantly reduced activity in the anterior cingulate cortex as well as in numerous limbic and occipitotemporal regions contributing to visual search processes. These findings provide evidence that the neural circuit supporting this cognitive process is vulnerable to normal aging. All these attentional factors could contribute to the poorer performance of older compared to young subjects in visual search tasks.

  19. Visual search is postponed during the period of the AB: An event-related potential study.

    PubMed

    Lagroix, Hayley E P; Grubert, Anna; Spalek, Thomas M; Di Lollo, Vincent; Eimer, Martin

    2015-08-01

    In the phenomenon known as the attentional blink (AB), perception of the second of two rapidly sequential targets (T2) is impaired when presented shortly after the first (T1). Studies in which T2 consisted of a pop-out search array provided evidence suggesting that visual search is postponed during the AB. In the present work, we used behavioral and electrophysiological measures to test this postponement hypothesis. The behavioral measure was reaction time (RT) to T2; the electrophysiological measure was the onset latency of an ERP index of attentional selection, known as the N2pc. Consistent with the postponement hypothesis, both measures were delayed during the AB. The delay in N2pc was substantially shorter than that in RT, pointing to multiple sources of delay in the chain of processing events, as distinct from the single source postulated in current theories of the AB. Finally, the finding that the N2pc was delayed during the AB strongly suggests that attention is involved in the processing of pop-out search arrays.

  20. Visualization and analysis of 3D gene expression patterns in zebrafish using web services

    NASA Astrophysics Data System (ADS)

    Potikanond, D.; Verbeek, F. J.

    2012-01-01

    The analysis of gene expression patterns plays an important role in developmental biology and molecular genetics. Visualizing both quantitative and spatio-temporal aspects of gene expression patterns together with referenced anatomical structures of a model organism in 3D can help identify how a group of genes is expressed at a certain location at a particular developmental stage of an organism. In this paper, we present an approach to provide an online visualization of gene expression data in zebrafish (Danio rerio) within a 3D reconstruction model of zebrafish at different developmental stages. We developed web services that provide programmable access to the 3D reconstruction data and spatio-temporal gene expression data maintained in our local repositories. To demonstrate this work, we developed a web application that uses these web services to retrieve data from our local information systems. The web application also retrieves relevant analyses of microarray gene expression data from an external community resource, i.e., the ArrayExpress Atlas. All the relevant gene expression pattern data are subsequently integrated with the reconstruction data of the zebrafish atlas using ontology-based mapping. The resulting visualization provides quantitative and spatial information on patterns of gene expression in a 3D graphical representation of the zebrafish atlas at a certain developmental stage. To deliver the visualization to the user, we developed a Java-based 3D viewer client that can be integrated in a web interface, allowing the user to visualize the integrated information over the Internet.

  1. Effect of skill level on recall of visually presented patterns of musical notes.

    PubMed

    Kalakoski, Virpi

    2007-04-01

    Expertise effects in music were studied in a new task: the construction of mental representations from separate fragments. Groups of expert musicians and non-musicians were asked to recall note patterns presented visually note by note. Skill level, musical well-formedness of the note patterns, and presentation mode were varied. The musicians recalled note patterns better than the non-musicians, even though the presentation was visual and successive. Furthermore, only the musicians' performance was affected by musical well-formedness of the note patterns when visual gestalt properties, verbal rehearsability, and familiarity of the stimuli were controlled. Musicians were also able to use letter names referring to notes as efficiently as visual notes, which indicates that the better recall of musicians cannot be explained by perceptual visual chunking. These results and the effect of skill level on the distribution of recall errors indicate that the ability to chunk incoming information into meaningful units does not require that complete familiar patterns be accessible to encoding processes, yet previous knowledge stored in long-term memory affects representation construction in working memory. The present method offers a new, reliable tool, and its implications for research on the construction of representations and musical imagery are discussed.

  2. A Model of the Superior Colliculus Predicts Fixation Locations during Scene Viewing and Visual Search.

    PubMed

    Adeli, Hossein; Vitu, Françoise; Zelinsky, Gregory J

    2017-02-08

    Modern computational models of attention predict fixations using saliency maps and target maps, which prioritize locations for fixation based on feature contrast and target goals, respectively. But whereas many such models are biologically plausible, none have looked to the oculomotor system for design constraints or parameter specification. Conversely, although most models of saccade programming are tightly coupled to underlying neurophysiology, none have been tested using real-world stimuli and tasks. We combined the strengths of these two approaches in MASC, a model of attention in the superior colliculus (SC) that captures known neurophysiological constraints on saccade programming. We show that MASC predicted the fixation locations of humans freely viewing naturalistic scenes and performing exemplar and categorical search tasks, a breadth achieved by no other existing model. Moreover, it did this as well or better than its more specialized state-of-the-art competitors. MASC's predictive success stems from its inclusion of high-level but core principles of SC organization: an over-representation of foveal information, size-invariant population codes, cascaded population averaging over distorted visual and motor maps, and competition between motor point images for saccade programming, all of which cause further modulation of priority (attention) after projection of saliency and target maps to the SC. Only by incorporating these organizing brain principles into our models can we fully understand the transformation of complex visual information into the saccade programs underlying movements of overt attention. With MASC, a theoretical footing now exists to generate and test computationally explicit predictions of behavioral and neural responses in visually complex real-world contexts.SIGNIFICANCE STATEMENT The superior colliculus (SC) performs a visual-to-motor transformation vital to overt attention, but existing SC models cannot predict saccades to visually

  3. Flexible Feature-Based Inhibition in Visual Search Mediates Magnified Impairments of Selection: Evidence from Carry-Over Effects under Dynamic Preview-Search Conditions

    ERIC Educational Resources Information Center

    Andrews, Lucy S.; Watson, Derrick G.; Humphreys, Glyn W.; Braithwaite, Jason J.

    2011-01-01

    Evidence for inhibitory processes in visual search comes from studies using preview conditions, where responses to new targets are delayed if they carry a featural attribute belonging to the old distractor items that are currently being ignored--the negative carry-over effect (Braithwaite, Humphreys, & Hodsoll, 2003). We examined whether…

  4. Dynamics of target and distractor processing in visual search: evidence from event-related brain potentials.

    PubMed

    Hilimire, Matthew R; Mounts, Jeffrey R W; Parks, Nathan A; Corballis, Paul M

    2011-05-20

    When multiple objects are present in a visual scene, salient and behaviorally relevant objects are attentionally selected and receive enhanced processing at the expense of less salient or less relevant objects. Here we examined three lateralized components of the event-related potential (ERP) - the N2pc, Ptc, and SPCN - as indices of target and distractor processing in a visual search paradigm. Participants responded to the orientation of a target while ignoring an attentionally salient distractor and ERPs elicited by the target and the distractor were obtained. Results indicate that both the target and the distractor elicit an N2pc component which may index the initial attentional selection of both objects. In contrast, only the distractor elicited a significant Ptc, which may reflect the subsequent suppression of distracting or irrelevant information. Thus, the Ptc component appears to be similar to another ERP component - the Pd - which is also thought to reflect distractor suppression. Furthermore, only the target elicited an SPCN component which likely reflects the representation of the target in visual short term memory.

  5. Crowding by a single bar: probing pattern recognition mechanisms in the visual periphery.

    PubMed

    Põder, Endel

    2014-11-06

    Whereas visual crowding does not greatly affect the detection of the presence of simple visual features, it heavily inhibits combining them into recognizable objects. Still, crowding effects have rarely been directly related to general pattern recognition mechanisms. In this study, pattern recognition mechanisms in visual periphery were probed using a single crowding feature. Observers had to identify the orientation of a rotated T presented briefly in a peripheral location. Adjacent to the target, a single bar was presented. The bar was either horizontal or vertical and located in a random direction from the target. It appears that such a crowding bar has very strong and regular effects on the identification of the target orientation. The observer's responses are determined by approximate relative positions of basic visual features; exact image-based similarity to the target is not important. A version of the "standard model" of object recognition with second-order features explains the main regularities of the data.

  6. Measurement and visualization of three-dimensional directivity pattern

    NASA Astrophysics Data System (ADS)

    Arndt, Georg-Erwin; Gebert, Anton; Klemenz, Harald; Ritter, Hartmut C.

    2005-09-01

    In order to optimize a new second-order multimicrophone technology for a KEMAR dummy head, a three-dimensional directivity measurement setup was developed. To minimize mechanical mass and to reduce total measurement time a C-Bow setup was used, containing 18 calibrated loudspeakers. Those small tweeters identical in construction are placed in every 10 deg of elevation in a semicircular arc of 2-m diameter. The only moving part of this setup is a full-circle rotating KEMAR. The ANSI Standard 3.35 for directional measurement is fully supported and the required 48 measuring points are completed in less than 3 min. Using this fast and simple setup, the various responses attained from different latitudes need to be weighted to calculate a three-dimensional directivity value. Utilizing an equally distributed number, for example 400 measuring points easily executable with this setup, weighting can be omitted and a three-dimensional plot with high resolution can be visualized. Additionally, two-dimensional cuts of different planes in horizontal, vertical, and sagittal direction can be displayed. Data of unaided KEMAR, as well as data from the hearing aid used during those measurements, are presented and discussed.

  7. Pattern Search Ranking and Selection Algorithms for Mixed-Variable Optimization of Stochastic Systems

    DTIC Science & Technology

    2004-09-01

    optimization problems with stochastic objective functions and a mixture of design variable types. The generalized pattern search (GPS) class of algorithms is...provide computational enhancements to the basic algorithm. Implementation alternatives include the use of modern R&S procedures designed to provide...

  8. Foraging flexibility and search patterns are unlinked during breeding in a free-ranging seabird.

    PubMed

    Shoji, Akiko; Aris-Brosou, Stéphane; Owen, Ellie; Bolton, Mark; Boyle, Dave; Fayet, Annette; Dean, Ben; Kirk, Holly; Freeman, Robin; Perrins, Chris; Guilford, Tim

    In order to maximize foraging efficiency in a varying environment, predators are expected to optimize their search strategy. Environmental conditions are one important factor affecting these movement patterns, but variations in breeding constraints (self-feeding vs. feeding young and self-feeding) during different breeding stages (incubation vs. chick-rearing) are often overlooked, so that the mechanisms responsible for such behavioral shifts are still unknown. Here, to test how search patterns are affected at different breeding stages and to explore the proximate causes of these variations, we deployed data loggers to record both position (global positioning system) and dive activity (time-depth recorders) of a colonial breeding seabird, the razorbill Alca torda. Over a period of 3 years, our recordings of 56 foraging trips from 18 breeders show that while there is no evidence for individual route fidelity, razorbills exhibit higher foraging flexibility during incubation than during chick rearing, when foraging becomes more focused on an area of high primary productivity. We further show that this behavioral shift is not due to a shift in search patterns, as reorientations during foraging are independent of breeding stage. Our results suggest that foraging flexibility and search patterns are unlinked, perhaps because birds can read cues from their environment, including conspecifics, to optimize their foraging efficiency.

  9. Visual-search observers for assessing tomographic x-ray image quality

    PubMed Central

    Gifford, Howard C.; Liang, Zhihua; Das, Mini

    2016-01-01

    Purpose: Mathematical model observers commonly used for diagnostic image-quality assessments in x-ray imaging research are generally constrained to relatively simple detection tasks due to their need for statistical prior information. Visual-search (VS) model observers that employ morphological features in sequential search and analysis stages have less need for such information and fewer task constraints. The authors compared four VS observers against human observers and an existing scanning model observer in a pilot study that quantified how mass detection and localization in simulated digital breast tomosynthesis (DBT) can be affected by the number P of acquired projections. Methods: Digital breast phantoms with embedded spherical masses provided single-target cases for a localization receiver operating characteristic (LROC) study. DBT projection sets based on an acquisition arc of 60° were generated for values of P between 3 and 51. DBT volumes were reconstructed using filtered backprojection with a constant 3D Butterworth postfilter; extracted 2D slices were used as test images. Three imaging physicists participated as observers. A scanning channelized nonprewhitening (CNPW) observer had knowledge of the mean lesion-absent images. The VS observers computed an initial single-feature search statistic that identified candidate locations as local maxima of either a template matched-filter (MF) image or a gradient-template MF (GMF) image. Search inefficiencies that modified the statistic were also considered. Subsequent VS candidate analyses were carried out with (i) the CNPW statistical discriminant and (ii) the discriminant computed from GMF training images. These location-invariant discriminants did not utilize covariance information. All observers read 36 training images and 108 study images per P value. Performance was scored in terms of area under the LROC curve. Results: Average human-observer performance was stable for P between 7 and 35. In the absence of
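
    The search stage described here can be sketched as a matched-filter pass followed by local-maximum candidate selection (a toy version with a Gaussian target in white noise; the study's observers use trained templates, lesion-absent statistics, search inefficiencies, and LROC scoring):

```python
# Toy visual-search (VS) observer front end: matched-filter the image with a
# target template, keep strong local maxima as candidate locations, and report
# the best candidate by its filter response. Simplified illustration only;
# the paper's observers use trained discriminants and different analysis stages.
import numpy as np
from scipy.ndimage import correlate, maximum_filter

rng = np.random.default_rng(1)

def gaussian_blob(size=11, sigma=2.0):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return g / g.sum()

# Simulated image: white-noise background plus one embedded "mass".
template = gaussian_blob()
image = rng.normal(0.0, 1.0, (128, 128))
true_loc = (40, 90)
image[true_loc[0]-5:true_loc[0]+6, true_loc[1]-5:true_loc[1]+6] += 60.0 * template

# Search stage: matched-filter image; candidates are strong local maxima.
mf = correlate(image, template, mode="reflect")
local_max = (mf == maximum_filter(mf, size=9))
candidates = np.argwhere(local_max & (mf > mf.mean() + 2.0 * mf.std()))

# Analysis stage (here simply the filter response itself): pick the best candidate.
best = candidates[np.argmax(mf[candidates[:, 0], candidates[:, 1]])]
print("true location:", true_loc, " reported location:", tuple(int(v) for v in best))
```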

  10. Differential patterns of 2D location versus depth decoding along the visual hierarchy.

    PubMed

    Finlayson, Nonie J; Zhang, Xiaoli; Golomb, Julie D

    2017-02-15

    Visual information is initially represented as 2D images on the retina, but our brains are able to transform this input to perceive our rich 3D environment. While many studies have explored 2D spatial representations or depth perception in isolation, it remains unknown if or how these processes interact in human visual cortex. Here we used functional MRI and multi-voxel pattern analysis to investigate the relationship between 2D location and position-in-depth information. We stimulated different 3D locations in a blocked design: each location was defined by horizontal, vertical, and depth position. Participants remained fixated at the center of the screen while passively viewing the peripheral stimuli with red/green anaglyph glasses. Our results revealed a widespread, systematic transition throughout visual cortex. As expected, 2D location information (horizontal and vertical) could be strongly decoded in early visual areas, with reduced decoding higher along the visual hierarchy, consistent with known changes in receptive field sizes. Critically, we found that the decoding of position-in-depth information tracked inversely with the 2D location pattern, with the magnitude of depth decoding gradually increasing from intermediate to higher visual and category regions. Representations of 2D location information became increasingly location-tolerant in later areas, where depth information was also tolerant to changes in 2D location. We propose that spatial representations gradually transition from 2D-dominant to balanced 3D (2D and depth) along the visual hierarchy.
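
    A toy version of the multi-voxel pattern analysis referred to above (synthetic voxel patterns and an assumed linear classifier; real analyses decode from per-block response estimates within each visual area):

```python
# Toy multi-voxel pattern analysis: cross-validated decoding of a stimulus
# label (e.g., left vs right 2D location, or near vs far depth) from simulated
# voxel response patterns. Entirely synthetic; not the authors' analysis.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(5)
n_blocks, n_voxels = 80, 200
labels = np.repeat([0, 1], n_blocks // 2)            # two stimulus conditions

signal = rng.normal(0.0, 1.0, n_voxels)              # condition-specific pattern
patterns = rng.normal(0.0, 1.0, (n_blocks, n_voxels))
patterns[labels == 1] += 0.15 * signal               # weak, distributed signal

acc = cross_val_score(LinearSVC(dual=False), patterns, labels, cv=8)
print(f"decoding accuracy: {acc.mean():.2f} (chance = 0.50)")
```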

  11. Visualization of Dietary Patterns and Their Associations With Age-Related Macular Degeneration

    PubMed Central

    Chiu, Chung-Jung; Chang, Min-Lee; Li, Tricia; Gensler, Gary; Taylor, Allen

    2017-01-01

    Purpose We aimed to visualize predominant dietary patterns and their associations with AMD. Methods A total of 8103 eyes from 4088 participants in the baseline Age-Related Eye Disease Study (AREDS) were classified into three groups: control (n = 2739), early AMD (n = 4599), and advanced AMD (n = 765). Using principal component analysis, two major dietary patterns and eight minor dietary patterns were characterized. Applying logistic regression in our analysis, we related dietary patterns to the prevalence of AMD. Qualitative comparative analysis by operating Boolean algebra and drawing Venn diagrams was used to visualize our findings. Results In general, the eight minor patterns were subsets or extensions of either one of the two major dietary patterns (Oriental and Western patterns) and consisted of fewer characteristic foods than the two major dietary patterns. Unlike the two major patterns, which were more strongly associated with both early and advanced AMD, none of the eight minor patterns were associated with early AMD, and only four minor patterns, including the Steak pattern (odds ratio comparing the highest to lowest quintile of the pattern score = 1.73 [95% confidence interval: 1.24 to 2.41; Ptrend = 0.02]), the Breakfast pattern (0.60 [0.44 to 0.82; Ptrend = 0.004]), the Caribbean pattern (0.64 [0.47 to 0.89; Ptrend = 0.009]), and the Peanut pattern (0.64 [0.46 to 0.89; Ptrend = 0.03]), were significantly associated with advanced AMD. Our data also suggested several potential beneficial (peanuts, pizza, coffee, and tea) and harmful (salad dressing) foods for AMD. Conclusions Our data indicate that a diet of various healthy foods may be optimal for reducing AMD risk. The effects of some specific foods in the context of overall diet warrant further study. PMID:28253403
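
    The analysis pipeline can be sketched on synthetic data as follows (food items, effect sizes, and sample size are invented; the study used food-frequency data from AREDS): derive a dietary pattern score with principal component analysis, then estimate the highest-versus-lowest-quintile odds ratio with logistic regression.

```python
# Sketch of the pipeline on synthetic data: PCA-derived dietary pattern score,
# quintile coding, and an (effectively unpenalized) logistic regression giving
# a highest-vs-lowest-quintile odds ratio. All numbers are invented.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 2000
foods = rng.poisson(3.0, size=(n, 12)).astype(float)     # 12 food-frequency items

# First principal component = one "major dietary pattern" score per person.
score = PCA(n_components=1).fit_transform(foods).ravel()

# Synthetic outcome loosely tied to the pattern score.
logit_p = -1.0 + 0.4 * (score - score.mean()) / score.std()
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_p)))

# Quintiles of the pattern score, coded 0 (lowest) to 4 (highest).
quintile = np.digitize(score, np.quantile(score, [0.2, 0.4, 0.6, 0.8]))
X = np.eye(5)[quintile][:, 1:]                  # indicators for Q2..Q5 (Q1 = reference)
model = LogisticRegression(C=1e6).fit(X, y)     # very large C ~ no regularization
print(f"OR, highest vs lowest quintile: {np.exp(model.coef_[0][-1]):.2f}")
```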

  12. Pattern recognition, attention, and information bottlenecks in the primate visual system

    NASA Astrophysics Data System (ADS)

    Van Essen, David; Olshausen, Bruno A.; Anderson, Clifford H.; Gallant, J. T.

    1991-07-01

    In its evolution, the primate visual system has developed impressive capabilities for recognizing complex patterns in natural images. This process involves many stages of analysis and a variety of information processing strategies. This paper concentrates on the importance of 'information bottlenecks,' which restrict the amount of information that can be handled at different stages of analysis. These steps are crucial for reducing the overwhelming computational complexity associated with recognizing countless objects from arbitrary viewing angles, distances, and perspectives. The process of directed visual attention is an especially important information bottleneck because of its flexibility in determining how information is routed to high-level pattern recognition centers.

  13. Visualizing Nanoscopic Topography and Patterns in Freely Standing Thin Films

    NASA Astrophysics Data System (ADS)

    Sharma, Vivek; Zhang, Yiran; Yilixiati, Subinuer

    Thin liquid films containing micelles, nanoparticles, polyelectrolyte-surfactant complexes and smectic liquid crystals undergo thinning in a discontinuous, step-wise fashion. The discontinuous jumps in thickness are often characterized by quantifying changes in the intensity of reflected monochromatic light, modulated by thin film interference from a region of interest. Stratifying thin films exhibit a mosaic pattern in reflected white light microscopy, attributed to the coexistence of domains with various thicknesses, separated by steps. Using Interferometry Digital Imaging Optical Microscopy (IDIOM) protocols developed in the course of this study, we spatially resolve, for the first time, the landscape of stratifying freely standing thin films. We distinguish nanoscopic rims, mesas and craters, and follow their emergence and growth. In particular, for thin films containing micelles of sodium dodecyl sulfate (SDS), these topological features involve discontinuous thickness transitions with concentration-dependent steps of 5-25 nm. These non-flat features result from oscillatory, periodic, supramolecular structural forces that arise in confined fluids through a complex coupling of hydrodynamic and thermodynamic effects at the nanoscale.

  14. Toward the influence of temporal attention on the selection of targets in a visual search task: An ERP study.

    PubMed

    Rolke, Bettina; Festl, Freya; Seibold, Verena C

    2016-11-01

    We used ERPs to investigate whether temporal attention interacts with spatial attention and feature-based attention to enhance visual processing. We presented a visual search display containing one singleton stimulus among a set of homogenous distractors. Participants were asked to respond only to target singletons of a particular color and shape that were presented in an attended spatial position. We manipulated temporal attention by presenting a warning signal before each search display and varying the foreperiod (FP) between the warning signal and the search display in a blocked manner. We observed distinctive ERP effects of both spatial and temporal attention. The amplitudes for the N2pc, SPCN, and P3 were enhanced by spatial attention indicating a processing benefit of relevant stimulus features at the attended side. Temporal attention accelerated stimulus processing; this was indexed by an earlier onset of the N2pc component and a reduction in reaction times to targets. Most importantly, temporal attention did not interact with spatial attention or stimulus features to influence visual processing. Taken together, the results suggest that temporal attention fosters visual perceptual processing in a visual search task independently from spatial attention and feature-based attention; this provides support for the nonspecific enhancement hypothesis of temporal attention.

  15. A Visualization System for Space-Time and Multivariate Patterns (VIS-STAMP)

    PubMed Central

    Guo, Diansheng; Chen, Jin; MacEachren, Alan M.; Liao, Ke

    2011-01-01

    The research reported here integrates computational, visual, and cartographic methods to develop a geovisual analytic approach for exploring and understanding spatio-temporal and multivariate patterns. The developed methodology and tools can help analysts investigate complex patterns across multivariate, spatial, and temporal dimensions via clustering, sorting, and visualization. Specifically, the approach involves a self-organizing map, a parallel coordinate plot, several forms of reorderable matrices (including several ordering methods), a geographic small multiple display, and a 2-dimensional cartographic color design method. The coupling among these methods leverages their independent strengths and facilitates a visual exploration of patterns that are difficult to discover otherwise. The visualization system we developed supports overview of complex patterns and, through a variety of interactions, enables users to focus on specific patterns and examine detailed views. We demonstrate the system with an application to the IEEE InfoVis 2005 Contest data set, which contains time-varying, geographically referenced, and multivariate data for technology companies in the US. PMID:17073369
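
    A minimal version of the self-organizing-map step that this pipeline builds on (random data, toy grid size, and an assumed learning schedule; the full system couples the SOM to parallel coordinates, reorderable matrices, and cartographic displays):

```python
# Tiny self-organizing map (SOM): place multivariate records onto a 2D grid so
# that nearby grid cells hold similar records, the clustering/ordering step of
# the geovisual analytic pipeline described above. Random data, toy grid.
import numpy as np

rng = np.random.default_rng(3)
data = rng.normal(size=(500, 6))            # 500 records, 6 attributes
grid_h, grid_w = 4, 4
weights = rng.normal(size=(grid_h, grid_w, 6))
gy, gx = np.mgrid[0:grid_h, 0:grid_w]

n_iter, lr0, sigma0 = 2000, 0.5, 2.0
for t in range(n_iter):
    x = data[rng.integers(len(data))]
    # best-matching unit = grid cell whose weight vector is closest to x
    dists = np.linalg.norm(weights - x, axis=2)
    by, bx = np.unravel_index(np.argmin(dists), dists.shape)
    # neighbourhood radius and learning rate shrink over time
    frac = t / n_iter
    sigma = sigma0 * (1.0 - frac) + 0.5
    lr = lr0 * (1.0 - frac) + 0.01
    h = np.exp(-((gy - by) ** 2 + (gx - bx) ** 2) / (2.0 * sigma ** 2))
    weights += lr * h[:, :, None] * (x - weights)

# Assign each record to its best-matching cell ("cluster").
bmu = [np.unravel_index(np.argmin(np.linalg.norm(weights - r, axis=2)),
                        (grid_h, grid_w)) for r in data]
print("records per cell:", np.bincount([y * grid_w + x for y, x in bmu],
                                       minlength=grid_h * grid_w))
```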

  16. Patterned-string tasks: relation between fine motor skills and visual-spatial abilities in parrots.

    PubMed

    Krasheninnikova, Anastasia

    2013-01-01

    String-pulling and patterned-string tasks are often used to analyse perceptual and cognitive abilities in animals. In addition, the paradigm can be used to test the interrelation between visual-spatial and motor performance. Two Australian parrot species, the galah (Eolophus roseicapilla) and the cockatiel (Nymphicus hollandicus), forage on the ground, but only the galah uses its feet to manipulate food. I used a set of string pulling and patterned-string tasks to test whether usage of the feet during foraging is a prerequisite for solving the vertical string pulling problem. Indeed, the two species used techniques that clearly differed in the extent of beak-foot coordination but did not differ in terms of their success in solving the string pulling task. However, when the visual-spatial skills of the subjects were tested, the galahs outperformed the cockatiels. This supports the hypothesis that the fine motor skills needed for advanced beak-foot coordination may be interrelated with certain visual-spatial abilities needed for solving patterned-string tasks. This pattern was also found within each of the two species on the individual level: higher motor abilities positively correlated with performance in patterned-string tasks. This is the first evidence of an interrelation between visual-spatial and motor abilities in non-mammalian animals.

  17. Patterned-String Tasks: Relation between Fine Motor Skills and Visual-Spatial Abilities in Parrots

    PubMed Central

    Krasheninnikova, Anastasia

    2013-01-01

    String-pulling and patterned-string tasks are often used to analyse perceptual and cognitive abilities in animals. In addition, the paradigm can be used to test the interrelation between visual-spatial and motor performance. Two Australian parrot species, the galah (Eolophus roseicapilla) and the cockatiel (Nymphicus hollandicus), forage on the ground, but only the galah uses its feet to manipulate food. I used a set of string pulling and patterned-string tasks to test whether usage of the feet during foraging is a prerequisite for solving the vertical string pulling problem. Indeed, the two species used techniques that clearly differed in the extent of beak-foot coordination but did not differ in terms of their success in solving the string pulling task. However, when the visual-spatial skills of the subjects were tested, the galahs outperformed the cockatiels. This supports the hypothesis that the fine motor skills needed for advanced beak-foot coordination may be interrelated with certain visual-spatial abilities needed for solving patterned-string tasks. This pattern was also found within each of the two species on the individual level: higher motor abilities positively correlated with performance in patterned-string tasks. This is the first evidence of an interrelation between visual-spatial and motor abilities in non-mammalian animals. PMID:24376885

  18. NABIC: A New Access Portal to Search, Visualize, and Share Agricultural Genomics Data

    PubMed Central

    Seol, Young-Joo; Lee, Tae-Ho; Park, Dong-Suk; Kim, Chang-Kug

    2016-01-01

    The National Agricultural Biotechnology Information Center developed an access portal to search, visualize, and share agricultural genomics data with a focus on South Korean information and resources. The portal features an agricultural biotechnology database containing a wide range of omics data from public and proprietary sources. We collected 28.4 TB of data from 162 agricultural organisms, with 10 types of omics data comprising next-generation sequencing sequence read archive, genome, gene, nucleotide, DNA chip, expressed sequence tag, interactome, protein structure, molecular marker, and single-nucleotide polymorphism datasets. Our genomic resources contain information on five animals, seven plants, and one fungus, which is accessed through a genome browser. We also developed a data submission and analysis system as a web service, with easy-to-use functions and cutting-edge algorithms, including those for handling next-generation sequencing data. PMID:26848255

  19. The Autism-Spectrum Quotient and Visual Search: Shallow and Deep Autistic Endophenotypes.

    PubMed

    Gregory, B L; Plaisted-Grant, K C

    2016-05-01

    A high Autism-Spectrum Quotient (AQ) score (Baron-Cohen et al. in J Autism Dev Disord 31(1):5-17, 2001) is increasingly used as a proxy in empirical studies of perceptual mechanisms in autism. Several investigations have assessed perception in non-autistic people measured for AQ, claiming the same relationship exists between performance on perceptual tasks in high-AQ individuals as observed in autism. We question whether the similarity in performance by high-AQ individuals and autistics reflects the same underlying perceptual cause in the context of two visual search tasks administered to a large sample of typical individuals assessed for AQ. Our results indicate otherwise and that deploying the AQ as a proxy for autism introduces unsubstantiated assumptions about high-AQ individuals, the endophenotypes they express, and their relationship to Autistic Spectrum Conditions (ASC) individuals.

  20. HSI-Find: A Visualization and Search Service for Terascale Spectral Image Catalogs

    NASA Astrophysics Data System (ADS)

    Thompson, D. R.; Smith, A. T.; Castano, R.; Palmer, E. E.; Xing, Z.

    2013-12-01

    Imaging spectrometers are remote sensing instruments commonly deployed on aircraft and spacecraft. They provide surface reflectance in hundreds of wavelength channels, creating data cubes known as hyperspectral images. This rich compositional information makes them powerful tools for planetary and terrestrial science. These data products can be challenging to interpret because they contain datapoints numbering in the thousands (Dawn VIR) or millions (AVIRIS-C). Cross-image studies or exploratory searches involving more than one scene are rare; data volumes are often tens of GB per image and typical consumer-grade computers cannot store more than a handful of images in RAM. Visualizing the information in a single scene is challenging since the human eye can only distinguish three color channels out of the hundreds available. To date, analysis has been performed mostly on single images using purpose-built software tools that require extensive training and commercial licenses. The HSIFind software suite provides a scalable distributed solution to the problem of visualizing and searching large catalogs of spectral image data. It consists of a RESTful web service that communicates with a JavaScript-based browser client. The software provides basic visualization through an intuitive interface, allowing users with minimal training to explore the images or view selected spectra. Users can accumulate a library of spectra from one or more images and use these to search for similar materials. The result appears as an intensity map showing the extent of a spectral feature in a scene. Continuum removal can isolate diagnostic absorption features. Server-side mapping uses an efficient matched filter algorithm that can process a megapixel image cube in just a few seconds. This enables real-time interaction, leading to a new way of interacting with the data: the user can launch a search with a single mouse click and see the resulting map in seconds.
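
    The matched filter named above is a standard spectral detection technique. The following minimal sketch (Python with NumPy; not the HSIFind implementation, and the array shapes are assumptions) shows how a user-selected reference spectrum can be turned into an intensity map over an image cube.

    ```python
    import numpy as np

    def matched_filter(cube, target):
        """Classical spectral matched filter.

        cube   : (rows, cols, bands) reflectance image cube
        target : (bands,) reference spectrum selected by the user
        Returns a (rows, cols) intensity map; higher values indicate
        pixels whose spectra resemble the target.
        """
        rows, cols, bands = cube.shape
        X = cube.reshape(-1, bands).astype(np.float64)

        mu = X.mean(axis=0)                    # background mean spectrum
        cov = np.cov(X, rowvar=False)          # background covariance
        cov += 1e-6 * np.eye(bands)            # regularize for stability
        cov_inv = np.linalg.inv(cov)

        d = target - mu                        # target signature relative to background
        w = cov_inv @ d / (d @ cov_inv @ d)    # matched-filter weights, normalized

        scores = (X - mu) @ w                  # one dot product per pixel
        return scores.reshape(rows, cols)

    # Toy usage: a random cube and one of its pixels as the "target" spectrum.
    cube = np.random.rand(100, 120, 224)
    score_map = matched_filter(cube, cube[50, 60])
    ```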

  1. Age-related changes in selective attention and perceptual load during visual search.

    PubMed

    Madden, David J; Langley, Linda K

    2003-03-01

    Three visual search experiments were conducted to test the hypothesis that age differences in selective attention vary as a function of perceptual load (E. A. Maylor & N. Lavie, 1998). Under resource-limited conditions (Experiments 1 and 2), the distraction from irrelevant display items generally decreased as display size (perceptual load) increased. This perceptual load effect was similar for younger and older adults, contrary to the findings of Maylor and Lavie. Distraction at low perceptual loads appeared to reflect both general and specific inhibitory mechanisms. Under more data-limited conditions (Experiment 3), an age-related decline in selective attention was evident, but the age difference was not attributable to capacity limitations as predicted by the perceptual load theory.

  2. Probability cueing influences miss rate and decision criterion in visual searches

    PubMed Central

    Ishibashi, Kazuya; Kita, Shinichi

    2014-01-01

    In visual search tasks, the ratio of target-present to target-absent trials has an important effect on miss rates. The low prevalence effect indicates that we are more likely to miss a target when it occurs rarely rather than frequently. In this study, we examined whether probability cueing modulates the miss rate and the observer's criterion. The results indicated that probability cueing affects miss rates, the average observer's criterion, and reaction time for target-absent trials. These results clearly demonstrate that probability cueing modulates two parameters (i.e., the decision criterion and the quitting threshold) and produces a low prevalence effect. Taken together, the current study and previous studies suggest that the miss rate is not just affected by global prevalence; it is also affected by probability cueing. PMID:25469223

  3. Simultaneous tDCS-fMRI Identifies Resting State Networks Correlated with Visual Search Enhancement

    PubMed Central

    Callan, Daniel E.; Falcone, Brian; Wada, Atsushi; Parasuraman, Raja

    2016-01-01

    This study uses simultaneous transcranial direct current stimulation (tDCS) and functional MRI (fMRI) to investigate tDCS modulation of resting state activity and connectivity that underlies enhancement in behavioral performance. The experiment consisted of three sessions within the fMRI scanner in which participants conducted a visual search task: Session 1: Pre-training (no performance feedback), Session 2: Training (performance feedback given), Session 3: Post-training (no performance feedback). Resting state activity was recorded during the last 5 min of each session. During the 2nd session one group of participants underwent 1 mA tDCS stimulation and another underwent sham stimulation over the right posterior parietal cortex. Resting state spontaneous activity in session 2, as measured by the fractional amplitude of low frequency fluctuations (fALFF), showed significant differences between the tDCS stimulation and sham groups in the precuneus. Resting state functional connectivity from the precuneus to the substantia nigra, a subcortical dopaminergic region, was found to correlate with future improvement in visual search task performance for the stimulation group over the sham group during active stimulation in session 2. The after-effect of stimulation on resting state functional connectivity was measured following a post-training experimental session (session 3). The left cerebellum Lobule VIIa Crus I showed performance-related enhancement in resting state functional connectivity for the tDCS stimulation group over the sham group. The ability to relate the strength of an individual's resting state functional connectivity during tDCS to future enhancement in behavioral performance has wide-ranging implications for neuroergonomic as well as therapeutic and rehabilitative applications. PMID:27014014
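
    For reference, fALFF is the fraction of a resting-state signal's spectral amplitude that falls in the low-frequency band. The sketch below (Python) is a minimal illustration of that measure, not the authors' pipeline; the repetition time and band limits are assumed values.

    ```python
    import numpy as np

    def falff(timeseries, tr=2.0, low=0.01, high=0.08):
        """Fractional amplitude of low-frequency fluctuations (fALFF).

        timeseries : 1-D resting-state BOLD signal for one voxel/region
        tr         : repetition time in seconds (assumed value)
        Returns the ratio of spectral amplitude in [low, high] Hz to the
        amplitude across the whole frequency range.
        """
        ts = timeseries - timeseries.mean()
        freqs = np.fft.rfftfreq(ts.size, d=tr)
        amp = np.abs(np.fft.rfft(ts))
        band = (freqs >= low) & (freqs <= high)
        return amp[band].sum() / amp[1:].sum()   # exclude the DC term

    # Toy usage on a simulated 5-minute run sampled every 2 s.
    signal = np.random.randn(150)
    print(falff(signal, tr=2.0))
    ```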

  4. The effects of the binocular disparity differences between targets and maskers on visual search.

    PubMed

    Gao, Ya-Yue; Schneider, Bruce; Li, Liang

    2017-02-01

    A visual search for targets is facilitated when the target objects are on a different depth plane than other masking objects cluttering the scene. The ability of observers to determine whether one of four letters presented stereoscopically at four symmetrically located positions on the fixation plane differed from the other three was assessed when the target letters were masked by other randomly positioned and oriented letters appearing on the same depth plane as the target letters, or in front, or behind it. Three additional control maskers, derived from the letter maskers, were also presented on the same three depth planes: (1) random-phase maskers (same spectral amplitude composition as the letter masker but with the phase spectrum randomized); (2) random-pixel maskers (the locations of the letter maskers' pixel amplitudes were randomized); (3) letter-fragment maskers (the same letters as in the letter masker but broken up into fragments). Performance improved with target duration when the target-letter plane was in front of the letter-masker plane, but not when the target letters were on the same plane as the masker, or behind it. A comparison of the results for the four different kinds of maskers indicated that maskers consisting of recognizable objects (letters or letter fragments) interfere more with search and comparison judgments than do visual noise maskers having the same spatial frequency profile and contrast. In addition, performance was poorer for letter maskers than for letter-masker fragments, suggesting that the letter maskers interfered more with performance than the letter-fragment maskers because of the lexical activity they elicit.

  5. Hotspot sequential pattern visualization in peatland of Sumatera and Kalimantan using shiny framework

    NASA Astrophysics Data System (ADS)

    Abriantini, G.; Sitanggang, I. S.; Trisminingsih, R.

    2017-01-01

    Fires frequently occur on peatland in Sumatra and Kalimantan. Fires on peatland can be identified by hotspot sequential patterns. Sequential pattern mining is a data mining technique that can be used to analyse hotspot sequential patterns. The Sequential PAttern Discovery using Equivalence classes (SPADE) algorithm can be applied to extract hotspot sequential patterns. The objectives of this work are: 1) to obtain hotspot sequential patterns in Sumatra and Kalimantan in 2014 and 2015, and 2) to develop a web-based application using the Shiny framework, available as an R package, for hotspot sequential pattern visualization in peatland of Sumatra and Kalimantan. Hotspot sequential patterns were obtained using a minimum support of 0.01, with the analysis focused on hotspot sequences of two or more events. This work generated 89 such sequences for Sumatra in 2014, 147 for Sumatra in 2015, 48 for Kalimantan in 2014, and 51 for Kalimantan in 2015. Hotspot sequential patterns are visualized based on peatland characteristics, weather, and socioeconomic attributes. The features of this web-based application have been tested and the results show that all features work properly according to the test scenario.
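
    The study itself relies on the SPADE implementation available in R. Purely as a language-neutral illustration of the underlying idea, the sketch below (Python) counts ordered length-2 event patterns across sequences and keeps those meeting a minimum support; the event labels are hypothetical, not the study's actual variables.

    ```python
    from collections import Counter
    from itertools import combinations

    def frequent_pairs(sequences, min_support=0.01):
        """Count ordered length-2 patterns (a occurs before b) across event
        sequences and keep those whose support meets the threshold.

        sequences   : list of event lists, each ordered in time
        min_support : minimum fraction of sequences containing the pattern
        """
        counts = Counter()
        for seq in sequences:
            seen = set()
            # every ordered pair of positions contributes one candidate pattern
            for a, b in combinations(seq, 2):
                seen.add((a, b))
            counts.update(seen)           # count each pattern once per sequence
        n = len(sequences)
        return {p: c / n for p, c in counts.items() if c / n >= min_support}

    # Toy hotspot sequences labelled with illustrative peatland attributes.
    data = [["shrub", "dry", "fire"], ["shrub", "fire"], ["forest", "dry"]]
    print(frequent_pairs(data, min_support=0.5))
    ```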

  6. Visual illusions in predator-prey interactions: birds find moving patterned prey harder to catch.

    PubMed

    Hämäläinen, Liisa; Valkonen, Janne; Mappes, Johanna; Rojas, Bibiana

    2015-09-01

    Several antipredator strategies are related to prey colouration. Some colour patterns can create visual illusions during movement (such as motion dazzle), making it difficult for a predator to capture moving prey successfully. Experimental evidence about motion dazzle, however, is still very scarce and comes only from studies using human predators capturing moving prey items in computer games. We tested a motion dazzle effect using for the first time natural predators (wild great tits, Parus major). We used artificial prey items bearing three different colour patterns: uniform brown (control), black with elongated yellow pattern and black with interrupted yellow pattern. The last two resembled colour patterns of the aposematic, polymorphic dart-poison frog Dendrobates tinctorius. We specifically tested whether an elongated colour pattern could create visual illusions when combined with straight movement. Our results, however, do not support this hypothesis. We found no differences in the number of successful attacks towards prey items with different patterns (elongated/interrupted) moving linearly. Nevertheless, both prey types were significantly more difficult to catch compared to the uniform brown prey, indicating that both colour patterns could provide some benefit for a moving individual. Surprisingly, no effect of background (complex vs. plain) was found. This is the first experiment with moving prey showing that some colour patterns can affect avian predators' ability to capture moving prey, but the mechanisms lowering the capture rate are still poorly understood.

  7. Visual search, movement behaviour and boat control during the windward mark rounding in sailing.

    PubMed

    Pluijms, Joost P; Cañal-Bruland, Rouwen; Hoozemans, Marco J M; Savelsbergh, Geert J P

    2015-01-01

    In search of key-performance predictors in sailing, we examined to what degree visual search, movement behaviour and boat control contribute to skilled performance while rounding the windward mark. To this end, we analysed 62 windward mark roundings sailed without opponents and 40 windward mark roundings sailed with opponents while competing in small regattas. Across conditions, results revealed that better performances were related to gazing more to the tangent point during the actual rounding. More specifically, in the condition without opponents, skilled performance was associated with gazing more outside the dinghy during the actual rounding, while in the condition with opponents, superior performance was related to gazing less outside the dinghy. With respect to movement behaviour, superior performance was associated with the release of the trimming lines close to rounding the mark. In addition, better performances were related to approaching the mark with little heel, yet heeling the boat more to the windward side when being close to the mark. Potential implications for practice are suggested for each phase of the windward mark rounding.

  8. Long-term priming of visual search prevails against the passage of time and counteracting instructions.

    PubMed

    Kruijne, Wouter; Meeter, Martijn

    2016-08-01

    Studies on intertrial priming have shown that in visual search experiments, the preceding trial automatically affects search performance: facilitating it when the target features repeat and giving rise to switch costs when they change, so-called (short-term) intertrial priming. These effects also occur at longer time scales: When 1 of 2 possible target colors is more frequent during an experiment block, this results in a prolonged and persistent facilitation for the color that was biased, long after the frequency bias is gone, so-called long-term priming. In this study, we explore the robustness of such long-term priming. In Experiment 1, participants were fully informed of the bias and instructed to prioritize the other, unbiased color. Despite these instructions, long-term priming of the biased color persisted in this block, suggesting that guidance by long-term priming is an implicit effect. In Experiment 2, long-term priming was built up in 1 experimental session and was then assessed in a second session a week later. Long-term priming persisted across this week, emphasizing that long-term priming is truly a phenomenon of long-term memory. The results support the view that priming results from the automatic and implicit retrieval of memory traces of past trials.

  9. Previously seen and expected stimuli elicit surprise in the context of visual search.

    PubMed

    Retell, James D; Becker, Stefanie I; Remington, Roger W

    2016-04-01

    In the context of visual search, surprise is the phenomenon by which a previously unseen and unexpected stimulus exogenously attracts spatial attention. Capture by such a stimulus occurs, by definition, independent of task goals and is thought to depend on the extent to which the stimulus deviates from expectations. However, the relative contributions of prior exposure and explicit knowledge of an unexpected event to the surprise response have not yet been systematically investigated. Here observers searched for a specific color while ignoring irrelevant cues of different colors presented prior to the target display. After a brief familiarization period, we presented an irrelevant motion cue to elicit surprise. Across conditions we varied prior exposure to the motion stimulus (seen versus unseen) and top-down expectations of its occurrence (expected versus unexpected) to assess the extent to which each of these factors contributes to surprise. We found no attenuation of the surprise response when observers were pre-exposed to the motion cue and/or had explicit knowledge of its occurrence. Our results show that it is neither sufficient nor necessary that a stimulus be new and unannounced to elicit surprise, and suggest that the expectations that determine the surprise response are highly context specific.

  10. How You Move Is What I See: Planning an Action Biases a Partner's Visual Search.

    PubMed

    Dötsch, Dominik; Vesper, Cordula; Schubö, Anna

    2017-01-01

    Activating action representations can modulate perceptual processing of action-relevant dimensions, indicative of a common coding of perception and action. When two or more agents work together in joint action, individual agents often need to consider not only their own actions and their effects on the world, but also predict the actions of a co-acting partner. If in these situations the action of a partner is represented in a functionally equivalent way to the agent's own actions, one may also expect interaction effects between action and perception across jointly acting individuals. The present study investigated whether the action of a co-acting partner may modulate an agent's perception. The "performer" prepared a grasping or pointing movement toward a physical target while the "searcher" performed a visual search task. The performer's planned action impaired the searcher's perceptual performance when the search target dimension was relevant to the performer's movement execution. These results demonstrate an action-induced modulation of perceptual processes across participants and indicate that agents represent their partner's action by employing the same perceptual system they use to represent their own actions. We suggest that task representations in joint action operate along multiple levels of a cross-brain predictive coding system, which provides agents with information about a partner's actions when they coordinate to reach a common goal.

  11. Investigation of Attentional Bias in Obsessive Compulsive Disorder with and without Depression in Visual Search

    PubMed Central

    Morein-Zamir, Sharon; Papmeyer, Martina; Durieux, Alice; Fineberg, Naomi A.; Sahakian, Barbara J.; Robbins, Trevor W.

    2013-01-01

    Whether Obsessive Compulsive Disorder (OCD) is associated with an increased attentional bias to emotive stimuli remains controversial. Additionally, it is unclear whether comorbid depression modulates abnormal emotional processing in OCD. This study examined attentional bias to OC-relevant scenes using a visual search task. Controls, non-depressed and depressed OCD patients searched for their personally selected positive images amongst their negative distractors, and vice versa. Whilst the OCD groups were slower than healthy individuals in rating the images, there were no group differences in the magnitude of negative bias to concern-related scenes. A second experiment employing a common set of images replicated the results on an additional sample of OCD patients. Although there was a larger bias to negative OC-related images without pre-exposure overall, no group differences in attentional bias were observed. However, OCD patients subsequently rated the images more slowly and more negatively, again suggesting post-attentional processing abnormalities. The results argue against a robust attentional bias in OCD patients, regardless of their depression status and speak to generalized difficulties disengaging from negative valence stimuli. Rather, post-attentional processing abnormalities may account for differences in emotional processing in OCD. PMID:24260343

  12. The development of visual search in infancy: Attention to faces versus salience

    PubMed Central

    Kwon, Mee-Kyoung; Setoodehnia, Mielle; Baek, Jongsoo; Luck, Steven J.; Oakes, Lisa M.

    2015-01-01

    Four experiments examined how faces compete with physically salient stimuli for the control of attention in 4-month-old, 6-month-old, and 8-month-old infants (N = 117 total). Three computational models were used to quantify physical salience. We presented infants with visual search arrays containing a face and familiar object(s), such as shoes and flowers. Six- and 8-month-old infants looked first and longest at faces; their looking was not strongly influenced by physical salience. In contrast, 4-month-old infants showed a visual preference for the face only when the arrays contained 2 items and the competitor was relatively low in salience. When the arrays contained many items or the only competitor was relatively high in salience, 4-month-old infants’ looks were more often directed at the most salient item. Thus, over ages of 4 to 8 months, physical salience has a decreasing influence and faces have an increasing influence on where and how long infants look. PMID:26866728
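
    The abstract above does not specify which three salience models were used. Purely as an illustration of the general idea of a bottom-up salience map, the sketch below (Python with SciPy; all parameters hypothetical) scores image regions by luminance center-surround contrast.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def salience_map(image, center_sigma=2, surround_sigma=8):
        """Crude bottom-up salience: luminance center-surround contrast
        computed as a difference of Gaussians, rectified and normalized.

        image : 2-D grayscale array (e.g., a search array shown to infants)
        """
        center = gaussian_filter(image.astype(float), center_sigma)
        surround = gaussian_filter(image.astype(float), surround_sigma)
        contrast = np.abs(center - surround)
        return contrast / (contrast.max() + 1e-12)

    # Toy usage: a bright square on a dark background is the most salient region.
    img = np.zeros((200, 200))
    img[80:120, 80:120] = 1.0
    smap = salience_map(img)
    print(np.unravel_index(smap.argmax(), smap.shape))
    ```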

  13. Spatial Attention can Bias Search in Visual Short-Term Memory

    PubMed Central

    Nobre, Anna C.; Griffin, Ivan C.; Rao, Anling

    2007-01-01

    Whereas top-down attentional control is known to bias perceptual functions at many levels of stimulus analysis, its possible influence over memory-related functions remains uncharted. Our experiment combined behavioral measures and event-related potentials (ERPs) to test the ability of spatial orienting to bias functions associated with visual short-term memory (VSTM), and to shed light on the neural mechanisms involved. In particular, we investigated whether orienting attention to a spatial location within an array maintained in VSTM could facilitate the search for a specific remembered item. Participants viewed arrays of one, two or four differently colored items, followed by an informative spatial (100% valid) or uninformative neutral retro-cue (1500–2500 ms after the array), and later by a probe stimulus (500–1000 ms after the retro-cue). The task was to decide whether the probe stimulus had been present in the array. Behavioral results showed that spatial retro-cues improved both accuracy and response times for making decisions about the presence of the probe item in VSTM, and significantly attenuated performance decrements caused by increasing VSTM load. We also identified a novel ERP component (N3RS) specifically associated with searching for an item within VSTM. Paralleling the behavioral results, the amplitude and duration of the N3RS systematically increased with VSTM load in neutral retro-cue trials. When spatial retro-cues were provided, this “retro-search” component was absent. Our findings clearly show that the influence of top-down attentional biases extends to mnemonic functions, and, specifically, that searching for items within VSTM can be under flexible voluntary control. PMID:18958218

  14. Visual Learning Induces Changes in Resting-State fMRI Multivariate Pattern of Information.

    PubMed

    Guidotti, Roberto; Del Gratta, Cosimo; Baldassarre, Antonello; Romani, Gian Luca; Corbetta, Maurizio

    2015-07-08

    When measured with functional magnetic resonance imaging (fMRI) in the resting state (R-fMRI), spontaneous activity is correlated between brain regions that are anatomically and functionally related. Learning and/or task performance can induce modulation of the resting synchronization between brain regions. Moreover, at the neuronal level spontaneous brain activity can replay patterns evoked by a previously presented stimulus. Here we test whether visual learning/task performance can induce a change in the patterns of coded information in R-fMRI signals consistent with a role of spontaneous activity in representing task-relevant information. Human subjects underwent R-fMRI before and after perceptual learning on a novel visual shape orientation discrimination task. Task-evoked fMRI patterns to trained versus novel stimuli were recorded after learning was completed, and before the second R-fMRI session. Using multivariate pattern analysis on task-evoked signals, we found patterns in several cortical regions, as follows: visual cortex, V3/V3A/V7; within the default mode network, precuneus, and inferior parietal lobule; and, within the dorsal attention network, intraparietal sulcus, which discriminated between trained and novel visual stimuli. The accuracy of classification was strongly correlated with behavioral performance. Next, we measured multivariate patterns in R-fMRI signals before and after learning. The frequency and similarity of resting states representing the task/visual stimuli states increased post-learning in the same cortical regions recruited by the task. These findings support a representational role of spontaneous brain activity.
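
    Analyses of this kind typically train a classifier on voxel patterns and evaluate it with cross-validation. The sketch below (Python with scikit-learn) illustrates that general multivariate pattern analysis workflow on synthetic placeholder data; it is not the authors' pipeline, and the trial counts, voxel counts, and effect size are invented for the example.

    ```python
    import numpy as np
    from sklearn.svm import LinearSVC
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Synthetic placeholder data: one row per trial, one column per voxel in a
    # region of interest; labels mark trained (1) versus novel (0) stimuli.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((80, 500))
    y = np.repeat([0, 1], 40)
    X[y == 1, :20] += 0.4        # weak multivoxel signal for the trained class

    clf = make_pipeline(StandardScaler(), LinearSVC(C=1.0, dual=False))
    acc = cross_val_score(clf, X, y, cv=5)   # run-wise CV would be used in practice
    print("mean decoding accuracy:", acc.mean())
    ```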

  15. Active training and driving-specific feedback improve older drivers' visual search prior to lane changes

    PubMed Central

    2012-01-01

    Background Driving retraining classes may offer an opportunity to attenuate some effects of aging that may alter driving skills. Unfortunately, there is evidence that classroom programs (driving refresher courses) do not improve the driving performance of older drivers. The aim of the current study was to evaluate whether simulator training sessions with video-based feedback can modify the visual search behaviors of older drivers while changing lanes in urban driving. Methods In order to evaluate the effectiveness of the video-based feedback training, 10 older drivers who received a driving refresher course and feedback about their driving performance were tested with an on-road standardized evaluation before and after participating in a simulator training program (Feedback group). Their results were compared to a Control group (12 older drivers) who received the same refresher course and in-simulator active practice as the Feedback group without receiving driving-specific feedback. Results After attending the training program, the Control group showed no increase in the frequency of visual inspection of three regions of interest (rear view and left side mirrors, and blind spot). In contrast, for the Feedback group, combining active training and driving-specific feedback increased the frequency of blind spot inspection by 100% (from 32.3% to 64.9% of lane changes preceded by a verification). Conclusions These results suggest that simulator training combined with driving-specific feedback helped older drivers to improve their visual inspection strategies, and that in-simulator training transferred positively to on-road driving. In order to be effective, it is claimed that driving programs should include active practice sessions with driving-specific feedback. Simulators offer a unique environment for developing such programs adapted to older drivers' needs. PMID:22385499

  16. [The search for electrophysiological predictors of visual comfort after presbyopia correction with contact lenses].

    PubMed

    El Ameen, A; Majzoub, S; Pisella, P-J

    2017-03-24

    Starting at 40 years of age, presbyopia affects a quarter of the world population. Many techniques of presbyopia surgery have emerged in recent years. The purpose of this study was to compare monovision and multifocality and to identify clinical and electrophysiological predictive markers of visual comfort for each correction available in clinical practice. Ten presbyopic patients participated in this study. Patients received monovision and multifocal correction using contact lenses for three weeks each, in a random order. A clinical evaluation (visual acuity, TNO test, binocular contrast sensitivity and quality of vision questionnaires) and an electrophysiological evaluation (monocular and binocular pattern VEPs with multiple spatial frequencies: 60, 30 and 15') were performed before and after each correction modality. The P100 was significantly wider and slightly earlier after binocular compared to monocular stimulation at T0. The TNO stereopsis score decreased significantly after correction. No other significant differences, on either clinical or electrophysiological criteria, were found between the two modes of correction. Several significant correlations were found between the correction-dependent stereoacuity difference and the binocular pattern evoked potentials at T0. The larger the stereoacuity difference (better stereoacuity with multifocal compensation), the longer the latency of the P100 using 60' checks (R=0.82; P=0.004) and the greater the amplitude of the N75 using 30' checks (R=0.652; P=0.04). Our study found no differences between the two types of correction, but it highlights the value of VEPs in current practice and of measuring the P100 wave, the best and most consistent indicator of stereopsis, to predict visual comfort after presbyopia compensation.

  17. "Multisensory brand search: How the meaning of sounds guides consumers' visual attention": Correction to Knoeferle et al. (2016).

    PubMed

    2017-03-01

    Reports an error in "Multisensory brand search: How the meaning of sounds guides consumers' visual attention" by Klemens M. Knoeferle, Pia Knoeferle, Carlos Velasco and Charles Spence (Journal of Experimental Psychology: Applied, 2016[Jun], Vol 22[2], 196-210). In the article, under Experiment 2, Design and Stimuli, the set number of target products and visual distractors reported in the second paragraph should be 20 and 13, respectively: "On each trial, the 16 products shown in the display were randomly selected from a set of 20 products belonging to different categories. Out of the set of 20 products, seven were potential targets, whereas the other 13 were used as visual distractors only throughout the experiment (since they were not linked to specific usage or consumption sounds)." Consequently, Appendix A in the supplemental materials has been updated. (The following abstract of the original article appeared in record 2016-28876-002.) Building on models of crossmodal attention, the present research proposes that brand search is inherently multisensory, in that the consumers' visual search for a specific brand can be facilitated by semantically related stimuli that are presented in another sensory modality. A series of 5 experiments demonstrates that the presentation of spatially nonpredictive auditory stimuli associated with products (e.g., usage sounds or product-related jingles) can crossmodally facilitate consumers' visual search for, and selection of, products. Eye-tracking data (Experiment 2) revealed that the crossmodal effect of auditory cues on visual search manifested itself not only in RTs, but also in the earliest stages of visual attentional processing, thus suggesting that the semantic information embedded within sounds can modulate the perceptual saliency of the target products' visual representations. Crossmodal facilitation was even observed for newly learnt associations between unfamiliar brands and sonic logos, implicating multisensory short

  18. The role of pattern recognition in creative problem solving: a case study in search of new mathematics for biology.

    PubMed

    Hong, Felix T

    2013-09-01

    Rosen classified sciences into two categories: formalizable and unformalizable. Whereas formalizable sciences expressed in terms of mathematical theories were highly valued by Rutherford, Hutchins pointed out that unformalizable parts of soft sciences are of genuine interest and importance. Attempts to build mathematical theories for biology in the past century were met with modest and sporadic successes, and only in simple systems. In this article, a qualitative model of humans' high creativity is presented as a starting point to consider whether the gap between soft and hard sciences is bridgeable. Simonton's chance-configuration theory, which mimics the process of evolution, was modified and improved. By treating problem solving as a process of pattern recognition, the known dichotomy of visual thinking vs. verbal thinking can be recast in terms of analog pattern recognition (non-algorithmic process) and digital pattern recognition (algorithmic process), respectively. Additional concepts commonly encountered in computer science, operations research and artificial intelligence were also invoked: heuristic searching, parallel and sequential processing. The refurbished chance-configuration model is now capable of explaining several long-standing puzzles in human cognition: a) why novel discoveries often came without prior warning, b) why some creators had no idea about the source of inspiration even after the fact, c) why some creators were consistently luckier than others, and, last but not least, d) why it was so difficult to explain what intuition, inspiration, insight, hunch, serendipity, etc. are all about. The predictive power of the present model was tested by means of resolving Zeno's paradox of Achilles and the Tortoise after one deliberately invoked visual thinking. Additional evidence of its predictive power must await future large-scale field studies. The analysis was further generalized to constructions of scientific theories in general. This approach

  19. Neural evidence for distracter suppression during visual search in real-world scenes.

    PubMed

    Seidl, Katharina N; Peelen, Marius V; Kastner, Sabine

    2012-08-22

    Selecting visual information from cluttered real-world scenes involves the matching of visual input to the observer's attentional set--an internal representation of objects that are relevant for current behavioral goals. When goals change, a new attentional set needs to be instantiated, requiring the suppression of the previous set to prevent distraction by objects that are no longer relevant. In the present fMRI study, we investigated how such suppression is implemented at the neural level. We measured human brain activity in response to natural scene photographs that could contain objects from (1) a currently relevant (target) category, (2) a previously but not presently relevant (distracter) category, and/or (3) a never relevant (neutral) category. Across conditions, multivoxel response patterns in object-selective cortex carried information about objects present in the scenes. However, this information strongly depended on the task relevance of the objects. As expected, information about the target category was significantly increased relative to the neutral category, indicating top-down enhancement of task-relevant information. Importantly, information about the distracter category was significantly reduced relative to the neutral category, indicating that the processing of previously relevant objects was suppressed. Such active suppression at the level of high-order visual cortex may serve to prevent the erroneous selection of, or interference from, objects that are no longer relevant to ongoing behavior. We conclude that the enhancement of relevant information and the suppression of distracting information both contribute to the efficient selection of visual information from cluttered real-world scenes.

  20. The effects of action video game experience on the time course of inhibition of return and the efficiency of visual search.

    PubMed

    Castel, Alan D; Pratt, Jay; Drummond, Emily

    2005-06-01

    The ability to efficiently search the visual environment is a critical function of the visual system, and recent research has shown that experience playing action video games can influence visual selective attention. The present research examined the similarities and differences between video game players (VGPs) and non-video game players (NVGPs) in terms of the ability to inhibit attention from returning to previously attended locations, and the efficiency of visual search in easy and more demanding search environments. Both groups were equally good at inhibiting the return of attention to previously cued locations, although VGPs displayed overall faster reaction times to detect targets. VGPs also showed overall faster response time for easy and difficult visual search tasks compared to NVGPs, largely attributed to faster stimulus-response mapping. The findings suggest that relative to NVGPs, VGPs rely on similar types of visual processing strategies but possess faster stimulus-response mappings in visual attention tasks.

  1. On Assisting a Visual-Facial Affect Recognition System with Keyboard-Stroke Pattern Information

    NASA Astrophysics Data System (ADS)

    Stathopoulou, I.-O.; Alepis, E.; Tsihrintzis, G. A.; Virvou, M.

    Towards realizing a multimodal affect recognition system, we are considering the advantages of assisting a visual-facial expression recognition system with keyboard-stroke pattern information. Our work is based on the assumption that the visual-facial and keyboard modalities are complementary to each other and that their combination can significantly improve the accuracy in affective user models. Specifically, we present and discuss the development and evaluation process of two corresponding affect recognition subsystems, with emphasis on the recognition of 6 basic emotional states, namely happiness, sadness, surprise, anger and disgust as well as the emotion-less state which we refer to as neutral. We find that emotion recognition by the visual-facial modality can be aided greatly by keyboard-stroke pattern information and the combination of the two modalities can lead to better results towards building a multimodal affect recognition system.
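
    One common way to combine complementary modalities like these is late fusion of each recognizer's per-emotion probabilities. The sketch below (Python) is a minimal illustration under an assumed modality weight and made-up probability vectors; it is not the system described above.

    ```python
    import numpy as np

    EMOTIONS = ["happiness", "sadness", "surprise", "anger", "disgust", "neutral"]

    def fuse(p_face, p_keys, w_face=0.6):
        """Weighted late fusion of two modality-specific probability vectors.

        p_face, p_keys : per-emotion probabilities from the visual-facial and
                         keyboard-stroke recognizers (illustrative inputs)
        w_face         : assumed relative weight of the facial modality
        """
        fused = w_face * np.asarray(p_face) + (1 - w_face) * np.asarray(p_keys)
        return EMOTIONS[int(fused.argmax())], fused / fused.sum()

    # Toy example: the face looks ambiguous, keystroke dynamics tip it to anger.
    face = [0.30, 0.05, 0.05, 0.30, 0.10, 0.20]
    keys = [0.05, 0.05, 0.05, 0.60, 0.10, 0.15]
    print(fuse(face, keys))
    ```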

  2. Nurses' Behaviors and Visual Scanning Patterns May Reduce Patient Identification Errors

    ERIC Educational Resources Information Center

    Marquard, Jenna L.; Henneman, Philip L.; He, Ze; Jo, Junghee; Fisher, Donald L.; Henneman, Elizabeth A.

    2011-01-01

    Patient identification (ID) errors occurring during the medication administration process can be fatal. The aim of this study is to determine whether differences in nurses' behaviors and visual scanning patterns during the medication administration process influence their capacities to identify patient ID errors. Nurse participants (n = 20)…

  3. Patterns of Visual Attention to Faces and Objects in Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    McPartland, James C.; Webb, Sara Jane; Keehn, Brandon; Dawson, Geraldine

    2011-01-01

    This study used eye-tracking to examine visual attention to faces and objects in adolescents with autism spectrum disorder (ASD) and typical peers. Point of gaze was recorded during passive viewing of images of human faces, inverted human faces, monkey faces, three-dimensional curvilinear objects, and two-dimensional geometric patterns.…

  4. Measuring the impact of health policies using Internet search patterns: the case of abortion

    PubMed Central

    2010-01-01

    Background Internet search patterns have emerged as a novel data source for monitoring infectious disease trends. We propose that these data can also be used more broadly to study the impact of health policies across different regions in a more efficient and timely manner. Methods As a test use case, we studied the relationships between abortion-related search volume, local abortion rates, and local abortion policies available for study. Results Our initial integrative analysis found that, both in the US and internationally, the volume of Internet searches for abortion is inversely proportional to local abortion rates and directly proportional to local restrictions on abortion. Conclusion These findings are consistent with published evidence that local restrictions on abortion lead individuals to seek abortion services outside of their area. Further validation of these methods has the potential to produce a timely, complementary data source for studying the effects of health policies. PMID:20738850

  5. An exploration of search patterns and credibility issues among older adults seeking online health information.

    PubMed

    Robertson-Lang, Laura; Major, Sonya; Hemming, Heather

    2011-12-01

    The Internet is an important resource for health information, among younger and older people alike. Unfortunately, there are limitations associated with online health information. Research is needed on the quality of information found online and on whether users are being critical consumers of the information they find. Also, there is a need for research investigating online use among adults aged 65 and over - a rapidly growing demographic of Internet users. The current study presents important descriptive data about the search patterns of older adults seeking online health information, the types of health topics they research, and whether they consider credibility issues when retrieving online health information. A comparison is also made between search strategies used in printed text and hypertext environments. The results, which have implications with respect to credibility issues, highlight the need to increase awareness about critical searching skills among older adult Internet users.

  6. Practice Makes Improvement: How Adults with Autism Out-Perform Others in a Naturalistic Visual Search Task

    ERIC Educational Resources Information Center

    Gonzalez, Cleotilde; Martin, Jolie M.; Minshew, Nancy J.; Behrmann, Marlene

    2013-01-01

    People with autism spectrum disorder (ASD) often exhibit superior performance in visual search compared to others. However, most studies demonstrating this advantage have employed simple, uncluttered images with fully visible targets. We compare the performance of high-functioning adults with ASD and matched controls on a naturalistic luggage…

  7. Age-Related Occipito-Temporal Hypoactivation during Visual Search: Relationships between mN2pc Sources and Performance

    ERIC Educational Resources Information Center

    Lorenzo-Lopez, L.; Gutierrez, R.; Moratti, S.; Maestu, F.; Cadaveira, F.; Amenedo, E.

    2011-01-01

    Recently, an event-related potential (ERP) study (Lorenzo-Lopez et al., 2008) provided evidence that normal aging significantly delays and attenuates the electrophysiological correlate of the allocation of visuospatial attention (N2pc component) during a feature-detection visual search task. To further explore the effects of normal aging on the…

  8. How Prior Knowledge and Colour Contrast Interfere Visual Search Processes in Novice Learners: An Eye Tracking Study

    ERIC Educational Resources Information Center

    Sonmez, Duygu; Altun, Arif; Mazman, Sacide Guzin

    2012-01-01

    This study investigates how prior content knowledge and prior exposure to microscope slides on the phases of mitosis effect students' visual search strategies and their ability to differentiate cells that are going through any phases of mitosis. Two different sets of microscope slide views were used for this purpose; with high and low colour…

  9. A Clash of Bottom-Up and Top-Down Processes in Visual Search: The Reversed Letter Effect Revisited

    ERIC Educational Resources Information Center

    Zhaoping, Li; Frith, Uta

    2011-01-01

    It is harder to find the letter "N" among its mirror reversals than vice versa, an inconvenient finding for bottom-up saliency accounts based on primary visual cortex (V1) mechanisms. However, in line with this account, we found that in dense search arrays, gaze first landed on either target equally fast. Remarkably, after first landing,…

  10. Visual Search in Ecological and Non-Ecological Displays: Evidence for a Non-Monotonic Effect of Complexity on Performance

    PubMed Central

    Chassy, Philippe; Gobet, Fernand

    2013-01-01

    Considerable research has been carried out on visual search, with single or multiple targets. However, most studies have used artificial stimuli with low ecological validity. In addition, little is known about the effects of target complexity and expertise in visual search. Here, we investigate visual search in three conditions of complexity (detecting a king, detecting a check, and detecting a checkmate) with chess players of two levels of expertise (novices and club players). Results show that the influence of target complexity depends on level of structure of the visual display. Different functional relationships were found between artificial (random chess positions) and ecologically valid (game positions) stimuli: With artificial, but not with ecologically valid stimuli, a “pop out” effect was present when a target was visually more complex than distractors but could be captured by a memory chunk. This suggests that caution should be exercised when generalising from experiments using artificial stimuli with low ecological validity to real-life stimuli. PMID:23320084

  11. A Game of Hide and Seek: Expectations of Clumpy Resources Influence Hiding and Searching Patterns

    PubMed Central

    Wilke, Andreas; Minich, Steven; Panis, Megane; Langen, Tom A.; Skufca, Joseph D.; Todd, Peter M.

    2015-01-01

    Resources are often distributed in clumps or patches in space, unless an agent is trying to protect them from discovery and theft using a dispersed distribution. We uncover human expectations of such spatial resource patterns in collaborative and competitive settings via a sequential multi-person game in which participants hid resources for the next participant to seek. When collaborating, resources were mostly hidden in clumpy distributions, but when competing, resources were hidden in more dispersed (random or hyperdispersed) patterns to increase the searching difficulty for the other player. More dispersed resource distributions came at the cost of higher overall hiding (as well as searching) times, decreased payoffs, and an increased difficulty when the hider had to recall earlier hiding locations at the end of the experiment. Participants’ search strategies were also affected by their underlying expectations, using a win-stay lose-shift strategy appropriate for clumpy resources when searching for collaboratively-hidden items, but moving equally far after finding or not finding an item in competitive settings, as appropriate for dispersed resources. Thus participants showed expectations for clumpy versus dispersed spatial resources that matched the distributions commonly found in collaborative versus competitive foraging settings. PMID:26154661

  12. A Game of Hide and Seek: Expectations of Clumpy Resources Influence Hiding and Searching Patterns.

    PubMed

    Wilke, Andreas; Minich, Steven; Panis, Megane; Langen, Tom A; Skufca, Joseph D; Todd, Peter M

    2015-01-01

    Resources are often distributed in clumps or patches in space, unless an agent is trying to protect them from discovery and theft using a dispersed distribution. We uncover human expectations of such spatial resource patterns in collaborative and competitive settings via a sequential multi-person game in which participants hid resources for the next participant to seek. When collaborating, resources were mostly hidden in clumpy distributions, but when competing, resources were hidden in more dispersed (random or hyperdispersed) patterns to increase the searching difficulty for the other player. More dispersed resource distributions came at the cost of higher overall hiding (as well as searching) times, decreased payoffs, and an increased difficulty when the hider had to recall earlier hiding locations at the end of the experiment. Participants' search strategies were also affected by their underlying expectations, using a win-stay lose-shift strategy appropriate for clumpy resources when searching for collaboratively-hidden items, but moving equally far after finding or not finding an item in competitive settings, as appropriate for dispersed resources. Thus participants showed expectations for clumpy versus dispersed spatial resources that matched the distributions commonly found in collaborative versus competitive foraging settings.

  13. Early visual tagging: effects of target-distractor similarity and old age on search, subitization, and counting.

    PubMed

    Watson, Derrick G; Maylor, Elizabeth A; Allen, Gareth E J; Bruce, Lucy A M

    2007-06-01

    Three experiments examined the effects of target-distractor (T-D) similarity and old age on the efficiency of searching for single targets and enumerating multiple targets. Experiment 1 showed that increasing T-D similarity selectively reduced the efficiency of enumerating small (< 4) numerosities (subitizing) but had little effect on enumerating larger numerosities (counting) or searching for a single target. Experiment 2 provided converging evidence using fixation frequencies and a finer range of T-D similarities. Experiment 3 showed that T-D similarity had a greater impact on older than on young adults, but only for subitizing. The data are discussed in terms of the mechanisms and architecture of early visual tagging, dissociable effects in search and enumeration, and the effects of aging on visual processing.

  14. Target templates: the precision of mental representations affects attentional guidance and decision-making in visual search

    PubMed Central

    Hout, Michael C.; Goldinger, Stephen D.

    2014-01-01

    When people look for things in the environment, they use target templates—mental representations of the objects they are attempting to locate—to guide attention and to assess incoming visual input as potential targets. However, unlike laboratory participants, searchers in the real world rarely have perfect knowledge regarding the potential appearance of targets. In seven experiments, we examined how the precision of target templates affects the ability to conduct visual search. Specifically, we degraded template precision in two ways: 1) by contaminating searchers’ templates with inaccurate features, and 2) by introducing extraneous features to the template that were unhelpful. We recorded eye movements to allow inferences regarding the relative extents to which attentional guidance and decision-making are hindered by template imprecision. Our findings support a dual-function theory of the target template and highlight the importance of examining template precision in visual search. PMID:25214306

  15. The effects of circadian phase, time awake, and imposed sleep restriction on performing complex visual tasks: Evidence from comparative visual search

    PubMed Central

    Pomplun, Marc; Silva, Edward J.; Ronda, Joseph M.; Cain, Sean W.; Münch, Mirjam Y.; Czeisler, Charles A.; Duffy, Jeanne F.

    2012-01-01

    Cognitive performance not only differs between individuals, but also varies within them, influenced by factors that include sleep-wakefulness and biological time of day (circadian phase). Previous studies have shown that both factors influence accuracy rather than the speed of performing a visual search task, which can be hazardous in safety-critical tasks such as air-traffic control or baggage screening. However, prior investigations used simple, brief search tasks requiring little use of working memory. In order to study the effects of circadian phase, time awake, and chronic sleep restriction on the more realistic scenario of longer tasks requiring the sustained interaction of visual working memory and attentional control, the present study employed two comparative visual search tasks. In these tasks, participants had to detect a mismatch between two otherwise identical object distributions, with one of the tasks (mirror task) requiring an additional mental image transformation. Time awake and circadian phase both had significant influences on the speed, but not the accuracy of task performance. Over the course of three weeks of chronic sleep restriction, speed but not accuracy of task performance was impacted. The results suggest measures for safer performance of important tasks and point out the importance of minimizing the impact of circadian phase and sleep-wake history in laboratory vision experiments. PMID:22836655

  16. Case study of visualizing global user download patterns using Google Earth and NASA World Wind

    NASA Astrophysics Data System (ADS)

    Zong, Ziliang; Job, Joshua; Zhang, Xuesong; Nijim, Mais; Qin, Xiao

    2012-01-01

    Geo-visualization is significantly changing the way we view spatial data and discover information. On the one hand, a large number of spatial data are generated every day. On the other hand, these data are not well utilized due to the lack of free and easily used data-visualization tools. This becomes even worse when most of the spatial data remains in the form of plain text such as log files. This paper describes a way of visualizing massive plain-text spatial data at no cost by utilizing Google Earth and NASA World Wind. We illustrate our methods by visualizing over 170,000 global download requests for satellite images maintained by the Earth Resources Observation and Science (EROS) Center of U.S. Geological Survey (USGS). Our visualization results identify the most popular satellite images around the world and discover the global user download patterns. The benefits of this research are: 1. assisting in improving the satellite image downloading services provided by USGS, and 2. providing a proxy for analyzing the "hot spot" areas of research. Most importantly, our methods demonstrate an easy way to geo-visualize massive textual spatial data, which is highly applicable to mining spatially referenced data and information on a wide variety of research domains (e.g., hydrology, agriculture, atmospheric science, natural hazard, and global climate change).
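
    A lightweight way to geo-visualize plain-text logs of this kind is to convert aggregated records into KML, which Google Earth reads directly. The sketch below (Python) assumes a hypothetical "lat,lon,downloads" line format; it is an illustration of the approach, not the authors' tooling.

    ```python
    # Minimal sketch: turn plain-text download-log lines into KML placemarks
    # that Google Earth (or a KML-capable NASA World Wind client) can display.
    # The "lat,lon,downloads" log format is an assumption for illustration.

    def logs_to_kml(lines, out_path="downloads.kml"):
        placemarks = []
        for line in lines:
            lat, lon, count = line.strip().split(",")
            placemarks.append(
                "  <Placemark>\n"
                f"    <name>{count} downloads</name>\n"
                f"    <Point><coordinates>{lon},{lat},0</coordinates></Point>\n"
                "  </Placemark>"
            )
        kml = ('<?xml version="1.0" encoding="UTF-8"?>\n'
               '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>\n'
               + "\n".join(placemarks) +
               "\n</Document></kml>\n")
        with open(out_path, "w") as f:
            f.write(kml)

    # Toy usage with three aggregated log records.
    logs_to_kml(["43.7,-116.0,120", "52.5,13.4,87", "-1.3,36.8,45"])
    ```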

  17. Landmark based shape analysis for cerebellar ataxia classification and cerebellar atrophy pattern visualization

    NASA Astrophysics Data System (ADS)

    Yang, Zhen; Abulnaga, S. Mazdak; Carass, Aaron; Kansal, Kalyani; Jedynak, Bruno M.; Onyike, Chiadi; Ying, Sarah H.; Prince, Jerry L.

    2016-03-01

    Cerebellar dysfunction can lead to a wide range of movement disorders. Studying the cerebellar atrophy pattern associated with different cerebellar disease types can potentially help in diagnosis, prognosis, and treatment planning. In this paper, we present a landmark based shape analysis pipeline to classify healthy control and different ataxia types and to visualize the characteristic cerebellar atrophy patterns associated with different types. A highly informative feature representation of the cerebellar structure is constructed by extracting dense homologous landmarks on the boundary surfaces of cerebellar sub-structures. A diagnosis group classifier based on this representation is built using partial least square dimension reduction and regularized linear discriminant analysis. The characteristic atrophy pattern for an ataxia type is visualized by sampling along the discriminant direction between healthy controls and the ataxia type. Experimental results show that the proposed method can successfully classify healthy controls and different ataxia types. The visualized cerebellar atrophy patterns were consistent with the regional volume decreases observed in previous studies, but the proposed method provides intuitive and detailed understanding about changes of overall size and shape of the cerebellum, as well as that of individual lobules.
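
    The classification stage described above combines partial least squares dimension reduction with regularized linear discriminant analysis. The sketch below (Python with scikit-learn) illustrates that two-step pipeline on synthetic placeholder features; the subject counts, feature dimensions, and simulated group difference are assumptions, not the study's landmark data.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import train_test_split

    # Synthetic placeholder: rows = subjects, columns = flattened landmark
    # coordinates; labels are 0 = healthy control, 1 = one ataxia type.
    rng = np.random.default_rng(1)
    X = rng.standard_normal((60, 3000))
    y = np.repeat([0, 1], 30)
    X[y == 1, :100] -= 0.3                  # simulated regional shape difference

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                              stratify=y, random_state=0)

    # Step 1: supervised dimension reduction with partial least squares.
    pls = PLSRegression(n_components=10).fit(X_tr, y_tr)
    Z_tr, Z_te = pls.transform(X_tr), pls.transform(X_te)

    # Step 2: regularized (shrinkage) linear discriminant analysis.
    lda = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto").fit(Z_tr, y_tr)
    print("held-out accuracy:", lda.score(Z_te, y_te))

    # Approximate discriminant direction mapped back to landmark space; sampling
    # along such a direction is one way to visualize the atrophy pattern.
    direction = pls.x_rotations_ @ lda.coef_.ravel()
    ```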

  18. An exploratory study of the potential of LIBS for visualizing gunshot residue patterns.

    PubMed

    López-López, María; Alvarez-Llamas, César; Pisonero, Jorge; García-Ruiz, Carmen; Bordel, Nerea

    2017-04-01

    The study of gunshot residue (GSR) patterns can assist in the reconstruction of shooting incidents. Currently, there is a real need for methods capable of furnishing simultaneous elemental analysis with higher specificity for GSR pattern visualization. Laser-Induced Breakdown Spectroscopy (LIBS) provides a multi-elemental analysis of the sample, requiring very small amounts of material and no sample preparation. Due to these advantages, this study aims at exploring the potential of LIBS imaging for the visualization of GSR patterns. After the spectral characterization of individual GSR particles, the distributions of Pb, Sb and Ba over clothing targets shot from different distances were measured in laser raster mode. In particular, an array of spots evenly spaced at 800 μm, a stage displacement velocity of 4 mm/s, and a laser frequency of 5 Hz were employed (e.g. an area of 130 × 165 mm² was measured in less than 3 h). A LIBS set-up based on the simultaneous use of two spectrographs with iCCD cameras and a motorized stage was used. This set-up allows obtaining information from two different wavelength regions (258-289 and 446-463 nm) from the same laser-induced plasma, enabling the simultaneous detection of the three characteristic elements (Pb, Sb, and Ba) of GSR particles from conventional ammunition. The ability to visualize the 2D distribution of GSR patterns by LIBS may have an important application in the forensic field, especially in the ballistics area.
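
    The reported raster parameters are internally consistent: a 4 mm/s stage velocity at a 5 Hz repetition rate gives the 800 μm spot spacing, and covering 130 × 165 mm² at that spacing takes roughly two hours of firing time, within the stated 3 h (line-turnaround overhead is not modeled here). The short check below (Python) reproduces that arithmetic.

    ```python
    # Back-of-the-envelope check of the raster parameters reported above.
    velocity_mm_s = 4.0          # stage displacement velocity
    laser_hz = 5.0               # laser repetition rate
    spacing_mm = velocity_mm_s / laser_hz          # = 0.8 mm between spots

    width_mm, height_mm = 130.0, 165.0
    spots_per_line = int(width_mm / spacing_mm) + 1
    lines = int(height_mm / spacing_mm) + 1
    total_spots = spots_per_line * lines

    # Shots are fired at 5 Hz while the stage sweeps each line.
    acquisition_s = total_spots / laser_hz
    print(f"{total_spots} spots, ~{acquisition_s / 3600:.1f} h of acquisition")
    ```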

  19. Landmark Based Shape Analysis for Cerebellar Ataxia Classification and Cerebellar Atrophy Pattern Visualization

    PubMed Central

    Yang, Zhen; Abulnaga, S. Mazdak; Carass, Aaron; Kansal, Kalyani; Jedynak, Bruno M.; Onyike, Chiadi; Ying, Sarah H.; Prince, Jerry L.

    2016-01-01

    Cerebellar dysfunction can lead to a wide range of movement disorders. Studying the cerebellar atrophy pattern associated with different cerebellar disease types can potentially help in diagnosis, prognosis, and treatment planning. In this paper, we present a landmark based shape analysis pipeline to classify healthy control and different ataxia types and to visualize the characteristic cerebellar atrophy patterns associated with different types. A highly informative feature representation of the cerebellar structure is constructed by extracting dense homologous landmarks on the boundary surfaces of cerebellar sub-structures. A diagnosis group classifier based on this representation is built using partial least square dimension reduction and regularized linear discriminant analysis. The characteristic atrophy pattern for an ataxia type is visualized by sampling along the discriminant direction between healthy controls and the ataxia type. Experimental results show that the proposed method can successfully classify healthy controls and different ataxia types. The visualized cerebellar atrophy patterns were consistent with the regional volume decreases observed in previous studies, but the proposed method provides intuitive and detailed understanding about changes of overall size and shape of the cerebellum, as well as that of individual lobules. PMID:27303111

  20. Oculomotor Capture by New and Unannounced Color Singletons during Visual Search.

    PubMed

    Retell, James D; Venini, Dustin; Becker, Stefanie I

    2015-07-01

    The surprise capture hypothesis states that a stimulus will capture attention to the extent that it is preattentively available and deviates from task expectancies. Interestingly, it has been noted by Horstmann (Psychological Science, 13, 499-505, doi: 10.1111/1467-9280.00488, 2002; Human Perception and Performance, 31, 1039-1060, doi: 10.1037/0096-1523.31.5.1039, 2005; Psychological Research, 70, 13-25, 2006) that the time course of capture by such classes of stimuli appears distinct from that of capture by expected stimuli. Specifically, attention shifts to an unexpected stimulus are delayed relative to an expected stimulus (delayed onset account). Across two experiments, we investigated this claim under conditions of unguided (Exp. 1) and guided (Exp. 2) search using eye movements as the primary index of attentional selection. In both experiments, we found strong evidence of surprise capture for the first presentation of an unannounced color singleton. However, in both experiments the pattern of eye movements was not consistent with a delayed onset account of attention capture. Rather, we observed costs associated with the unexpected stimulus only once the target had been selected. We propose an interference account of surprise capture to explain our data and argue that this account can also explain existing patterns of data in the literature.

  1. Response variability of frontal eye field neurons modulates with sensory input and saccade preparation but not visual search salience

    PubMed Central

    Purcell, Braden A.; Heitz, Richard P.; Cohen, Jeremiah Y.

    2012-01-01

    Discharge rate modulation of frontal eye field (FEF) neurons has been identified with a representation of visual search salience (physical conspicuity and behavioral relevance) and saccade preparation. We tested whether salience or saccade preparation is evident in the trial-to-trial variability of discharge rate. We quantified response variability via the Fano factor in FEF neurons recorded in monkeys performing efficient and inefficient visual search tasks. Response variability declined following stimulus presentation in most neurons, but despite clear discharge rate modulation, variability did not change with target salience. Instead, we found that response variability was modulated by stimulus luminance and the number of items in the visual field independently of attentional demands. Response variability declined to a minimum before saccade initiation, and presaccadic response variability was directionally tuned. In addition, response variability was correlated with the response time of memory-guided saccades. These results indicate that the trial-by-trial response variability of FEF neurons reflects saccade preparation and the strength of sensory input, but not visual search salience or attentional allocation. PMID:22956785
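
    For readers unfamiliar with the statistic, the Fano factor is simply the across-trial variance of the spike count divided by its mean, typically computed in sliding windows. The sketch below computes it for simulated Poisson-like spike trains; the data and window settings are invented and nothing here reproduces the recorded FEF analysis.

```python
# Minimal sketch: Fano factor = across-trial variance / mean of spike counts,
# computed in sliding windows. Spike trains and window settings are invented.
import numpy as np

rng = np.random.default_rng(1)
n_trials, duration_ms, rate_hz = 200, 600, 40.0
spikes = rng.random((n_trials, duration_ms)) < rate_hz / 1000.0   # Poisson-like trains

win_ms, step_ms = 100, 20
starts = np.arange(0, duration_ms - win_ms + 1, step_ms)
counts = np.stack([spikes[:, s:s + win_ms].sum(axis=1) for s in starts], axis=1)

fano = counts.var(axis=0, ddof=1) / counts.mean(axis=0)            # one value per window
print(np.round(fano, 2))                                           # ~1.0 for Poisson firing
```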

  2. A comparison of visual search strategies of elite and non-elite tennis players through cluster analysis.

    PubMed

    Murray, Nicholas P; Hunfalvay, Melissa

    2017-02-01

    Considerable research has documented that successful performance in interceptive tasks (such as return of serve in tennis) is based on the performers' capability to capture appropriate anticipatory information prior to the flight path of the approaching object. Athletes of higher skill tend to fixate on different locations in the playing environment prior to initiation of a skill than their lesser skilled counterparts. The purpose of this study was to examine the visual search strategies of elite (world-ranked) tennis players and non-ranked competitive tennis players (n = 43) utilising cluster analysis. The results of hierarchical (Ward's method) and non-hierarchical (k-means) cluster analyses revealed three different clusters. The clustering method distinguished the visual behaviour of high-, middle-, and low-ranked players. Specifically, high-ranked players demonstrated longer mean fixation durations and lower variation in visual search than middle- and low-ranked players. In conclusion, the results demonstrated that cluster analysis is a useful tool for detecting and analysing areas of interest for use in experimental analyses of expertise and for distinguishing visual search variables among participants.
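
    The two clustering steps named in the abstract, Ward's hierarchical method followed by k-means, are straightforward to reproduce on summary gaze metrics. The sketch below applies both to invented per-player features (mean fixation duration and a search-variability index); it is illustrative only and uses none of the study's data.

```python
# Minimal sketch of the two clustering steps named above: Ward's hierarchical
# clustering and k-means (k = 3) on standardized per-player gaze metrics.
# The features (mean fixation duration in ms, a variability index) are invented.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
X = np.vstack([rng.normal([450, 0.8], [40, 0.3], (15, 2)),   # long fixations, low variability
               rng.normal([300, 1.6], [40, 0.3], (14, 2)),
               rng.normal([220, 2.4], [40, 0.3], (14, 2))])
Xz = StandardScaler().fit_transform(X)

ward_labels = fcluster(linkage(Xz, method="ward"), t=3, criterion="maxclust")
kmeans_labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(Xz)
print(ward_labels)
print(kmeans_labels)
```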

  3. Preserved Suppression of Salient Irrelevant Stimuli During Visual Search in Age-Associated Memory Impairment.

    PubMed

    Lorenzo-López, Laura; Maseda, Ana; Buján, Ana; de Labra, Carmen; Amenedo, Elena; Millán-Calenti, José C

    2015-01-01

    Previous studies have suggested that older adults with age-associated memory impairment (AAMI) may show a significant decline in attentional resource capacity and inhibitory processes in addition to memory impairment. In the present paper, the potential attentional capture by task-irrelevant stimuli was examined in older adults with AAMI compared to healthy older adults using scalp-recorded event-related brain potentials (ERPs). ERPs were recorded during the execution of a visual search task, in which the participants had to detect the presence of a target stimulus that differed from distractors by orientation. To explore the automatic attentional capture phenomenon, an irrelevant distractor stimulus defined by a different feature (color) was also presented without the participants' prior knowledge. A consistent N2pc, an electrophysiological indicator of attentional deployment, was present for target stimuli but not for task-irrelevant color stimuli, suggesting that these irrelevant distractors did not attract attention in AAMI older adults. Furthermore, the N2pc for targets was significantly delayed in AAMI patients compared to healthy older controls. Together, these findings suggest a specific impairment of the attentional selection process of relevant target stimuli in these individuals and indicate that the mechanism of top-down suppression of entirely task-irrelevant stimuli is preserved, at least when the target and the irrelevant stimuli are perceptually very different.

  4. Searching for Category-Consistent Features: A Computational Approach to Understanding Visual Category Representation.

    PubMed

    Yu, Chen-Ping; Maxfield, Justin T; Zelinsky, Gregory J

    2016-06-01

    This article introduces a generative model of category representation that uses computer vision methods to extract category-consistent features (CCFs) directly from images of category exemplars. The model was trained on 4,800 images of common objects, and CCFs were obtained for 68 categories spanning subordinate, basic, and superordinate levels in a category hierarchy. When participants searched for these same categories, targets cued at the subordinate level were preferentially fixated, but fixated targets were verified faster when they followed a basic-level cue. The subordinate-level advantage in guidance is explained by the number of target-category CCFs, a measure of category specificity that decreases with movement up the category hierarchy. The basic-level advantage in verification is explained by multiplying the number of CCFs by sibling distance, a measure of category distinctiveness. With this model, the visual representations of real-world object categories, each learned from the vast numbers of image exemplars accumulated throughout everyday experience, can finally be studied.
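
    To make the two quantities concrete: in the model, guidance tracks category specificity (the number of CCFs), while verification tracks distinctiveness (the number of CCFs multiplied by sibling distance). The toy numbers below are invented purely to show how a subordinate category can win on the first measure while a basic-level category wins on the second.

```python
# Toy illustration with invented numbers: specificity = number of CCFs,
# distinctiveness = number of CCFs x sibling distance. Subordinate categories
# win on specificity (guidance); basic-level categories win on distinctiveness
# (verification).
levels = {                      # (n_CCFs, sibling_distance) per hierarchy level
    "subordinate":   (120, 0.30),
    "basic":         (60,  0.90),
    "superordinate": (25,  0.50),
}
for name, (n_ccf, sib_dist) in levels.items():
    print(f"{name:13s}  specificity = {n_ccf:3d}   distinctiveness = {n_ccf * sib_dist:5.1f}")
```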

  5. Task relevance of emotional information affects anxiety-linked attention bias in visual search.

    PubMed

    Dodd, Helen F; Vogt, Julia; Turkileri, Nilgun; Notebaert, Lies

    2017-01-01

    Task relevance affects emotional attention in healthy individuals. Here, we investigate whether the association between anxiety and attention bias is affected by the task relevance of emotion during an attention task. Participants completed two visual search tasks. In the emotion-irrelevant task, participants were asked to indicate whether a discrepant face in a crowd of neutral, middle-aged faces was old or young. Irrelevant to the task, target faces displayed angry, happy, or neutral expressions. In the emotion-relevant task, participants were asked to indicate whether a discrepant face in a crowd of middle-aged neutral faces was happy or angry (target faces also varied in age). Trait anxiety was not associated with attention in the emotion-relevant task. However, in the emotion-irrelevant task, trait anxiety was associated with a bias for angry over happy faces. These findings demonstrate that the task relevance of emotional information affects conclusions about the presence of an anxiety-linked attention bias.

  6. Target-distractor similarity has a larger impact on visual search in school-age children than spacing.

    PubMed

    Huurneman, Bianca; Boonstra, F Nienke

    2015-01-22

    In typically developing children, crowding decreases with increasing age. The influence of target-distractor similarity with respect to orientation and element spacing on visual search performance was investigated in 29 school-age children with normal vision (4- to 6-year-olds [N = 16], 7- to 8-year-olds [N = 13]). Children were instructed to search for a target E among distractor Es (feature search: all flanking Es pointing right; conjunction search: flankers in three orientations). Orientation of the target was manipulated in four directions: right (target absent), left (inversed), up, and down (vertical). Spacing was varied in four steps: 0.04°, 0.5°, 1°, and 2°. During feature search, high target-distractor similarity had a stronger impact on performance than spacing: Orientation affected accuracy until spacing was 1°, and spacing only influenced accuracy for identifying inversed targets. Spatial analyses showed that orientation affected oculomotor strategy: Children made more fixations in the "inversed" target area (4.6) than the vertical target areas (1.8 and 1.9). Furthermore, age groups differed in fixation duration: 4- to 6-year-old children showed longer fixation durations than 7- to 8-year-olds at the two largest element spacings (p = 0.039 and p = 0.027). Conjunction search performance was unaffected by spacing. Four conclusions can be drawn from this study: (a) Target-distractor similarity governs visual search performance in school-age children, (b) children make more fixations in target areas when target-distractor similarity is high, (c) 4- to 6-year-olds show longer fixation durations than 7- to 8-year-olds at 1° and 2° element spacing, and (d) spacing affects feature but not conjunction search, a finding that might indicate that top-down control ameliorates crowding in children.

  7. Comparison of visualized turbine endwall secondary flows and measured heat transfer patterns

    NASA Technical Reports Server (NTRS)

    Gaugler, R. E.; Russell, L. M.

    1983-01-01

    Various flow visualization techniques were used to define the secondary flows near the endwall in a large turbine cascade in which heat transfer data had been measured. A comparison of the visualized flow patterns and the measured Stanton number distribution was made for cases where the inlet Reynolds number and exit Mach number were matched. Flows were visualized by using neutrally buoyant helium-filled soap bubbles, by using smoke from oil-soaked cigars, and by a few techniques using permanent marker pen ink dots and synthetic wintergreen oil. Details of the horseshoe vortex and secondary flows can be directly compared with the heat transfer distribution. Near the cascade entrance there is an obvious correlation between the two sets of data, but well into the passage the effect of secondary flow is not as obvious.

  8. Comparison of visualized turbine endwall secondary flows and measured heat transfer patterns

    NASA Technical Reports Server (NTRS)

    Gaugler, R. E.; Russell, L. M.

    1984-01-01

    Various flow visualization techniques were used to define the secondary flows near the endwall in a large turbine cascade in which heat transfer data had been measured. A comparison of the visualized flow patterns and the measured Stanton number distribution was made for cases where the inlet Reynolds number and exit Mach number were matched. Flows were visualized by using neutrally buoyant helium-filled soap bubbles, by using smoke from oil-soaked cigars, and by a few techniques using permanent marker pen ink dots and synthetic wintergreen oil. Details of the horseshoe vortex and secondary flows can be directly compared with the heat transfer distribution. Near the cascade entrance there is an obvious correlation between the two sets of data, but well into the passage the effect of secondary flow is not as obvious. Previously announced in STAR as N83-14435.

  9. Noun representation in AAC grid displays: visual attention patterns of people with traumatic brain injury.

    PubMed

    Brown, Jessica; Thiessen, Amber; Beukelman, David; Hux, Karen

    2015-03-01

    Clinicians supporting the communication of people with traumatic brain injury (TBI) must determine an efficient message representation method for augmentative and alternative communication (AAC) systems. Due to the frequency with which visual deficits occur following brain injury, some adults with TBI may have difficulty locating items on AAC displays. The purpose of this study was to identify aspects of graphic supports that increase efficiency of target-specific visual searches. Nine adults with severe TBI and nine individuals without neurological conditions located targets on static grids displaying one of three message representation methods. Data collected through eye tracking technology revealed significantly more efficient target location for icon-only grids than for text-only or icon-plus-text grids for both participant groups; no significant differences emerged between participant groups.

  10. Optimal search patterns in honeybee orientation flights are robust against emerging infectious diseases

    PubMed Central

    Wolf, Stephan; Nicholls, Elizabeth; Reynolds, Andrew M.; Wells, Patricia; Lim, Ka S.; Paxton, Robert J.; Osborne, Juliet L.

    2016-01-01

    Lévy flights are scale-free (fractal) search patterns found in a wide range of animals. They can be an advantageous strategy promoting high encounter rates with rare cues that may indicate prey items, mating partners or navigational landmarks. The robustness of this behavioural strategy to ubiquitous threats to animal performance, such as pathogens, remains poorly understood. Using honeybees radar-tracked during their orientation flights in a novel landscape, we assess for the first time how two emerging infectious diseases (Nosema sp. and the Varroa-associated Deformed wing virus (DWV)) affect bees’ behavioural performance and search strategy. Nosema infection, unlike DWV, affected the spatial scale of orientation flights, causing significantly shorter and more compact flights. However, in stark contrast to disease-dependent temporal fractals, we find the same prevalence of optimal Lévy flight characteristics (μ ≈ 2) in both healthy and infected bees. We discuss the ecological and evolutionary implications of these surprising insights, arguing that Lévy search patterns are an emergent property of fundamental characteristics of neuronal and sensory components of the decision-making process, making them robust against diverse physiological effects of pathogen infection and possibly other stressors. PMID:27615605
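
    The μ ≈ 2 criterion refers to the exponent of the power-law step-length distribution p(l) ∝ l^(−μ) that characterizes an optimal Lévy search. The Python sketch below draws such step lengths by inverse-transform sampling, recovers μ with the standard maximum-likelihood estimator, and assembles a 2-D path; all parameters are illustrative and nothing here reproduces the radar-tracking analysis.

```python
# Minimal sketch of a Lévy-like search path: step lengths drawn from a power
# law p(l) ~ l^(-mu) with mu ~= 2 and uniformly random headings. Parameters
# are illustrative only.
import numpy as np

rng = np.random.default_rng(3)
mu, l_min, n = 2.0, 1.0, 5000

# Inverse-transform sampling of a power law with exponent mu (> 1)
u = rng.random(n)
steps = l_min * (1.0 - u) ** (-1.0 / (mu - 1.0))

# Maximum-likelihood (Hill-type) estimate of the exponent, for comparison
mu_hat = 1.0 + n / np.sum(np.log(steps / l_min))
print(f"estimated mu = {mu_hat:.2f}")

# Assemble the 2-D path from step lengths and random headings
angles = rng.uniform(0.0, 2.0 * np.pi, n)
xy = np.cumsum(np.column_stack([steps * np.cos(angles),
                                steps * np.sin(angles)]), axis=0)
print(xy.shape)                      # (5000, 2) path coordinates
```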

  11. Exploratory Data Analysis Using a Dedicated Visualization App: Looking for Patterns in Volcanic Activity

    NASA Astrophysics Data System (ADS)

    van Manen, S. M.; Chen, S.

    2015-12-01

    Here we present an App designed to visualize and identify patterns in volcanic activity during the last ten years. It visualizes VEI (volcanic explosivity index) levels, population size, frequency of activity, and geographic region, and is designed to address the issue of oversampling of data. Oftentimes it is difficult to take in a large set of data that appears scattered at first glance and is hard to digest without visual aid. This App serves as a model that solves this issue and can be applied to other data. To enable users to quickly assess the large data set, it breaks down the apparently chaotic abundance of information into categories and graphic indicators: color is used to indicate the VEI level, size for the population within 5 km of a volcano, line thickness for frequency of activity, and a grid to pinpoint a volcano's latitude. The categories and layers within them can be turned on and off by the user, enabling them to scroll through and compare different layers of data. When the data were visualized this way, patterns began to emerge. For example, certain geographic regions had more explosive eruptions than others. Another good example was that low-frequency, larger-impact volcanic eruptions occurred more irregularly than smaller-impact volcanic eruptions, which had more stable frequencies. Although these findings are not unexpected, the easy-to-navigate App does showcase the potential of data visualization for the rapid appraisal of complex and abundant multi-dimensional geoscience data.
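
    A minimal version of the encoding described above (color for VEI, marker size for nearby population, latitude on a gridded axis) can be sketched in matplotlib with invented data, as below; the real App adds interactive layer toggles and frequency-of-activity line thickness that are omitted here.

```python
# Illustrative matplotlib sketch (invented data): marker color encodes VEI,
# marker size encodes population within 5 km, and latitude sits on a gridded axis.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(4)
n = 40
year = rng.uniform(2005, 2015, n)
lat = rng.uniform(-60, 60, n)
vei = rng.integers(0, 6, n)                  # volcanic explosivity index (0-5 here)
pop_5km = rng.integers(0, 200_000, n)        # population within 5 km of each volcano

fig, ax = plt.subplots()
sc = ax.scatter(year, lat, c=vei, s=20 + pop_5km / 1000.0, cmap="viridis")
fig.colorbar(sc, ax=ax, label="VEI")
ax.set_xlabel("year")
ax.set_ylabel("latitude (degrees)")
ax.grid(True)
plt.show()
```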

  12. Giant honeybees (Apis dorsata) mob wasps away from the nest by directed visual patterns.

    PubMed

    Kastberger, Gerald; Weihmann, Frank; Zierler, Martina; Hötzl, Thomas

    2014-11-01

    The open nesting behaviour of giant honeybees (Apis dorsata) accounts for the evolution of a series of defence strategies to protect the colonies from predation. In particular, the concerted action of shimmering behaviour is known to effectively confuse and repel predators. In shimmering, bees on the nest surface flip their abdomens in a highly coordinated manner to generate Mexican wave-like patterns. The paper documents a further-going capacity of this kind of collective defence: the visual patterns of shimmering waves align regarding their directional characteristics with the projected flight manoeuvres of the wasps when preying in front of the bees' nest. The honeybees take here advantage of a threefold asymmetry intrinsic to the prey-predator interaction: (a) the visual patterns of shimmering turn faster than the wasps on their flight path, (b) they "follow" the wasps more persistently (up to 100 ms) than the wasps "follow" the shimmering patterns (up to 40 ms) and (c) the shimmering patterns align with the wasps' flight in all directions at the same strength, whereas the wasps have some preference for horizontal correspondence. The findings give evidence that shimmering honeybees utilize directional alignment to enforce their repelling power against preying wasps. This phenomenon can be identified as predator driving which is generally associated with mobbing behaviour (particularly known in selfish herds of vertebrate species), which is, until now, not reported in insects.

  13. Giant honeybees (Apis dorsata) mob wasps away from the nest by directed visual patterns

    NASA Astrophysics Data System (ADS)

    Kastberger, Gerald; Weihmann, Frank; Zierler, Martina; Hötzl, Thomas

    2014-11-01

    The open nesting behaviour of giant honeybees ( Apis dorsata) accounts for the evolution of a series of defence strategies to protect the colonies from predation. In particular, the concerted action of shimmering behaviour is known to effectively confuse and repel predators. In shimmering, bees on the nest surface flip their abdomens in a highly coordinated manner to generate Mexican wave-like patterns. The paper documents a further-going capacity of this kind of collective defence: the visual patterns of shimmering waves align regarding their directional characteristics with the projected flight manoeuvres of the wasps when preying in front of the bees' nest. The honeybees take here advantage of a threefold asymmetry intrinsic to the prey-predator interaction: (a) the visual patterns of shimmering turn faster than the wasps on their flight path, (b) they "follow" the wasps more persistently (up to 100 ms) than the wasps "follow" the shimmering patterns (up to 40 ms) and (c) the shimmering patterns align with the wasps' flight in all directions at the same strength, whereas the wasps have some preference for horizontal correspondence. The findings give evidence that shimmering honeybees utilize directional alignment to enforce their repelling power against preying wasps. This phenomenon can be identified as predator driving which is generally associated with mobbing behaviour (particularly known in selfish herds of vertebrate species), which is, until now, not reported in insects.

  14. A Note on Drawing Conclusions in the Study of Visual Search and the Use of Slopes in Particular

    PubMed Central

    2016-01-01

    The slope of the set size function as a critical statistic first gained favor in the 1960s, due in large part to the seminal papers on short-term memory search by Saul Sternberg and, soon after, many others. In the 1980s, the slope statistic reemerged in much the same role in visual search as Anne Treisman and, again, soon many others brought that research topic into great prominence. This note offers the historical and current perspective of the present author, who has devoted a significant portion of his theoretical efforts to this and related topics over the past 50 years. PMID:27895884
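
    As a reminder of what the statistic is, the slope is the coefficient b in the linear fit RT = a + b × set size, usually reported in ms/item, with shallow slopes read as efficient ("parallel") search and steep slopes as inefficient ("serial") search. The sketch below fits it to invented reaction times for two conditions.

```python
# Minimal sketch of the slope statistic: a linear fit RT = a + b * set_size
# per condition, with b (ms/item) taken as the index of search efficiency.
# The reaction-time values below are invented.
import numpy as np

set_sizes = np.array([2, 4, 8, 16])
rt_efficient = np.array([452, 455, 460, 463])      # ~0.8 ms/item, "pop-out"-like
rt_inefficient = np.array([480, 560, 710, 1030])   # ~39 ms/item, serial-like

for name, rt in [("efficient", rt_efficient), ("inefficient", rt_inefficient)]:
    slope, intercept = np.polyfit(set_sizes, rt, 1)
    print(f"{name:12s} slope = {slope:5.1f} ms/item, intercept = {intercept:5.0f} ms")
```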

  15. Multidimensional scaling for evolutionary algorithms--visualization of the path through search space and solution space using Sammon mapping.

    PubMed

    Pohlheim, Hartmut

    2006-01-01

    Multidimensional scaling, a technique for displaying high-dimensional data with standard visualization tools, is presented. The technique used is often known as Sammon mapping. We explain the mathematical foundations of multidimensional scaling and its robust calculation. We also demonstrate the use of this technique in the area of evolutionary algorithms. First, we present the visualization of the path through the search space of the best individuals during an optimization run. We then apply multidimensional scaling to the comparison of multiple runs with respect to the variables of individuals and multi-criteria objective values (the path through the solution space).
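
    A bare-bones version of the Sammon criterion is easy to state and minimize directly: the stress is the sum over point pairs of (D_ij − d_ij)²/D_ij, normalized by the sum of the input-space distances D_ij, where d_ij are the distances in the low-dimensional embedding. The gradient-descent sketch below is an illustration only; the classical algorithm uses a diagonal-Newton update, and the initialization, step size, and iteration count here are arbitrary choices.

```python
# Gradient-descent sketch of Sammon mapping (illustrative only). The constant
# normalization 1/sum(D_ij) of the Sammon stress is absorbed into the step size.
import numpy as np
from scipy.spatial.distance import pdist, squareform

def sammon(X, n_iter=2000, lr=0.002):
    D = squareform(pdist(X))
    np.fill_diagonal(D, 1.0)                    # avoid division by zero on the diagonal
    Xc = X - X.mean(axis=0)
    Y = Xc @ np.linalg.svd(Xc, full_matrices=False)[2][:2].T   # PCA initialization
    for _ in range(n_iter):
        d = squareform(pdist(Y))
        np.fill_diagonal(d, 1.0)
        W = (D - d) / (D * d)                   # Sammon weighting of each point pair
        np.fill_diagonal(W, 0.0)
        diff = Y[:, None, :] - Y[None, :, :]    # pairwise differences y_i - y_j
        grad = -2.0 * (W[:, :, None] * diff).sum(axis=1)
        Y = Y - lr * grad
    return Y

# Example: three Gaussian blobs in 5-D mapped to the plane
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(loc=c, scale=1.0, size=(30, 5)) for c in (0.0, 3.0, 6.0)])
Y = sammon(X)
print(Y.shape)                                  # (90, 2) embedding coordinates
```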

  16. Self-Organization of Spatio-Temporal Hierarchy via Learning of Dynamic Visual Image Patterns on Action Sequences.

    PubMed

    Jung, Minju; Hwang, Jungsik; Tani, Jun

    2015-01-01

    It is well known that the visual cortex efficiently processes high-dimensional spatial information by using a hierarchical structure. Recently, computational models that were inspired by the spatial hierarchy of the visual cortex have shown remarkable performance in image recognition. Up to now, however, most biological and computational modeling studies have mainly focused on the spatial domain and do not discuss temporal domain processing of the visual cortex. Several studies on the visual cortex and other brain areas associated with motor control support the idea that the brain also uses its hierarchical structure as a processing mechanism for temporal information. Based on the success of previous computational models using spatial hierarchy and temporal hierarchy observed in the brain, the current report introduces a novel neural network model for the recognition of dynamic visual image patterns based solely on the learning of exemplars. This model is characterized by the application of both spatial and temporal constraints on local neural activities, resulting in the self-organization of a spatio-temporal hierarchy necessary for the recognition of complex dynamic visual image patterns. The evaluation with the Weizmann dataset in recognition of a set of prototypical human movement patterns showed that the proposed model is significantly more robust in recognizing dynamically occluded visual patterns than other baseline models. Furthermore, an evaluation test for the recognition of concatenated sequences of those prototypical movement patterns indicated that the model is endowed with a remarkable capability for the contextual recognition of long-range dynamic visual image patterns.
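
    The temporal-hierarchy ingredient of such models is often realized with leaky-integrator units whose time constants grow up the hierarchy, so higher layers respond to slower regularities. The numpy sketch below shows only that ingredient, with invented sizes, time constants, and random weights; the published model additionally imposes spatial (convolutional) constraints and is trained on data, neither of which is attempted here.

```python
# Sketch of the temporal-hierarchy ingredient only: stacked leaky-integrator
# layers with layer-specific time constants (all sizes, taus, and weights invented).
import numpy as np

rng = np.random.default_rng(5)
T, d_in, d_mid, d_top = 100, 64, 32, 16
tau_mid, tau_top = 2.0, 16.0                    # slower dynamics higher in the hierarchy
W1 = rng.normal(0.0, 0.1, (d_mid, d_in))
W2 = rng.normal(0.0, 0.1, (d_top, d_mid))

x = rng.normal(size=(T, d_in))                  # stand-in for a stream of visual features
u_mid = np.zeros(d_mid)
u_top = np.zeros(d_top)
top_states = []
for t in range(T):
    # Leaky integration: u <- (1 - 1/tau) * u + (1/tau) * (W @ input)
    u_mid = (1 - 1 / tau_mid) * u_mid + (1 / tau_mid) * (W1 @ x[t])
    u_top = (1 - 1 / tau_top) * u_top + (1 / tau_top) * (W2 @ np.tanh(u_mid))
    top_states.append(np.tanh(u_top))

print(len(top_states), top_states[-1].shape)    # 100 time steps of slow top-layer activity
```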

  17. Self-Organization of Spatio-Temporal Hierarchy via Learning of Dynamic Visual Image Patterns on Action Sequences

    PubMed Central

    Jung, Minju; Hwang, Jungsik; Tani, Jun

    2015-01-01

    It is well known that the visual cortex efficiently processes high-dimensional spatial information by using a hierarchical structure. Recently, computational models that were inspired by the spatial hierarchy of the visual cortex have shown remarkable performance in image recognition. Up to now, however, most biological and computational modeling studies have mainly focused on the spatial domain and do not discuss temporal domain processing of the visual cortex. Several studies on the visual cortex and other brain areas associated with motor control support the idea that the brain also uses its hierarchical structure as a processing mechanism for temporal information. Based on the success of previous computational models using spatial hierarchy and temporal hierarchy observed in the brain, the current report introduces a novel neural network model for the recognition of dynamic visual image patterns based solely on the learning of exemplars. This model is characterized by the application of both spatial and temporal constraints on local neural activities, resulting in the self-organization of a spatio-temporal hierarchy necessary for the recognition of complex dynamic visual image patterns. The evaluation with the Weizmann dataset in recognition of a set of prototypical human movement patterns showed that the proposed model is significantly more robust in recognizing dynamically occluded visual patterns than other baseline models. Furthermore, an evaluation test for the recognition of concatenated sequences of those prototypical movement patterns indicated that the model is endowed with a remarkable capability for the contextual recognition of long-range dynamic visual image patterns. PMID:26147887

  18. Case study of visualizing global user download patterns using Google Earth and NASA World Wind

    SciTech Connect

    Zong, Ziliang; Job, Joshua; Zhang, Xuesong; Nijim, Mais; Qin, Xiao

    2012-10-09

    Geo-visualization is significantly changing the way we view spatial data and discover information. On the one hand, large amounts of spatial data are generated every day. On the other hand, these data are not well utilized due to the lack of free and easily used data-visualization tools. This becomes even worse when most of the spatial data remains in the form of plain text such as log files. This paper describes a way of visualizing massive plain-text spatial data at no cost by utilizing Google Earth and NASA World Wind. We illustrate our methods by visualizing over 170,000 global download requests for satellite images maintained by the Earth Resources Observation and Science (EROS) Center of the U.S. Geological Survey (USGS). Our visualization results identify the most popular satellite images around the world and reveal global user download patterns. The benefits of this research are: 1. assisting in improving the satellite image downloading services provided by USGS, and 2. providing a proxy for analyzing hot spot areas of research. Most importantly, our methods demonstrate an easy way to geovisualize massive textual spatial data, which is highly applicable to mining spatially referenced data and information in a wide variety of research domains (e.g., hydrology, agriculture, atmospheric science, natural hazards, and global climate change).
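
    The general workflow described above, turning per-request latitude/longitude records into something Google Earth can display, can be sketched in a few lines of Python that emit a KML file. The request coordinates are synthesized here (in practice they would be parsed from the plain-text logs), and the file name and 1-degree binning are assumptions for illustration, not the authors' pipeline.

```python
# Minimal sketch: aggregate per-request (lat, lon) records into per-cell counts
# and write them as KML placemarks that Google Earth can open. The synthetic
# requests, binning, and output file name are assumptions for illustration.
import random
from collections import Counter

random.seed(0)
requests = [(random.uniform(-60, 70), random.uniform(-180, 180)) for _ in range(10_000)]

counts = Counter((round(lat), round(lon)) for lat, lon in requests)   # 1-degree cells

placemarks = []
for (lat, lon), n in counts.most_common(500):            # keep the 500 busiest cells
    placemarks.append(
        f"<Placemark><name>{n} requests</name>"
        f"<Point><coordinates>{lon},{lat},0</coordinates></Point></Placemark>"
    )

kml = ('<?xml version="1.0" encoding="UTF-8"?>'
       '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>'
       + "".join(placemarks) + "</Document></kml>")
with open("download_patterns.kml", "w") as f:
    f.write(kml)
```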

  19. Three dimensional pattern recognition using feature-based indexing and rule-based search

    NASA Astrophysics Data System (ADS)

    Lee, Jae-Kyu

    In flexible automated manufacturing, robots can perform routine operations as well as recover from atypical events, provided that process-relevant information is available to the robot controller. Real-time vision is among the most versatile sensing tools, yet the reliability of machine-based scene interpretation can be questionable. The effort described here is focused on the development of machine-based vision methods to support autonomous nuclear fuel manufacturing operations in hot cells. This thesis presents a method to efficiently recognize 3D objects from 2D images based on feature-based indexing. Object recognition is the identification of correspondences between parts of a current scene and stored views of known objects, using chains of segments or indexing vectors. To create indexed object models, characteristic model image features are extracted during preprocessing. Feature vectors representing model object contours are acquired from several points of view around each object and stored. Recognition is the process of matching stored views with features or patterns detected in a test scene. Two sets of algorithms were developed, one for preprocessing and indexed database creation, and one for pattern searching and matching during recognition. At recognition time, the indexing vectors with the highest match probability are retrieved from the model image database using a nearest neighbor search algorithm. The nearest neighbor search predicts the best possible match candidates. Extended searches are guided by a search strategy that employs knowledge-base (KB) selection criteria. The knowledge-based system simplifies the recognition process and minimizes the number of iterations and memory usage. Novel contributions include the use of a feature-based indexing data structure together with a knowledge base. Both components improve the efficiency of the recognition process by better structuring the database of object features and reducing database size.
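
    The indexing-and-retrieval step described above can be illustrated with a generic nearest-neighbor structure: model-view feature vectors are stored in a KD-tree and the closest vectors to a scene feature are returned as match candidates. The sketch below is only that generic step, with invented dimensions and random data; the feature extraction and the rule-based verification stage of the thesis are not reproduced.

```python
# Minimal sketch of feature-vector indexing plus nearest-neighbor retrieval.
# Feature extraction and the knowledge-based verification stage are omitted;
# all dimensions and data are placeholders.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(6)
n_views, dim = 500, 16
model_features = rng.normal(size=(n_views, dim))          # one vector per stored model view
view_labels = rng.integers(0, 20, size=n_views)           # which of 20 objects each view shows

tree = cKDTree(model_features)
scene_feature = rng.normal(size=dim)                      # vector extracted from the test scene
dists, idx = tree.query(scene_feature, k=5)               # best 5 match candidates
print([(int(view_labels[i]), round(float(d), 3)) for i, d in zip(idx, dists)])
```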

  20. Perceptual factors influence visual search for meaningful symbols in individuals with intellectual disabilities and Down syndrome or autism spectrum disorders.

    PubMed

    Wilkinson, Krista M; McIlvane, William J

    2013-09-01

    Augmentative and alternative communication (AAC) systems often supplement oral communication for individuals with intellectual and communication disabilities. Research with preschoolers without disabilities has demonstrated that two visual-perceptual factors influence speed and/or accuracy of finding a target: the internal color and spatial organization of symbols. Twelve participants with Down syndrome and 12 with autism spectrum disorders (ASDs) completed two search tasks. In one, the symbols were clustered by internal color; in the other, the identical symbols had no arrangement cue. Visual search was superior in participants with ASDs compared to those with Down syndrome. In both groups, responses were significantly faster when the symbols were clustered by internal color. Construction of aided AAC displays may benefit from attention to their physical and perceptual features.