Science.gov

Sample records for visual search patterns

  1. Statistical patterns of visual search for hidden objects

    PubMed Central

    Credidio, Heitor F.; Teixeira, Elisângela N.; Reis, Saulo D. S.; Moreira, André A.; Andrade Jr, José S.

    2012-01-01

The movement of the eyes has been the subject of intensive research as a way to elucidate inner mechanisms of cognitive processes. A cognitive task that is rather frequent in our daily life is the visual search for hidden objects. Here we investigate through eye-tracking experiments the statistical properties associated with the search for target images embedded in a landscape of distractors. Specifically, our results show that the twofold process of eye movement, composed of sequences of fixations (small steps) intercalated by saccades (longer jumps), displays characteristic statistical signatures. While the saccadic jumps follow a log-normal distribution of distances, which is typical of multiplicative processes, the lengths of the smaller steps in the fixation trajectories are consistent with a power-law distribution. Moreover, the present analysis reveals a clear transition from a directional serial search to an isotropic random movement as the difficulty level of the search task is increased. PMID:23226829
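
    The distributional claim above (log-normal saccade amplitudes versus power-law fixation steps) can be checked with a simple maximum-likelihood comparison. The sketch below uses synthetic step lengths, not the authors' eye-tracking data; the parameter choices are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic step lengths mimicking the two regimes described above:
# saccade amplitudes ~ log-normal, fixational steps ~ power law (Pareto).
saccades = rng.lognormal(mean=1.0, sigma=0.5, size=5000)
fixation_steps = 0.01 * (rng.pareto(a=1.5, size=5000) + 1.0)

def lognormal_loglik(x):
    """Log-likelihood of x under a log-normal fit (MLE parameters)."""
    logx = np.log(x)
    mu, sigma = logx.mean(), logx.std()
    return np.sum(-logx - np.log(sigma * np.sqrt(2 * np.pi))
                  - (logx - mu) ** 2 / (2 * sigma ** 2))

def powerlaw_loglik(x):
    """Log-likelihood of x under a power law with MLE exponent."""
    xmin = x.min()
    alpha = 1.0 + len(x) / np.sum(np.log(x / xmin))
    return len(x) * np.log((alpha - 1.0) / xmin) - alpha * np.sum(np.log(x / xmin))

# Each likelihood comparison should favor the generating model.
saccades_lognormal = lognormal_loglik(saccades) > powerlaw_loglik(saccades)
steps_powerlaw = powerlaw_loglik(fixation_steps) > lognormal_loglik(fixation_steps)
```

    With real gaze data, the same comparison would be run on measured saccade amplitudes and intra-fixation step lengths.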

  2. Priming cases disturb visual search patterns in screening mammography

    NASA Astrophysics Data System (ADS)

    Lewis, Sarah J.; Reed, Warren M.; Tan, Alvin N. K.; Brennan, Patrick C.; Lee, Warwick; Mello-Thoms, Claudia

    2015-03-01

Rationale and Objectives: To investigate the effect of inserting obvious cancers into a screening set of mammograms on the visual search of radiologists. Previous research presents conflicting evidence as to the impact of priming in scenarios where prevalence is naturally low, such as in screening mammography. Materials and Methods: An observer performance and eye position analysis study was performed. Four expert breast radiologists were asked to interpret two sets of 40 screening mammograms. The Control Set contained 36 normal and 4 malignant cases (located at positions #9, 14, 25 and 37). The Primed Set contained 34 of the same normal cases and the same 4 malignant cases (in the same locations) plus 2 "primer" malignant cases replacing 2 normal cases (located at positions #20 and 34). Primer cases were defined as lower difficulty cases containing salient malignant features inserted before cases of greater difficulty. Results: A Wilcoxon Signed Rank Test indicated no significant differences in sensitivity or specificity between the two sets (P > 0.05). The fixation count in the malignant cases (#25, 37) in the Primed Set after viewing the primer cases (#20, 34) decreased significantly (Z = -2.330, P = 0.020). False-negative errors were mostly due to sampling in the Primed Set (75%), in contrast to the Control Set (25%). Conclusion: The overall performance of radiologists is not affected by the inclusion of obvious cancer cases. However, changes in visual search behavior, as measured by eye-position recording, suggest visual disturbance by the inclusion of priming cases in screening mammography.
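
    For readers unfamiliar with the statistic reported above, a Wilcoxon signed-rank test on paired counts can be computed exactly for small samples. The numbers below are invented for illustration; they are not the study's data.

```python
import itertools
import numpy as np

# Hypothetical paired fixation counts for eight cases read under two
# conditions (illustrative values only, not the study's data).
control = np.array([112, 98, 87, 105, 93, 120, 101, 96])
primed = np.array([90, 80, 75, 88, 85, 100, 92, 81])

d = control - primed
ranks = np.argsort(np.argsort(np.abs(d))) + 1  # ranks of |d| (no ties here)
W = min(ranks[d > 0].sum(), ranks[d < 0].sum())

# Exact two-sided p-value: enumerate all 2^n sign assignments.
n, total = len(d), ranks.sum()
count = 0
for signs in itertools.product([0, 1], repeat=n):
    w = sum(r for r, s in zip(ranks, signs) if s)
    if min(w, total - w) <= W:
        count += 1
p = count / 2 ** n  # here 2/256, i.e. a significant paired decrease
```

    For larger samples one would normally reach for a library routine rather than the exhaustive enumeration shown here.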

  3. Visual search.

    PubMed

    Chan, Louis K H; Hayward, William G

    2013-07-01

    Visual search is the act of looking for a predefined target among other objects. This task has been widely used as an experimental paradigm to study visual attention, and because of its influence has also become a subject of research itself. When used as a paradigm, visual search studies address questions including the nature, function, and limits of preattentive processing and focused attention. As a subject of research, visual search studies address the role of memory in search, the procedures involved in search, and factors that affect search performance. In this article, we review major theories of visual search, the ways in which preattentive information is used to guide attentional allocation, the role of memory, and the processes and decisions involved in its successful completion. We conclude by summarizing the current state of knowledge about visual search and highlight some unresolved issues. WIREs Cogn Sci 2013, 4:415-429. doi: 10.1002/wcs.1235 The authors have declared no conflicts of interest for this article. For further resources related to this article, please visit the WIREs website.

  4. Visual search patterns in semantic dementia show paradoxical facilitation of binding processes

    PubMed Central

    Viskontas, Indre V.; Boxer, Adam L.; Fesenko, John; Matlin, Alisa; Heuer, Hilary W.; Mirsky, Jacob; Miller, Bruce L.

    2011-01-01

While patients with Alzheimer’s disease (AD) show deficits in attention, manifested by inefficient performance on visual search, new visual talents can emerge in patients with frontotemporal lobar degeneration (FTLD), suggesting that, at least in some of the patients, visual attention is spared, if not enhanced. To investigate the underlying mechanisms for visual talent in FTLD (behavioral variant FTD [bvFTD] and semantic dementia [SD]) patients, we measured performance on a visual search paradigm that includes both feature and conjunction search, while simultaneously monitoring saccadic eye movements. AD patients were impaired relative to healthy controls (NC) and FTLD patients on both feature and conjunction search. BvFTD patients showed less accurate performance only on the conjunction search task, but slower response times than NC on all three tasks. In contrast, SD patients were as accurate as controls and had faster response times when faced with the largest number of distractors in the conjunction search task. Measurement of saccades during visual search showed that AD patients explored more of the image, whereas SD patients explored less of the image before making a decision as to whether the target was present. Performance on the conjunction search task positively correlated with gray matter volume in the superior parietal lobe, precuneus, middle frontal gyrus and superior temporal gyrus. These data suggest that despite the presence of extensive temporal lobe degeneration, visual talent in SD may be facilitated by more efficient visual search under distracting conditions due to enhanced function in the dorsal frontoparietal attention network. PMID:21215762

  5. Changes in visual search patterns of pathology residents as they gain experience

    NASA Astrophysics Data System (ADS)

    Krupinski, Elizabeth A.; Weinstein, Ronald S.

    2011-03-01

The goal of this study was to examine and characterize changes in the ways that pathology residents examine digital or "virtual" slides as they gain more experience. A series of 20 digitized breast biopsy virtual slides (half benign and half malignant) were shown to 6 pathology residents at three points in time: at the beginning of their first year of residency, at the beginning of the second year, and at the beginning of the third year. Their task was to examine each image and select three areas that they would most want to zoom in on in order to view the diagnostic detail at higher resolution. Eye position was recorded as they scanned each image. The data indicate that with each successive year of experience, the residents' search patterns do change. Overall it takes significantly less time to view an individual slide and decide where to zoom, significantly fewer fixations are generated, and there is less examination of non-diagnostic areas. Essentially, the residents' search becomes much more efficient and after only one year closely resembles that of an expert pathologist. These findings are similar to those in radiology, and support the theory that an important aspect of the development of expertise is improved pattern recognition (taking in more information during the initial Gestalt or gist view) as well as improved allocation of attention and visual processing resources.
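
    Eye-position samples are typically segmented into fixations before counts like those above can be computed. A common approach is dispersion-threshold identification (I-DT); the toy detector below is a sketch of that general technique, not necessarily the method used in this study.

```python
def dispersion(window):
    """Spread of a window of (x, y) gaze samples: x-range + y-range."""
    xs = [p[0] for p in window]
    ys = [p[1] for p in window]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def detect_fixations(samples, max_dispersion=1.0, min_points=4):
    """Toy I-DT-style fixation detector; returns fixation centroids."""
    fixations = []
    i, n = 0, len(samples)
    while i < n:
        j = i + min_points
        if j > n:
            break
        if dispersion(samples[i:j]) > max_dispersion:
            i += 1  # too spread out to start a fixation here
            continue
        while j < n and dispersion(samples[i:j + 1]) <= max_dispersion:
            j += 1  # grow the window while the gaze stays put
        window = samples[i:j]
        cx = sum(p[0] for p in window) / len(window)
        cy = sum(p[1] for p in window) / len(window)
        fixations.append((cx, cy))
        i = j
    return fixations

# Two stable clusters of gaze samples -> two fixations,
# near (0.05, 0.05) and (5.05, 5.05).
samples = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (0.1, 0.1),
           (5.0, 5.0), (5.1, 5.0), (5.0, 5.1), (5.1, 5.1)]
fixations = detect_fixations(samples)
```

    The fixation count and total scanpath length reported in eye-tracking studies are then simple aggregates over the detected fixations.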

  6. Interrupted Visual Searches Reveal Volatile Search Memory

    ERIC Educational Resources Information Center

    Shen, Y. Jeremy; Jiang, Yuhong V.

    2006-01-01

    This study investigated memory from interrupted visual searches. Participants conducted a change detection search task on polygons overlaid on scenes. Search was interrupted by various disruptions, including unfilled delay, passive viewing of other scenes, and additional search on new displays. Results showed that performance was unaffected by…

  7. Reconsidering Visual Search

    PubMed Central

Kristjánsson, Árni

    2015-01-01

    The visual search paradigm has had an enormous impact in many fields. A theme running through this literature has been the distinction between preattentive and attentive processing, which I refer to as the two-stage assumption. Under this assumption, slopes of set-size and response time are used to determine whether attention is needed for a given task or not. Even though a lot of findings question this two-stage assumption, it still has enormous influence, determining decisions on whether papers are published or research funded. The results described here show that the two-stage assumption leads to very different conclusions about the operation of attention for identical search tasks based only on changes in response (presence/absence versus Go/No-go responses). Slopes are therefore an ambiguous measure of attentional involvement. Overall, the results suggest that the two-stage model cannot explain all findings on visual search, and they highlight how slopes of response time and set-size should only be used with caution. PMID:27551357

  8. Reconsidering Visual Search.

    PubMed

    Kristjánsson, Árni

    2015-12-01

    The visual search paradigm has had an enormous impact in many fields. A theme running through this literature has been the distinction between preattentive and attentive processing, which I refer to as the two-stage assumption. Under this assumption, slopes of set-size and response time are used to determine whether attention is needed for a given task or not. Even though a lot of findings question this two-stage assumption, it still has enormous influence, determining decisions on whether papers are published or research funded. The results described here show that the two-stage assumption leads to very different conclusions about the operation of attention for identical search tasks based only on changes in response (presence/absence versus Go/No-go responses). Slopes are therefore an ambiguous measure of attentional involvement. Overall, the results suggest that the two-stage model cannot explain all findings on visual search, and they highlight how slopes of response time and set-size should only be used with caution. PMID:27551357
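
    The set-size slopes this abstract questions are themselves easy to compute: regress response time on display set size. The RT values below are invented for illustration.

```python
import numpy as np

# Hypothetical mean RTs (ms) at three display set sizes.
set_sizes = np.array([4, 8, 16])
rt_present = np.array([520, 610, 790])   # target-present trials
rt_absent = np.array([540, 700, 1020])   # target-absent trials

slope_p, intercept_p = np.polyfit(set_sizes, rt_present, 1)
slope_a, intercept_a = np.polyfit(set_sizes, rt_absent, 1)
# slope_p = 22.5 ms/item, slope_a = 40.0 ms/item: a roughly 2:1
# absent:present ratio, classically read as serial self-terminating search.
```

    Under the two-stage assumption, near-zero slopes would be read as preattentive "pop-out"; the paper argues this reading is ambiguous.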

  9. Pandemonium and visual search.

    PubMed

    Henderson, L

    1978-01-01

Pandemonium-like models have played a central role in theories of perceptual recognition. One model is examined which asserts that information is sorted unidirectionally through a hierarchy of increasingly abstract levels only to a depth required by the logical demands of the task and read off from the appropriate level to control response decisions. The support originally claimed for the model in terms of its application to visual search performance is questioned. It is suggested that the pervasiveness of such models is not due to their competition with alternative theories but rather to metatheoretical considerations.

  10. Parallel Processing in Visual Search Asymmetry

    ERIC Educational Resources Information Center

    Dosher, Barbara Anne; Han, Songmei; Lu, Zhong-Lin

    2004-01-01

The difficulty of visual search may depend on assignment of the same visual elements as targets and distractors (search asymmetry). Easy C-in-O searches and difficult O-in-C searches are often associated with parallel and serial search, respectively. Here, the time course of visual search was measured for both tasks with speed-accuracy methods. The…

  11. Evolutionary pattern search algorithms

    SciTech Connect

    Hart, W.E.

    1995-09-19

This paper defines a class of evolutionary algorithms called evolutionary pattern search algorithms (EPSAs) and analyzes their convergence properties. This class of algorithms is closely related to evolutionary programming, evolution strategies, and real-coded genetic algorithms. EPSAs are self-adapting systems that modify the step size of the mutation operator in response to the success of previous optimization steps. The rule used to adapt the step size can be used to provide a stationary point convergence theory for EPSAs on any continuous function. This convergence theory is based on an extension of the convergence theory for generalized pattern search methods. An experimental analysis of the performance of EPSAs demonstrates that these algorithms can perform a level of global search that is comparable to that of canonical EAs. We also describe a stopping rule for EPSAs, which reliably terminated near stationary points in our experiments. This is the first stopping rule for any class of EAs that can terminate at a given distance from stationary points.
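
    The step-size rule described above can be illustrated with a toy (1+1)-style search that contracts the mutation scale after failures and expands it after successes. This is a minimal sketch of the adaptation idea, not Hart's EPSA.

```python
import random

def adaptive_search(f, x0, sigma=1.0, theta=0.5, iters=1000, seed=1):
    """Toy (1+1) evolutionary search with success/failure step-size
    adaptation (sketch only; not the EPSA algorithm itself)."""
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    for _ in range(iters):
        cand = [xi + rng.gauss(0.0, sigma) for xi in x]
        fc = f(cand)
        if fc < fx:
            x, fx = cand, fc
            sigma /= theta   # success: expand the step size
        else:
            sigma *= theta   # failure: contract the step size
    return x, fx

sphere = lambda v: sum(t * t for t in v)
best, fbest = adaptive_search(sphere, [3.0, -2.0])  # fbest ends near 0
```

    The shrinking step size also suggests a natural stopping signal, in the spirit of the stopping rule the paper develops rigorously.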

  12. Visual Search of Mooney Faces.

    PubMed

    Goold, Jessica E; Meng, Ming

    2016-01-01

Faces spontaneously capture attention. However, which special attributes of a face underlie this effect is unclear. To address this question, we investigate how gist information, specific visual properties and differing amounts of experience with faces affect the time required to detect a face. Three visual search experiments were conducted investigating the rapidness of human observers to detect Mooney face images. Mooney images are two-toned, ambiguous images. They were used in order to have stimuli that maintain gist information but limit low-level image properties. Results from the experiments show: (1) Although upright Mooney faces were searched inefficiently, they were detected more rapidly than inverted Mooney face targets, demonstrating the important role of gist information in guiding attention toward a face. (2) Several specific Mooney face identities were searched efficiently while others were not, suggesting the involvement of specific visual properties in face detection. (3) By providing participants with unambiguous gray-scale versions of the Mooney face targets prior to the visual search task, the targets were detected significantly more efficiently, suggesting that prior experience with Mooney faces improves the ability to extract gist information for rapid face detection. However, a week of training with Mooney face categorization did not lead to even more efficient visual search of Mooney face targets. In summary, these results reveal that specific local image properties cannot account for how faces capture attention. On the other hand, gist information alone cannot account for how faces capture attention either. Prior experience facilitates the effect of gist on visual search of faces, making faces a special object category for guiding attention.
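
    Mooney-style two-tone images of the kind used here are commonly produced by blurring a grayscale photograph and thresholding it. The sketch below shows that generic recipe on a random array; it is an assumption about the stimulus pipeline, not the authors' exact procedure.

```python
import numpy as np

def mooneyify(gray, blur=3, thresh=None):
    """Reduce a grayscale image (2-D array, values in [0, 1]) to a
    two-tone 'Mooney-style' image: box-blur, then threshold."""
    k = blur
    padded = np.pad(gray, k // 2, mode="edge")
    out = np.zeros_like(gray, dtype=float)
    # simple k x k box blur as a sum of shifted copies
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + gray.shape[0], dx:dx + gray.shape[1]]
    out /= k * k
    t = np.median(out) if thresh is None else thresh
    return (out > t).astype(np.uint8)

rng = np.random.default_rng(3)
img = rng.random((32, 32))
two_tone = mooneyify(img)  # every pixel is now 0 or 1
```

    Thresholding at the median maps roughly half the pixels to each tone, preserving coarse (gist-level) structure while discarding most low-level image properties.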

  13. Visual similarity effects in categorical search.

    PubMed

    Alexander, Robert G; Zelinsky, Gregory J

    2011-07-14

    We asked how visual similarity relationships affect search guidance to categorically defined targets (no visual preview). Experiment 1 used a web-based task to collect visual similarity rankings between two target categories, teddy bears and butterflies, and random-category objects, from which we created search displays in Experiment 2 having either high-similarity distractors, low-similarity distractors, or "mixed" displays with high-, medium-, and low-similarity distractors. Analysis of target-absent trials revealed faster manual responses and fewer fixated distractors on low-similarity displays compared to high-similarity displays. On mixed displays, first fixations were more frequent on high-similarity distractors (bear = 49%; butterfly = 58%) than on low-similarity distractors (bear = 9%; butterfly = 12%). Experiment 3 used the same high/low/mixed conditions, but now these conditions were created using similarity estimates from a computer vision model that ranked objects in terms of color, texture, and shape similarity. The same patterns were found, suggesting that categorical search can indeed be guided by purely visual similarity. Experiment 4 compared cases where the model and human rankings differed and when they agreed. We found that similarity effects were best predicted by cases where the two sets of rankings agreed, suggesting that both human visual similarity rankings and the computer vision model captured features important for guiding search to categorical targets.
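
    One simple stand-in for the color component of such a similarity model is a normalized color-histogram comparison. The snippet below uses histogram intersection on random arrays; the paper's actual color, texture, and shape features are not specified here, so treat this as an illustrative assumption.

```python
import numpy as np

def color_hist(img, bins=8):
    """Normalized joint RGB histogram of an HxWx3 array in [0, 1]."""
    h, _ = np.histogramdd(img.reshape(-1, 3),
                          bins=(bins, bins, bins),
                          range=((0, 1), (0, 1), (0, 1)))
    return h / h.sum()

def hist_intersection(h1, h2):
    """Similarity in [0, 1]; 1 means identical color distributions."""
    return np.minimum(h1, h2).sum()

rng = np.random.default_rng(7)
target = rng.random((16, 16, 3))
distractor = rng.random((16, 16, 3))

sim_same = hist_intersection(color_hist(target), color_hist(target))
sim_diff = hist_intersection(color_hist(target), color_hist(distractor))
```

    Ranking distractors by such scores against a target category is the kind of computational ranking the experiments compare with human similarity judgments.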

  14. Development of a Computerized Visual Search Test

    ERIC Educational Resources Information Center

    Reid, Denise; Babani, Harsha; Jon, Eugenia

    2009-01-01

Visual attention and visual search are features of visual perception, essential for attending to and scanning one's environment while engaging in daily occupations. This study describes the development of a novel web-based test of visual search. The development information, including the format of the test, will be described. The test was designed…

  15. The development of organized visual search.

    PubMed

    Woods, Adam J; Göksun, Tilbe; Chatterjee, Anjan; Zelonis, Sarah; Mehta, Anika; Smith, Sabrina E

    2013-06-01

Visual search plays an important role in guiding behavior. Children have more difficulty performing conjunction search tasks than adults. The present research evaluates whether developmental differences in children's ability to organize serial visual search (i.e., search organization skills) contribute to performance limitations in a typical conjunction search task. We evaluated 134 children between the ages of 2 and 17 on separate tasks measuring search for targets defined by a conjunction of features or by distinct features. Our results demonstrated that children organize their visual search better as they get older. As children's skills at organizing visual search improve, they become more accurate at locating targets with a conjunction of features amongst distractors, but not targets with distinct features. Developmental limitations in children's abilities to organize their visual search of the environment are an important component of poor conjunction search in young children. In addition, our findings provide preliminary evidence that, like other visuospatial tasks, exposure to reading may influence children's spatial orientation to the visual environment when performing a visual search.

  16. Visualizing Dynamic Bitcoin Transaction Patterns.

    PubMed

    McGinn, Dan; Birch, David; Akroyd, David; Molina-Solana, Miguel; Guo, Yike; Knottenbelt, William J

    2016-06-01

    This work presents a systemic top-down visualization of Bitcoin transaction activity to explore dynamically generated patterns of algorithmic behavior. Bitcoin dominates the cryptocurrency markets and presents researchers with a rich source of real-time transactional data. The pseudonymous yet public nature of the data presents opportunities for the discovery of human and algorithmic behavioral patterns of interest to many parties such as financial regulators, protocol designers, and security analysts. However, retaining visual fidelity to the underlying data to retain a fuller understanding of activity within the network remains challenging, particularly in real time. We expose an effective force-directed graph visualization employed in our large-scale data observation facility to accelerate this data exploration and derive useful insight among domain experts and the general public alike. The high-fidelity visualizations demonstrated in this article allowed for collaborative discovery of unexpected high frequency transaction patterns, including automated laundering operations, and the evolution of multiple distinct algorithmic denial of service attacks on the Bitcoin network. PMID:27441715
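
    Force-directed layouts of the kind mentioned above balance pairwise repulsion against spring attraction along edges. The following is a minimal Fruchterman-Reingold-style sketch on a toy graph, not the paper's large-scale system.

```python
import numpy as np

def force_layout(edges, n, iters=100, seed=0):
    """Tiny Fruchterman-Reingold-style layout: repulsion between all
    node pairs, spring attraction along edges, with a cooling cap."""
    rng = np.random.default_rng(seed)
    pos = rng.random((n, 2))
    k = 1.0 / np.sqrt(n)  # ideal edge length
    for step in range(iters):
        disp = np.zeros((n, 2))
        for i in range(n):  # O(n^2) repulsion
            delta = pos[i] - pos
            dist = np.linalg.norm(delta, axis=1) + 1e-9
            disp[i] += (delta / dist[:, None] * (k * k / dist)[:, None]).sum(axis=0)
        for a, b in edges:  # spring attraction
            delta = pos[a] - pos[b]
            dist = np.linalg.norm(delta) + 1e-9
            f = dist * dist / k
            disp[a] -= delta / dist * f
            disp[b] += delta / dist * f
        t = 0.1 * (1 - step / iters)  # cooling: shrink the max move
        lengths = np.linalg.norm(disp, axis=1) + 1e-9
        pos += disp / lengths[:, None] * np.minimum(lengths, t)[:, None]
    return pos

# A toy "transaction graph": two triangles bridged by one edge.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
pos = force_layout(edges, n=6)
```

    Real-time Bitcoin-scale graphs need spatial indexing (e.g., Barnes-Hut approximation) to avoid the quadratic repulsion loop used here.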

  17. Visualizing Dynamic Bitcoin Transaction Patterns

    PubMed Central

    McGinn, Dan; Birch, David; Akroyd, David; Molina-Solana, Miguel; Guo, Yike; Knottenbelt, William J.

    2016-01-01

    This work presents a systemic top-down visualization of Bitcoin transaction activity to explore dynamically generated patterns of algorithmic behavior. Bitcoin dominates the cryptocurrency markets and presents researchers with a rich source of real-time transactional data. The pseudonymous yet public nature of the data presents opportunities for the discovery of human and algorithmic behavioral patterns of interest to many parties such as financial regulators, protocol designers, and security analysts. However, retaining visual fidelity to the underlying data to retain a fuller understanding of activity within the network remains challenging, particularly in real time. We expose an effective force-directed graph visualization employed in our large-scale data observation facility to accelerate this data exploration and derive useful insight among domain experts and the general public alike. The high-fidelity visualizations demonstrated in this article allowed for collaborative discovery of unexpected high frequency transaction patterns, including automated laundering operations, and the evolution of multiple distinct algorithmic denial of service attacks on the Bitcoin network. PMID:27441715

  18. Collinearity Impairs Local Element Visual Search

    ERIC Educational Resources Information Center

    Jingling, Li; Tseng, Chia-Huei

    2013-01-01

    In visual searches, stimuli following the law of good continuity attract attention to the global structure and receive attentional priority. Also, targets that have unique features are of high feature contrast and capture attention in visual search. We report on a salient global structure combined with a high orientation contrast to the…

  19. Aurally and visually guided visual search in a virtual environment.

    PubMed

    Flanagan, P; McAnally, K I; Martin, R L; Meehan, J W; Oldfield, S R

    1998-09-01

    We investigated the time participants took to perform a visual search task for targets outside the visual field of view using a helmet-mounted display. We also measured the effectiveness of visual and auditory cues to target location. The auditory stimuli used to cue location were noise bursts previously recorded from the ear canals of the participants and were either presented briefly at the beginning of a trial or continually updated to compensate for head movements. The visual cue was a dynamic arrow that indicated the direction and angular distance from the instantaneous head position to the target. Both visual and auditory spatial cues reduced search time dramatically, compared with unaided search. The updating audio cue was more effective than the transient audio cue and was as effective as the visual cue in reducing search time. These data show that both spatial auditory and visual cues can markedly improve visual search performance. Potential applications for this research include highly visual environments, such as aviation, where there is risk of overloading the visual modality with information.
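
    The dynamic arrow cue described above reduces to computing the signed angular difference between the instantaneous head azimuth and the target azimuth. A minimal sketch (azimuth-only; real head tracking also involves elevation):

```python
def cue_to_target(head_az_deg, target_az_deg):
    """Turn direction and angular distance (degrees) from the current
    head azimuth to the target azimuth, both measured in [0, 360)."""
    diff = (target_az_deg - head_az_deg + 180.0) % 360.0 - 180.0
    direction = "right" if diff > 0 else "left"
    return direction, abs(diff)

print(cue_to_target(10, 80))   # → ('right', 70.0)
print(cue_to_target(350, 20))  # → ('right', 30.0): wraps through 0
print(cue_to_target(80, 10))   # → ('left', 70.0)
```

    The wrap-around at 0/360 is the one subtlety; the modular arithmetic above keeps the result in (-180, 180] so the shorter turn direction is always chosen.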

  1. Cumulative Intertrial Inhibition in Repeated Visual Search

    ERIC Educational Resources Information Center

    Takeda, Yuji

    2007-01-01

    In the present study the author examined visual search when the items remain visible across trials but the location of the target varies. Reaction times for inefficient search cumulatively increased with increasing numbers of repeated search trials, suggesting that inhibition for distractors carried over successive trials. This intertrial…

  2. Visual Search for Faces with Emotional Expressions

    ERIC Educational Resources Information Center

    Frischen, Alexandra; Eastwood, John D.; Smilek, Daniel

    2008-01-01

    The goal of this review is to critically examine contradictory findings in the study of visual search for emotionally expressive faces. Several key issues are addressed: Can emotional faces be processed preattentively and guide attention? What properties of these faces influence search efficiency? Is search moderated by the emotional state of the…

  3. Cascade category-aware visual search.

    PubMed

    Zhang, Shiliang; Tian, Qi; Huang, Qingming; Gao, Wen; Rui, Yong

    2014-06-01

Incorporating image classification into image retrieval system brings many attractive advantages. For instance, the search space can be narrowed down by rejecting images in irrelevant categories of the query. The retrieved images can be more consistent in semantics by indexing and returning images in the relevant categories together. However, due to their different goals on recognition accuracy and retrieval scalability, it is hard to efficiently incorporate most image classification works into large-scale image search. To study this problem, we propose cascade category-aware visual search, which utilizes weak category clue to achieve better retrieval accuracy, efficiency, and memory consumption. To capture the category and visual clues of an image, we first learn category-visual words, which are discriminative and repeatable local features labeled with categories. By identifying category-visual words in database images, we are able to discard noisy local features and extract image visual and category clues, which are hence recorded in a hierarchical index structure. Our retrieval system narrows down the search space by: 1) filtering the noisy local features in query; 2) rejecting irrelevant categories in database; and 3) performing discriminative visual search in relevant categories. The proposed algorithm is tested on object search, landmark search, and large-scale similar image search on the large-scale LSVRC10 data set. Although the category clue introduced is weak, our algorithm still shows substantial advantages in retrieval accuracy, efficiency, and memory consumption over the state-of-the-art.

  4. Searching social networks for subgraph patterns

    NASA Astrophysics Data System (ADS)

    Ogaard, Kirk; Kase, Sue; Roy, Heather; Nagi, Rakesh; Sambhoos, Kedar; Sudit, Moises

    2013-06-01

    Software tools for Social Network Analysis (SNA) are being developed which support various types of analysis of social networks extracted from social media websites (e.g., Twitter). Once extracted and stored in a database such social networks are amenable to analysis by SNA software. This data analysis often involves searching for occurrences of various subgraph patterns (i.e., graphical representations of entities and relationships). The authors have developed the Graph Matching Toolkit (GMT) which provides an intuitive Graphical User Interface (GUI) for a heuristic graph matching algorithm called the Truncated Search Tree (TruST) algorithm. GMT is a visual interface for graph matching algorithms processing large social networks. GMT enables an analyst to draw a subgraph pattern by using a mouse to select categories and labels for nodes and links from drop-down menus. GMT then executes the TruST algorithm to find the top five occurrences of the subgraph pattern within the social network stored in the database. GMT was tested using a simulated counter-insurgency dataset consisting of cellular phone communications within a populated area of operations in Iraq. The results indicated GMT (when executing the TruST graph matching algorithm) is a time-efficient approach to searching large social networks. GMT's visual interface to a graph matching algorithm enables intelligence analysts to quickly analyze and summarize the large amounts of data necessary to produce actionable intelligence.
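
    Subgraph pattern search of the kind GMT exposes can be illustrated with a naive label-preserving backtracking matcher. This is a sketch of the underlying problem only; the TruST algorithm referenced above is a truncated-search-tree heuristic, not this exhaustive search.

```python
def find_subgraph(pattern_nodes, pattern_edges, graph_nodes, graph_edges):
    """Naive backtracking search for label-preserving occurrences of a
    small subgraph pattern in a labeled, undirected graph.

    *_nodes: {node_id: label}; *_edges: set of (u, v) pairs.
    Returns a list of mappings {pattern_id: graph_id}.
    """
    g_adj = {u: set() for u in graph_nodes}
    for u, v in graph_edges:
        g_adj[u].add(v)
        g_adj[v].add(u)
    p_ids = list(pattern_nodes)
    results = []

    def extend(mapping):
        if len(mapping) == len(p_ids):
            results.append(dict(mapping))
            return
        pid = p_ids[len(mapping)]
        for gid, label in graph_nodes.items():
            if gid in mapping.values() or label != pattern_nodes[pid]:
                continue
            # every already-mapped pattern edge must exist in the graph
            ok = all(((pid, q) not in pattern_edges and (q, pid) not in pattern_edges)
                     or mapping[q] in g_adj[gid]
                     for q in mapping)
            if ok:
                mapping[pid] = gid
                extend(mapping)
                del mapping[pid]

    extend({})
    return results

# Toy network: find "person -- phone" pairs.
g_nodes = {1: "person", 2: "phone", 3: "person", 4: "phone"}
g_edges = {(1, 2), (3, 2), (3, 4)}
p_nodes = {"a": "person", "b": "phone"}
p_edges = {("a", "b")}
matches = find_subgraph(p_nodes, p_edges, g_nodes, g_edges)
```

    Exhaustive matching like this is exponential in the worst case, which is exactly why heuristics such as truncated search trees are needed at social-network scale.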

  5. Words, shape, visual search and visual working memory in 3-year-old children

    PubMed Central

    Vales, Catarina; Smith, Linda B.

    2014-01-01

    Do words cue children’s visual attention, and if so, what are the relevant mechanisms? Across four experiments, 3-year-old children (N = 163) were tested in visual search tasks in which targets were cued with only a visual preview versus a visual preview and a spoken name. The experiments were designed to determine whether labels facilitated search times and to examine one route through which labels could have their effect: By influencing the visual working memory representation of the target. The targets and distractors were pictures of instances of basic-level known categories and the labels were the common name for the target category. We predicted that the label would enhance the visual working memory representation of the target object, guiding attention to objects that better matched the target representation. Experiments 1 and 2 used conjunctive search tasks, and Experiment 3 varied shape discriminability between targets and distractors. Experiment 4 compared the effects of labels to repeated presentations of the visual target, which should also influence the working memory representation of the target. The overall pattern fits contemporary theories of how the contents of visual working memory interact with visual search and attention, and shows that even in very young children heard words affect the processing of visual information. PMID:24720802

  6. Words, shape, visual search and visual working memory in 3-year-old children.

    PubMed

    Vales, Catarina; Smith, Linda B

    2015-01-01

    Do words cue children's visual attention, and if so, what are the relevant mechanisms? Across four experiments, 3-year-old children (N = 163) were tested in visual search tasks in which targets were cued with only a visual preview versus a visual preview and a spoken name. The experiments were designed to determine whether labels facilitated search times and to examine one route through which labels could have their effect: By influencing the visual working memory representation of the target. The targets and distractors were pictures of instances of basic-level known categories and the labels were the common name for the target category. We predicted that the label would enhance the visual working memory representation of the target object, guiding attention to objects that better matched the target representation. Experiments 1 and 2 used conjunctive search tasks, and Experiment 3 varied shape discriminability between targets and distractors. Experiment 4 compared the effects of labels to repeated presentations of the visual target, which should also influence the working memory representation of the target. The overall pattern fits contemporary theories of how the contents of visual working memory interact with visual search and attention, and shows that even in very young children heard words affect the processing of visual information. PMID:24720802

  8. LoyalTracker: Visualizing Loyalty Dynamics in Search Engines.

    PubMed

    Shi, Conglei; Wu, Yingcai; Liu, Shixia; Zhou, Hong; Qu, Huamin

    2014-12-01

    The huge amount of user log data collected by search engine providers creates new opportunities to understand user loyalty and defection behavior at an unprecedented scale. However, it also poses a great challenge to analyze this behavior and glean insights from such complex, large-scale data. In this paper, we introduce LoyalTracker, a visual analytics system to track user loyalty and switching behavior towards multiple search engines from the vast amount of user log data. We propose a new interactive visualization technique (flow view) based on a flow metaphor, which conveys a proper visual summary of the dynamics of user loyalty of thousands of users over time. Two other visualization techniques, a density map and a word cloud, are integrated to enable analysts to gain further insights into the patterns identified by the flow view. Case studies and interviews with domain experts demonstrate the usefulness of our technique in understanding user loyalty and switching behavior in search engines.

  9. Visual pattern degradation based image quality assessment

    NASA Astrophysics Data System (ADS)

    Wu, Jinjian; Li, Leida; Shi, Guangming; Lin, Weisi; Wan, Wenfei

    2015-08-01

    In this paper, we introduce a visual pattern degradation based full-reference (FR) image quality assessment (IQA) method. Research on visual recognition indicates that the human visual system (HVS) is highly adaptive at extracting visual structures for scene understanding. Existing structure degradation based IQA methods mainly take local luminance contrast to represent structure, and measure quality as degradation of luminance contrast. In this paper, we suggest that structure includes not only luminance contrast but also orientation information. Therefore, we analyze the orientation characteristics of structure description. Inspired by the orientation selectivity mechanism in the primary visual cortex, we introduce a novel visual pattern to represent the structure of a local region. Quality is then measured as the degradation of both luminance contrast and visual pattern. Experimental results on five benchmark databases demonstrate that the proposed visual pattern can effectively represent visual structure and that the proposed IQA method performs better than existing IQA metrics.

  10. Visual search engine for product images

    NASA Astrophysics Data System (ADS)

    Lin, Xiaofan; Gokturk, Burak; Sumengen, Baris; Vu, Diem

    2008-01-01

    Nowadays there are many product comparison web sites, but most of them use only text information. This paper introduces a novel visual search engine for product images, which provides a brand-new way of visually locating products through Content-based Image Retrieval (CBIR) technology. We discuss the unique technical challenges, solutions, and experimental results in the design and implementation of this system.

  11. Temporal stability of visual search-driven biometrics

    NASA Astrophysics Data System (ADS)

    Yoon, Hong-Jun; Carmichael, Tandy R.; Tourassi, Georgia

    2015-03-01

    Previously, we have shown the potential of using an individual's visual search pattern as a possible biometric. That study focused on viewing images displaying dot-patterns with different spatial relationships to determine which pattern can be more effective in establishing the identity of an individual. In this follow-up study we investigated the temporal stability of this biometric. We performed an experiment with 16 individuals asked to search for a predetermined feature of a random-dot pattern as we tracked their eye movements. Each participant completed four testing sessions consisting of two dot patterns repeated twice. One dot pattern displayed concentric circles shifted to the left or right side of the screen overlaid with visual noise, and participants were asked which side the circles were centered on. The second dot-pattern displayed a number of circles (between 0 and 4) scattered on the screen overlaid with visual noise, and participants were asked how many circles they could identify. Each session contained 5 untracked tutorial questions and 50 tracked test questions (200 total tracked questions per participant). To create each participant's "fingerprint", we constructed a Hidden Markov Model (HMM) from the gaze data representing the underlying visual search and cognitive process. The accuracy of the derived HMM models was evaluated using cross-validation for various time-dependent train-test conditions. Subject identification accuracy ranged from 17.6% to 41.8% for all conditions, which is significantly higher than random guessing (1/16 = 6.25%). The results suggest that visual search pattern is a promising, temporally stable personalized fingerprint of perceptual organization.
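
    The enrollment-and-identification pipeline described in this abstract can be sketched in simplified form: a fully observed Markov chain over discretized screen regions stands in for the paper's HMM, and all gaze sequences are simulated rather than recorded (every subject, matrix, and number below is hypothetical):

```python
import numpy as np

def fit_markov(seq, n_states, alpha=1.0):
    """Estimate a Laplace-smoothed transition matrix from a region sequence."""
    counts = np.full((n_states, n_states), alpha)
    for a, b in zip(seq[:-1], seq[1:]):
        counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def log_likelihood(seq, T):
    return sum(np.log(T[a, b]) for a, b in zip(seq[:-1], seq[1:]))

def identify(test_seq, models):
    """Attribute a gaze sequence to the enrolled subject with highest likelihood."""
    return max(models, key=lambda s: log_likelihood(test_seq, models[s]))

rng = np.random.default_rng(0)

def simulate(T, n):
    s = [0]
    for _ in range(n - 1):
        s.append(int(rng.choice(len(T), p=T[s[-1]])))
    return s

# Two hypothetical subjects with distinct scanning habits over 3 screen regions:
T_a = np.array([[.8, .1, .1], [.1, .8, .1], [.1, .1, .8]])  # dwells in place
T_b = np.array([[.1, .8, .1], [.1, .1, .8], [.8, .1, .1]])  # cycles regions
models = {"A": fit_markov(simulate(T_a, 500), 3),
          "B": fit_markov(simulate(T_b, 500), 3)}
print(identify(simulate(T_a, 100), models))  # expected: A
```

    Cross-validation across sessions, as in the study, would correspond to enrolling the per-subject models on one session's data and testing them on another's.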

  12. Temporal Stability of Visual Search-Driven Biometrics

    SciTech Connect

    Yoon, Hong-Jun; Carmichael, Tandy; Tourassi, Georgia

    2015-01-01

    Previously, we have shown the potential of using an individual's visual search pattern as a possible biometric. That study focused on viewing images displaying dot-patterns with different spatial relationships to determine which pattern can be more effective in establishing the identity of an individual. In this follow-up study we investigated the temporal stability of this biometric. We performed an experiment with 16 individuals asked to search for a predetermined feature of a random-dot pattern as we tracked their eye movements. Each participant completed four testing sessions consisting of two dot patterns repeated twice. One dot pattern displayed concentric circles shifted to the left or right side of the screen overlaid with visual noise, and participants were asked which side the circles were centered on. The second dot-pattern displayed a number of circles (between 0 and 4) scattered on the screen overlaid with visual noise, and participants were asked how many circles they could identify. Each session contained 5 untracked tutorial questions and 50 tracked test questions (200 total tracked questions per participant). To create each participant's "fingerprint", we constructed a Hidden Markov Model (HMM) from the gaze data representing the underlying visual search and cognitive process. The accuracy of the derived HMM models was evaluated using cross-validation for various time-dependent train-test conditions. Subject identification accuracy ranged from 17.6% to 41.8% for all conditions, which is significantly higher than random guessing (1/16 = 6.25%). The results suggest that visual search pattern is a promising, fairly stable personalized fingerprint of perceptual organization.

  13. Perceptual Encoding Efficiency in Visual Search

    ERIC Educational Resources Information Center

    Rauschenberger, Robert; Yantis, Steven

    2006-01-01

    The authors present 10 experiments that challenge some central assumptions of the dominant theories of visual search. Their results reveal that the complexity (or redundancy) of nontarget items is a crucial but overlooked determinant of search efficiency. The authors offer a new theoretical outline that emphasizes the importance of nontarget…

  14. Features in visual search combine linearly

    PubMed Central

    Pramod, R. T.; Arun, S. P.

    2014-01-01

    Single features such as line orientation and length are known to guide visual search, but relatively little is known about how multiple features combine in search. To address this question, we investigated how search for targets differing in multiple features (intensity, length, orientation) from the distracters is related to searches for targets differing in each of the individual features. We tested race models (based on reaction times) and co-activation models (based on reciprocal of reaction times) for their ability to predict multiple feature searches. Multiple feature searches were best accounted for by a co-activation model in which feature information combined linearly (r = 0.95). This result agrees with the classic finding that these features are separable, i.e., subjective dissimilarity ratings sum linearly. We then replicated the classical finding that the length and width of a rectangle are integral features—in other words, they combine nonlinearly in visual search. However, to our surprise, upon including aspect ratio as an additional feature, length and width combined linearly and this model outperformed all other models. Thus, length and width of a rectangle became separable when considered together with aspect ratio. This finding predicts that searches involving shapes with identical aspect ratio should be more difficult than searches where shapes differ in aspect ratio. We confirmed this prediction on a variety of shapes. We conclude that features in visual search co-activate linearly and demonstrate for the first time that aspect ratio is a novel feature that guides visual search. PMID:24715328
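
    The co-activation account (reciprocals of reaction times summing linearly) can be sketched on synthetic data: single-feature search rates are combined linearly with a little noise to play the role of observations, and the linear model is then recovered by least squares (all rates below are hypothetical):

```python
import numpy as np

# Hypothetical single-feature search rates (1/RT) for intensity, length,
# and orientation differences; each row is one search condition.
single = np.array([
    [0.8, 0.0, 0.0],
    [0.0, 0.5, 0.0],
    [0.0, 0.0, 0.6],
    [0.8, 0.5, 0.0],
    [0.8, 0.0, 0.6],
    [0.0, 0.5, 0.6],
    [0.8, 0.5, 0.6],
])
# Under pure linear co-activation the combined rate is the row sum;
# add measurement noise to stand in for observed multi-feature data.
observed = single.sum(axis=1) + np.random.default_rng(1).normal(0, 0.02, 7)

# Recover the weights of the linear co-activation model by least squares.
w, *_ = np.linalg.lstsq(single, observed, rcond=None)
r = np.corrcoef(single @ w, observed)[0, 1]
print(np.round(w, 2), round(r, 3))  # weights near 1, correlation near 1
```

    A race model would instead predict the combined reaction time from the minimum of the single-feature times, which this linear fit is being compared against in the paper.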

  15. Graphical Representations of Electronic Search Patterns.

    ERIC Educational Resources Information Center

    Lin, Xia; And Others

    1991-01-01

    Discussion of search behavior in electronic environments focuses on the development of GRIP (Graphic Representor of Interaction Patterns), a graphing tool based on HyperCard that produces graphic representations of search patterns. Search state spaces are explained, and forms of data available from electronic searches are described. (34…

  16. Pattern Search Algorithms for Bound Constrained Minimization

    NASA Technical Reports Server (NTRS)

    Lewis, Robert Michael; Torczon, Virginia

    1996-01-01

    We present a convergence theory for pattern search methods for solving bound constrained nonlinear programs. The analysis relies on the abstract structure of pattern search methods and an understanding of how the pattern interacts with the bound constraints. This analysis makes it possible to develop pattern search methods for bound constrained problems while only slightly restricting the flexibility present in pattern search methods for unconstrained problems. We prove global convergence despite the fact that pattern search methods do not have explicit information concerning the gradient and its projection onto the feasible region and consequently are unable to enforce explicitly a notion of sufficient feasible decrease.
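
    The flavor of this class of methods can be illustrated with a minimal compass (coordinate) search in which trial points are projected onto the bounds. This is a toy sketch, not the authors' algorithm, although for axis-aligned bounds the coordinate poll directions do conform to the boundary geometry in the way the analysis requires:

```python
import numpy as np

def pattern_search(f, x0, lo, hi, step=0.5, tol=1e-6, shrink=0.5):
    """Compass search with bound constraints handled by projection.
    Polls +/- each coordinate direction; shrinks the mesh on failure."""
    x = np.clip(np.asarray(x0, float), lo, hi)
    fx = f(x)
    dirs = np.vstack([np.eye(len(x)), -np.eye(len(x))])
    while step > tol:
        for d in dirs:
            trial = np.clip(x + step * d, lo, hi)
            ft = f(trial)
            if ft < fx:
                x, fx = trial, ft
                break
        else:
            step *= shrink  # no descent in the pattern: refine the mesh
    return x, fx

# Quadratic whose unconstrained minimum (3, -2) lies outside the box [0, 2]^2.
f = lambda v: (v[0] - 3) ** 2 + (v[1] + 2) ** 2
x, fx = pattern_search(f, [0.0, 0.0],
                       lo=np.array([0.0, 0.0]), hi=np.array([2.0, 2.0]))
print(np.round(x, 3), round(fx, 3))  # converges to the bound point (2, 0)
```

    Note that no gradient is ever evaluated: termination happens only when no pattern point improves at a small mesh size, which is the sense in which the step length serves as the method's measure of stationarity.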

  17. Recollection can support hybrid visual memory search.

    PubMed

    Guild, Emma B; Cripps, Jenna M; Anderson, Nicole D; Al-Aidroos, Naseem

    2014-02-01

    On a daily basis, we accomplish the task of searching our visual environment for one of a number of possible objects, like searching for any one of our friends in a crowd, and we do this with ease. Understanding how attention, perception, and long-term memory interact to accomplish this process remains an important question. Recent research (Wolfe in Psychological Science 23:698-703, 2012) has shown that increasing the number of possible targets one is searching for adds little cost to the efficiency of visual search-specifically, that response times increase logarithmically with memory set size. It is unclear, however, what type of recognition memory process (familiarity or recollection) supports a hybrid visual memory search. Previous hybrid search paradigms create conditions that allow participants to rely on the familiarity of perceptually identical targets. In two experiments, we show that hybrid search remains efficient even when the familiarity of targets is minimized (Experiment 1) and when participants are encouraged to flexibly retrieve target information that is perceptually distinct from the information previously studied (Experiment 2). We propose that such efficient and flexible performance on a hybrid search task may engage a rapid form of recollection (Moscovitch in Canadian Journal of Experimental Psychology 62:62-79, 2008). We discuss possible neural correlates supporting simultaneous perception, comparison of incoming information, and recollection of episodic memories.

  18. Visual reinforcement shapes eye movements in visual search.

    PubMed

    Paeye, Céline; Schütz, Alexander C; Gegenfurtner, Karl R

    2016-08-01

    We use eye movements to gain information about our visual environment; this information can indirectly be used to affect the environment. Whereas eye movements are affected by explicit rewards such as points or money, it is not clear whether the information gained by finding a hidden target has a similar reward value. Here we tested whether finding a visual target can reinforce eye movements in visual search performed in a noise background, which conforms to natural scene statistics and contains a large number of possible target locations. First we tested whether presenting the target more often in one specific quadrant would modify eye movement search behavior. Surprisingly, participants did not learn to search for the target more often in high probability areas. Presumably, participants could not learn the reward structure of the environment. In two subsequent experiments we used a gaze-contingent display to gain full control over the reinforcement schedule. The target was presented more often after saccades into a specific quadrant or a specific direction. The proportions of saccades meeting the reinforcement criteria increased considerably, and participants matched their search behavior to the relative reinforcement rates of targets. Reinforcement learning seems to serve as the mechanism to optimize search behavior with respect to the statistics of the task. PMID:27559719

  19. Visual search under scotopic lighting conditions.

    PubMed

    Paulun, Vivian C; Schütz, Alexander C; Michel, Melchi M; Geisler, Wilson S; Gegenfurtner, Karl R

    2015-08-01

    When we search for visual targets in a cluttered background we systematically move our eyes around to bring different regions of the scene into foveal view. We explored how visual search behavior changes when the fovea is not functional, as is the case in scotopic vision. Scotopic contrast sensitivity is significantly lower overall, with a functional scotoma in the fovea. We found that in scotopic search, for a medium- and a low-spatial-frequency target, individuals made longer lasting fixations that were not broadly distributed across the entire search display but tended to peak in the upper center, especially for the medium-frequency target. The distributions of fixation locations are qualitatively similar to those of an ideal searcher that has human scotopic detectability across the visual field, and interestingly, these predicted distributions are different from those predicted by an ideal searcher with human photopic detectability. We conclude that although there are some qualitative differences between human and ideal search behavior, humans make principled adjustments in their search behavior as ambient light level decreases. PMID:25988753

  1. Online Search Patterns: NLM CATLINE Database.

    ERIC Educational Resources Information Center

    Tolle, John E.; Hah, Sehchang

    1985-01-01

    Presents analysis of online search patterns within user searching sessions of National Library of Medicine ELHILL system and examines user search patterns on the CATLINE database. Data previously analyzed on MEDLINE database for same period is used to compare the performance parameters of different databases within the same information system.…

  2. Dynamic Prototypicality Effects in Visual Search

    ERIC Educational Resources Information Center

    Kayaert, Greet; Op de Beeck, Hans P.; Wagemans, Johan

    2011-01-01

    In recent studies, researchers have discovered a larger neural activation for stimuli that are more extreme exemplars of their stimulus class, compared with stimuli that are more prototypical. This has been shown for faces as well as for familiar and novel shape classes. We used a visual search task to look for a behavioral correlate of these…

  3. Subsymmetries predict auditory and visual pattern complexity.

    PubMed

    Toussaint, Godfried T; Beltran, Juan F

    2013-01-01

    A mathematical measure of pattern complexity based on subsymmetries possessed by the pattern, previously shown to correlate highly with empirically derived measures of cognitive complexity in the visual domain, is found to also correlate significantly with empirically derived complexity measures of perception and production of auditory temporal and musical rhythmic patterns. Not only does the subsymmetry measure correlate highly with the difficulty of reproducing the rhythms by tapping after listening to them, but also the empirical measures exhibit similar behavior, for both the visual and auditory patterns, as a function of the relative number of subsymmetries present in the patterns. PMID:24494441

  4. Selective scanpath repetition during memory-guided visual search

    PubMed Central

    Wynn, Jordana S.; Bone, Michael B.; Dragan, Michelle C.; Hoffman, Kari L.; Buchsbaum, Bradley R.; Ryan, Jennifer D.

    2016-01-01

    Visual search efficiency improves with repetition of a search display, yet the mechanisms behind these processing gains remain unclear. According to Scanpath Theory, memory retrieval is mediated by repetition of the pattern of eye movements or “scanpath” elicited during stimulus encoding. Using this framework, we tested the prediction that scanpath recapitulation reflects relational memory guidance during repeated search events. Younger and older subjects were instructed to find changing targets within flickering naturalistic scenes. Search efficiency (search time, number of fixations, fixation duration) and scanpath similarity (repetition) were compared across age groups for novel (V1) and repeated (V2) search events. Younger adults outperformed older adults on all efficiency measures at both V1 and V2, while the search time benefit for repeated viewing (V1–V2) did not differ by age. Fixation-binned scanpath similarity analyses revealed repetition of initial and final (but not middle) V1 fixations at V2, with older adults repeating more initial V1 fixations than young adults. In young adults only, early scanpath similarity correlated negatively with search time at test, indicating increased efficiency, whereas the similarity of V2 fixations to middle V1 fixations predicted poor search performance. We conclude that scanpath compression mediates increased search efficiency by selectively recapitulating encoding fixations that provide goal-relevant input. Extending Scanpath Theory, results suggest that scanpath repetition varies as a function of time and memory integrity. PMID:27570471
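
    A fixation-binned similarity of the general kind used here can be sketched as the fraction of order-matched fixation pairs that land within a tolerance radius of each other (the coordinates and radius below are made up, and the paper's exact metric may differ):

```python
import numpy as np

def binned_scanpath_similarity(scan1, scan2, radius=100):
    """Compare two scanpaths fixation-by-fixation: the fraction of
    order-matched fixation pairs within `radius` pixels of each other."""
    n = min(len(scan1), len(scan2))
    d = np.linalg.norm(np.asarray(scan1[:n], float)
                       - np.asarray(scan2[:n], float), axis=1)
    return (d <= radius).mean()

# Toy example: the repeat viewing recapitulates the first and last fixations
# (as in the abstract) but wanders in the middle.
v1 = [(100, 100), (400, 300), (600, 150), (250, 500), (120, 110)]
v2 = [(110, 95),  (800, 600), (50, 700),  (900, 90),  (130, 100)]
print(binned_scanpath_similarity(v1, v2))  # 0.4: only first and last match
```

    Computing this score separately for early, middle, and late fixation bins would reproduce the kind of analysis that revealed repetition of initial and final, but not middle, fixations.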

  5. Do Multielement Visual Tracking and Visual Search Draw Continuously on the Same Visual Attention Resources?

    ERIC Educational Resources Information Center

    Alvarez, George A.; Horowitz, Todd S.; Arsenio, Helga C.; DiMase, Jennifer S.; Wolfe, Jeremy M.

    2005-01-01

    Multielement visual tracking and visual search are 2 tasks that are held to require visual-spatial attention. The authors used the attentional operating characteristic (AOC) method to determine whether both tasks draw continuously on the same attentional resource (i.e., whether the 2 tasks are mutually exclusive). The authors found that observers…

  6. Visual Templates in Pattern Generalization Activity

    ERIC Educational Resources Information Center

    Rivera, F. D.

    2010-01-01

    In this research article, I present evidence of the existence of visual templates in pattern generalization activity. Such templates initially emerged from a 3-week design-driven classroom teaching experiment on pattern generalization involving linear figural patterns and were assessed for existence in a clinical interview that was conducted four…

  7. Pattern Search Methods for Linearly Constrained Minimization

    NASA Technical Reports Server (NTRS)

    Lewis, Robert Michael; Torczon, Virginia

    1998-01-01

    We extend pattern search methods to linearly constrained minimization. We develop a general class of feasible point pattern search algorithms and prove global convergence to a Karush-Kuhn-Tucker point. As in the case of unconstrained minimization, pattern search methods for linearly constrained problems accomplish this without explicit recourse to the gradient or the directional derivative. Key to the analysis of the algorithms is the way in which the local search patterns conform to the geometry of the boundary of the feasible region.

  8. On the Local Convergence of Pattern Search

    NASA Technical Reports Server (NTRS)

    Dolan, Elizabeth D.; Lewis, Robert Michael; Torczon, Virginia; Bushnell, Dennis M. (Technical Monitor)

    2000-01-01

    We examine the local convergence properties of pattern search methods, complementing the previously established global convergence properties for this class of algorithms. We show that the step-length control parameter which appears in the definition of pattern search algorithms provides a reliable asymptotic measure of first-order stationarity. This gives an analytical justification for a traditional stopping criterion for pattern search methods. Using this measure of first-order stationarity, we analyze the behavior of pattern search in the neighborhood of an isolated local minimizer. We show that a recognizable subsequence converges r-linearly to the minimizer.

  9. Interpreting chest radiographs without visual search.

    PubMed

    Kundel, H L; Nodine, C F

    1975-09-01

    Ten radiologists were shown a series of 10 normal and 10 abnormal chest films under two viewing conditions: a 0.2-second flash and unlimited viewing time. The results were compared in terms of verbal content, diagnostic accuracy, and level of confidence. The overall accuracy was surprisingly high (70% true positives) considering that no search was possible. Performance improved as expected with free search (97% true positives). These data support the hypothesis that visual search begins with a global response that establishes content, detects gross deviations from normal, and organizes subsequent foveal checking fixations to conduct a detailed examination of ambiguities. The total search strategy then consists of an ordered sequence of interspersed global and checking fixations. PMID:125436

  10. Persistence in eye movement during visual search

    NASA Astrophysics Data System (ADS)

    Amor, Tatiana A.; Reis, Saulo D. S.; Campos, Daniel; Herrmann, Hans J.; Andrade, José S.

    2016-02-01

    As any cognitive task, visual search involves a number of underlying processes that cannot be directly observed and measured. In this way, the movement of the eyes certainly represents the most explicit and closest connection we can get to the inner mechanisms governing this cognitive activity. Here we show that the process of eye movement during visual search, consisting of sequences of fixations intercalated by saccades, exhibits distinctive persistent behaviors. Initially, by focusing on saccadic directions and intersaccadic angles, we disclose that the probability distributions of these measures show a clear preference of participants towards a reading-like mechanism (geometrical persistence), whose features and potential advantages for searching/foraging are discussed. We then perform a Multifractal Detrended Fluctuation Analysis (MF-DFA) over the time series of jump magnitudes in the eye trajectory and find that it exhibits a typical multifractal behavior arising from the sequential combination of saccades and fixations. By inspecting the time series composed of only fixational movements, our results reveal instead a monofractal behavior with a Hurst exponent that indicates the presence of long-range power-law positive correlations (statistical persistence). We expect that our methodological approach can be adopted as a way to understand persistence and strategy-planning during visual search.
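
    The monofractal DFA step can be sketched as follows, using white noise as input so the expected exponent is about 0.5 (the series and scales here are illustrative; the study applies this to fixational eye-movement series, where an exponent above 0.5 signals persistence):

```python
import numpy as np

def dfa(x, scales):
    """Detrended Fluctuation Analysis: the slope of log F(s) vs log s
    estimates the Hurst exponent of the increment series x."""
    y = np.cumsum(x - np.mean(x))          # integrated profile
    F = []
    for s in scales:
        n = len(y) // s
        segs = y[: n * s].reshape(n, s)
        t = np.arange(s)
        # Remove a linear trend from each window, then take the RMS residual.
        res = []
        for seg in segs:
            coef = np.polyfit(t, seg, 1)
            res.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        F.append(np.sqrt(np.mean(res)))
    h, _ = np.polyfit(np.log(scales), np.log(F), 1)
    return h

rng = np.random.default_rng(2)
white = rng.standard_normal(4096)          # uncorrelated steps: exponent ~ 0.5
scales = [16, 32, 64, 128, 256]
print(round(dfa(white, scales), 2))        # close to 0.5
```

    The multifractal variant (MF-DFA) used in the paper generalizes the RMS averaging step over a range of moment orders q, rather than the single q = 2 shown here.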

  11. Persistence in eye movement during visual search.

    PubMed

    Amor, Tatiana A; Reis, Saulo D S; Campos, Daniel; Herrmann, Hans J; Andrade, José S

    2016-02-11

    As any cognitive task, visual search involves a number of underlying processes that cannot be directly observed and measured. In this way, the movement of the eyes certainly represents the most explicit and closest connection we can get to the inner mechanisms governing this cognitive activity. Here we show that the process of eye movement during visual search, consisting of sequences of fixations intercalated by saccades, exhibits distinctive persistent behaviors. Initially, by focusing on saccadic directions and intersaccadic angles, we disclose that the probability distributions of these measures show a clear preference of participants towards a reading-like mechanism (geometrical persistence), whose features and potential advantages for searching/foraging are discussed. We then perform a Multifractal Detrended Fluctuation Analysis (MF-DFA) over the time series of jump magnitudes in the eye trajectory and find that it exhibits a typical multifractal behavior arising from the sequential combination of saccades and fixations. By inspecting the time series composed of only fixational movements, our results reveal instead a monofractal behavior with a Hurst exponent that indicates the presence of long-range power-law positive correlations (statistical persistence). We expect that our methodological approach can be adopted as a way to understand persistence and strategy-planning during visual search.

  12. Persistence in eye movement during visual search

    PubMed Central

    Amor, Tatiana A.; Reis, Saulo D. S.; Campos, Daniel; Herrmann, Hans J.; Andrade, José S.

    2016-01-01

    As any cognitive task, visual search involves a number of underlying processes that cannot be directly observed and measured. In this way, the movement of the eyes certainly represents the most explicit and closest connection we can get to the inner mechanisms governing this cognitive activity. Here we show that the process of eye movement during visual search, consisting of sequences of fixations intercalated by saccades, exhibits distinctive persistent behaviors. Initially, by focusing on saccadic directions and intersaccadic angles, we disclose that the probability distributions of these measures show a clear preference of participants towards a reading-like mechanism (geometrical persistence), whose features and potential advantages for searching/foraging are discussed. We then perform a Multifractal Detrended Fluctuation Analysis (MF-DFA) over the time series of jump magnitudes in the eye trajectory and find that it exhibits a typical multifractal behavior arising from the sequential combination of saccades and fixations. By inspecting the time series composed of only fixational movements, our results reveal instead a monofractal behavior with a Hurst exponent that indicates the presence of long-range power-law positive correlations (statistical persistence). We expect that our methodological approach can be adopted as a way to understand persistence and strategy-planning during visual search. PMID:26864680

  13. Persistence in eye movement during visual search.

    PubMed

    Amor, Tatiana A; Reis, Saulo D S; Campos, Daniel; Herrmann, Hans J; Andrade, José S

    2016-01-01

    As any cognitive task, visual search involves a number of underlying processes that cannot be directly observed and measured. In this way, the movement of the eyes certainly represents the most explicit and closest connection we can get to the inner mechanisms governing this cognitive activity. Here we show that the process of eye movement during visual search, consisting of sequences of fixations intercalated by saccades, exhibits distinctive persistent behaviors. Initially, by focusing on saccadic directions and intersaccadic angles, we disclose that the probability distributions of these measures show a clear preference of participants towards a reading-like mechanism (geometrical persistence), whose features and potential advantages for searching/foraging are discussed. We then perform a Multifractal Detrended Fluctuation Analysis (MF-DFA) over the time series of jump magnitudes in the eye trajectory and find that it exhibits a typical multifractal behavior arising from the sequential combination of saccades and fixations. By inspecting the time series composed of only fixational movements, our results reveal instead a monofractal behavior with a Hurst exponent that indicates the presence of long-range power-law positive correlations (statistical persistence). We expect that our methodological approach can be adopted as a way to understand persistence and strategy-planning during visual search. PMID:26864680

  14. Similarity relations in visual search predict rapid visual categorization

    PubMed Central

    Mohan, Krithika; Arun, S. P.

    2012-01-01

    How do we perform rapid visual categorization? It is widely thought that categorization involves evaluating the similarity of an object to other category items, but the underlying features and similarity relations remain unknown. Here, we hypothesized that categorization performance is based on perceived similarity relations between items within and outside the category. To this end, we measured the categorization performance of human subjects on three diverse visual categories (animals, vehicles, and tools) and across three hierarchical levels (superordinate, basic, and subordinate levels among animals). For the same subjects, we measured their perceived pair-wise similarities between objects using a visual search task. Regardless of category and hierarchical level, we found that the time taken to categorize an object could be predicted using its similarity to members within and outside its category. We were able to account for several classic categorization phenomena, such as (a) the longer times required to reject category membership; (b) the longer times to categorize atypical objects; and (c) differences in performance across tasks and across hierarchical levels. These categorization times were also accounted for by a model that extracts coarse structure from an image. The striking agreement observed between categorization and visual search suggests that these two disparate tasks depend on a shared coarse object representation. PMID:23092947

  15. Guided Text Search Using Adaptive Visual Analytics

    SciTech Connect

    Steed, Chad A; Symons, Christopher T; Senter, James K; DeNap, Frank A

    2012-10-01

    This research demonstrates the promise of augmenting interactive visualizations with semi-supervised machine learning techniques to improve the discovery of significant associations and insights in the search and analysis of textual information. More specifically, we have developed a system called Gryffin that hosts a unique collection of techniques that facilitate individualized investigative search pertaining to an ever-changing set of analytical questions over an indexed collection of open-source documents related to critical national infrastructure. The Gryffin client hosts dynamic displays of the search results via focus+context record listings, temporal timelines, term-frequency views, and multiple coordinate views. Furthermore, as the analyst interacts with the display, the interactions are recorded and used to label the search records. These labeled records are then used to drive semi-supervised machine learning algorithms that re-rank the unlabeled search records such that potentially relevant records are moved to the top of the record listing. Gryffin is described in the context of the daily tasks encountered at the US Department of Homeland Security's Fusion Center, with whom we are collaborating in its development. The resulting system is capable of addressing the analysts' information overload that can be directly attributed to the deluge of information that must be addressed in the search and investigative analysis of textual information.

  16. Race Guides Attention in Visual Search

    PubMed Central

    Otten, Marte

    2016-01-01

    It is known that faces are rapidly and even unconsciously categorized into social groups (black vs. white, male vs. female). Here, I test whether preferences for specific social groups guide attention, using a visual search paradigm. In Experiment 1 participants searched displays of neutral faces for an angry or frightened target face. Black target faces were detected more efficiently than white targets, indicating that black faces attracted more attention. Experiment 2 showed that attention differences between black and white faces were correlated with individual differences in automatic race preference. In Experiment 3, using happy target faces, the attentional preference for black over white faces was eliminated. Taken together, these results suggest that automatic preferences for social groups guide attention to individuals from negatively valenced groups, when people are searching for a negative emotion such as anger or fear. PMID:26900957

  17. Cardiac and Respiratory Responses During Visual Search in Nonretarded Children and Retarded Adolescents

    ERIC Educational Resources Information Center

    Porges, Stephen W.; Humphrey, Mary M.

    1977-01-01

    The relationship between physiological response patterns and mental competence was investigated by evaluating heart rate and respiratory responses during a sustained visual-search task in 29 nonretarded grade school children and 16 retarded adolescents. (Author)

  18. Configural learning in contextual cuing of visual search.

    PubMed

    Beesley, Tom; Vadillo, Miguel A; Pearson, Daniel; Shanks, David R

    2016-08-01

    Two experiments were conducted to explore the role of configural representations in contextual cuing of visual search. Repeating patterns of distractors (contexts) were trained incidentally as predictive of the target location. Training participants with repeating contexts of consistent configurations led to stronger contextual cuing than when participants were trained with contexts of inconsistent configurations. Computational simulations with an elemental associative learning model of contextual cuing demonstrated that purely elemental representations could not account for the results. However, a configural model of associative learning was able to simulate the ordinal pattern of data. (PsycINFO Database Record) PMID:26913779

  19. Adding a visualization feature to web search engines: it's time.

    PubMed

    Wong, Pak Chung

    2008-01-01

    It's widely recognized that all Web search engines today are almost identical in presentation layout and behavior. In fact, the same presentation approach has been applied to depicting search engine results pages (SERPs) since the first Web search engine launched in 1993. In this Visualization Viewpoints article, I propose to add a visualization feature to Web search engines and suggest that the new addition can improve search engines' performance and capabilities, which in turn lead to better Web search technology.

  20. Adaptation and visual search in mammographic images.

    PubMed

    Kompaniez-Dunigan, Elysse; Abbey, Craig K; Boone, John M; Webster, Michael A

    2015-05-01

    Radiologists face the visually challenging task of detecting suspicious features within the complex and noisy backgrounds characteristic of medical images. We used a search task to examine whether the salience of target features in x-ray mammograms could be enhanced by prior adaptation to the spatial structure of the images. The observers were not radiologists, and thus had no diagnostic training with the images. The stimuli were randomly selected sections from normal mammograms previously classified with BIRADS Density scores of "fatty" versus "dense," corresponding to differences in the relative quantities of fat versus fibroglandular tissue. These categories reflect conspicuous differences in visual texture, with dense tissue being more likely to obscure lesion detection. The targets were simulated masses corresponding to bright Gaussian spots, superimposed by adding the luminance to the background. A single target was randomly added to each image, with contrast varied over five levels so that they varied from difficult to easy to detect. Reaction times were measured for detecting the target location, before or after adapting to a gray field or to random sequences of a different set of dense or fatty images. Observers were faster at detecting the targets in either dense or fatty images after adapting to the specific background type (dense or fatty) that they were searching within. Thus, the adaptation led to a facilitation of search performance that was selective for the background texture. Our results are consistent with the hypothesis that adaptation allows observers to more effectively suppress the specific structure of the background, thereby heightening visual salience and search efficiency.

  1. GSDT: An integrative model of visual search.

    PubMed

    Schwarz, Wolf; Miller, Jeff

    2016-10-01

    We present a new quantitative process model (GSDT) of visual search that seeks to integrate various processing mechanisms suggested by previous studies within a single, coherent conceptual frame. It incorporates and combines 4 distinct model components: guidance (G), a serial (S) item inspection process, diffusion (D) modeling of individual item inspections, and a strategic termination (T) rule. For this model, we derive explicit closed-form results for response probability and mean search time (reaction time [RT]) as a function of display size and target presence/absence. The fit of the model is compared in detail to data from 4 visual search experiments in which the effects of target/distractor discriminability and of target prevalence on performance (present/absent display size functions for mean RT and error rate) are studied. We describe how GSDT accounts for various detailed features of our results such as the probabilities of hits and correct rejections and their mean RTs; we also apply the model to explain further aspects of the data, such as RT variance and mean miss RT. (PsycINFO Database Record)

  2. Reader error, object recognition, and visual search

    NASA Astrophysics Data System (ADS)

    Kundel, Harold L.

    2004-05-01

    Small abnormalities such as hairline fractures, lung nodules and breast tumors are missed by competent radiologists with sufficient frequency to make them a matter of concern to the medical community, not only because they lead to litigation but also because they delay patient care. It is very easy to attribute misses to incompetence or inattention. To do so may be placing an unjustified stigma on the radiologists involved and may allow other radiologists to continue a false optimism that it can never happen to them. This review presents some of the fundamentals of visual system function that are relevant to understanding the search for and the recognition of small targets embedded in complicated but meaningful backgrounds like chests and mammograms. It presents a model for visual search that postulates a pre-attentive global analysis of the retinal image followed by foveal checking fixations and eventually discovery scanning. The model is used to differentiate errors of search, recognition and decision making. The implications for computer-aided diagnosis and for functional workstation design are discussed.

  3. Fractal analysis of radiologists' visual scanning pattern in screening mammography

    NASA Astrophysics Data System (ADS)

    Alamudun, Folami T.; Yoon, Hong-Jun; Hudson, Kathy; Morin-Ducote, Garnetta; Tourassi, Georgia

    2015-03-01

    Several researchers have investigated radiologists' visual scanning patterns with respect to features such as total time examining a case, time to initially hit true lesions, number of hits, etc. The purpose of this study was to examine the complexity of the radiologists' visual scanning pattern when viewing 4-view mammographic cases, as they typically do in clinical practice. Gaze data were collected from 10 readers (3 breast imaging experts and 7 radiology residents) while reviewing 100 screening mammograms (24 normal, 26 benign, 50 malignant). The radiologists' scanpaths across the 4 mammographic views were mapped to a single 2-D image plane. Then, fractal analysis was applied on the composite 4-view scanpaths. For each case, the complexity of each radiologist's scanpath was measured using fractal dimension estimated with the box counting method. The association between the fractal dimension of the radiologists' visual scanpath, case pathology, case density, and radiologist experience was evaluated using fixed effects ANOVA. ANOVA showed that the complexity of the radiologists' visual search pattern in screening mammography is dependent on case specific attributes (breast parenchyma density and case pathology) as well as on reader attributes, namely experience level. Visual scanning patterns are significantly different for benign and malignant cases than for normal cases. There is also substantial inter-observer variability which cannot be explained only by experience level.
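    The box-counting estimate this record uses has a compact form: cover the point set with grids of shrinking cell size and regress the log box count on the log inverse cell size. A minimal Python sketch (illustrative only, not the study's implementation; it assumes gaze samples arrive as (x, y) pairs):

    ```python
    import numpy as np

    def box_counting_dimension(points, sizes=(1/2, 1/4, 1/8, 1/16, 1/32)):
        """Estimate the fractal (box-counting) dimension of a set of 2-D
        points, e.g. gaze samples from a scanpath, after normalizing them
        to the unit square."""
        pts = np.asarray(points, dtype=float)
        pts = (pts - pts.min(0)) / np.ptp(pts, 0)  # normalize to [0, 1]^2
        counts = []
        for eps in sizes:
            # count the distinct grid cells of side eps that contain a point
            boxes = np.unique(np.floor(pts / eps).astype(int), axis=0)
            counts.append(len(boxes))
        # dimension is the slope of log N(eps) versus log(1/eps)
        slope, _ = np.polyfit(np.log(1 / np.asarray(sizes)), np.log(counts), 1)
        return slope

    # a dense space-filling point cloud should have dimension near 2
    rng = np.random.default_rng(1)
    d = box_counting_dimension(rng.random((20000, 2)))
    ```

    A scanpath confined to a few preferred regions would yield a dimension well below 2, which is one way complexity differences between readers can be quantified.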

  4. Visual search in Dementia with Lewy Bodies and Alzheimer's disease.

    PubMed

    Landy, Kelly M; Salmon, David P; Filoteo, J Vincent; Heindel, William C; Galasko, Douglas; Hamilton, Joanne M

    2015-12-01

    Visual search is an aspect of visual cognition that may be more impaired in Dementia with Lewy Bodies (DLB) than Alzheimer's disease (AD). To assess this possibility, the present study compared patients with DLB (n = 17), AD (n = 30), or Parkinson's disease with dementia (PDD; n = 10) to non-demented patients with PD (n = 18) and normal control (NC) participants (n = 13) on single-feature and feature-conjunction visual search tasks. In the single-feature task participants had to determine if a target stimulus (i.e., a black dot) was present among 3, 6, or 12 distractor stimuli (i.e., white dots) that differed in one salient feature. In the feature-conjunction task participants had to determine if a target stimulus (i.e., a black circle) was present among 3, 6, or 12 distractor stimuli (i.e., white dots and black squares) that shared either of the target's salient features. Results showed that target detection time in the single-feature task was not influenced by the number of distractors (i.e., "pop-out" effect) for any of the groups. In contrast, target detection time increased as the number of distractors increased in the feature-conjunction task for all groups, but more so for patients with AD or DLB than for any of the other groups. These results suggest that the single-feature search "pop-out" effect is preserved in DLB and AD patients, whereas ability to perform the feature-conjunction search is impaired. This pattern of preserved single-feature search with impaired feature-conjunction search is consistent with a deficit in feature binding that may be mediated by abnormalities in networks involving the dorsal occipito-parietal cortex.

  5. Modeling spatial patterns in the visual cortex

    NASA Astrophysics Data System (ADS)

    Daza C., Yudy Carolina; Tauro, Carolina B.; Tamarit, Francisco A.; Gleiser, Pablo M.

    2014-10-01

    We propose a model for the formation of patterns in the visual cortex. The dynamical units of the model are Kuramoto phase oscillators that interact through a complex network structure embedded in two dimensions. In this way the strength of the interactions takes into account the geographical distance between units. We show that for different parameters, clustered or striped patterns emerge. Using the structure factor as an order parameter we are able to quantitatively characterize these patterns and present a phase diagram. Finally, we show that the model is able to reproduce patterns with cardinal preference, as observed in ferrets.

  6. Transition between different search patterns in human online search behavior

    NASA Astrophysics Data System (ADS)

    Wang, Xiangwen; Pleimling, Michel

    2015-03-01

    We investigate the human online search behavior by analyzing data sets from different search engines. Based on the comparison of the results from several click-through data sets collected in different years, we observe a transition of the search pattern from a Lévy-flight-like behavior to a Brownian-motion-type behavior as the search engine algorithms improve. This result is consistent with findings in animal foraging processes. A more detailed analysis shows that the human search patterns are more complex than simple Lévy flights or Brownian motions. Notable differences between the behaviors of different individuals can be observed in many quantities. This work is in part supported by the US National Science Foundation through Grant DMR-1205309.
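    Distinguishing Lévy-flight-like from Brownian-like behavior typically rests on the tail of the step-length distribution: a heavy power-law tail with exponent mu in (1, 3] is the Lévy signature, while Brownian motion has light-tailed steps. A hedged sketch of the standard maximum-likelihood exponent estimator (Clauset-style; not the authors' analysis pipeline):

    ```python
    import numpy as np

    def powerlaw_mle_exponent(steps, xmin):
        """Maximum-likelihood estimate of mu for step lengths assumed to
        follow p(l) ~ l^(-mu) for l >= xmin."""
        tail = np.asarray([s for s in steps if s >= xmin], dtype=float)
        return 1.0 + len(tail) / np.sum(np.log(tail / xmin))

    # synthetic steps drawn from a Pareto tail with mu = 2 (Levy regime)
    # via inverse-CDF sampling: l = xmin * u^(-1/(mu-1))
    rng = np.random.default_rng(2)
    levy_steps = (1.0 - rng.random(50000)) ** (-1.0 / (2.0 - 1.0))
    mu = powerlaw_mle_exponent(levy_steps, xmin=1.0)
    ```

    Applied to empirical click-through step lengths, an estimated mu near 3 or a poor power-law fit would point toward the Brownian-like regime the abstract describes.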

  7. Activation of phonological competitors in visual search.

    PubMed

    Görges, Frauke; Oppermann, Frank; Jescheniak, Jörg D; Schriefers, Herbert

    2013-06-01

    Recently, Meyer, Belke, Telling and Humphreys (2007) reported that competitor objects with homophonous names (e.g., boy) interfere with identifying a target object (e.g., buoy) in a visual search task, suggesting that an object name's phonology becomes automatically activated even in situations in which participants do not have the intention to speak. The present study explored the generality of this finding by testing a different phonological relation (rhyming object names, e.g., cat-hat) and by varying details of the experimental procedure. Experiment 1 followed the procedure by Meyer et al. Participants were familiarized with target and competitor objects and their names at the beginning of the experiment and the picture of the target object was presented prior to the search display on each trial. In Experiment 2, the picture of the target object presented prior to the search display was replaced by its name. In Experiment 3, participants were not familiarized with target and competitor objects and their names at the beginning of the experiment. A small interference effect from phonologically related competitors was obtained in Experiments 1 and 2 but not in Experiment 3, suggesting that the way the relevant objects are introduced to participants affects the chances of observing an effect from phonologically related competitors. Implications for the information flow in the conceptual-lexical system are discussed.

  8. Recognition of Facially Expressed Emotions and Visual Search Strategies in Adults with Asperger Syndrome

    ERIC Educational Resources Information Center

    Falkmer, Marita; Bjallmark, Anna; Larsson, Matilda; Falkmer, Torbjorn

    2011-01-01

    Can the disadvantages persons with Asperger syndrome frequently experience with reading facially expressed emotions be attributed to a different visual perception, affecting their scanning patterns? Visual search strategies, particularly regarding the importance of information from the eye area, and the ability to recognise facially expressed…

  9. The neural basis of attentional control in visual search.

    PubMed

    Eimer, Martin

    2014-10-01

    How do we localise and identify target objects among distractors in visual scenes? The role of selective attention in visual search has been studied for decades and the outlines of a general processing model are now beginning to emerge. Attentional processes unfold in real time and this review describes four temporally and functionally dissociable stages of attention in visual search (preparation, guidance, selection, and identification). Insights from neuroscientific studies of visual attention suggest that our ability to find target objects in visual search is based on processes that operate at each of these four stages, in close association with working memory and recurrent feedback mechanisms.

  10. Influence of visual angle on pattern reversal visual evoked potentials

    PubMed Central

    Kothari, Ruchi; Singh, Smita; Singh, Ramji; Shukla, A. K.; Bokariya, Pradeep

    2014-01-01

    Purpose: The aim of this study was to find whether the visual evoked potential (VEP) latencies and amplitude are altered with different visual angles in healthy adult volunteers or not and to determine the visual angle which is the optimum and most appropriate among a wide range of check sizes for the reliable interpretation of pattern reversal VEPs (PRVEPs). Materials and Methods: The present study was conducted on 40 healthy volunteers. The subjects were divided into two groups. One group consisted of 20 individuals (nine males and 11 females) in the age range of 25-57 years and they were exposed to checks subtending a visual angle of 90, 120, and 180 minutes of arc. Another group comprised of 20 individuals (10 males and 10 females) in the age range of 36-60 years and they were subjected to checks subtending a visual angle of 15, 30, and 120 minutes of arc. The stimulus configuration comprised of the transient pattern reversal method in which a black and white checker board is generated (full field) on a VEP Monitor by an Evoked Potential Recorder (RMS EMG. EPMARK II). The statistical analysis was done by One Way Analysis of Variance (ANOVA) using EPI INFO 6. Results: In Group I, the maximum (max.) P100 latency of 98.8 ± 4.7 msec and the max. P100 amplitude of 10.05 ± 3.1 μV was obtained with checks of 90 minutes. In Group II, the max. P100 latency of 105.19 ± 4.75 msec as well as the max. P100 amplitude of 8.23 ± 3.30 μV was obtained with 15 minutes. The min. P100 latency in both the groups was obtained with checks of 120 minutes while the min. P100 amplitude was obtained with 180 minutes. A statistically significant difference was derived between means of P100 latency for 15 and 30 minutes with reference to its value for 120 minutes and between the mean value of P100 amplitude for 120 minutes and that of 90 and 180 minutes. Conclusion: Altering the size of stimulus (visual angle) has an effect on the PRVEP parameters. Our study found that the visual angle of 120 minutes of arc was the optimum among the check sizes tested.

  11. Visual search behaviour during laparoscopic cadaveric procedures

    NASA Astrophysics Data System (ADS)

    Dong, Leng; Chen, Yan; Gale, Alastair G.; Rees, Benjamin; Maxwell-Armstrong, Charles

    2014-03-01

    Laparoscopic surgery provides a very complex example of medical image interpretation. The task entails: visually examining a display that portrays the laparoscopic procedure from a varying viewpoint; eye-hand coordination; complex 3D interpretation of the 2D display imagery; efficient and safe usage of appropriate surgical tools, as well as other factors. Training in laparoscopic surgery typically entails practice using surgical simulators. Another approach is to use cadavers. Viewing previously recorded laparoscopic operations is also a viable additional approach and to examine this a study was undertaken to determine what differences exist between where surgeons look during actual operations and where they look when simply viewing the same pre-recorded operations. It was hypothesised that there would be differences related to the different experimental conditions; however the relative nature of such differences was unknown. The visual search behaviour of two experienced surgeons was recorded as they performed three types of laparoscopic operations on a cadaver. The operations were also digitally recorded. Subsequently they viewed the recording of their operations, again whilst their eye movements were monitored. Differences were found in various eye movement parameters when the two surgeons performed the operations and where they looked when they simply watched the recordings of the operations. It is argued that this reflects the different perceptual motor skills pertinent to the different situations. The relevance of this for surgical training is explored.

  12. Investigating attention in complex visual search.

    PubMed

    Kovach, Christopher K; Adolphs, Ralph

    2015-11-01

    How we attend to and search for objects in the real world is influenced by a host of low-level and higher-level factors whose interactions are poorly understood. The vast majority of studies approach this issue by experimentally controlling one or two factors in isolation, often under conditions with limited ecological validity. We present a comprehensive regression framework, together with a MATLAB-implemented toolbox, which allows concurrent factors influencing saccade targeting to be more clearly distinguished. Based on the idea of gaze selection as a point process, the framework allows each putative factor to be modeled as a covariate in a generalized linear model, and its significance to be evaluated with model-based hypothesis testing. We apply this framework to visual search for faces as an example and demonstrate its power in detecting effects of eccentricity, inversion, task congruency, emotional expression, and serial fixation order on the targeting of gaze. Among other things, we find evidence for multiple goal-related and goal-independent processes that operate with distinct visuotopy and time course.
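    Modeling gaze selection as a point process with covariates, as this record describes, amounts to fitting a generalized linear model in which each candidate target contributes covariates such as eccentricity or serial fixation order. A minimal logistic-GLM sketch in Python (numpy only; the covariate names, data, and fitting routine are illustrative assumptions, not the published toolbox):

    ```python
    import numpy as np

    def fit_logistic_glm(X, y, lr=0.1, n_iter=2000):
        """Fit a logistic GLM by batch gradient ascent: models the
        probability that a candidate location is selected as the next
        saccade target, given a matrix of covariates X (one row per
        candidate) and binary outcomes y (selected or not)."""
        Xb = np.hstack([np.ones((len(X), 1)), X])  # add intercept column
        w = np.zeros(Xb.shape[1])
        for _ in range(n_iter):
            p = 1.0 / (1.0 + np.exp(-Xb @ w))      # predicted probabilities
            w += lr * Xb.T @ (y - p) / len(y)       # log-likelihood gradient
        return w

    # synthetic data: selection probability rises with the first covariate
    # (true intercept 0.5, true weight 1.5)
    rng = np.random.default_rng(3)
    X = rng.standard_normal((2000, 2))
    y = (rng.random(2000) < 1 / (1 + np.exp(-(0.5 + 1.5 * X[:, 0])))).astype(float)
    w = fit_logistic_glm(X, y)
    ```

    In practice each covariate's fitted weight is then tested against zero (e.g. with a likelihood-ratio test) to decide whether that factor significantly influences saccade targeting.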

  13. The pattern of visual deficits in amblyopia.

    PubMed

    McKee, Suzanne P; Levi, Dennis M; Movshon, J Anthony

    2003-01-01

    Amblyopia is usually defined as a deficit in optotype (Snellen) acuity with no detectable organic cause. We asked whether this visual abnormality is completely characterized by the deficit in optotype acuity, or whether it has distinct forms that are determined by the conditions associated with the acuity loss, such as strabismus or anisometropia. To decide this issue, we measured optotype acuity, Vernier acuity, grating acuity, contrast sensitivity, and binocular function in 427 adults with amblyopia or with risk factors for amblyopia and in a comparison group of 68 normal observers. Optotype acuity accounts for much of the variance in Vernier and grating acuity, and somewhat less of the variance in contrast sensitivity. Nevertheless, there are differences in the patterns of visual loss among the clinically defined categories, particularly between strabismic and anisometropic categories. We used factor analysis to create a succinct representation of our measurement space. This analysis revealed two main dimensions of variation in the visual performance of our abnormal sample, one related to the visual acuity measures (optotype, Vernier, and grating acuity) and the other related to the contrast sensitivity measures (Pelli-Robson and edge contrast sensitivity). Representing our data in this space reveals distinctive distributions of visual loss for different patient categories, and suggests that two consequences of the associated conditions--reduced resolution and loss of binocularity--determine the pattern of visual deficit. Non-binocular observers with mild-to-moderate acuity deficits have, on average, better monocular contrast sensitivity than do binocular observers with the same acuity loss. Despite their superior contrast sensitivity, non-binocular observers typically have poorer optotype acuity and Vernier acuity, at a given level of grating acuity, than those with residual binocular function. PMID:12875634

  14. Signatures of chaos in animal search patterns

    PubMed Central

    Reynolds, Andy M; Bartumeus, Frederic; Kölzsch, Andrea; van de Koppel, Johan

    2016-01-01

    One key objective of the emerging discipline of movement ecology is to link animal movement patterns to underlying biological processes, including those operating at the neurobiological level. Nonetheless, little is known about the physiological basis of animal movement patterns, and the underlying search behaviour. Here we demonstrate the hallmarks of chaotic dynamics in the movement patterns of mud snails (Hydrobia ulvae) moving in controlled experimental conditions, observed in the temporal dynamics of turning behaviour. Chaotic temporal dynamics are known to occur in pacemaker neurons in molluscs, but there have been no studies reporting on whether chaotic properties are manifest in the movement patterns of molluscs. Our results suggest that complex search patterns, like the Lévy walks made by mud snails, can have their mechanistic origins in chaotic neuronal processes. This possibility calls for new research on the coupling between neurobiology and motor properties. PMID:27019951

  15. Visual pattern memory without shape recognition.

    PubMed

    Dill, M; Heisenberg, M

    1995-08-29

    Visual pattern memory of Drosophila melanogaster at the torque meter is investigated by a new learning paradigm called novelty choice. In this procedure the fly is first exposed to four identical patterns presented at the wall of the cylinder surrounding it. In the test it has the choice between two pairs of patterns, a new one and one the same as the training pattern. Flies show a lasting preference for the new figure. Figures presented during training are not recognized as familiar in the test, if displayed (i) at a different height, (ii) at a different size, (iii) rotated or (iv) after contrast reversal. No special invariance mechanisms are found. A pixel-by-pixel matching process is sufficient to explain the observed data. Minor transfer effects can be explained if a graded similarity function is assumed. Recognition depends upon the overlap between the stored template and the actual image. The similarity function is best described by the ratio of the area of overlap to the area of the actual image. The similarity function is independent of the geometrical properties of the employed figures. Visual pattern memory at this basic level does not require the analysis of shape. PMID:8668723
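    The similarity rule reported in this record, the ratio of the area of overlap to the area of the actual image, is simple to state concretely. Below is a short sketch for binary pattern masks (an illustration of the stated rule, not the authors' code):

    ```python
    import numpy as np

    def template_similarity(template, image):
        """Similarity between a stored template and an actual image,
        defined as the ratio of the area of overlap to the area of the
        actual image, for binary (boolean) pattern masks."""
        t = np.asarray(template, dtype=bool)
        i = np.asarray(image, dtype=bool)
        return np.logical_and(t, i).sum() / i.sum()

    # a 4x4 square pattern on an 8x8 grid, and a copy shifted by 2 columns
    a = np.zeros((8, 8), dtype=bool)
    a[2:6, 2:6] = True
    b = np.roll(a, 2, axis=1)
    s_same = template_similarity(a, a)    # identical patterns -> 1.0
    s_shift = template_similarity(a, b)   # half the area overlaps -> 0.5
    ```

    This pixel-by-pixel overlap measure captures the abstract's finding that recognition fails under shifts, rescaling, rotation, or contrast reversal, since each of these transformations reduces the overlap ratio.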

  16. The target effect: visual memory for unnamed search targets.

    PubMed

    Thomas, Mark D; Williams, Carrick C

    2014-01-01

    Search targets are typically remembered much better than other objects even when they are viewed for less time. However, targets have two advantages that other objects in search displays do not have: They are identified categorically before the search, and finding them represents the goal of the search task. The current research investigated the contributions of both of these types of information to the long-term visual memory representations of search targets. Participants completed either a predefined search or a unique-object search in which targets were not defined with specific categorical labels before searching. Subsequent memory results indicated that search target memory was better than distractor memory even following ambiguously defined searches and when the distractors were viewed significantly longer. Superior target memory appears to result from a qualitatively different representation from those of distractor objects, indicating that decision processes influence visual memory.

  17. Eye Movements Reveal How Task Difficulty Moulds Visual Search

    ERIC Educational Resources Information Center

    Young, Angela H.; Hulleman, Johan

    2013-01-01

    In two experiments we investigated the relationship between eye movements and performance in visual search tasks of varying difficulty. Experiment 1 provided evidence that a single process is used for search among static and moving items. Moreover, we estimated the functional visual field (FVF) from the gaze coordinates and found that its size…

  18. Global Statistical Learning in a Visual Search Task

    ERIC Educational Resources Information Center

    Jones, John L.; Kaschak, Michael P.

    2012-01-01

    Locating a target in a visual search task is facilitated when the target location is repeated on successive trials. Global statistical properties also influence visual search, but have often been confounded with local regularities (i.e., target location repetition). In two experiments, target locations were not repeated for four successive trials,…

  19. The Time Course of Similarity Effects in Visual Search

    ERIC Educational Resources Information Center

    Guest, Duncan; Lamberts, Koen

    2011-01-01

    It is well established that visual search becomes harder when the similarity between target and distractors is increased and the similarity between distractors is decreased. However, in models of visual search, similarity is typically treated as a static, time-invariant property of the relation between objects. Data from other perceptual tasks…

  20. Spatial Constraints on Learning in Visual Search: Modeling Contextual Cuing

    ERIC Educational Resources Information Center

    Brady, Timothy F.; Chun, Marvin M.

    2007-01-01

    Predictive visual context facilitates visual search, a benefit termed contextual cuing (M. M. Chun & Y. Jiang, 1998). In the original task, search arrays were repeated across blocks such that the spatial configuration (context) of all of the distractors in a display predicted an embedded target location. The authors modeled existing results using…

  1. Orthographic versus semantic matching in visual search for words within lists.

    PubMed

    Léger, Laure; Rouet, Jean-François; Ros, Christine; Vibert, Nicolas

    2012-03-01

    An eye-tracking experiment was performed to assess the influence of orthographic and semantic distractor words on visual search for words within lists. The target word (e.g., "raven") was either shown to participants before the search (literal search) or defined by its semantic category (e.g., "bird", categorical search). In both cases, the type of words included in the list affected visual search times and eye movement patterns. In the literal condition, the presence of orthographic distractors sharing initial and final letters with the target word strongly increased search times. Indeed, the orthographic distractors attracted participants' gaze and were fixated for longer times than other words in the list. The presence of semantic distractors related to the target word also increased search times, which suggests that significant automatic semantic processing of nontarget words took place. In the categorical condition, semantic distractors were expected to have a greater impact on the search task. As expected, the presence in the list of semantic associates of the target word led to target selection errors. However, semantic distractors no longer significantly increased search times, whereas orthographic distractors still did. Hence, the visual characteristics of nontarget words can be strong predictors of the efficiency of visual search even when the exact target word is unknown. The respective impacts of orthographic and semantic distractors depended more on the characteristics of lists than on the nature of the search task.

  2. Words, Shape, Visual Search and Visual Working Memory in 3-Year-Old Children

    ERIC Educational Resources Information Center

    Vales, Catarina; Smith, Linda B.

    2015-01-01

    Do words cue children's visual attention, and if so, what are the relevant mechanisms? Across four experiments, 3-year-old children (N = 163) were tested in visual search tasks in which targets were cued with only a visual preview versus a visual preview and a spoken name. The experiments were designed to determine whether labels facilitated…

  3. A working memory account of refixations in visual search.

    PubMed

    Shen, Kelly; McIntosh, Anthony R; Ryan, Jennifer D

    2014-12-19

    We tested the hypothesis that active exploration of the visual environment is mediated not only by visual attention but also by visual working memory (VWM) by examining performance in both a visual search and a change detection task. Subjects rarely fixated previously examined distracters during visual search, suggesting that they successfully retained those items. Change detection accuracy decreased with increasing set size, suggesting that subjects had a limited VWM capacity. Crucially, performance in the change detection task predicted visual search efficiency: Higher VWM capacity was associated with faster and more accurate responses as well as lower probabilities of refixation. We found no temporal delay for return saccades, suggesting that active vision is primarily mediated by VWM rather than by a separate attentional disengagement mechanism commonly associated with the inhibition-of-return (IOR) effect. Taken together with evidence that visual attention, VWM, and the oculomotor system involve overlapping neural networks, these data suggest that there exists a general capacity for cognitive processing.
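    A standard way to turn change-detection accuracy into a VWM capacity estimate is Cowan's K. Whether this study used exactly this formula is an assumption on our part, but the conventional computation is a one-liner:

```python
def cowan_k(set_size, hit_rate, correct_rejection_rate):
    """Cowan's K: estimated number of items held in visual working memory,
    computed from hit and correct-rejection rates in a change-detection task."""
    return set_size * (hit_rate + correct_rejection_rate - 1.0)

# E.g. with 8 items, 75% hits and 80% correct rejections:
k = cowan_k(8, 0.75, 0.80)  # about 4.4 items
```

    A per-subject capacity estimated this way could then be correlated with search speed and refixation probability, as in the analysis the abstract describes.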

  4. Visual Search Deficits Are Independent of Magnocellular Deficits in Dyslexia

    ERIC Educational Resources Information Center

    Wright, Craig M.; Conlon, Elizabeth G.; Dyck, Murray

    2012-01-01

    The aim of this study was to investigate the theory that visual magnocellular deficits seen in groups with dyslexia are linked to reading via the mechanisms of visual attention. Visual attention was measured with a serial search task and magnocellular function with a coherent motion task. A large group of children with dyslexia (n = 70) had slower…

  5. Competing Distractors Facilitate Visual Search in Heterogeneous Displays.

    PubMed

    Kong, Garry; Alais, David; Van der Burg, Erik

    2016-01-01

    In the present study, we examine how observers search among complex displays. Participants were asked to search for a big red horizontal line among 119 distractor lines of various sizes, orientations and colours, leading to 36 different feature combinations. To understand how people search in such a heterogeneous display, we evolved the search display by using a genetic algorithm (Experiment 1). The best displays (i.e., displays corresponding to the fastest reaction times) were selected and combined to create new, evolved displays. Search times declined over generations. Results show that items sharing the same colour and orientation as the target disappeared over generations, implying they interfered with search, but items sharing the same colour that differed by 12.5° in orientation only interfered if they were also the same size. Furthermore, and inconsistent with most dominant visual search theories, we found that non-red horizontal distractors increased over generations, indicating that these distractors facilitated visual search while participants were searching for a big red horizontally oriented target. In Experiments 2 and 3, we replicated these results using conventional, factorial experiments. Interestingly, in Experiment 4, we found that this facilitation effect was only present when the displays were very heterogeneous. While current models of visual search are able to successfully describe search in homogeneous displays, our results challenge the ability of these models to describe visual search in heterogeneous environments. PMID:27508298
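    The display-evolution procedure can be sketched with a toy genetic algorithm. Everything below is illustrative, not the authors' code: real fitness comes from observers' reaction times, which we replace with a made-up cost that penalizes target-similar items (red, near-horizontal, big).

```python
import random

random.seed(0)

# Each display is a list of (colour, orientation, size) items.
COLOURS = ["red", "green", "blue"]
ORIENTATIONS = [0, 12.5, 90]  # degrees from horizontal
SIZES = ["small", "big"]

def random_item():
    return (random.choice(COLOURS), random.choice(ORIENTATIONS), random.choice(SIZES))

def random_display(n_items=20):
    return [random_item() for _ in range(n_items)]

def simulated_search_time(display):
    # Stand-in for a measured reaction time: displays with many
    # target-similar items (red, not vertical, especially big) are "slower".
    cost = 0.0
    for colour, ori, size in display:
        if colour == "red" and ori != 90:
            cost += 2.0 if size == "big" else 1.0
    return 300.0 + 10.0 * cost  # ms, purely illustrative

def evolve(population, generations=30, keep=10):
    for _ in range(generations):
        population.sort(key=simulated_search_time)
        parents = population[:keep]              # fastest displays survive
        children = []
        while len(children) < len(population) - keep:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(a))
            child = a[:cut] + b[cut:]            # single-point crossover
            if random.random() < 0.1:            # occasional mutation
                child[random.randrange(len(child))] = random_item()
            children.append(child)
        population = parents + children
    return min(population, key=simulated_search_time)

best = evolve([random_display() for _ in range(40)])
```

    Under this toy fitness, items resembling the target are selected out of the surviving displays over generations, mirroring the interference result the abstract reports.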

  8. Usage Patterns of an Online Search System.

    ERIC Educational Resources Information Center

    Cooper, Michael D.

    1983-01-01

    Examines usage patterns of ELHILL retrieval program of National Library of Medicine's MEDLARS system. Based on sample of 6,759 searches, the study analyzes frequency of various commands, classifies messages issued by system, and investigates searcher error rates. Suggestions for redesigning program and query language are noted. Seven references…

  9. Online search patterns: NLM CATLINE database.

    PubMed

    Tolle, J E; Hah, S

    1985-03-01

    In this article the authors present their analysis of the online search patterns within user searching sessions of the National Library of Medicine ELHILL system and examine the user search patterns on the CATLINE database. In addition to the CATLINE analysis, a comparison is made using data previously analyzed on the MEDLINE database for the same time period, thus offering an opportunity to compare the performance parameters of different databases within the same information system. Data collection covers eight weeks and includes 441,282 transactions and over 11,067 user sessions, which accounted for 1680 hours of system usage. The descriptive analysis contained in this report can assist system design activities, while the predictive power of the transaction log analysis methodology may assist the development of real-time aids. PMID:10300015

  10. Visual similarity is stronger than semantic similarity in guiding visual search for numbers.

    PubMed

    Godwin, Hayward J; Hout, Michael C; Menneer, Tamaryn

    2014-06-01

    Using a visual search task, we explored how behavior is influenced by both visual and semantic information. We recorded participants' eye movements as they searched for a single target number in a search array of single-digit numbers (0-9). We examined the probability of fixating the various distractors as a function of two key dimensions: the visual similarity between the target and each distractor, and the semantic similarity (i.e., the numerical distance) between the target and each distractor. Visual similarity estimates were obtained using multidimensional scaling based on the independent observer similarity ratings. A linear mixed-effects model demonstrated that both visual and semantic similarity influenced the probability that distractors would be fixated. However, the visual similarity effect was substantially larger than the semantic similarity effect. We close by discussing the potential value of using this novel methodological approach and the implications for both simple and complex visual search displays.
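    The study fit a linear mixed-effects model over MDS-derived similarities; as a much simpler stand-in, a toy logistic model with invented similarity ratings shows how a larger weight on visual similarity produces the reported dominance. All numbers and weights below are hypothetical, not estimates from the paper.

```python
import math

TARGET = 8
# Hypothetical visual-similarity ratings of each digit to the target "8"
# (in the study these came from MDS over observer similarity ratings).
VISUAL_SIM = {0: 0.7, 1: 0.1, 2: 0.3, 3: 0.6, 4: 0.2,
              5: 0.4, 6: 0.6, 7: 0.2, 8: 1.0, 9: 0.7}

def semantic_sim(d, target=TARGET):
    # Semantic similarity operationalized as inverse numerical distance.
    return 1.0 / (1.0 + abs(d - target))

def p_fixate(d, w_visual=3.0, w_semantic=0.8, bias=-2.5):
    # Logistic combination; the larger weight on visual similarity mirrors
    # the finding that the visual effect dominates the semantic one.
    z = bias + w_visual * VISUAL_SIM[d] + w_semantic * semantic_sim(d)
    return 1.0 / (1.0 + math.exp(-z))
```

    With these weights, a visually similar distractor like 0 attracts a much higher fixation probability than a visually dissimilar one like 1, even though both are numerically far from 8.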

  11. Vocal Dynamic Visual Pattern for voice characterization

    NASA Astrophysics Data System (ADS)

    Dajer, M. E.; Andrade, F. A. S.; Montagnoli, A. N.; Pereira, J. C.; Tsuji, D. H.

    2011-12-01

    Voice assessment requires simple and painless exams. Modern technologies provide the necessary resources for voice signal processing. Techniques based on nonlinear dynamics seem to assess the complexity of voice more accurately than other methods. Vocal dynamic visual pattern (VDVP) is based on nonlinear methods and provides qualitative and quantitative information. Here we characterize healthy and Reinke's edema voices by means of perturbation measures and VDVP analysis. VDVP and jitter show different results for both groups, while amplitude perturbation shows no difference. We suggest that VDVP analysis improves and complements the evaluation methods available to clinicians.

  12. Reinforcing saccadic amplitude variability in a visual search task.

    PubMed

    Paeye, Céline; Madelain, Laurent

    2014-11-20

    Human observers often adopt rigid scanning strategies in visual search tasks, even though this may lead to suboptimal performance. Here we ask whether specific levels of saccadic amplitude variability may be induced in a visual search task using reinforcement learning. We designed a new gaze-contingent visual foraging task in which finding a target among distractors was made contingent upon specific saccadic amplitudes. When saccades of rare amplitudes led to displaying the target, the U values (measuring uncertainty) increased by 54.89% on average. They decreased by 41.21% when reinforcing frequent amplitudes. In a noncontingent control group no consistent change in variability occurred. A second experiment revealed that this learning transferred to conventional visual search trials. These results provide experimental support for the importance of reinforcement learning for saccadic amplitude variability in visual search.
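    In the operant-variability literature, the U value is typically the normalized Shannon entropy of the response distribution; assuming that convention applies here, it can be computed over binned saccadic amplitudes as:

```python
import math

def u_value(counts):
    """Normalized Shannon entropy (U statistic) of a response distribution,
    here over binned saccadic amplitudes. 1.0 = maximally variable."""
    if len(counts) <= 1:
        return 0.0
    total = sum(counts)
    probs = [c / total for c in counts if c > 0]
    entropy = -sum(p * math.log(p) for p in probs)
    return entropy / math.log(len(counts))

u_uniform = u_value([10, 10, 10, 10, 10])  # amplitudes spread evenly -> 1.0
u_rigid = u_value([46, 1, 1, 1, 1])        # one dominant amplitude -> low
```

    A uniform spread of amplitudes gives U = 1 and a stereotyped distribution gives U near 0, so reinforcing rare amplitudes pushes U upward, as in the reported 54.89% increase.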

  13. A novel visualization model for web search results.

    PubMed

    Nguyen, Tien N; Zhang, Jin

    2006-01-01

    This paper presents an interactive visualization system, named WebSearchViz, for visualizing Web search results and facilitating users' navigation and exploration. The metaphor in our model is the solar system with its planets and asteroids revolving around the sun. Location, color, movement, and spatial distance of objects in the visual space are used to represent the semantic relationships between a query and relevant Web pages. Especially, the movement of objects and their speeds add a new dimension to the visual space, illustrating the degree of relevance among a query and Web search results in the context of users' subjects of interest. By interacting with the visual space, users are able to observe the semantic relevance between a query and a resulting Web page with respect to their subjects of interest, context information, or concern. Users' subjects of interest can be dynamically changed, redefined, added, or deleted from the visual space.

  14. The Roles of Non-retinotopic Motions in Visual Search.

    PubMed

    Nakayama, Ryohei; Motoyoshi, Isamu; Sato, Takao

    2016-01-01

    In visual search, a moving target among stationary distracters is detected more rapidly and more efficiently than a static target among moving distracters. Here we examined how this search asymmetry depends on motion signals from three distinct coordinate systems: retinal, relative, and spatiotopic (head/body-centered). Our search display consisted of a target element, distracter elements, and a fixation point tracked by observers. Each element was composed of a spatial carrier grating windowed by a Gaussian envelope, and the motions of carriers, windows, and fixation were manipulated independently and used in various combinations to decouple the respective effects of motion coordinate systems on visual search asymmetry. We found that retinal motion hardly contributes to reaction times and search slopes but that relative and spatiotopic motions contribute to them substantially. Results highlight the important roles of non-retinotopic motions for guiding observer attention in visual search.

  16. The Serial Process in Visual Search

    ERIC Educational Resources Information Center

    Gilden, David L.; Thornton, Thomas L.; Marusich, Laura R.

    2010-01-01

    The conditions for serial search are described. A multiple target search methodology (Thornton & Gilden, 2007) is used to home in on the simplest target/distractor contrast that effectively mandates a serial scheduling of attentional resources. It is found that serial search is required when (a) targets and distractors are mirror twins, and (b)…

  17. Aurally aided visual search performance in a dynamic environment

    NASA Astrophysics Data System (ADS)

    McIntire, John P.; Havig, Paul R.; Watamaniuk, Scott N. J.; Gilkey, Robert H.

    2008-04-01

    Previous research has repeatedly shown that people can find a visual target significantly faster if spatial (3D) auditory displays direct attention to the corresponding spatial location. However, previous research has only examined searches for static (non-moving) targets in static visual environments. Since motion has been shown to affect visual acuity, auditory acuity, and visual search performance, it is important to characterize aurally-aided search performance in environments that contain dynamic (moving) stimuli. In the present study, visual search performance in both static and dynamic environments is investigated with and without 3D auditory cues. Eight participants searched for a single visual target hidden among 15 distracting stimuli. In the baseline audio condition, no auditory cues were provided. In the 3D audio condition, a virtual 3D sound cue originated from the same spatial location as the target. In the static search condition, the target and distractors did not move. In the dynamic search condition, all stimuli moved on various trajectories at 10 deg/s. The results showed a clear benefit of 3D audio that was present in both static and dynamic environments, suggesting that spatial auditory displays continue to be an attractive option for a variety of aircraft, motor vehicle, and command & control applications.

  18. There's Waldo! A Normalization Model of Visual Search Predicts Single-Trial Human Fixations in an Object Search Task.

    PubMed

    Miconi, Thomas; Groomes, Laura; Kreiman, Gabriel

    2016-07-01

    When searching for an object in a scene, how does the brain decide where to look next? Visual search theories suggest the existence of a global "priority map" that integrates bottom-up visual information with top-down, target-specific signals. We propose a mechanistic model of visual search that is consistent with recent neurophysiological evidence, can localize targets in cluttered images, and predicts single-trial behavior in a search task. This model posits that a high-level retinotopic area selective for shape features receives global, target-specific modulation and implements local normalization through divisive inhibition. The normalization step is critical to prevent highly salient bottom-up features from monopolizing attention. The resulting activity pattern constitutes a priority map that tracks the correlation between local input and target features. The maximum of this priority map is selected as the locus of attention. The visual input is then spatially enhanced around the selected location, allowing object-selective visual areas to determine whether the target is present at this location. This model can localize objects both in array images and when objects are pasted in natural scenes. The model can also predict single-trial human fixations, including those in error and target-absent trials, in a search task involving complex objects.
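    The divisive-inhibition step the abstract describes can be sketched as follows (our construction, not the authors' code): each location's target-driven response is divided by pooled activity from its neighbourhood, so a dense cluster of salient items cannot monopolize the priority map.

```python
def priority_map(drive, sigma=1.0):
    """drive: 2-D list of bottom-up responses already weighted by target
    similarity. Each cell is divided by its pooled 3x3 neighbourhood."""
    rows, cols = len(drive), len(drive[0])
    out = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            pool = 0.0  # divisive-inhibition pool over the 3x3 neighbourhood
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < rows and 0 <= cc < cols:
                        pool += drive[rr][cc]
            out[r][c] = drive[r][c] / (sigma + pool)
    return out

def locus_of_attention(drive):
    pm = priority_map(drive)
    return max(((r, c) for r in range(len(pm)) for c in range(len(pm[0]))),
               key=lambda rc: pm[rc[0]][rc[1]])

drive = [[5, 5, 0, 0],
         [5, 5, 0, 0],
         [0, 0, 0, 3]]
# The crowded high-activity cluster suppresses itself through the pool;
# the isolated, moderately matching item at (2, 3) wins.
winner = locus_of_attention(drive)
```

    Without the normalization, the maximum raw drive (the cluster) would always capture attention; with it, isolated target-like items can compete.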

  19. Asynchronous parallel pattern search for nonlinear optimization

    SciTech Connect

    P. D. Hough; T. G. Kolda; V. J. Torczon

    2000-01-01

    Parallel pattern search (PPS) can be quite useful for engineering optimization problems characterized by a small number of variables (say 10--50) and by expensive objective function evaluations such as complex simulations that take from minutes to hours to run. However, PPS, which was originally designed for execution on homogeneous and tightly-coupled parallel machines, is not well suited to the more heterogeneous, loosely-coupled, and even fault-prone parallel systems available today. Specifically, PPS is hindered by synchronization penalties and cannot recover in the event of a failure. The authors introduce a new asynchronous and fault-tolerant parallel pattern search (APPS) method and demonstrate its effectiveness on simple test problems as well as on some engineering optimization problems.
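    The synchronous core of pattern search is compact; here is a minimal serial sketch (coordinate polling with step halving). The asynchronous, fault-tolerant machinery of APPS, which farms poll evaluations out to workers and accepts the first improvement to arrive rather than waiting for all of them, is omitted.

```python
def pattern_search(f, x0, step=1.0, tol=1e-6, max_iter=1000):
    """Coordinate (compass) pattern search: poll +/- step along each axis;
    move to the first improving point, or halve the step if none improves."""
    x = list(x0)
    fx = f(x)
    for _ in range(max_iter):
        if step < tol:
            break
        improved = False
        for i in range(len(x)):
            for delta in (step, -step):
                y = list(x)
                y[i] += delta
                fy = f(y)
                if fy < fx:
                    x, fx, improved = y, fy, True
                    break
            if improved:
                break
        if not improved:
            step *= 0.5  # no poll point improved: refine the mesh
    return x, fx

# Minimize a simple quadratic with its minimum at (3, -1); no derivatives used.
sol, val = pattern_search(lambda v: (v[0] - 3) ** 2 + (v[1] + 1) ** 2, [0.0, 0.0])
```

    Because only function values are compared, the method tolerates expensive black-box objectives, which is exactly the setting the abstract targets.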

  20. Visual Search in Learning Disabled and Hyperactive Boys.

    ERIC Educational Resources Information Center

    McIntyre, Curtis W.; And Others

    1981-01-01

    To test the suggestion that a deficit in selective attention is characteristic of learning disabled (LD) but not hyperactive (H) children, 72 students (12 LDH, 12 H, and 36 normal Ss) were timed on visual search tasks. (Author)

  1. Do People Take Stimulus Correlations into Account in Visual Search?

    PubMed

    Bhardwaj, Manisha; van den Berg, Ronald; Ma, Wei Ji; Josić, Krešimir

    2016-01-01

    In laboratory visual search experiments, distractors are often statistically independent of each other. However, stimuli in more naturalistic settings are often correlated and rarely independent. Here, we examine whether human observers take stimulus correlations into account in orientation target detection. We find that they do, although probably not optimally. In particular, it seems that low distractor correlations are overestimated. Our results might contribute to bridging the gap between artificial and natural visual search tasks.

  2. 'Where' and 'what' in visual search.

    PubMed

    Atkinson, J; Braddick, O J

    1989-01-01

    A line segment target can be detected among distractors of a different orientation by a fast 'preattentive' process. One view is that this depends on detection of a 'feature gradient', which enables subjects to locate where the target is without necessarily identifying what it is. An alternative view is that a target can be identified as distinctive in a particular 'feature map' without subjects knowing where it is in that map. Experiments are reported in which briefly exposed arrays of line segments were followed by a pattern mask, and the threshold stimulus-mask interval determined for three tasks: 'what'--subjects reported whether the target was vertical or horizontal among oblique distractors; 'coarse where'--subjects reported whether the target was in the upper or lower half of the array; 'fine where'--subjects reported whether or not the target was in a set of four particular array positions. The threshold interval was significantly lower for the 'coarse where' than for the 'what' task, indicating that, even though localization in this task depends on the target's orientation difference, this localization is possible without absolute identification of target orientation. However, for the 'fine where' task, intervals as long as or longer than those for the 'what' task were required. It appears either that different localization processes work at different levels of resolution, or that a single localization process, independent of identification, can increase its resolution at the expense of processing speed. These possibilities are discussed in terms of distinct neural representations of the visual field and fixed or variable localization processes acting upon them. PMID:2771603

  3. Visual pattern recognition in Drosophila is invariant for retinal position.

    PubMed

    Tang, Shiming; Wolf, Reinhard; Xu, Shuping; Heisenberg, Martin

    2004-08-13

    Vision relies on constancy mechanisms. Yet, these are little understood, because they are difficult to investigate in freely moving organisms. One such mechanism, translation invariance, enables organisms to recognize visual patterns independent of the region of their visual field where they had originally seen them. Tethered flies (Drosophila melanogaster) in a flight simulator can recognize visual patterns. Because their eyes are fixed in space and patterns can be displayed in defined parts of their visual field, they can be tested for translation invariance. Here, we show that flies recognize patterns at retinal positions where the patterns had not been presented before. PMID:15310908

  4. A neural network for visual pattern recognition

    SciTech Connect

    Fukushima, K.

    1988-03-01

    A modeling approach, which is a synthetic approach using neural network models, continues to gain importance. In the modeling approach, the authors study how to interconnect neurons to synthesize a brain model, which is a network with the same functions and abilities as the brain. The relationship between modeling neural networks and neurophysiology resembles that between theoretical physics and experimental physics. Modeling takes a synthetic approach, while neurophysiology or psychology takes an analytical approach. Modeling neural networks is useful in explaining the brain and also in engineering applications. It brings the results of neurophysiological and psychological research to engineering applications in the most direct way possible. This article discusses a neural network model thus obtained, a model with selective attention in visual pattern recognition.

  5. Priming and the guidance by visual and categorical templates in visual search.

    PubMed

    Wilschut, Anna; Theeuwes, Jan; Olivers, Christian N L

    2014-01-01

    Visual search is thought to be guided by top-down templates that are held in visual working memory. Previous studies have shown that a search-guiding template can be rapidly and strongly implemented from a visual cue, whereas templates are less effective when based on categorical cues. Direct visual priming from cue to target may underlie this difference. In two experiments we first asked observers to remember two possible target colors. A postcue then indicated which of the two would be the relevant color. The task was to locate a briefly presented and masked target of the cued color among irrelevant distractor items. Experiment 1 showed that overall search accuracy improved more rapidly on the basis of a direct visual postcue that carried the target color, compared to a neutral postcue that pointed to the memorized color. However, selectivity toward the target feature, i.e., the extent to which observers searched selectively among items of the cued vs. uncued color, was found to be relatively unaffected by the presence of the visual signal. In Experiment 2 we compared search that was based on either visual or categorical information, but now controlled for direct visual priming. This resulted in no differences in either overall performance or selectivity. Altogether the results suggest that perceptual processing of visual search targets is facilitated by priming from visual cues, whereas attentional selectivity is enhanced by a working memory template that can be formed from both visual and categorical input. Furthermore, if the priming is controlled for, categorical- and visual-based templates similarly enhance search guidance.

  6. Visual Search by Children with and without ADHD

    ERIC Educational Resources Information Center

    Mullane, Jennifer C.; Klein, Raymond M.

    2008-01-01

    Objective: To summarize the literature that has employed visual search tasks to assess automatic and effortful selective visual attention in children with and without ADHD. Method: Seven studies with a combined sample of 180 children with ADHD (M age = 10.9) and 193 normally developing children (M age = 10.8) are located. Results: Using a…

  7. Conjunctive Visual Search in Individuals with and without Mental Retardation

    ERIC Educational Resources Information Center

    Carlin, Michael; Chrysler, Christina; Sullivan, Kate

    2007-01-01

    A comprehensive understanding of the basic visual and cognitive abilities of individuals with mental retardation is critical for understanding the basis of mental retardation and for the design of remediation programs. We assessed visual search abilities in individuals with mild mental retardation and in MA- and CA-matched comparison groups. Our…

  8. Changing Perspective: Zooming in and out during Visual Search

    ERIC Educational Resources Information Center

    Solman, Grayden J. F.; Cheyne, J. Allan; Smilek, Daniel

    2013-01-01

    Laboratory studies of visual search are generally conducted in contexts with a static observer vantage point, constrained by a fixation cross or a headrest. In contrast, in many naturalistic search settings, observers freely adjust their vantage point by physically moving through space. In two experiments, we evaluate behavior during free vantage…

  9. Why Is Visual Search Superior in Autism Spectrum Disorder?

    ERIC Educational Resources Information Center

    Joseph, Robert M.; Keehn, Brandon; Connolly, Christine; Wolfe, Jeremy M.; Horowitz, Todd S.

    2009-01-01

    This study investigated the possibility that enhanced memory for rejected distractor locations underlies the superior visual search skills exhibited by individuals with autism spectrum disorder (ASD). We compared the performance of 21 children with ASD and 21 age- and IQ-matched typically developing (TD) children in a standard static search task…

  10. Identifying a "default" visual search mode with operant conditioning.

    PubMed

    Kawahara, Jun-ichiro

    2010-09-01

    The presence of a singleton in a task-irrelevant domain can impair visual search. This impairment, known as attentional capture, depends on the attentional set of participants. When narrowly searching for a specific feature (the feature search mode), only matching stimuli capture attention. When searching broadly (the singleton detection mode), any oddball captures attention. The present study examined which strategy represents the "default" mode using an operant conditioning approach in which participants were trained, in the absence of explicit instructions, to search for a target in an ambiguous context in which one of two modes was available. The results revealed that participants behaviorally adopted singleton detection as the default mode but reported using the feature search mode. Conscious strategies did not eliminate capture. These results challenge the view that a conscious set always modulates capture, suggesting that the visual system tends to rely on stimulus salience to deploy attention.

  11. Visual Pattern Analysis in Histopathology Images Using Bag of Features

    NASA Astrophysics Data System (ADS)

    Cruz-Roa, Angel; Caicedo, Juan C.; González, Fabio A.

This paper presents a framework for analysing visual patterns in a collection of medical images in a two-stage procedure. First, a set of representative visual patterns is obtained from the image collection by constructing a visual-word dictionary under a bag-of-features approach. Second, the relationships between visual patterns and semantic concepts in the image collection are analysed. The most important visual patterns for each semantic concept are identified using correlation analysis, and a matrix visualization of the structure and organization of the image collection is generated using cluster analysis. The experimental evaluation was conducted on a histopathology image collection, and the results showed clear relationships between visual patterns and semantic concepts that are, in addition, easy to interpret and understand.
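The bag-of-features pipeline described above (cluster local patch descriptors into a visual-word dictionary, then represent each image as a histogram of word occurrences) can be sketched in a few lines. This is a minimal numpy illustration of the general idea, not the authors' pipeline: the dictionary size, descriptor dimensionality, and function names are all assumptions.

```python
import numpy as np

def build_dictionary(patches, k=8, iters=20, seed=0):
    """Toy k-means clustering to build a visual-word dictionary
    from patch descriptors (one row per patch)."""
    rng = np.random.default_rng(seed)
    centers = patches[rng.choice(len(patches), size=k, replace=False)]
    for _ in range(iters):
        # assign each patch to its nearest center (Euclidean distance)
        d = np.linalg.norm(patches[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # move each center to the mean of its assigned patches
        for j in range(k):
            if np.any(labels == j):
                centers[j] = patches[labels == j].mean(axis=0)
    return centers

def bof_histogram(patches, centers):
    """Represent an image as a histogram of visual-word occurrences."""
    d = np.linalg.norm(patches[:, None, :] - centers[None, :, :], axis=2)
    labels = d.argmin(axis=1)
    return np.bincount(labels, minlength=len(centers))

# toy data: 200 random 16-dimensional patch descriptors
rng = np.random.default_rng(1)
patches = rng.normal(size=(200, 16))
centers = build_dictionary(patches)
hist = bof_histogram(patches, centers)
print(hist.sum())  # every patch is assigned to exactly one visual word
```

In a real system the patches would be SIFT-like descriptors extracted from histopathology images, and the per-image histograms would then feed the correlation and cluster analyses the abstract describes.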

  12. Visual Search Asymmetry with Uncertain Targets

    ERIC Educational Resources Information Center

    Saiki, Jun; Koike, Takahiko; Takahashi, Kohske; Inoue, Tomoko

    2005-01-01

    The underlying mechanism of search asymmetry is still unknown. Many computational models postulate top-down selection of target-defining features as a crucial factor. This feature selection account implies, and other theories implicitly assume, that predefined target identity is necessary for search asymmetry. The authors tested the validity of…

  13. A common discrete resource for visual working memory and visual search.

    PubMed

    Anderson, David E; Vogel, Edward K; Awh, Edward

    2013-06-01

    Visual search, a dominant paradigm within attention research, requires observers to rapidly identify targets hidden among distractors. Major models of search presume that working memory (WM) provides the on-line work space for evaluating potential targets. According to this hypothesis, individuals with higher WM capacity should search more efficiently, because they should be able to apprehend a larger number of search elements at a time. Nevertheless, no compelling evidence of such a correlation has emerged, and this null result challenges a growing consensus that there is strong overlap between the neural processes that limit internal storage and those that limit external selection. Here, we provide multiple demonstrations of robust correlations between WM capacity and search efficiency, and we document a key boundary condition for observing this link. Finally, examination of a neural measure of visual selection capacity (the N2pc) demonstrates that visual search and WM storage are constrained by a common discrete resource.

  14. Searching for inhibition of return in visual search: a review.

    PubMed

    Wang, Zhiguo; Klein, Raymond M

    2010-01-01

Studies that followed the covert and overt probe-following-search paradigms of Klein (1988) and Klein and MacInnes (1999) to explore inhibition of return (IOR) in search are analyzed and evaluated. An IOR effect is consistently observed when the search display (or scene) remains visible during probing, and the effect lasts for at least 1,000 ms or about four previously inspected items (or locations). These findings support the idea that IOR facilitates foraging by discouraging orienting toward previously examined regions and items. Methodological and conceptual issues are discussed, leading to methodological recommendations and suggestions for further experimentation.

  15. Individual Differences and Metacognitive Knowledge of Visual Search Strategy

    PubMed Central

    Proulx, Michael J.

    2011-01-01

A crucial ability for an organism is to orient toward important objects and to ignore temporarily irrelevant objects. Attention provides the perceptual selectivity necessary to filter an overwhelming input of sensory information and allow efficient object detection. Although much research has examined visual search and the ‘template’ of attentional set that allows for target detection, the behavior of individual subjects often reveals the limits of experimental control of attention. Few studies have examined important aspects such as individual differences and metacognitive strategies. The present study analyzes the data from two visual search experiments for a conjunctively defined target (Proulx, 2007). The data revealed attentional capture blindness, individual differences in search strategies, and a significant rate of metacognitive errors in participants' assessments of the strategies they employed. These results highlight a challenge for visual attention studies: accounting for individual differences in search behavior and distractibility, and for participants who do not (or are unable to) follow instructions. PMID:22066030

  16. Coloured Overlays, Visual Discomfort, Visual Search and Classroom Reading.

    ERIC Educational Resources Information Center

    Tyrrell, Ruth; And Others

    1995-01-01

    States that 46 children aged 12-16 were shown a page of meaningless text covered with plastic overlays, including 7 that were various colors and 1 that was clear. Explains that each child selected the overlay that made reading easiest. Notes that children who read with a colored overlay complained of visual discomfort when they read without the…

  17. A neural basis for real-world visual search in human occipitotemporal cortex.

    PubMed

    Peelen, Marius V; Kastner, Sabine

    2011-07-19

    Mammals are highly skilled in rapidly detecting objects in cluttered natural environments, a skill necessary for survival. What are the neural mechanisms mediating detection of objects in natural scenes? Here, we use human brain imaging to address the role of top-down preparatory processes in the detection of familiar object categories in real-world environments. Brain activity was measured while participants were preparing to detect highly variable depictions of people or cars in natural scenes that were new to the participants. The preparation to detect objects of the target category, in the absence of visual input, evoked activity patterns in visual cortex that resembled the response to actual exemplars of the target category. Importantly, the selectivity of multivoxel preparatory activity patterns in object-selective cortex (OSC) predicted target detection performance. By contrast, preparatory activity in early visual cortex (V1) was negatively related to search performance. Additional behavioral results suggested that the dissociation between OSC and V1 reflected the use of different search strategies, linking OSC preparatory activity to relatively abstract search preparation and V1 to more specific imagery-like preparation. Finally, whole-brain searchlight analyses revealed that, in addition to OSC, response patterns in medial prefrontal cortex distinguished the target categories based on the search cues alone, suggesting that this region may constitute a top-down source of preparatory activity observed in visual cortex. These results indicate that in naturalistic situations, when the precise visual characteristics of target objects are not known in advance, preparatory activity at higher levels of the visual hierarchy selectively mediates visual search.

  18. A summary statistic representation in peripheral vision explains visual search.

    PubMed

    Rosenholtz, Ruth; Huang, Jie; Raj, Alvin; Balas, Benjamin J; Ilie, Livia

    2012-04-20

    Vision is an active process: We repeatedly move our eyes to seek out objects of interest and explore our environment. Visual search experiments capture aspects of this process, by having subjects look for a target within a background of distractors. Search speed often correlates with target-distractor discriminability; search is faster when the target and distractors look quite different. However, there are notable exceptions. A given discriminability can yield efficient searches (where the target seems to "pop-out") as well as inefficient ones (where additional distractors make search significantly slower and more difficult). Search is often more difficult when finding the target requires distinguishing a particular configuration or conjunction of features. Search asymmetries abound. These puzzling results have fueled three decades of theoretical and experimental studies. We argue that the key issue in search is the processing of image patches in the periphery, where visual representation is characterized by summary statistics computed over a sizable pooling region. By quantifying these statistics, we predict a set of classic search results, as well as peripheral discriminability of crowded patches such as those found in search displays.
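The core computation proposed above is that peripheral vision represents an image patch only by summary statistics pooled over a sizable region. The full model uses a rich set of texture statistics over overlapping, eccentricity-scaled pooling regions; the sketch below only illustrates the pooling idea, using plain mean and standard deviation over non-overlapping regions, with invented sizes and names.

```python
import numpy as np

def pooled_summary_stats(feature_map, pool=4):
    """Summarize non-overlapping pooling regions of a feature map by
    their mean and standard deviation (first- and second-order stats)."""
    h, w = feature_map.shape
    stats = []
    for i in range(0, h - pool + 1, pool):
        for j in range(0, w - pool + 1, pool):
            region = feature_map[i:i + pool, j:j + pool]
            stats.append((region.mean(), region.std()))
    return np.array(stats)

rng = np.random.default_rng(0)
fmap = rng.normal(size=(16, 16))    # toy "peripheral" feature responses
stats = pooled_summary_stats(fmap)  # a 4x4 grid of pooling regions
print(stats.shape)                  # one (mean, std) pair per region
```

On this account, two patches whose pooled statistics are similar are hard to tell apart in the periphery, which is what links crowding to the efficient/inefficient search results the abstract describes.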

  19. Visual Search in a Multi-Element Asynchronous Dynamic (MAD) World

    ERIC Educational Resources Information Center

    Kunar, Melina A.; Watson, Derrick G.

    2011-01-01

    In visual search tasks participants search for a target among distractors in strictly controlled displays. We show that visual search principles observed in these tasks do not necessarily apply in more ecologically valid search conditions, using dynamic and complex displays. A multi-element asynchronous dynamic (MAD) visual search was developed in…

  20. The effect of face inversion on the detection of emotional faces in visual search.

    PubMed

    Savage, Ruth A; Lipp, Ottmar V

    2015-01-01

    Past literature has indicated that face inversion either attenuates emotion detection advantages in visual search, implying that detection of emotional expressions requires holistic face processing, or has no effect, implying that expression detection is feature based. Across six experiments that utilised different task designs, ranging from simple (single poser, single set size) to complex (multiple posers, multiple set sizes), and stimuli drawn from different databases, significant emotion detection advantages were found for both upright and inverted faces. Consistent with past research, the nature of the expression detection advantage, anger superiority (Experiments 1, 2 and 6) or happiness superiority (Experiments 3, 4 and 5), differed across stimulus sets. However both patterns were evident for upright and inverted faces. These results indicate that face inversion does not interfere with visual search for emotional expressions, and suggest that expression detection in visual search may rely on feature-based mechanisms.

  1. Visual Search and the Collapse of Categorization

    ERIC Educational Resources Information Center

    Smith, J. David; Redford, Joshua S.; Gent, Lauren C.; Washburn, David A.

    2005-01-01

    Categorization researchers typically present single objects to be categorized. But real-world categorization often involves object recognition within complex scenes. It is unknown how the processes of categorization stand up to visual complexity or why they fail facing it. The authors filled this research gap by blending the categorization and…

  2. Parallel and Serial Processes in Visual Search

    ERIC Educational Resources Information Center

    Thornton, Thomas L.; Gilden, David L.

    2007-01-01

    A long-standing issue in the study of how people acquire visual information centers around the scheduling and deployment of attentional resources: Is the process serial, or is it parallel? A substantial empirical effort has been dedicated to resolving this issue. However, the results remain largely inconclusive because the methodologies that have…

  3. The impact of expert visual guidance on trainee visual search strategy, visual attention and motor skills.

    PubMed

    Leff, Daniel R; James, David R C; Orihuela-Espina, Felipe; Kwok, Ka-Wai; Sun, Loi Wah; Mylonas, George; Athanasiou, Thanos; Darzi, Ara W; Yang, Guang-Zhong

    2015-01-01

    Minimally invasive and robotic surgery changes the capacity for surgical mentors to guide their trainees with the control customary to open surgery. This neuroergonomic study aims to assess a "Collaborative Gaze Channel" (CGC), which detects trainer gaze-behavior and displays the point of regard to the trainee. A randomized crossover study was conducted in which twenty subjects performed a simulated robotic surgical task necessitating collaboration with either verbal guidance (control condition) or visual guidance with the CGC (study condition). Trainee occipito-parietal (O-P) cortical function was assessed with optical topography (OT) and gaze-behavior was evaluated using video-oculography. Performance during gaze-assistance was significantly superior [biopsy number (mean ± SD): control = 5.6 ± 1.8 vs. CGC = 6.6 ± 2.0; p < 0.05] and was associated with significantly lower O-P cortical activity [ΔHbO2 mMol × cm, median (IQR): control = 2.5 (12.0) vs. CGC = 0.63 (11.2), p < 0.001]. A random effect model (REM) confirmed the association between guidance mode and O-P excitation. Network cost and global efficiency were not significantly influenced by guidance mode. A gaze channel thus enhances performance, modulates visual search, and alleviates the burden in brain centers subserving visual attention, without inducing changes in the trainee's O-P functional network observable with the current OT technique. The results imply that through visual guidance, attentional resources may be liberated, potentially improving the capability of trainees to attend to other safety-critical events during the procedure.

  5. Design and Implementation of Cancellation Tasks for Visual Search Strategies and Visual Attention in School Children

    ERIC Educational Resources Information Center

    Wang, Tsui-Ying; Huang, Ho-Chuan; Huang, Hsiu-Shuang

    2006-01-01

    We propose a computer-assisted cancellation test system (CACTS) to understand the visual attention performance and visual search strategies in school children. The main aim of this paper is to present our design and development of the CACTS and demonstrate some ways in which computer techniques can allow the educator not only to obtain more…

  6. Conjunctive visual search in individuals with and without mental retardation.

    PubMed

    Carlin, Michael; Chrysler, Christina; Sullivan, Kate

    2007-01-01

    A comprehensive understanding of the basic visual and cognitive abilities of individuals with mental retardation is critical for understanding the basis of mental retardation and for the design of remediation programs. We assessed visual search abilities in individuals with mild mental retardation and in MA- and CA-matched comparison groups. Our goal was to determine the effect of decreasing target-distracter disparities on visual search efficiency. Results showed that search rates for the group with mental retardation and the MA-matched comparisons were more negatively affected by decreasing disparities than were those of the CA-matched group. The group with mental retardation and the MA-matched group performed similarly on all tasks. Implications for theory and application are discussed. PMID:17181391

  7. Group-level differences in visual search asymmetry.

    PubMed

    Cramer, Emily S; Dusko, Michelle J; Rensink, Ronald A

    2016-08-01

    East Asians and Westerners differ in various aspects of perception and cognition. For example, visual memory for East Asians is believed to be more influenced by the contextual aspects of a scene than is the case for Westerners (Masuda & Nisbett in Journal of Personality and Social Psychology, 81, 922-934, 2001). There are also differences in visual search: For Westerners, search is faster for a long line among short ones than for a short line among long ones, whereas this difference does not appear to hold for East Asians (Ueda et al., 2016). However, it is unclear how these group-level differences originate. To investigate the extent to which they depend upon environment, we tested visual search and visual memory in East Asian immigrants who had lived in Canada for different amounts of time. Recent immigrants were found to exhibit no search asymmetry, unlike Westerners who had spent their lives in Canada. However, immigrants who had lived in Canada for more than 2 years showed performance comparable to that of Westerners. These differences could not be explained by the general analytic/holistic processing distinction believed to differentiate Westerners and East Asians, since all observers showed a strong holistic tendency for visual recognition. The results instead support the suggestion that exposure to a new environment can significantly affect the particular processes used to perceive a given stimulus.

  8. Adaptation to a simulated central scotoma during visual search training.

    PubMed

    Walsh, David V; Liu, Lei

    2014-03-01

    Patients with a central scotoma usually use a preferred retinal locus (PRL) consistently in daily activities. The selection process and time course of the PRL development are not well understood. We used a gaze-contingent display to simulate an isotropic central scotoma in normal subjects while they were practicing a difficult visual search task. As compared to foveal search, initial exposure to the simulated scotoma resulted in prolonged search reaction time, many more fixations and unorganized eye movements during search. By the end of a 1782-trial training with the simulated scotoma, the search performance improved to within 25% of normal foveal search. Accompanying the performance improvement, there were also fewer fixations, fewer repeated fixations in the same area of the search stimulus and a clear tendency of using one area near the border of the scotoma to identify the search target. The results were discussed in relation to natural development of PRL in central scotoma patients and potential visual training protocols to facilitate PRL development. PMID:24456805

  9. Visual exploratory search of relationship graphs on smartphones.

    PubMed

    Ouyang, Jianquan; Zheng, Hao; Kong, Fanbin; Liu, Tianming

    2013-01-01

    This paper presents a novel framework for Visual Exploratory Search of Relationship Graphs on Smartphones (VESRGS) composed of three major components: inference and representation of semantic relationship graphs on the Web via meta-search, visual exploratory search of relationship graphs through both querying and browsing strategies, and human-computer interaction via the multi-touch interface and mobile Internet on smartphones. In comparison with traditional lookup search methodologies, the proposed VESRGS system offers the following perceived advantages: 1) it infers rich semantic relationships between the querying keywords and other related concepts from large-scale meta-search results from the Google, Yahoo! and Bing search engines, and represents these relationships via graphs; 2) the exploratory search approach empowers users to naturally and effectively explore and discover knowledge in a rich information world of interlinked relationship graphs in a personalized fashion; 3) it takes advantage of smartphones' user-friendly interfaces, ubiquitous Internet connection, and portability. Our extensive experimental results have demonstrated that the VESRGS framework can significantly improve users' capability of seeking the relationship information most relevant to their specific needs. We envision that the VESRGS framework can be a starting point for future exploration of novel, effective search strategies in the mobile Internet era.

  10. History effects in visual search for monsters: search times, choice biases, and liking.

    PubMed

    Chetverikov, Andrey; Kristjansson, Árni

    2015-02-01

    Repeating targets and distractors on consecutive visual search trials facilitates search performance, whereas switching targets and distractors harms it. In addition, search repetition leads to biases in free-choice tasks: previously attended targets are more likely to be chosen than distractors. Another line of research has shown that attended items receive high liking ratings, whereas ignored distractors are rated negatively. Potential relations between these three effects are unclear, however. Here we simultaneously measured repetition benefits and switching costs for search times, choice biases, and liking ratings in color singleton visual search for "monster" shapes. We showed that when expectations from search repetition are violated, targets are liked less than otherwise. Choice biases, on the other hand, were affected by distractor repetition but not by target/distractor switches. Target repetition speeded search times but had little influence on choice or liking. Our findings suggest that choice biases reflect distractor inhibition, and that liking reflects the conflict associated with attending to previously inhibited stimuli, while speeded search follows both target and distractor repetition. Our results support the newly proposed affective-feedback-of-hypothesis-testing account of cognition and, additionally, shed new light on the priming of visual search.

  11. Visual search is influenced by 3D spatial layout.

    PubMed

    Finlayson, Nonie J; Grove, Philip M

    2015-10-01

    Many activities necessitate the deployment of attention to specific distances and directions in our three-dimensional (3D) environment. However, most research on how attention is deployed is conducted with two-dimensional (2D) computer displays, leaving a large gap in our understanding of the deployment of attention in 3D space. We report how each of four parameters of 3D visual space influences visual search: 3D display volume, distance in depth, number of depth planes, and relative target position in depth. Using a search task, we find that visual search performance depends on 3D volume, relative target position in depth, and number of depth planes. Our results demonstrate an asymmetrical preference for targets in the front of a display unique to 3D search and show that arranging items into more depth planes reduces search efficiency. Consistent with research using 2D displays, we found slower response times to find targets in displays with larger 3D volumes compared with smaller 3D volumes. Finally, in contrast to the importance of target depth relative to other distractors, target depth relative to the fixation point did not affect response times or search efficiency.

  12. Measuring Search Efficiency in Complex Visual Search Tasks: Global and Local Clutter

    ERIC Educational Resources Information Center

    Beck, Melissa R.; Lohrenz, Maura C.; Trafton, J. Gregory

    2010-01-01

    Set size and crowding affect search efficiency by limiting attention for recognition and attention against competition; however, these factors can be difficult to quantify in complex search tasks. The current experiments use a quantitative measure of the amount and variability of visual information (i.e., clutter) in highly complex stimuli (i.e.,…

  13. Rapid Resumption of Interrupted Search Is Independent of Age-Related Improvements in Visual Search

    ERIC Educational Resources Information Center

    Lleras, Alejandro; Porporino, Mafalda; Burack, Jacob A.; Enns, James T.

    2011-01-01

    In this study, 7-19-year-olds performed an interrupted visual search task in two experiments. Our question was whether the tendency to respond within 500 ms after a second glimpse of a display (the "rapid resumption" effect ["Psychological Science", 16 (2005) 684-688]) would increase with age in the same way as overall search efficiency. The…

  14. Functional Connectivity Between Superior Parietal Lobule and Primary Visual Cortex "at Rest" Predicts Visual Search Efficiency.

    PubMed

    Bueichekú, Elisenda; Ventura-Campos, Noelia; Palomar-García, María-Ángeles; Miró-Padilla, Anna; Parcet, María-Antonia; Ávila, César

    2015-10-01

    Spatiotemporal activity that emerges spontaneously "at rest" has been proposed to reflect individual a priori biases in cognitive processing. This research focused on testing neurocognitive models of visual attention by studying the functional connectivity (FC) of the superior parietal lobule (SPL), given its central role in establishing priority maps during visual search tasks. Twenty-three human participants completed a functional magnetic resonance imaging session that featured a resting-state scan, followed by a visual search task based on the alphanumeric category effect. As expected, the behavioral results showed longer reaction times and more errors for the within-category (i.e., searching a target letter among letters) than the between-category search (i.e., searching a target letter among numbers). The within-category condition was related to greater activation of the superior and inferior parietal lobules, occipital cortex, inferior frontal cortex, dorsal anterior cingulate cortex, and the superior colliculus than the between-category search. The resting-state FC analysis of the SPL revealed a broad network that included connections with the inferotemporal cortex, dorsolateral prefrontal cortex, and dorsal frontal areas like the supplementary motor area and frontal eye field. Noteworthy, the regression analysis revealed that the more efficient participants in the visual search showed stronger FC between the SPL and areas of primary visual cortex (V1) related to the search task. We shed some light on how the SPL establishes a priority map of the environment during visual attention tasks and how FC is a valuable tool for assessing individual differences while performing cognitive tasks. PMID:26230367

  16. Collinear integration affects visual search at V1.

    PubMed

    Chow, Hiu Mei; Jingling, Li; Tseng, Chia-huei

    2013-08-29

    Perceptual grouping plays an indispensable role in figure-ground segregation and attention distribution. For example, a column pops out if it contains element bars orthogonal to uniformly oriented element bars. Jingling and Tseng (2013) have reported that contextual grouping in a column matters to visual search behavior: When a column is grouped into a collinear (snakelike) structure, a target positioned on it became harder to detect than on other noncollinear (ladderlike) columns. How and where perceptual grouping interferes with selective attention is still largely unknown. This article contributes to this little-studied area by asking whether collinear contour integration interacts with visual search before or after binocular fusion. We first identified that the previously mentioned search impairment occurs with a distractor of five or nine elements but not one element in a 9 × 9 search display. To pinpoint the site of this effect, we presented the search display with a short collinear bar (one element) to one eye and the extending collinear bars to the other eye, such that when properly fused, the combined binocular collinear length (nine elements) exceeded the critical length. No collinear search impairment was observed, implying that collinear information before binocular fusion shaped participants' search behavior, although contour extension from the other eye after binocular fusion enhanced the effect of collinearity on attention. Our results suggest that attention interacts with perceptual grouping as early as V1.

  17. Searching while loaded: Visual working memory does not interfere with hybrid search efficiency but hybrid search uses working memory capacity.

    PubMed

    Drew, Trafton; Boettcher, Sage E P; Wolfe, Jeremy M

    2016-02-01

    In "hybrid search" tasks, such as finding items on a grocery list, one must search the scene for targets while also searching the list in memory. How is the representation of a visual item compared with the representations of items in the memory set? Predominant theories would propose a role for visual working memory (VWM) either as the site of the comparison or as a conduit between visual and memory systems. In seven experiments, we loaded VWM in different ways and found little or no effect on hybrid search performance. However, the presence of a hybrid search task did reduce the measured capacity of VWM by a constant amount regardless of the size of the memory or visual sets. These data are broadly consistent with an account in which VWM must dedicate a fixed amount of its capacity to passing visual representations to long-term memory for comparison to the items in the memory set. The data cast doubt on models in which the search template resides in VWM or where memory set item representations are moved from LTM through VWM to earlier areas for comparison to visual items.

  18. Attention Capacity and Task Difficulty in Visual Search

    ERIC Educational Resources Information Center

    Huang, Liqiang; Pashler, Harold

    2005-01-01

    When a visual search task is very difficult (as when a small feature difference defines the target), even detection of a unique element may be substantially slowed by increases in display set size. This has been attributed to the influence of attentional capacity limits. We examined the influence of attentional capacity limits on three kinds of…

  19. Enhancing Visual Search Abilities of People with Intellectual Disabilities

    ERIC Educational Resources Information Center

    Li-Tsang, Cecilia W. P.; Wong, Jackson K. K.

    2009-01-01

    This study aimed to evaluate the effects of cueing in a visual search paradigm for people with and without intellectual disabilities (ID). A total of 36 subjects (18 persons with ID and 18 persons with normal intelligence) were recruited using a convenience sampling method. A series of experiments were conducted to compare guided cue strategies using…

  20. Bumblebee visual search for multiple learned target types.

    PubMed

    Nityananda, Vivek; Pattrick, Jonathan G

    2013-11-15

    Visual search is well studied in human psychology, but we know comparatively little about similar capacities in non-human animals. It is sometimes assumed that animal visual search is restricted to a single target at a time. In bees, for example, this limitation has been evoked to explain flower constancy, the tendency of bees to specialise on a single flower type. Few studies, however, have investigated bee visual search for multiple target types after extended learning and controlling for prior visual experience. We trained colour-naive bumblebees (Bombus terrestris) extensively in separate discrimination tasks to recognise two rewarding colours in interspersed block training sessions. We then tested them with the two colours simultaneously in the presence of distracting colours to examine whether and how quickly they were able to switch between the target colours. We found that bees switched between visual targets quickly and often. The median time taken to switch between targets was shorter than known estimates of how long traces last in bees' working memory, suggesting that their capacity to recall more than one learned target was not restricted by working memory limitations. Following our results, we propose a model of memory and learning that integrates our findings with those of previous studies investigating flower constancy.

  1. Content-Based Visual Landmark Search via Multimodal Hypergraph Learning.

    PubMed

    Zhu, Lei; Shen, Jialie; Jin, Hai; Zheng, Ran; Xie, Liang

    2015-12-01

    While content-based landmark image search has recently received a lot of attention and become a very active research domain, it still remains a challenging problem. Among the various reasons, highly diverse visual content is the most significant one. It is common that for the same landmark, images with a wide range of visual appearances can be found from different sources, and different landmarks may share very similar sets of images. As a consequence, it is very hard to accurately estimate the similarities between landmarks based purely on a single type of visual feature. Moreover, the relationships between landmark images can be very complex, and how to develop an effective modeling scheme to characterize these associations remains an open question. Motivated by these concerns, we propose the multimodal hypergraph (MMHG) to characterize the complex associations between landmark images. In MMHG, images are modeled as independent vertices, and hyperedges contain several vertices corresponding to particular views. Multiple hypergraphs are first constructed independently based on different visual modalities to describe the hidden high-order relations from different aspects. Then, they are integrated together to incorporate discriminative information from heterogeneous sources. We also propose a novel content-based visual landmark search system based on MMHG to facilitate effective search. Distinguished from existing approaches, we design a unified computational module to support query-specific combination weight learning. An extensive experimental study on a large-scale test collection demonstrates the effectiveness of our scheme over state-of-the-art approaches.
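The core fusion idea above, combining similarity estimates from several visual modalities instead of relying on a single feature type, can be illustrated with a deliberately simplified sketch. This is plain fixed-weight fusion with hypothetical toy values; MMHG itself learns query-specific weights over hypergraphs, which is not reproduced here.

```python
# Toy per-modality similarity matrices over the same three landmark images.
# Values and modality names are hypothetical, for illustration only.
color_sim = [
    [1.0, 0.8, 0.1],
    [0.8, 1.0, 0.2],
    [0.1, 0.2, 1.0],
]
texture_sim = [
    [1.0, 0.4, 0.6],
    [0.4, 1.0, 0.3],
    [0.6, 0.3, 1.0],
]

def fuse(mats, weights):
    """Weighted elementwise combination of similarity matrices."""
    n = len(mats[0])
    return [
        [sum(w * m[i][j] for w, m in zip(weights, mats)) for j in range(n)]
        for i in range(n)
    ]

# Fixed weights here; a learning scheme would tune these per query.
fused = fuse([color_sim, texture_sim], [0.7, 0.3])

# Nearest neighbor of image 0 (excluding itself) under the fused similarity.
best = max(range(1, 3), key=lambda j: fused[0][j])
```

Under these toy weights, image 1 remains image 0's nearest neighbor because the color modality dominates the combination.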

  2. Enhancing visual search abilities of people with intellectual disabilities.

    PubMed

    Li-Tsang, Cecilia W P; Wong, Jackson K K

    2009-01-01

    This study aimed to evaluate the effects of cueing in a visual search paradigm for people with and without intellectual disabilities (ID). A total of 36 subjects (18 persons with ID and 18 persons with normal intelligence) were recruited using a convenience sampling method. A series of experiments were conducted to compare guided cue strategies, using either motion contrast or an additional cue, against a basic search task. Repeated-measures ANOVA and post hoc multiple comparison tests were used to compare each cue strategy. Results showed that the use of guided strategies was able to capture focal attention in an automatic manner in the ID group (Pillai's Trace=5.99, p<0.0001). Both guided cue and guided motion search tasks demonstrated functionally similar effects that confirmed the non-specific character of salience. These findings suggested that the visual search efficiency of people with ID was greatly improved if the target was made salient using a cueing effect when the complexity of the display increased (i.e. set size increased). This study could have an important implication for the design of the visual search format of any computerized programs developed for people with ID in learning new tasks.

  3. Early activation of object names in visual search.

    PubMed

    Meyer, Antje S; Belke, Eva; Telling, Anna L; Humphreys, Glyn W

    2007-08-01

    In a visual search experiment, participants had to decide whether or not a target object was present in a four-object search array. One of these objects could be a semantically related competitor (e.g., shirt for the target trousers) or a conceptually unrelated object with the same name as the target-for example, bat (baseball) for the target bat (animal). In the control condition, the related competitor was replaced by an unrelated object. The participants' response latencies and eye movements demonstrated that the two types of related competitors had similar effects: Competitors attracted the participants' visual attention and thereby delayed positive and negative decisions. The results imply that semantic and name information associated with the objects becomes rapidly available and affects the allocation of visual attention.

  4. Entrainment of Human Alpha Oscillations Selectively Enhances Visual Conjunction Search.

    PubMed

    Müller, Notger G; Vellage, Anne-Katrin; Heinze, Hans-Jochen; Zaehle, Tino

    2015-01-01

    The functional role of the alpha-rhythm which dominates the human electroencephalogram (EEG) is unclear. It has been related to visual processing, attentional selection and object coherence, respectively. Here we tested the interaction of alpha oscillations of the human brain with visual search tasks that differed in their attentional demands (pre-attentive vs. attentive) and also in the necessity to establish object coherence (conjunction vs. single feature). Between pre- and post-assessment elderly subjects received 20 min/d of repetitive transcranial alternating current stimulation (tACS) over the occipital cortex adjusted to their individual alpha frequency over five consecutive days. Compared to sham the entrained alpha oscillations led to a selective, set size independent improvement in the conjunction search task performance but not in the easy or in the hard feature search task. These findings suggest that cortical alpha oscillations play a specific role in establishing object coherence through suppression of distracting objects. PMID:26606255

  5. Entrainment of Human Alpha Oscillations Selectively Enhances Visual Conjunction Search

    PubMed Central

    Müller, Notger G.; Vellage, Anne-Katrin; Heinze, Hans-Jochen; Zaehle, Tino

    2015-01-01

    The functional role of the alpha-rhythm which dominates the human electroencephalogram (EEG) is unclear. It has been related to visual processing, attentional selection and object coherence, respectively. Here we tested the interaction of alpha oscillations of the human brain with visual search tasks that differed in their attentional demands (pre-attentive vs. attentive) and also in the necessity to establish object coherence (conjunction vs. single feature). Between pre- and post-assessment elderly subjects received 20 min/d of repetitive transcranial alternating current stimulation (tACS) over the occipital cortex adjusted to their individual alpha frequency over five consecutive days. Compared to sham the entrained alpha oscillations led to a selective, set size independent improvement in the conjunction search task performance but not in the easy or in the hard feature search task. These findings suggest that cortical alpha oscillations play a specific role in establishing object coherence through suppression of distracting objects. PMID:26606255

  6. Irrelevant objects of expertise compete with faces during visual search

    PubMed Central

    McGugin, Rankin W.; McKeeff, Thomas J.; Tong, Frank; Gauthier, Isabel

    2010-01-01

    Prior work suggests that non-face objects of expertise can interfere with the perception of faces when the two categories are alternately presented, suggesting competition for shared perceptual resources. Here we ask whether task-irrelevant distractors from a category of expertise compete when faces are presented in a standard visual search task. Participants searched for a target (face or sofa) in an array containing both relevant and irrelevant distractors. The number of distractors from the target category (face or sofa) remained constant, while the number of distractors from the irrelevant category (cars) varied. Search slopes, calculated as a function of the number of irrelevant cars, were correlated with car expertise. The effect was not due to car distractors grabbing attention because they did not compete with sofa targets. Objects of expertise interfere with face perception even when they are task irrelevant, visually distinct and separated in space from faces. PMID:21264705

  7. The Mechanisms Underlying the ASD Advantage in Visual Search.

    PubMed

    Kaldy, Zsuzsa; Giserman, Ivy; Carter, Alice S; Blaser, Erik

    2016-05-01

    A number of studies have demonstrated that individuals with autism spectrum disorders (ASDs) are faster or more successful than typically developing control participants at various visual-attentional tasks (for reviews, see Dakin and Frith in Neuron 48:497-507, 2005; Simmons et al. in Vis Res 49:2705-2739, 2009). This "ASD advantage" was first identified in the domain of visual search by Plaisted et al. (J Child Psychol Psychiatry 39:777-783, 1998). Here we survey the findings of visual search studies from the past 15 years that contrasted the performance of individuals with and without ASD. Although there are some minor caveats, the overall consensus is that-across development and a broad range of symptom severity-individuals with ASD reliably outperform controls on visual search. The etiology of the ASD advantage has not been formally specified, but has been commonly attributed to 'enhanced perceptual discrimination', a superior ability to visually discriminate between targets and distractors in such tasks (e.g. O'Riordan in Cognition 77:81-96, 2000). As well, there is considerable evidence for impairments of the attentional network in ASD (for a review, see Keehn et al. in J Child Psychol Psychiatry 37:164-183, 2013). We discuss some recent results from our laboratory that support an attentional, rather than perceptual explanation for the ASD advantage in visual search. We speculate that this new conceptualization may offer a better understanding of some of the behavioral symptoms associated with ASD, such as over-focusing and restricted interests.

  8. Crowded visual search in children with normal vision and children with visual impairment.

    PubMed

    Huurneman, Bianca; Cox, Ralf F A; Vlaskamp, Björn N S; Boonstra, F Nienke

    2014-03-01

    This study investigates the influence of oculomotor control, crowding, and attentional factors on visual search in children with normal vision ([NV], n=11), children with visual impairment without nystagmus ([VI-nys], n=11), and children with VI with accompanying nystagmus ([VI+nys], n=26). Exclusion criteria for children with VI were: multiple impairments and visual acuity poorer than 20/400 or better than 20/50. Three search conditions were presented: a row with homogeneous distractors, a matrix with homogeneous distractors, and a matrix with heterogeneous distractors. Element spacing was manipulated in 5 steps from 2 to 32 minutes of arc. Symbols were sized 2 times the threshold acuity to guarantee visibility for the VI groups. During simple row and matrix search with homogeneous distractors children in the VI+nys group were less accurate than children with NV at smaller spacings. Group differences were even more pronounced during matrix search with heterogeneous distractors. Search times were longer in children with VI compared to children with NV. The more extended impairments during serial search reveal greater dependence on oculomotor control during serial compared to parallel search.

  9. In search of the emotional face: anger versus happiness superiority in visual search.

    PubMed

    Savage, Ruth A; Lipp, Ottmar V; Craig, Belinda M; Becker, Stefanie I; Horstmann, Gernot

    2013-08-01

    Previous research has provided inconsistent results regarding visual search for emotional faces, yielding evidence for either anger superiority (i.e., more efficient search for angry faces) or happiness superiority effects (i.e., more efficient search for happy faces), suggesting that these results do not reflect on emotional expression, but on emotion (un-)related low-level perceptual features. The present study investigated possible factors mediating anger/happiness superiority effects; specifically search strategy (fixed vs. variable target search; Experiment 1), stimulus choice (Nimstim database vs. Ekman & Friesen database; Experiments 1 and 2), and emotional intensity (Experiment 3 and 3a). Angry faces were found faster than happy faces regardless of search strategy using faces from the Nimstim database (Experiment 1). By contrast, a happiness superiority effect was evident in Experiment 2 when using faces from the Ekman and Friesen database. Experiment 3 employed angry, happy, and exuberant expressions (Nimstim database) and yielded anger and happiness superiority effects, respectively, highlighting the importance of the choice of stimulus materials. Ratings of the stimulus materials collected in Experiment 3a indicate that differences in perceived emotional intensity, pleasantness, or arousal do not account for differences in search efficiency. Across three studies, the current investigation indicates that prior reports of anger or happiness superiority effects in visual search are likely to reflect on low-level visual features associated with the stimulus materials used, rather than on emotion.

  10. LASAGNA-Search: an integrated web tool for transcription factor binding site search and visualization.

    PubMed

    Lee, Chih; Huang, Chun-Hsi

    2013-03-01

    The release of ChIP-seq data from the ENCyclopedia Of DNA Elements (ENCODE) and Model Organism ENCyclopedia Of DNA Elements (modENCODE) projects has significantly increased the amount of transcription factor (TF) binding affinity information available to researchers. However, scientists still routinely use TF binding site (TFBS) search tools to scan unannotated sequences for TFBSs, particularly when searching for lesser-known TFs or TFs in organisms for which ChIP-seq data are unavailable. The sequence analysis often involves multiple steps such as TF model collection, promoter sequence retrieval, and visualization; thus, several different tools are required. We have developed a novel integrated web tool named LASAGNA-Search that allows users to perform TFBS searches without leaving the web site. LASAGNA-Search uses the LASAGNA (Length-Aware Site Alignment Guided by Nucleotide Association) algorithm for TFBS alignment. Important features of LASAGNA-Search include (i) acceptance of unaligned variable-length TFBSs, (ii) a collection of 1726 TF models, (iii) automatic promoter sequence retrieval, (iv) visualization in the UCSC Genome Browser, and (v) gene regulatory network inference and visualization based on binding specificities. LASAGNA-Search is freely available at http://biogrid.engr.uconn.edu/lasagna_search/.
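As a generic illustration of what a TFBS scanner does, here is a minimal position-weight-matrix (PWM) scan over a toy sequence. This is not the LASAGNA algorithm itself, which performs length-aware alignment of variable-length sites; the matrix and sequence values are hypothetical.

```python
# Toy log-odds PWM for a 3-bp motif: each column gives a score per base.
# Values are hypothetical, chosen so the motif "ACG" scores highest.
PWM = [
    {"A": 1.0, "C": -1.0, "G": -1.0, "T": -1.0},  # position 1 favors A
    {"A": -1.0, "C": 1.0, "G": -1.0, "T": -1.0},  # position 2 favors C
    {"A": -1.0, "C": -1.0, "G": 1.0, "T": -1.0},  # position 3 favors G
]

def scan(sequence, pwm):
    """Score every window of the sequence; return (best_score, best_offset)."""
    w = len(pwm)
    best = None
    for i in range(len(sequence) - w + 1):
        score = sum(col[base] for col, base in zip(pwm, sequence[i:i + w]))
        if best is None or score > best[0]:
            best = (score, i)
    return best

score, offset = scan("TTACGTT", PWM)
```

A real tool would additionally scan the reverse complement, threshold scores against a background model, and report genomic coordinates for visualization.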

  11. The long and the short of priming in visual search.

    PubMed

    Kruijne, Wouter; Meeter, Martijn

    2015-07-01

    Memory affects visual search, as is particularly evident from findings that when target features are repeated from one trial to the next, selection is faster. Two views have emerged on the nature of the memory representations and mechanisms that cause these intertrial priming effects: independent feature weighting versus episodic retrieval of previous trials. Previous research has attempted to disentangle these views focusing on short term effects. Here, we illustrate that the episodic retrieval models make the unique prediction of long-term priming: biasing one target type will result in priming of this target type for a much longer time, well after the bias has disappeared. We demonstrate that such long-term priming is indeed found for the visual feature of color, but only in conjunction search and not in singleton search. Two follow-up experiments showed that it was the kind of search (conjunction versus singleton) and not the difficulty, that determined whether long-term priming occurred. Long term priming persisted unaltered for at least 200 trials, and could not be explained as the result of explicit strategy. We propose that episodic memory may affect search more consistently than previously thought, and that the mechanisms for intertrial priming may be qualitatively different for singleton and conjunction search. PMID:25832185

  12. Visual Empirical Region of Influence (VERI) Pattern Recognition Algorithms

    2002-05-01

    We developed new pattern recognition (PR) algorithms based on a human visual perception model. We named these algorithms Visual Empirical Region of Influence (VERI) algorithms. To compare the new algorithm's effectiveness against other PR algorithms, we benchmarked their clustering capabilities with a standard set of two-dimensional data that is well known in the PR community. The VERI algorithm succeeded in clustering all the data correctly. No existing algorithm had previously clustered all the patterns in the data set successfully. The commands to execute VERI algorithms are quite difficult to master when executed from a DOS command line, and the algorithm requires several parameters to operate correctly. From our own experience we realized that if we wanted to provide a new data analysis tool to the PR community, we would have to make the tool powerful, yet easy and intuitive to use. That was our motivation for developing graphical user interfaces (GUIs) to the VERI algorithms. We developed GUIs to control the VERI algorithm in a single-pass mode and in an optimization mode. We also developed a visualization technique that allows users to graphically animate and visually inspect multi-dimensional data after it has been classified by the VERI algorithms. The visualization package is integrated into the single-pass interface. Both the single-pass interface and the optimization interface are part of the PR software package we have developed and make available to other users. The single-pass mode only finds PR results for the sets of features in the data set that are manually requested by the user. The optimization mode uses a brute-force method of searching through the combinations of features in a data set for features that produce
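The brute-force sweep over feature combinations described above can be sketched generically. The scoring criterion below is a toy Fisher-style discriminability ratio, not VERI's region-of-influence test, and the data values are hypothetical.

```python
from itertools import combinations

# Toy labeled data: feature 0 separates the two classes, while features 1
# and 2 are uninformative. All values are hypothetical.
DATA = {
    "A": [(0.0, 5.0, 3.1), (0.2, 4.9, 7.0)],
    "B": [(4.0, 5.1, 6.9), (4.2, 5.0, 3.0)],
}

def fisher_score(f):
    """Between-class separation over within-class variance for feature f."""
    means, variances = {}, {}
    for label, rows in DATA.items():
        vals = [r[f] for r in rows]
        m = sum(vals) / len(vals)
        means[label] = m
        variances[label] = sum((v - m) ** 2 for v in vals) / len(vals)
    return (means["A"] - means["B"]) ** 2 / (variances["A"] + variances["B"])

def subset_score(feats):
    """Average discriminability of a candidate feature subset."""
    return sum(fisher_score(f) for f in feats) / len(feats)

# Brute-force pass over every nonempty feature combination.
best = max(
    (s for k in range(1, 4) for s in combinations(range(3), k)),
    key=subset_score,
)
```

On this toy data the exhaustive search settles on the single informative feature, since averaging in the noisy features only dilutes the score.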

  13. Visual Information and Support Surface for Postural Control in Visual Search Task.

    PubMed

    Huang, Chia-Chun; Yang, Chih-Mei

    2016-10-01

    When standing on a reduced support surface, people increase their reliance on visual information to control posture. This assertion was tested in the current study. The effects of imposed motion and support surface on postural control during visual search were investigated. Twelve participants (aged 21 ± 1.8 years; six men and six women) stood on a reduced support surface (45% base of support). In a room that moved back and forth along the anteroposterior axis, participants performed visual search for a given letter in an article. Postural sway variability and head-room coupling were measured. The results of head-room coupling, but not postural sway, supported the assertion that people increase reliance on visual information when standing on a reduced support surface. Whether standing on a whole or reduced surface, people stabilized their posture to perform the visual search tasks. Compared to a fixed target, searching on a hand-held target showed greater head-room coupling when standing on a reduced surface.

  14. Searching through subsets: a test of the visual indexing hypothesis.

    PubMed

    Burkell, J A; Pylyshyn, Z W

    1997-01-01

    This paper presents three experiments investigating the claim that the visual system utilizes a primitive indexing mechanism (sometimes called FINSTs; Pylyshyn, 1989) to make non-contiguous features directly accessible for further visual processing. This claim is investigated using a variant of the conjunction search task in which subjects search among a subset of the items in a conjunction search display for targets defined by a conjunction of colour and orientation. The members of the subset were identified by virtue of the late onset of the objects' place-holders. The cued subset was manipulated to include either homogeneous distractors or mixed distractors. Observers were able to select a subset of three items from among fifteen for further processing (Experiment 1); furthermore, a reaction time advantage for homogeneous subsets over mixed subsets was observed, indicating that more than one item of the subset is selected for further specialized processing. The homogeneous subset advantage held for subsets of two to five items (Experiment 2), and the time required to process the cued subset did not increase with increased dispersion of the items (Experiment 3). These results support the basic claim of the indexing theory: that multiple visual indexes are used in selecting objects for visual processing. PMID:9428097

  15. Searching for Pulsars Using Image Pattern Recognition

    NASA Astrophysics Data System (ADS)

    Zhu, W. W.; Berndsen, A.; Madsen, E. C.; Tan, M.; Stairs, I. H.; Brazier, A.; Lazarus, P.; Lynch, R.; Scholz, P.; Stovall, K.; Ransom, S. M.; Banaszak, S.; Biwer, C. M.; Cohen, S.; Dartez, L. P.; Flanigan, J.; Lunsford, G.; Martinez, J. G.; Mata, A.; Rohr, M.; Walker, A.; Allen, B.; Bhat, N. D. R.; Bogdanov, S.; Camilo, F.; Chatterjee, S.; Cordes, J. M.; Crawford, F.; Deneva, J. S.; Desvignes, G.; Ferdman, R. D.; Freire, P. C. C.; Hessels, J. W. T.; Jenet, F. A.; Kaplan, D. L.; Kaspi, V. M.; Knispel, B.; Lee, K. J.; van Leeuwen, J.; Lyne, A. G.; McLaughlin, M. A.; Siemens, X.; Spitler, L. G.; Venkataraman, A.

    2014-02-01

    In the modern era of big data, many fields of astronomy are generating huge volumes of data, the analysis of which can sometimes be the limiting factor in research. Fortunately, computer scientists have developed powerful data-mining techniques that can be applied to various fields. In this paper, we present a novel artificial intelligence (AI) program that identifies pulsars from recent surveys by using image pattern recognition with deep neural nets—the PICS (Pulsar Image-based Classification System) AI. The AI mimics human experts and distinguishes pulsars from noise and interference by looking for patterns from candidate plots. Different from other pulsar selection programs that search for expected patterns, the PICS AI is taught the salient features of different pulsars from a set of human-labeled candidates through machine learning. The training candidates are collected from the Pulsar Arecibo L-band Feed Array (PALFA) survey. The information from each pulsar candidate is synthesized in four diagnostic plots, which consist of image data with up to thousands of pixels. The AI takes these data from each candidate as its input and uses thousands of such candidates to train its ~9000 neurons. The deep neural networks in this AI system grant it superior ability to recognize various types of pulsars as well as their harmonic signals. The trained AI's performance has been validated with a large set of candidates from a different pulsar survey, the Green Bank North Celestial Cap survey. In this completely independent test, the PICS ranked 264 out of 277 pulsar-related candidates, including all 56 previously known pulsars and 208 of their harmonics, in the top 961 (1%) of 90,008 test candidates, missing only 13 harmonics. The first non-pulsar candidate appears at rank 187, following 45 pulsars and 141 harmonics. In other words, 100% of the pulsars were ranked in the top 1% of all candidates, while 80% were ranked higher than any noise or interference. The
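The validation procedure described for PICS, scoring candidates, ranking them, and counting how many known pulsars land in a top fraction, can be sketched generically. The scores and labels below are toy values, not PALFA or GBNCC data, and the neural-net scoring itself is not reproduced.

```python
# Each candidate: (name, classifier score, whether it is a known pulsar).
# All entries are hypothetical, for illustration only.
candidates = [
    ("psr1", 0.99, True), ("psr2", 0.95, True), ("rfi1", 0.90, False),
    ("psr3", 0.80, True), ("rfi2", 0.40, False), ("rfi3", 0.10, False),
    ("rfi4", 0.05, False), ("psr4", 0.03, True),
]

def positives_in_top(candidates, fraction):
    """Count true positives ranked within the top `fraction` of candidates."""
    ranked = sorted(candidates, key=lambda c: c[1], reverse=True)
    k = max(1, int(len(ranked) * fraction))
    return sum(1 for _, _, is_pulsar in ranked[:k] if is_pulsar)

# Top 50% of 8 candidates = 4 slots; 3 of the 4 pulsars fall there.
hits = positives_in_top(candidates, 0.5)
```

The reported PICS figures follow the same logic at scale: 264 of 277 pulsar-related candidates fell within the top 961 (about 1%) of 90,008 test candidates.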

  16. Searching for pulsars using image pattern recognition

    SciTech Connect

    Zhu, W. W.; Berndsen, A.; Madsen, E. C.; Tan, M.; Stairs, I. H.; Brazier, A.; Lazarus, P.; Lynch, R.; Scholz, P.; Stovall, K.; Cohen, S.; Dartez, L. P.; Lunsford, G.; Martinez, J. G.; Mata, A.; Ransom, S. M.; Banaszak, S.; Biwer, C. M.; Flanigan, J.; Rohr, M. E-mail: berndsen@phas.ubc.ca; and others

    2014-02-01

    In the modern era of big data, many fields of astronomy are generating huge volumes of data, the analysis of which can sometimes be the limiting factor in research. Fortunately, computer scientists have developed powerful data-mining techniques that can be applied to various fields. In this paper, we present a novel artificial intelligence (AI) program that identifies pulsars from recent surveys by using image pattern recognition with deep neural nets—the PICS (Pulsar Image-based Classification System) AI. The AI mimics human experts and distinguishes pulsars from noise and interference by looking for patterns from candidate plots. Different from other pulsar selection programs that search for expected patterns, the PICS AI is taught the salient features of different pulsars from a set of human-labeled candidates through machine learning. The training candidates are collected from the Pulsar Arecibo L-band Feed Array (PALFA) survey. The information from each pulsar candidate is synthesized in four diagnostic plots, which consist of image data with up to thousands of pixels. The AI takes these data from each candidate as its input and uses thousands of such candidates to train its ∼9000 neurons. The deep neural networks in this AI system grant it superior ability to recognize various types of pulsars as well as their harmonic signals. The trained AI's performance has been validated with a large set of candidates from a different pulsar survey, the Green Bank North Celestial Cap survey. In this completely independent test, the PICS ranked 264 out of 277 pulsar-related candidates, including all 56 previously known pulsars and 208 of their harmonics, in the top 961 (1%) of 90,008 test candidates, missing only 13 harmonics. The first non-pulsar candidate appears at rank 187, following 45 pulsars and 141 harmonics. In other words, 100% of the pulsars were ranked in the top 1% of all candidates, while 80% were ranked higher than any noise or interference. The

  17. The Efficiency of a Visual Skills Training Program on Visual Search Performance

    PubMed Central

    Krzepota, Justyna; Zwierko, Teresa; Puchalska-Niedbał, Lidia; Markiewicz, Mikołaj; Florkiewicz, Beata; Lubiński, Wojciech

    2015-01-01

    In this study, we conducted an experiment in which we analyzed the possibilities to develop visual skills by specifically targeted training of visual search. The aim of our study was to investigate whether, for how long and to what extent a training program for visual functions could improve visual search. The study involved 24 healthy students from the Szczecin University who were divided into two groups: experimental (12) and control (12). In addition to regular sports and recreational activities of the curriculum, the subjects of the experimental group also participated in 8-week long training with visual functions, 3 times a week for 45 min. The Signal Test of the Vienna Test System was performed four times: before entering the study, after first 4 weeks of the experiment, immediately after its completion and 4 weeks after the study terminated. The results of this experiment proved that an 8-week long perceptual training program significantly differentiated the plot of visual detecting time. For the visual detecting time changes, the first factor, Group, was significant as a main effect (F(1,22)=6.49, p<0.05) as well as the second factor, Training (F(3,66)=5.06, p<0.01). The interaction between the two factors (Group vs. Training) of perceptual training was F(3,66)=6.82 (p<0.001). Similarly, for the number of correct reactions, there was a main effect of a Group factor (F(1,22)=23.40, p<0.001), a main effect of a Training factor (F(3,66)=11.60, p<0.001) and a significant interaction between factors (Group vs. Training) (F(3,66)=10.33, p<0.001). Our study suggests that 8-week training of visual functions can improve visual search performance. PMID:26240666

  18. The Efficiency of a Visual Skills Training Program on Visual Search Performance.

    PubMed

    Krzepota, Justyna; Zwierko, Teresa; Puchalska-Niedbał, Lidia; Markiewicz, Mikołaj; Florkiewicz, Beata; Lubiński, Wojciech

    2015-06-27

    In this study, we conducted an experiment in which we analyzed the possibilities to develop visual skills by specifically targeted training of visual search. The aim of our study was to investigate whether, for how long and to what extent a training program for visual functions could improve visual search. The study involved 24 healthy students from the Szczecin University who were divided into two groups: experimental (12) and control (12). In addition to regular sports and recreational activities of the curriculum, the subjects of the experimental group also participated in 8-week long training with visual functions, 3 times a week for 45 min. The Signal Test of the Vienna Test System was performed four times: before entering the study, after first 4 weeks of the experiment, immediately after its completion and 4 weeks after the study terminated. The results of this experiment proved that an 8-week long perceptual training program significantly differentiated the plot of visual detecting time. For the visual detecting time changes, the first factor, Group, was significant as a main effect (F(1,22)=6.49, p<0.05) as well as the second factor, Training (F(3,66)=5.06, p<0.01). The interaction between the two factors (Group vs. Training) of perceptual training was F(3,66)=6.82 (p<0.001). Similarly, for the number of correct reactions, there was a main effect of a Group factor (F(1,22)=23.40, p<0.001), a main effect of a Training factor (F(3,66)=11.60, p<0.001) and a significant interaction between factors (Group vs. Training) (F(3,66)=10.33, p<0.001). Our study suggests that 8-week training of visual functions can improve visual search performance.

  19. Explicit awareness supports conditional visual search in the retrieval guidance paradigm.

    PubMed

    Buttaccio, Daniel R; Lange, Nicholas D; Hahn, Sowon; Thomas, Rick P

    2014-01-01

In four experiments we explored whether participants would be able to use probabilistic prompts to simplify perceptually demanding visual search in a task we call the retrieval guidance paradigm. On each trial a memory prompt appeared prior to (and during) the search task, and the diagnosticity of the prompt(s) was manipulated to provide complete, partial, or non-diagnostic information regarding the target's color on each trial (Experiments 1-3). In Experiment 1 we found that more diagnostic prompts were associated with faster visual search performance. However, similar visual search behavior was observed in Experiment 2 when the diagnosticity of the prompts was eliminated, suggesting that participants in Experiment 1 were merely relying on base rate information to guide search and were not utilizing the prompts. In Experiment 3 participants were informed of the relationship between the prompts and the color of the target, and this was associated with faster search performance relative to Experiment 1, suggesting that the participants were using the prompts to guide search. Additionally, in Experiment 3 a knowledge test was implemented, and performance in this task was associated with qualitative differences in search behavior such that participants who were able to name the color(s) most associated with the prompts were faster to find the target than participants who were unable to do so. However, in Experiments 1-3 diagnosticity of the memory prompt was manipulated via base rate information, making it possible that participants were merely relying on base rate information to inform search in Experiment 3. In Experiment 4 we manipulated diagnosticity of the prompts without manipulating base rate information and found a pattern of results similar to Experiment 3. Together, the results emphasize the importance of base rate and diagnosticity information in visual search behavior. In the General discussion section we explore how a recent computational model of

  20. Visual working memory simultaneously guides facilitation and inhibition during visual search.

    PubMed

    Dube, Blaire; Basciano, April; Emrich, Stephen M; Al-Aidroos, Naseem

    2016-07-01

    During visual search, visual working memory (VWM) supports the guidance of attention in two ways: It stores the identity of the search target, facilitating the selection of matching stimuli in the search array, and it maintains a record of the distractors processed during search so that they can be inhibited. In two experiments, we investigated whether the full contents of VWM can be used to support both of these abilities simultaneously. In Experiment 1, participants completed a preview search task in which (a) a subset of search distractors appeared before the remainder of the search items, affording participants the opportunity to inhibit them, and (b) the search target varied from trial to trial, requiring the search target template to be maintained in VWM. We observed the established signature of VWM-based inhibition-reduced ability to ignore previewed distractors when the number of distractors exceeds VWM's capacity-suggesting that VWM can serve this role while also representing the target template. In Experiment 2, we replicated Experiment 1, but added to the search displays a singleton distractor that sometimes matched the color (a task-irrelevant feature) of the search target, to evaluate capture. We again observed the signature of VWM-based preview inhibition along with attentional capture by (and, thus, facilitation of) singletons matching the target template. These findings indicate that more than one VWM representation can bias attention at a time, and that these representations can separately affect selection through either facilitation or inhibition, placing constraints on existing models of the VWM-based guidance of attention.

  1. Visual working memory simultaneously guides facilitation and inhibition during visual search.

    PubMed

    Dube, Blaire; Basciano, April; Emrich, Stephen M; Al-Aidroos, Naseem

    2016-07-01

    During visual search, visual working memory (VWM) supports the guidance of attention in two ways: It stores the identity of the search target, facilitating the selection of matching stimuli in the search array, and it maintains a record of the distractors processed during search so that they can be inhibited. In two experiments, we investigated whether the full contents of VWM can be used to support both of these abilities simultaneously. In Experiment 1, participants completed a preview search task in which (a) a subset of search distractors appeared before the remainder of the search items, affording participants the opportunity to inhibit them, and (b) the search target varied from trial to trial, requiring the search target template to be maintained in VWM. We observed the established signature of VWM-based inhibition-reduced ability to ignore previewed distractors when the number of distractors exceeds VWM's capacity-suggesting that VWM can serve this role while also representing the target template. In Experiment 2, we replicated Experiment 1, but added to the search displays a singleton distractor that sometimes matched the color (a task-irrelevant feature) of the search target, to evaluate capture. We again observed the signature of VWM-based preview inhibition along with attentional capture by (and, thus, facilitation of) singletons matching the target template. These findings indicate that more than one VWM representation can bias attention at a time, and that these representations can separately affect selection through either facilitation or inhibition, placing constraints on existing models of the VWM-based guidance of attention. PMID:27055458

  2. Visualizing Motion Patterns in Acupuncture Manipulation.

    PubMed

    Lee, Ye-Seul; Jung, Won-Mo; Lee, In-Seon; Lee, Hyangsook; Park, Hi-Joon; Chae, Younbyoung

    2016-01-01

    Acupuncture manipulation varies widely among practitioners in clinical settings, and it is difficult to teach novice students how to perform acupuncture manipulation techniques skillfully. The Acupuncture Manipulation Education System (AMES) is an open source software system designed to enhance acupuncture manipulation skills using visual feedback. Using a phantom acupoint and motion sensor, our method for acupuncture manipulation training provides visual feedback regarding the actual movement of the student's acupuncture manipulation in addition to the optimal or intended movement, regardless of whether the manipulation skill is lifting, thrusting, or rotating. Our results show that students could enhance their manipulation skills by training using this method. This video shows the process of manufacturing phantom acupoints and discusses several issues that may require the attention of individuals interested in creating phantom acupoints or operating this system. PMID:27501193

  3. Visual cluster analysis and pattern recognition methods

    DOEpatents

    Osbourn, Gordon Cecil; Martinez, Rubel Francisco

    2001-01-01

    A method of clustering using a novel template to define a region of influence. Using neighboring approximation methods, computation times can be significantly reduced. The template and method are applicable and improve pattern recognition techniques.
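The patent abstract does not specify its template, but the region-of-influence family it belongs to can be illustrated with a classic member, the Gabriel graph; a hedged sketch (the `gabriel_edges` function and the example points are illustrative stand-ins, not the patented template):

```python
import numpy as np

def gabriel_edges(points):
    """Connect points p and q iff no third point lies inside the circle
    having segment pq as its diameter (the Gabriel 'region of influence'
    rule). Clusters are the connected components of the resulting graph."""
    n = len(points)
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            mid = (points[i] + points[j]) / 2.0
            r2 = np.sum((points[i] - points[j]) ** 2) / 4.0
            blocked = any(np.sum((points[k] - mid) ** 2) < r2
                          for k in range(n) if k not in (i, j))
            if not blocked:
                edges.append((i, j))
    return edges

# Two nearby points and one distant point: the long edge (0, 2) is
# blocked because point 1 falls inside its region of influence
pts = np.array([[0.0, 0.0], [1.0, 0.0], [10.0, 0.0]])
print(gabriel_edges(pts))  # [(0, 1), (1, 2)]
```

The brute-force check here is O(n^3); the neighboring-approximation methods the patent mentions exist precisely to cut this cost.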

  4. Impact of patient photos on visual search during radiograph interpretation

    NASA Astrophysics Data System (ADS)

    Krupinski, Elizabeth A.; Applegate, Kimberly; DeSimone, Ariadne; Chung, Alex; Tridandanpani, Srini

    2016-03-01

Evidence suggests that including patient photographs during interpretation may increase detection of mislabeled medical imaging studies. This study examined how inclusion of photos impacts visual search. Ten radiologists viewed 21 chest radiographs with and without a photo of the patient while their search behavior was recorded. Their task was to note tube/line placement. Eye-tracking data revealed that the presence of the photo reduced the number of fixations and the total dwell time on the chest image, as a result of periodically looking at the photo. Average preference for having photos was 6.10 on a 0-10 scale, with the neck and chest being the preferred photo areas.

  5. Visual Inquiry Toolkit – An Integrated Approach for Exploring and Interpreting Space-Time, Multivariate Patterns

    PubMed Central

    Chen, Jin; MacEachren, Alan M.; Guo, Diansheng

    2011-01-01

    While many datasets carry geographic and temporal references, our ability to analyze these datasets lags behind our ability to collect them because of the challenges posed by both data complexity and scalability issues. This study develops a visual analytics approach that integrates human knowledge and judgments with visual, computational, and cartographic methods to support the application of visual analytics to relatively large spatio-temporal, multivariate datasets. Specifically, a variety of methods are employed for data clustering, pattern searching, information visualization and synthesis. By combining both human and machine strengths, this approach has a better chance to discover novel, relevant and potentially useful information that is difficult to detect by any method used in isolation. We demonstrate the effectiveness of the approach by applying the Visual Inquiry Toolkit we developed to analysis of a dataset containing geographically referenced, time-varying and multivariate data for U.S. technology industries. PMID:26566543

  6. Supporting the Process of Exploring and Interpreting Space–Time Multivariate Patterns: The Visual Inquiry Toolkit

    PubMed Central

    Chen, Jin; MacEachren, Alan M.; Guo, Diansheng

    2009-01-01

    While many data sets carry geographic and temporal references, our ability to analyze these datasets lags behind our ability to collect them because of the challenges posed by both data complexity and tool scalability issues. This study develops a visual analytics approach that leverages human expertise with visual, computational, and cartographic methods to support the application of visual analytics to relatively large spatio-temporal, multivariate data sets. We develop and apply a variety of methods for data clustering, pattern searching, information visualization, and synthesis. By combining both human and machine strengths, this approach has a better chance to discover novel, relevant, and potentially useful information that is difficult to detect by any of the methods used in isolation. We demonstrate the effectiveness of the approach by applying the Visual Inquiry Toolkit we developed to analyze a data set containing geographically referenced, time-varying and multivariate data for U.S. technology industries. PMID:19960096

  7. Supporting the Process of Exploring and Interpreting Space-Time Multivariate Patterns: The Visual Inquiry Toolkit.

    PubMed

    Chen, Jin; Maceachren, Alan M; Guo, Diansheng

    2008-01-01

    While many data sets carry geographic and temporal references, our ability to analyze these datasets lags behind our ability to collect them because of the challenges posed by both data complexity and tool scalability issues. This study develops a visual analytics approach that leverages human expertise with visual, computational, and cartographic methods to support the application of visual analytics to relatively large spatio-temporal, multivariate data sets. We develop and apply a variety of methods for data clustering, pattern searching, information visualization, and synthesis. By combining both human and machine strengths, this approach has a better chance to discover novel, relevant, and potentially useful information that is difficult to detect by any of the methods used in isolation. We demonstrate the effectiveness of the approach by applying the Visual Inquiry Toolkit we developed to analyze a data set containing geographically referenced, time-varying and multivariate data for U.S. technology industries.

  8. Animation of orthogonal texture patterns for vector field visualization.

    PubMed

    Bachthaler, Sven; Weiskopf, Daniel

    2008-01-01

    This paper introduces orthogonal vector field visualization on 2D manifolds: a representation by lines that are perpendicular to the input vector field. Line patterns are generated by line integral convolution (LIC). This visualization is combined with animation based on motion along the vector field. This decoupling of the line direction from the direction of animation allows us to choose the spatial frequencies along the direction of motion independently from the length scales along the LIC line patterns. Vision research indicates that local motion detectors are tuned to certain spatial frequencies of textures, and the above decoupling enables us to generate spatial frequencies optimized for motion perception. Furthermore, we introduce a combined visualization that employs orthogonal LIC patterns together with conventional, tangential streamline LIC patterns in order to benefit from the advantages of these two visualization approaches. In addition, a filtering process is described to achieve a consistent and temporally coherent animation of orthogonal vector field visualization. Different filter kernels and filter methods are compared and discussed in terms of visualization quality and speed. We present respective visualization algorithms for 2D planar vector fields and tangential vector fields on curved surfaces, and demonstrate that those algorithms lend themselves to efficient and interactive GPU implementations. PMID:18467751
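The core of LIC, and the orthogonal variant introduced above, can be sketched in a few lines; a minimal, unoptimized illustration assuming NumPy (the paper itself targets efficient GPU implementations):

```python
import numpy as np

def lic(vx, vy, noise, steps=10, orthogonal=False):
    """Average a noise texture along short streamlines of (vx, vy).
    With orthogonal=True, trace the perpendicular field (-vy, vx) instead,
    producing line patterns perpendicular to the input vectors."""
    if orthogonal:
        vx, vy = -vy, vx
    h, w = noise.shape
    out = np.zeros_like(noise)
    for r in range(h):
        for c in range(w):
            acc, cnt = 0.0, 0
            for sign in (1.0, -1.0):      # integrate forward and backward
                x, y = float(c), float(r)
                for _ in range(steps):
                    rr, cc = int(round(y)) % h, int(round(x)) % w
                    acc += noise[rr, cc]
                    cnt += 1
                    u, v = vx[rr, cc], vy[rr, cc]
                    norm = np.hypot(u, v)
                    if norm < 1e-9:
                        break
                    x += sign * u / norm  # unit-speed Euler step
                    y += sign * v / norm
            out[r, c] = acc / cnt
    return out

# Constant horizontal field: tangential LIC smears noise along rows,
# orthogonal LIC smears it along columns
rng = np.random.default_rng(0)
noise = rng.random((32, 32))
vx, vy = np.ones((32, 32)), np.zeros((32, 32))
tangential = lic(vx, vy, noise)
orthogonal = lic(vx, vy, noise, orthogonal=True)
```

Animating motion along the (tangential) field while rendering the orthogonal line pattern is what decouples the animation direction from the line direction in the paper's approach.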

  9. Electrophysiological evidence of semantic interference in visual search.

    PubMed

    Telling, Anna L; Kumar, Sanjay; Meyer, Antje S; Humphreys, Glyn W

    2010-10-01

    Visual evoked responses were monitored while participants searched for a target (e.g., bird) in a four-object display that could include a semantically related distractor (e.g., fish). The occurrence of both the target and the semantically related distractor modulated the N2pc response to the search display: The N2pc amplitude was more pronounced when the target and the distractor appeared in the same visual field, and it was less pronounced when the target and the distractor were in opposite fields, relative to when the distractor was absent. Earlier components (P1, N1) did not show any differences in activity across the different distractor conditions. The data suggest that semantic distractors influence early stages of selecting stimuli in multielement displays.

  10. Information-Limited Parallel Processing in Difficult Heterogeneous Covert Visual Search

    ERIC Educational Resources Information Center

    Dosher, Barbara Anne; Han, Songmei; Lu, Zhong-Lin

    2010-01-01

    Difficult visual search is often attributed to time-limited serial attention operations, although neural computations in the early visual system are parallel. Using probabilistic search models (Dosher, Han, & Lu, 2004) and a full time-course analysis of the dynamics of covert visual search, we distinguish unlimited capacity parallel versus serial…

  11. Memory for Where, but Not What, Is Used during Visual Search

    ERIC Educational Resources Information Center

    Beck, Melissa R.; Peterson, Matthew S.; Vomela, Miroslava

    2006-01-01

    Although the role of memory in visual search is debatable, most researchers agree with a limited-capacity model of memory in visual search. The authors demonstrate the role of memory by replicating previous findings showing that visual search is biased away from old items (previously examined items) and toward new items (nonexamined items).…

  12. Visual Object Pattern Separation Varies in Older Adults

    ERIC Educational Resources Information Center

    Holden, Heather M.; Toner, Chelsea; Pirogovsky, Eva; Kirwan, C. Brock; Gilbert, Paul E.

    2013-01-01

    Young and nondemented older adults completed a visual object continuous recognition memory task in which some stimuli (lures) were similar but not identical to previously presented objects. The lures were hypothesized to result in increased interference and increased pattern separation demand. To examine variability in object pattern separation…

  13. Sequential pattern data mining and visualization

    DOEpatents

    Wong, Pak Chung; Jurrus, Elizabeth R.; Cowley, Wendy E.; Foote, Harlan P.; Thomas, James J.

    2009-05-26

One or more processors (22) are operated to extract a number of different event identifiers therefrom. These processors (22) are further operable to determine a number of display locations, each representative of one of the different identifiers and a corresponding time. The display locations are grouped into sets, each corresponding to a different one of several event sequences (330a, 330b, 330c, 330d, 330e). An output is generated corresponding to a visualization (320) of the event sequences (330a, 330b, 330c, 330d, 330e).
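The claimed mapping of events to display locations can be pictured with a tiny sketch; the event records and field names here are hypothetical, assuming each distinct identifier gets a display row and each time gives a column:

```python
from collections import defaultdict

# Hypothetical event stream: (sequence id, event identifier, time)
events = [
    ("330a", "login", 0), ("330a", "search", 1), ("330a", "purchase", 2),
    ("330b", "login", 0), ("330b", "logout", 1),
]

# Each distinct event identifier maps to a display row (y); time gives x
rows = {ident: y for y, ident in enumerate(sorted({e[1] for e in events}))}

# Group the display locations (x, y) into one set per event sequence
locations = defaultdict(list)
for seq, ident, t in events:
    locations[seq].append((t, rows[ident]))

print(dict(locations))
```

Each per-sequence list of (x, y) locations is then ready to be drawn as one polyline of the sequence visualization.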

  14. Sequential pattern data mining and visualization

    DOEpatents

    Wong, Pak Chung; Jurrus, Elizabeth R.; Cowley, Wendy E.; Foote, Harlan P.; Thomas, James J.

    2011-12-06

One or more processors (22) are operated to extract a number of different event identifiers therefrom. These processors (22) are further operable to determine a number of display locations, each representative of one of the different identifiers and a corresponding time. The display locations are grouped into sets, each corresponding to a different one of several event sequences (330a, 330b, 330c, 330d, 330e). An output is generated corresponding to a visualization (320) of the event sequences (330a, 330b, 330c, 330d, 330e).

  15. Pattern reversal visual evoked potentials in phenylketonuria.

    PubMed

    Giovannini, M; Valsasina, R; Villani, R; Ducati, A; Riva, E; Landi, A; Longhi, R

    1988-01-01

    The pathogenesis of brain dysfunction in phenylketonuria (PKU) is still under investigation. Hyperphenylalaninaemia results in increased turnover of myelin. In order to demonstrate the derangement of myelinization in PKU we studied the visual evoked potentials (VEP) in 14 PKU patients and in 20 normal subjects. VEP findings were correlated with the metabolic control of the disease and with the electroencephalographic findings. VEP were more sensitive than the EEG in detecting a neurological dysfunction. VEP are influenced by dietary control and are normal only in children with good metabolic control.

  16. Reading and Visual Search: A Developmental Study in Normal Children

    PubMed Central

    Seassau, Magali; Bucci, Maria-Pia

    2013-01-01

    Studies dealing with developmental aspects of binocular eye movement behaviour during reading are scarce. In this study we have explored binocular strategies during reading and during visual search tasks in a large population of normal young readers. Binocular eye movements were recorded using an infrared video-oculography system in sixty-nine children (aged 6 to 15) and in a group of 10 adults (aged 24 to 39). The main findings are (i) in both tasks the number of progressive saccades (to the right) and regressive saccades (to the left) decreases with age; (ii) the amplitude of progressive saccades increases with age in the reading task only; (iii) in both tasks, the duration of fixations as well as the total duration of the task decreases with age; (iv) in both tasks, the amplitude of disconjugacy recorded during and after the saccades decreases with age; (v) children are significantly more accurate in reading than in visual search after 10 years of age. Data reported here confirms and expands previous studies on children's reading. The new finding is that younger children show poorer coordination than adults, both while reading and while performing a visual search task. Both reading skills and binocular saccades coordination improve with age and children reach a similar level to adults after the age of 10. This finding is most likely related to the fact that learning mechanisms responsible for saccade yoking develop during childhood until adolescence. PMID:23894627

  17. Reading and visual search: a developmental study in normal children.

    PubMed

    Seassau, Magali; Bucci, Maria-Pia

    2013-01-01

    Studies dealing with developmental aspects of binocular eye movement behaviour during reading are scarce. In this study we have explored binocular strategies during reading and during visual search tasks in a large population of normal young readers. Binocular eye movements were recorded using an infrared video-oculography system in sixty-nine children (aged 6 to 15) and in a group of 10 adults (aged 24 to 39). The main findings are (i) in both tasks the number of progressive saccades (to the right) and regressive saccades (to the left) decreases with age; (ii) the amplitude of progressive saccades increases with age in the reading task only; (iii) in both tasks, the duration of fixations as well as the total duration of the task decreases with age; (iv) in both tasks, the amplitude of disconjugacy recorded during and after the saccades decreases with age; (v) children are significantly more accurate in reading than in visual search after 10 years of age. Data reported here confirms and expands previous studies on children's reading. The new finding is that younger children show poorer coordination than adults, both while reading and while performing a visual search task. Both reading skills and binocular saccades coordination improve with age and children reach a similar level to adults after the age of 10. This finding is most likely related to the fact that learning mechanisms responsible for saccade yoking develop during childhood until adolescence.

  18. Memory under pressure: secondary-task effects on contextual cueing of visual search.

    PubMed

    Annac, Efsun; Manginelli, Angela A; Pollmann, Stefan; Shi, Zhuanghua; Müller, Hermann J; Geyer, Thomas

    2013-11-04

    Repeated display configurations improve visual search. Recently, the question has arisen whether this contextual cueing effect (Chun & Jiang, 1998) is itself mediated by attention, both in terms of selectivity and processing resources deployed. While it is accepted that selective attention modulates contextual cueing (Jiang & Leung, 2005), there is an ongoing debate whether the cueing effect is affected by a secondary working memory (WM) task, specifically at which stage WM influences the cueing effect: the acquisition of configural associations (e.g., Travis, Mattingley, & Dux, 2013) versus the expression of learned associations (e.g., Manginelli, Langer, Klose, & Pollmann, 2013). The present study re-investigated this issue. Observers performed a visual search in combination with a spatial WM task. The latter was applied on either early or late search trials--so as to examine whether WM load hampers the acquisition of or retrieval from contextual memory. Additionally, the WM and search tasks were performed either temporally in parallel or in succession--so as to permit the effects of spatial WM load to be dissociated from those of executive load. The secondary WM task was found to affect cueing in late, but not early, experimental trials--though only when the search and WM tasks were performed in parallel. This pattern suggests that contextual cueing involves a spatial WM resource, with spatial WM providing a workspace linking the current search array with configural long-term memory; as a result, occupying this workspace by a secondary WM task hampers the expression of learned configural associations.

  19. Visual pattern recognition based on spatio-temporal patterns of retinal ganglion cells’ activities

    PubMed Central

    Jing, Wei; Liu, Wen-Zhong; Gong, Xin-Wei; Gong, Hai-Qing

    2010-01-01

    Neural information is processed based on integrated activities of relevant neurons. Concerted population activity is one of the important ways for retinal ganglion cells to efficiently organize and process visual information. In the present study, the spike activities of bullfrog retinal ganglion cells in response to three different visual patterns (checker-board, vertical gratings and horizontal gratings) were recorded using multi-electrode arrays. A measurement of subsequence distribution discrepancy (MSDD) was applied to identify the spatio-temporal patterns of retinal ganglion cells’ activities in response to different stimulation patterns. The results show that the population activity patterns were different in response to different stimulation patterns, such difference in activity pattern was consistently detectable even when visual adaptation occurred during repeated experimental trials. Therefore, the stimulus pattern can be reliably discriminated according to the spatio-temporal pattern of the neuronal activities calculated using the MSDD algorithm. PMID:21886670

  20. Intertrial Temporal Contextual Cuing: Association across Successive Visual Search Trials Guides Spatial Attention

    ERIC Educational Resources Information Center

    Ono, Fuminori; Jiang, Yuhong; Kawahara, Jun-ichiro

    2005-01-01

    Contextual cuing refers to the facilitation of performance in visual search due to the repetition of the same displays. Whereas previous studies have focused on contextual cuing within single-search trials, this study tested whether 1 trial facilitates visual search of the next trial. Participants searched for a T among Ls. In the training phase,…

  1. Automatic guidance of attention during real-world visual search

    PubMed Central

    Seidl-Rathkopf, Katharina N.; Turk-Browne, Nicholas B.; Kastner, Sabine

    2015-01-01

    Looking for objects in cluttered natural environments is a frequent task in everyday life. This process can be difficult, as the features, locations, and times of appearance of relevant objects are often not known in advance. A mechanism by which attention is automatically biased toward information that is potentially relevant may thus be helpful. Here we tested for such a mechanism across five experiments by engaging participants in real-world visual search and then assessing attentional capture for information that was related to the search set but was otherwise irrelevant. Isolated objects captured attention while preparing to search for objects from the same category embedded in a scene, as revealed by lower detection performance (Experiment 1A). This capture effect was driven by a central processing bottleneck rather than the withdrawal of spatial attention (Experiment 1B), occurred automatically even in a secondary task (Experiment 2A), and reflected enhancement of matching information rather than suppression of non-matching information (Experiment 2B). Finally, attentional capture extended to objects that were semantically associated with the target category (Experiment 3). We conclude that attention is efficiently drawn towards a wide range of information that may be relevant for an upcoming real-world visual search. This mechanism may be adaptive, allowing us to find information useful for our behavioral goals in the face of uncertainty. PMID:25898897

  2. Human visual pattern recognition of medical images

    NASA Astrophysics Data System (ADS)

    Biederman, Irving

    1990-07-01

The output of most medical imaging systems is a display for interpretation by human observers. This paper provides a general summary of recent work on shape recognition by humans. Two broad modes of visual image processing, executed by different cortical loci, can be distinguished: a) a mode for motor interaction, which is sensitive to quantitative variation in image parameters, and b) a mode for basic-level object recognition, which is based on a small set of qualitative contrasts in viewpoint-invariant properties of image edges. Many medical image classifications pose inherently difficult problems for the recognition system in that they are based on quantitative and surface-patch variations rather than qualitative variations. But when recognition can be achieved quickly and accurately, it is possible that a small viewpoint-invariant contrast has been discovered and is being exploited by the interpreter.

  3. Decoding complex flow-field patterns in visual working memory.

    PubMed

    Christophel, Thomas B; Haynes, John-Dylan

    2014-05-01

    There has been a long history of research on visual working memory. Whereas early studies have focused on the role of lateral prefrontal cortex in the storage of sensory information, this has been challenged by research in humans that has directly assessed the encoding of perceptual contents, pointing towards a role of visual and parietal regions during storage. In a previous study we used pattern classification to investigate the storage of complex visual color patterns across delay periods. This revealed coding of such contents in early visual and parietal brain regions. Here we aim to investigate whether the involvement of visual and parietal cortex is also observable for other types of complex, visuo-spatial pattern stimuli. Specifically, we used a combination of fMRI and multivariate classification to investigate the retention of complex flow-field stimuli defined by the spatial patterning of motion trajectories of random dots. Subjects were trained to memorize the precise spatial layout of these stimuli and to retain this information during an extended delay. We used a multivariate decoding approach to identify brain regions where spatial patterns of activity encoded the memorized stimuli. Content-specific memory signals were observable in motion sensitive visual area MT+ and in posterior parietal cortex that might encode spatial information in a modality independent manner. Interestingly, we also found information about the memorized visual stimulus in somatosensory cortex, suggesting a potential crossmodal contribution to memory. Our findings thus indicate that working memory storage of visual percepts might be distributed across unimodal, multimodal and even crossmodal brain regions.
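The decoding logic described above, training a classifier on delay-period voxel activity patterns, can be schematized with synthetic data; a hedged sketch assuming scikit-learn (the trial counts, voxel counts, and effect size are made up for illustration):

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

# Synthetic delay-period data: one voxel pattern per trial, two classes
rng = np.random.default_rng(0)
n_trials, n_voxels = 80, 50
y = rng.integers(0, 2, n_trials)                 # remembered flow field A or B
X = rng.normal(size=(n_trials, n_voxels)) + 0.5 * y[:, None]  # weak class signal

# Cross-validated decoding accuracy; above-chance accuracy indicates the
# region's activity pattern carries information about the memorized item
acc = cross_val_score(LinearSVC(), X, y, cv=5).mean()
print(f"decoding accuracy: {acc:.2f}")
```

In the actual study this analysis is run per region of interest (e.g., MT+, posterior parietal, somatosensory cortex) to localize content-specific memory signals.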

  4. Neural representations of contextual guidance in visual search of real-world scenes.

    PubMed

    Preston, Tim J; Guo, Fei; Das, Koel; Giesbrecht, Barry; Eckstein, Miguel P

    2013-05-01

    Exploiting scene context and object-object co-occurrence is critical in guiding eye movements and facilitating visual search, yet the mediating neural mechanisms are unknown. We used functional magnetic resonance imaging while observers searched for target objects in scenes and used multivariate pattern analyses (MVPA) to show that the lateral occipital complex (LOC) can predict the coarse spatial location of observers' expectations about the likely location of 213 different targets absent from the scenes. In addition, we found weaker but significant representations of context location in an area related to the orienting of attention (intraparietal sulcus, IPS) as well as a region related to scene processing (retrosplenial cortex, RSC). Importantly, the degree of agreement among 100 independent raters about the likely location to contain a target object in a scene correlated with LOC's ability to predict the contextual location while weaker but significant effects were found in IPS, RSC, the human motion area, and early visual areas (V1, V3v). When contextual information was made irrelevant to observers' behavioral task, the MVPA analysis of LOC and the other areas' activity ceased to predict the location of context. Thus, our findings suggest that the likely locations of targets in scenes are represented in various visual areas with LOC playing a key role in contextual guidance during visual search of objects in real scenes. PMID:23637176

  5. Image pattern recognition supporting interactive analysis and graphical visualization

    NASA Technical Reports Server (NTRS)

    Coggins, James M.

    1992-01-01

    Image Pattern Recognition attempts to infer properties of the world from image data. Such capabilities are crucial for making measurements from satellite or telescope images related to Earth and space science problems. Such measurements can be the required product itself, or the measurements can be used as input to a computer graphics system for visualization purposes. At present, the field of image pattern recognition lacks a unified scientific structure for developing and evaluating image pattern recognition applications. The overall goal of this project is to begin developing such a structure. This report summarizes results of a 3-year research effort in image pattern recognition addressing the following three principal aims: (1) to create a software foundation for the research and identify image pattern recognition problems in Earth and space science; (2) to develop image measurement operations based on Artificial Visual Systems; and (3) to develop multiscale image descriptions for use in interactive image analysis.

  6. Dynamic topography of pattern visual evoked potentials (PVEP) in psychogenic visual loss patients.

    PubMed

    Nakamura, A; Tabuchi, A; Matsuda, E; Yamaguchi, W

    2000-09-01

    We measured objective visual acuity using pattern visual evoked potentials (PVEP) to aid the diagnosis of psychogenic visual loss (PVL) in patients aged 7 to 14 years. Pattern stimuli consisted of black-and-white checkerboard patterns (39, 26, 15 and 9') with a visual angle of 8 degrees and a contrast level of 15%. The pattern reversal frequency was 0.7 Hz, yielding an average of 100 PVEP per session. A visual acuity of 0.1 corresponded to the 39' pattern, 0.2 to the 26' pattern, 0.5 to the 15' pattern, and 1.0 to the 9' pattern. Visual acuity could be measured in five PVL patients with this method, indicating that the PVEP is useful for evaluating visual acuity and helped to diagnose the PVL patients. In addition, we analyzed the dynamic topography obtained from the PVEP results to examine differences between groups. The flow type of the P100 component fell into three types (separated, hollow, and localized) in both the PVL patients and the normal children. The localized type was observed in 59.1% of normal children and 56.3% of PVL patients, while the separated type was seen in 6.8% of normal children and 8.3% of PVL patients. There were no significant differences between the PVL patients and the normal children for any type.

  7. Visual search strategies and decision making in baseball batting.

    PubMed

    Takeuchi, Takayuki; Inomata, Kimihiro

    2009-06-01

    The goal was to examine the differences in visual search strategies between expert and nonexpert baseball batters during the preparatory phase of a pitcher's pitching and accuracy and timing of swing judgments during the ball's trajectory. 14 members of a college team (Expert group), and graduate and college students (Nonexpert group), were asked to observe 10 pitches thrown by a pitcher and respond by pushing a button attached to a bat when they thought the bat should be swung to meet the ball (swing judgment). Their eye movements, accuracy, and the timing of the swing judgment were measured. The Expert group shifted their point of observation from the proximal part of the body such as the head, chest, or trunk of the pitcher to the pitching arm and the release point before the pitcher released a ball, while the gaze point of the Nonexpert group visually focused on the head and the face. The accuracy in swing judgments of the Expert group was significantly higher, and the timing of their swing judgments was significantly earlier. Expert baseball batters used visual search strategies to gaze at specific cues (the pitching arm of the pitcher) and were more accurate and relatively quicker at decision making than Nonexpert batters. PMID:19725330

  9. Information-limited parallel processing in difficult heterogeneous covert visual search.

    PubMed

    Dosher, Barbara Anne; Han, Songmei; Lu, Zhong-Lin

    2010-10-01

    Difficult visual search is often attributed to time-limited serial attention operations, although neural computations in the early visual system are parallel. Using probabilistic search models (Dosher, Han, & Lu, 2004) and a full time-course analysis of the dynamics of covert visual search, we distinguish unlimited capacity parallel versus serial search mechanisms. Performance is measured for difficult and error-prone searches among heterogeneous background elements and for easy and accurate searches among homogeneous background elements. Contrary to the claims of time-limited serial attention, searches in heterogeneous backgrounds instead exhibited nearly identical search dynamics for display sizes up to 12 items. A review and new analyses indicate that most difficult as well as easy visual searches operate as an unlimited-capacity parallel analysis over the visual field within a single eye fixation, which suggests limitations in the availability of information, not temporal bottlenecks in analysis or comparison. Serial properties likely reflect overt attention expressed in eye movements.

  10. Visual Search Revived: The Slopes Are Not That Slippery: A Reply to Kristjansson (2015).

    PubMed

    Wolfe, Jeremy M

    2016-05-01

    Kristjansson (2015) suggests that standard research methods in the study of visual search should be "reconsidered." He reiterates a useful warning against treating reaction time × set size functions as simple metrics that can be used to label search tasks as "serial" or "parallel." However, I argue that he goes too far with a broad attack on the use of slopes in the study of visual search. Used wisely, slopes do provide us with insight into the mechanisms of visual search.
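
    The slopes at issue are those of the reaction time × set size function. Below is a minimal sketch of how such a slope (ms per item) is estimated by ordinary least squares; the RT values are illustrative, not data from either paper.

```python
def search_slope(set_sizes, rts):
    """Least-squares slope (ms/item) and intercept (ms) of RT vs. set size."""
    n = len(set_sizes)
    mx = sum(set_sizes) / n
    my = sum(rts) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(set_sizes, rts))
    sxx = sum((x - mx) ** 2 for x in set_sizes)
    slope = sxy / sxx
    return slope, my - slope * mx

# Shallow slopes (a few ms/item) are conventionally read as "efficient" search and
# steep ones (tens of ms/item) as "inefficient" -- the labels Wolfe urges caution with.
slope, intercept = search_slope([4, 8, 12, 16], [500, 620, 740, 860])
```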

  11. MotionFlow: Visual Abstraction and Aggregation of Sequential Patterns in Human Motion Tracking Data.

    PubMed

    Jang, Sujin; Elmqvist, Niklas; Ramani, Karthik

    2016-01-01

    Pattern analysis of human motions, which is useful in many research areas, requires understanding and comparison of different styles of motion patterns. However, working with human motion tracking data to support such analysis poses great challenges. In this paper, we propose MotionFlow, a visual analytics system that provides an effective overview of various motion patterns based on an interactive flow visualization. This visualization formulates a motion sequence as transitions between static poses, and aggregates these sequences into a tree diagram to construct a set of motion patterns. The system also allows the users to directly reflect the context of data and their perception of pose similarities in generating representative pose states. We provide local and global controls over the partition-based clustering process. To support the users in organizing unstructured motion data into pattern groups, we designed a set of interactions that enables searching for similar motion sequences from the data, detailed exploration of data subsets, and creating and modifying the group of motion patterns. To evaluate the usability of MotionFlow, we conducted a user study with six researchers with expertise in gesture-based interaction design. They used MotionFlow to explore and organize unstructured motion tracking data. Results show that the researchers were able to easily learn how to use MotionFlow, and the system effectively supported their pattern analysis activities, including leveraging their perception and domain knowledge.

  12. eSeeTrack--visualizing sequential fixation patterns.

    PubMed

    Tsang, Hoi Ying; Tory, Melanie; Swindells, Colin

    2010-01-01

    We introduce eSeeTrack, an eye-tracking visualization prototype that facilitates exploration and comparison of sequential gaze orderings in a static or a dynamic scene. It extends current eye-tracking data visualizations by extracting patterns of sequential gaze orderings, displaying these patterns in a way that does not depend on the number of fixations on a scene, and enabling users to compare patterns from two or more sets of eye-gaze data. Extracting such patterns was very difficult with previous visualization techniques. eSeeTrack combines a timeline and a tree-structured visual representation to embody three aspects of eye-tracking data that users are interested in: duration, frequency and orderings of fixations. We demonstrate the usefulness of eSeeTrack via two case studies on surgical simulation and retail store chain data. We found that eSeeTrack allows ordering of fixations to be rapidly queried, explored and compared. Furthermore, our tool provides an effective and efficient mechanism to determine pattern outliers. This approach can be effective for behavior analysis in a variety of domains that are described at the end of this paper.

  13. Task-irrelevant stimulus salience affects visual search.

    PubMed

    Lamy, Dominique; Zoaris, Loren

    2009-05-01

    The relative contributions of stimulus salience and task-related goals in guiding attention remain an issue of debate. Several studies have demonstrated that top-down factors play an important role, as they often override capture by salient irrelevant objects. However, Yantis and Egeth [Yantis, S., & Egeth, H. E. (1999). On the distinction between visual salience and stimulus-driven attentional capture. Journal of Experimental Psychology: Human Perception and Performance, 25, 661-676.] have made the more radical claim that salience plays no role in visual search unless the observer adopts an attentional set for singletons or "singleton-detection mode". We reexamine their claim while disentangling effects of stimulus salience from effects of attentional set and inter-trial repetition. The results show that stimulus salience guides attention even when salience is task irrelevant.

  14. Audio-visual object search is changed by bilingual experience.

    PubMed

    Chabal, Sarah; Schroeder, Scott R; Marian, Viorica

    2015-11-01

    The current study examined the impact of language experience on the ability to efficiently search for objects in the face of distractions. Monolingual and bilingual participants completed an ecologically-valid, object-finding task that contained conflicting, consistent, or neutral auditory cues. Bilinguals were faster than monolinguals at locating the target item, and eye movements revealed that this speed advantage was driven by bilinguals' ability to overcome interference from visual distractors and focus their attention on the relevant object. Bilinguals fixated the target object more often than did their monolingual peers, who, in contrast, attended more to a distracting image. Moreover, bilinguals', but not monolinguals', object-finding ability was positively associated with their executive control ability. We conclude that bilinguals' executive control advantages extend to real-world visual processing and object finding within a multi-modal environment.

  15. WORDGRAPH: Keyword-in-Context Visualization for NETSPEAK's Wildcard Search.

    PubMed

    Riehmann, Patrick; Gruendl, Henning; Potthast, Martin; Trenkmann, Martin; Stein, Benno; Froehlich, Benno

    2012-09-01

    The WORDGRAPH helps writers in visually choosing phrases while writing a text. It checks for the commonness of phrases and allows for the retrieval of alternatives by means of wildcard queries. To support such queries, we implement a scalable retrieval engine, which returns high-quality results within milliseconds using a probabilistic retrieval strategy. The results are displayed as WORDGRAPH visualization or as a textual list. The graphical interface provides an effective means for interactive exploration of search results using filter techniques, query expansion, and navigation. Our observations indicate that, of three investigated retrieval tasks, the textual interface is sufficient for the phrase verification task, whereas both interfaces support context-sensitive word choice, and the WORDGRAPH best supports the exploration of a phrase's context or the underlying corpus. Our user study confirms these observations and shows that WORDGRAPH is generally the preferred interface over the textual result list for queries containing multiple wildcards.

  16. Audio-Visual Object Search is Changed by Bilingual Experience

    PubMed Central

    Chabal, Sarah; Schroeder, Scott R.; Marian, Viorica

    2015-01-01

    The current study examined the impact of language experience on the ability to efficiently search for objects in the face of distractions. Monolingual and bilingual participants completed an ecologically-valid, object-finding task that contained conflicting, consistent, or neutral auditory cues. Bilinguals were faster than monolinguals at locating the target item, and eye-movements revealed that this speed advantage was driven by bilinguals’ ability to overcome interference from visual distractors and focus their attention on the relevant object. Bilinguals fixated the target object more often than did their monolingual peers, who, in contrast, attended more to a distracting image. Moreover, bilinguals’, but not monolinguals’, object-finding ability was positively associated with their executive control ability. We conclude that bilinguals’ executive control advantages extend to real-world visual processing and object finding within a multi-modal environment. PMID:26272368

  17. Visual tracking method based on cuckoo search algorithm

    NASA Astrophysics Data System (ADS)

    Gao, Ming-Liang; Yin, Li-Ju; Zou, Guo-Feng; Li, Hai-Tao; Liu, Wei

    2015-07-01

    Cuckoo search (CS) is a new meta-heuristic optimization algorithm that is based on the obligate brood parasitic behavior of some cuckoo species in combination with the Lévy flight behavior of some birds and fruit flies. It has been found to be efficient in solving global optimization problems. An application of CS is presented to solve the visual tracking problem. The relationship between optimization and visual tracking is comparatively studied, and the parameters' sensitivity and adjustment of CS in the tracking system are experimentally studied. To demonstrate the tracking ability of a CS-based tracker, a comparative study of the tracking accuracy and speed of the CS-based tracker against six state-of-the-art trackers, namely particle filter, meanshift, PSO, ensemble tracker, fragments tracker, and compressive tracker, is presented. Comparative results show that the CS-based tracker outperforms the other trackers.
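
    As a rough sketch of the optimization core only (not the authors' tracking system), a minimal cuckoo search over a generic objective might look like the following; `levy_step` uses Mantegna's algorithm for the Lévy-flight step, and all parameter values here are illustrative defaults.

```python
import math
import random

def levy_step(beta=1.5, rng=random):
    """Mantegna's algorithm for a Levy-stable step length."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.gauss(0, sigma)
    v = rng.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def cuckoo_search(f, dim, lo, hi, n_nests=25, pa=0.25, iters=200, seed=0):
    """Minimize f over [lo, hi]^dim with a minimal cuckoo search."""
    rng = random.Random(seed)
    clip = lambda v: max(lo, min(hi, v))
    nests = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_nests)]
    fit = [f(x) for x in nests]
    best_i = min(range(n_nests), key=lambda i: fit[i])
    best_x, best_f = nests[best_i][:], fit[best_i]
    for _ in range(iters):
        # New solution: Levy flight of a random nest, biased toward the best
        i = rng.randrange(n_nests)
        new = [clip(x + 0.01 * levy_step(rng=rng) * (x - best_x[d]))
               for d, x in enumerate(nests[i])]
        fn = f(new)
        if fn < fit[i]:
            nests[i], fit[i] = new, fn
        # Abandon a fraction pa of the worst nests
        order = sorted(range(n_nests), key=lambda j: fit[j], reverse=True)
        for j in order[:int(pa * n_nests)]:
            if rng.random() < pa:
                nests[j] = [rng.uniform(lo, hi) for _ in range(dim)]
                fit[j] = f(nests[j])
        k = min(range(n_nests), key=lambda j: fit[j])
        if fit[k] < best_f:
            best_x, best_f = nests[k][:], fit[k]
    return best_x, best_f
```

    In a tracking application, the objective would score a candidate target location against the appearance model; here any function can be plugged in.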

  18. The influence of cast shadows on visual search.

    PubMed

    Rensink, Ronald A; Cavanagh, Patrick

    2004-01-01

    We show that cast shadows can have a significant influence on the speed of visual search. In particular, we find that search based on the shape of a region is affected when the region is darker than the background and corresponds to a shadow formed by lighting from above. Results support the proposal that an early-level system rapidly identifies regions as shadows and then discounts them, making their shapes more difficult to access. Several constraints used by this system are mapped out, including constraints on the luminance and texture of the shadow region, and on the nature of the item casting the shadow. Among other things, this system is found to distinguish between line elements (items containing only edges) and surface elements (items containing visible surfaces), with only the latter deemed capable of casting a shadow. PMID:15693675

  19. Hemodynamic change in occipital lobe during visual search: visual attention allocation measured with NIRS.

    PubMed

    Kojima, Haruyuki; Suzuki, Takeshi

    2010-01-01

    We examined the changes in regional cerebral blood volume (rCBV) around visual cortex using Near Infrared Spectroscopy (NIRS) when observers attended to visual scenes. The oxygenated and deoxygenated hemoglobin (Oxy-Hb and Deoxy-Hb) concentration changes at the occipital lobe were monitored during a dual task. Observers were asked to name a digit superimposed on a scenery picture while, in parallel, they had to detect an on-and-off flickering object in a Change Blindness paradigm. Results showed the typical activation patterns in and around the visual cortex, with increases in Oxy-Hb and decreases in Deoxy-Hb. The Oxy-Hb increase doubled when observers could not find the target, as opposed to trials in which they could. The results strongly suggest that active attention to a visual scene enhances the Oxy-Hb change much more strongly than passive watching, and that attention and Oxy-Hb increases are possibly correlated.

  20. Recognizing patterns of visual field loss using unsupervised machine learning

    NASA Astrophysics Data System (ADS)

    Yousefi, Siamak; Goldbaum, Michael H.; Zangwill, Linda M.; Medeiros, Felipe A.; Bowd, Christopher

    2014-03-01

    Glaucoma is a potentially blinding optic neuropathy that results in a decrease in visual sensitivity. Visual field abnormalities (decreased visual sensitivity on psychophysical tests) are the primary means of glaucoma diagnosis. One form of visual field testing is Frequency Doubling Technology (FDT), which tests sensitivity at 52 points within the visual field. Like other psychophysical tests used in clinical practice, FDT results yield specific patterns of defect indicative of the disease. We used a Gaussian Mixture Model with Expectation Maximization (GEM), in which EM estimates the model parameters, to automatically separate FDT data into clusters of normal and abnormal eyes. Principal component analysis (PCA) was used to decompose each cluster into different axes (patterns). FDT measurements were obtained from 1,190 eyes with normal FDT results and 786 eyes with abnormal (i.e., glaucomatous) FDT results, recruited from a university-based, longitudinal, multi-center, clinical study on glaucoma. The GEM input was the 52-point FDT threshold sensitivities for all eyes. The optimal GEM model separated the FDT fields into 3 clusters. Cluster 1 contained 94% normal fields (94% specificity), and clusters 2 and 3 combined contained 77% abnormal fields (77% sensitivity). For clusters 1, 2 and 3 the optimal numbers of PCA-identified axes were 2, 2 and 5, respectively. GEM with PCA successfully separated FDT fields from healthy and glaucoma eyes and identified familiar glaucomatous patterns of loss.
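
    The clustering step can be sketched in miniature. The study fits a Gaussian mixture to 52-dimensional FDT sensitivity vectors; the toy below fits a two-component mixture to 1-D data by expectation-maximization with deterministic initialization. It is entirely illustrative, not the study's implementation.

```python
import math

def gmm_em_1d(data, iters=100):
    """Fit a two-component 1-D Gaussian mixture by expectation-maximization.
    Returns (means, variances, mixing weights)."""
    mu = [min(data), max(data)]   # deterministic initialization at the extremes
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        resp = []
        for x in data:
            p = [pi[j] / math.sqrt(2 * math.pi * var[j]) *
                 math.exp(-(x - mu[j]) ** 2 / (2 * var[j])) for j in range(2)]
            s = sum(p)
            resp.append([pj / s for pj in p])
        # M-step: re-estimate weights, means, variances
        for j in range(2):
            nj = sum(r[j] for r in resp)
            pi[j] = nj / len(data)
            mu[j] = sum(r[j] * x for r, x in zip(resp, data)) / nj
            var[j] = sum(r[j] * (x - mu[j]) ** 2 for r, x in zip(resp, data)) / nj + 1e-6
    return mu, var, pi
```

    In the study's setting, each fitted cluster would then be decomposed separately with PCA to expose axes of visual-field loss.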

  1. Visualizing Information in the Biological Sciences: Using WebTheme to Visualize Internet Search Results

    SciTech Connect

    Buxton, Karen A.; Lembo, Mary Frances

    2003-08-11

    Information visualization is an effective method for displaying large data sets in a pictorial or graphical format. The visualization aids researchers and analysts in understanding data by evaluating the content and grouping documents together around themes and concepts. With the ever-growing amount of information available on the Internet, additional methods are needed to analyze and interpret data. WebTheme allows users to harvest thousands of web pages and automatically organize and visualize their contents. WebTheme is an interactive web-based product that provides a new way to investigate and understand large volumes of HTML text-based information. It has the ability to harvest data from the World Wide Web using search terms and selected search engines or by following URLs chosen by the user. WebTheme enables users to rapidly identify themes and concepts found among thousands of pages of text harvested and provides a suite of tools to further explore and analyze special areas of interest within a data set. WebTheme was developed at Pacific Northwest National Laboratory (PNNL) for NASA as a method for generating meaningful, thematic, and interactive visualizations. Through a collaboration with the Laboratory's Information Science and Engineering (IS&E) group, information specialists are providing demonstrations of WebTheme and assisting researchers in analyzing their results. This paper will provide a brief overview of the WebTheme product, and the ways in which the Hanford Technical Library's information specialists are assisting researchers in using this product.

  2. Evolving the stimulus to fit the brain: a genetic algorithm reveals the brain's feature priorities in visual search.

    PubMed

    Van der Burg, Erik; Cass, John; Theeuwes, Jan; Alais, David

    2015-02-06

    How does the brain find objects in cluttered visual environments? For decades researchers have employed the classic visual search paradigm to answer this question using factorial designs. Although such approaches have yielded important information, they represent only a tiny fraction of the possible parametric space. Here we take a novel approach, using a genetic algorithm (GA) to discover the way the brain solves visual search in complex environments, free from experimenter bias. Participants searched a series of complex displays, and those supporting the fastest search were selected to reproduce (survival of the fittest). Their display properties (genes) were crossed and combined to create a new generation of "evolved" displays. Displays evolved quickly over generations towards a stable, efficiently searched array. Color properties evolved first, followed by orientation. The evolved displays also contained spatial patterns suggesting a coarse-to-fine search strategy. We argue that this behavioral performance-driven GA reveals the way the brain selects information during visual search in complex environments. We anticipate that our approach can be adapted to a variety of sensory and cognitive questions that have proven too intractable for factorial designs.
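
    The loop the authors describe, selecting the fittest displays, crossing their properties, and mutating, is the standard genetic-algorithm template. Below is a minimal, hypothetical sketch on bitstring "genomes"; the fitness function and all parameters are placeholders (in the study, fitness was derived from participants' search speed on each display).

```python
import random

def evolve(fitness, genome_len, pop_size=30, gens=60, p_mut=0.02, seed=0):
    """Maximize `fitness` over bitstrings via selection, crossover, mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(gens):
        scored = sorted(pop, key=fitness, reverse=True)
        parents = scored[:pop_size // 2]        # survival of the fittest
        pop = parents[:]                        # elitism: keep the parents
        while len(pop) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, genome_len)  # single-point crossover
            child = a[:cut] + b[cut:]
            child = [g ^ 1 if rng.random() < p_mut else g for g in child]
            pop.append(child)
    return max(pop, key=fitness)
```

    With `sum` as the fitness (the classic OneMax problem), the population converges toward the all-ones genome within a few dozen generations.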

  3. Visual search performance among persons with schizophrenia as a function of target eccentricity.

    PubMed

    Elahipanah, Ava; Christensen, Bruce K; Reingold, Eyal M

    2010-03-01

    The current study investigated one possible mechanism of impaired visual attention among patients with schizophrenia: a reduced visual span. Visual span is the region of the visual field from which one can extract information during a single eye fixation. This study hypothesized that schizophrenia-related visual search impairment is mediated, in part, by a smaller visual span. To test this hypothesis, 23 patients with schizophrenia and 22 healthy controls completed a visual search task where the target was pseudorandomly presented at different distances from the center of the display. Response times were analyzed as a function of search condition (feature vs. conjunctive), display size, and target eccentricity. Consistent with previous reports, patient search times were more adversely affected as the number of search items increased in the conjunctive search condition. It was important however, that patients' conjunctive search times were also impacted to a greater degree by target eccentricity. Moreover, a significant impairment in patients' visual search performance was only evident when targets were more eccentric and their performance was more similar to healthy controls when the target was located closer to the center of the search display. These results support the hypothesis that a narrower visual span may underlie impaired visual search performance among patients with schizophrenia.

  4. Visual Search in ASD: Instructed versus Spontaneous Local and Global Processing

    ERIC Educational Resources Information Center

    Van der Hallen, Ruth; Evers, Kris; Boets, Bart; Steyaert, Jean; Noens, Ilse; Wagemans, Johan

    2016-01-01

    Visual search has been used extensively to investigate differences in mid-level visual processing between individuals with ASD and TD individuals. The current study employed two visual search paradigms with Gaborized stimuli to assess the impact of task distractors (Experiment 1) and task instruction (Experiment 2) on local-global visual…

  5. Discovering Visual Scanning Patterns in a Computerized Cancellation Test

    ERIC Educational Resources Information Center

    Huang, Ho-Chuan; Wang, Tsui-Ying

    2013-01-01

    The purpose of this study was to develop an attention sequential mining mechanism for investigating the sequential patterns of children's visual scanning process in a computerized cancellation test. Participants had to locate and cancel the target amongst other non-targets in a structured form, and a random form with Chinese stimuli. Twenty-three…

  6. Fractal Analysis of Radiologists Visual Scanning Pattern in Screening Mammography

    SciTech Connect

    Alamudun, Folami T; Yoon, Hong-Jun; Hudson, Kathy; Morin-Ducote, Garnetta; Tourassi, Georgia

    2015-01-01

    Several investigators have studied radiologists' visual scanning patterns with respect to features such as total time examining a case, time to initially hit true lesions, number of hits, etc. The purpose of this study was to examine the complexity of radiologists' visual scanning patterns when viewing 4-view mammographic cases, as they typically do in clinical practice. Gaze data were collected from 10 readers (3 breast imaging experts and 7 radiology residents) while reviewing 100 screening mammograms (24 normal, 26 benign, 50 malignant). The radiologists' scanpaths across the 4 mammographic views were mapped to a single 2-D image plane. Then, fractal analysis was applied to the derived scanpaths using the box-counting method. For each case, the complexity of each radiologist's scanpath was estimated using fractal dimension. The association between gaze complexity, case pathology, breast density, and radiologist experience was evaluated using a 3-factor fixed-effects ANOVA, which showed that case pathology, breast density, and experience level are all independent predictors of visual scanning pattern complexity. Visual scanning patterns differ significantly for benign and malignant cases compared with normal cases, as well as when breast parenchyma density changes.
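
    The box-counting estimate of fractal dimension can be sketched as follows: cover the point set with grids of increasing box size, count the occupied boxes at each size, and take the slope of log(count) against log(1/size). A minimal illustration on pixel-coordinate points (not the study's code):

```python
import math

def box_counting_dimension(points, sizes=(1, 2, 4, 8, 16)):
    """Estimate the fractal dimension of a 2-D point set (e.g., a gaze
    scanpath rasterized to pixel coordinates) by box counting."""
    counts = []
    for s in sizes:
        boxes = {(int(x // s), int(y // s)) for x, y in points}
        counts.append(len(boxes))
    # Slope of log(count) vs. log(1/size) is the dimension estimate
    xs = [math.log(1.0 / s) for s in sizes]
    ys = [math.log(c) for c in counts]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den
```

    Sanity checks: points along a straight line yield a dimension near 1, and a filled grid yields a dimension near 2; real scanpaths fall in between, with higher values indicating more space-filling, complex scanning.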

  7. Visual Object Pattern Separation Deficits in Nondemented Older Adults

    ERIC Educational Resources Information Center

    Toner, Chelsea K.; Pirogovsky, Eva; Kirwan, C. Brock; Gilbert, Paul E.

    2009-01-01

    Young and nondemented older adults were tested on a continuous recognition memory task requiring visual pattern separation. During the task, some objects were repeated across trials and some objects, referred to as lures, were presented that were similar to previously presented objects. The lures resulted in increased interference and an increased…

  8. "Hot" Facilitation of "Cool" Processing: Emotional Distraction Can Enhance Priming of Visual Search

    ERIC Educational Resources Information Center

    Kristjansson, Arni; Oladottir, Berglind; Most, Steven B.

    2013-01-01

    Emotional stimuli often capture attention and disrupt effortful cognitive processing. However, cognitive processes vary in the degree to which they require effort. We investigated the impact of emotional pictures on visual search and on automatic priming of search. Observers performed visual search after task-irrelevant neutral or emotionally…

  9. Response Selection in Visual Search: The Influence of Response Compatibility of Nontargets

    ERIC Educational Resources Information Center

    Starreveld, Peter A.; Theeuwes, Jan; Mortier, Karen

    2004-01-01

    The authors used visual search tasks in which components of the classic flanker task (B. A. Eriksen & C. W. Eriksen, 1974) were introduced. In several experiments the authors obtained evidence of parallel search for a target among distractor elements. Therefore, 2-stage models of visual search predict no effect of the identity of those…

  10. Electroencephalogram assessment of mental fatigue in visual search.

    PubMed

    Fan, Xiaoli; Zhou, Qianxiang; Liu, Zhongqi; Xie, Fang

    2015-01-01

    Mental fatigue is considered to be a contributing factor in numerous road accidents and various medical conditions, and efficiency and performance are impaired during fatigue. Hence, determining how to evaluate mental fatigue is very important. In the present study, ten subjects performed a long-term visual search task with the electroencephalogram recorded; self-assessment and reaction time (RT) were combined to verify that mental fatigue had been induced and were also used as confirmatory tests for the proposed measures. The changes in relative energy in four wavebands (δ, θ, α, and β), four ratio formulas [(α+θ)/β, α/β, (α+θ)/(α+β), and θ/β], and Shannon's entropy (SE) were compared and analyzed between the beginning and end of the task. The results showed a significant increase in alpha activity in the frontal, central, posterior temporal, parietal, and occipital lobes, and a dip in beta activity in the pre-frontal, inferior frontal, posterior temporal, and occipital lobes. The ratio formulas clearly increased in all of these brain regions except the temporal region, where only α/β changed markedly after the 60-min visual search task. SE significantly increased in the posterior temporal, parietal, and occipital lobes. These results demonstrate some potential indicators for mental fatigue detection and evaluation, which can be applied in the future development of countermeasures to fatigue. PMID:26405908
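
    Given relative band powers, the four ratio indices and Shannon's entropy reported above are straightforward to compute. A minimal sketch follows; the input values are illustrative, and a real pipeline would first derive the band powers from the EEG power spectrum.

```python
import math

def fatigue_indices(delta, theta, alpha, beta):
    """Compute the four fatigue ratio indices and Shannon's entropy (bits)
    from band powers (any consistent units)."""
    total = delta + theta + alpha + beta
    rel = [p / total for p in (delta, theta, alpha, beta)]
    entropy = -sum(p * math.log2(p) for p in rel if p > 0)
    return {
        "(alpha+theta)/beta": (alpha + theta) / beta,
        "alpha/beta": alpha / beta,
        "(alpha+theta)/(alpha+beta)": (alpha + theta) / (alpha + beta),
        "theta/beta": theta / beta,
        "shannon_entropy": entropy,
    }
```

    Rising alpha and theta power relative to beta drives all four ratios upward, which is why they are used as fatigue markers.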

  11. Searching for the right word: Hybrid visual and memory search for words

    PubMed Central

    Boettcher, Sage E. P.; Wolfe, Jeremy M.

    2016-01-01

    In “Hybrid Search” (Wolfe, 2012) observers search through visual space for any of multiple targets held in memory. With photorealistic objects as stimuli, response times (RTs) increase linearly with the visual set size and logarithmically with memory set size even when over 100 items are committed to memory. It is well established that pictures of objects are particularly easy to memorize (Brady, Konkle, Alvarez, & Oliva, 2008). Would hybrid search performance be similar if the targets were words or phrases, where word order can be important and where the processes of memorization might be different? In Experiment One, observers memorized 2, 4, 8, or 16 words in 4 different blocks. After passing a memory test confirming memorization of the list, observers searched for these words in visual displays containing 2 to 16 words. Replicating Wolfe (2012), RTs increased linearly with the visual set size and logarithmically with the length of the word list. The word lists of Experiment One were random. In Experiment Two, words were drawn from phrases that observers reported knowing by heart (e.g., “London Bridge is falling down”). Observers were asked to provide four phrases ranging in length from 2 words to a phrase of no less than 20 words (range 21–86). Words longer than 2 characters from the phrase constituted the target list. Distractor words were matched for length and frequency. Even with these strongly ordered lists, results again replicated the curvilinear function of memory set size seen in hybrid search. One might expect serial position effects, perhaps reducing RTs for the first (primacy) and/or last (recency) members of a list (Atkinson & Shiffrin, 1968; Murdock, 1962). Surprisingly, we found no reliable effects of word order. Thus, in “London Bridge is falling down”, “London” and “down” are found no faster than “falling”. PMID:25788035
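
    The reported pattern, RTs linear in visual set size but logarithmic in memory set size, can be written as a simple descriptive model. The coefficients below are illustrative placeholders, not fitted values from the study.

```python
import math

def hybrid_search_rt(visual_n, memory_n, base=400.0, per_item=30.0, per_log_mem=80.0):
    """Descriptive hybrid-search RT model: linear in visual set size,
    logarithmic in memory set size (coefficients are illustrative)."""
    return base + per_item * visual_n + per_log_mem * math.log2(memory_n)
```

    The logarithmic term is what makes memorizing 100 targets feasible: each doubling of the memory set adds only a constant increment to RT, whereas each added display item adds a fixed per-item cost.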

  12. Visual-search observers for SPECT simulations with clinical backgrounds

    NASA Astrophysics Data System (ADS)

    Gifford, Howard C.

    2016-03-01

    The purpose of this work was to test the ability of visual-search (VS) model observers to predict the lesion-detection performance of human observers with hybrid SPECT images. These images consist of clinical backgrounds with simulated abnormalities. The application of existing scanning model observers to hybrid images is complicated by the need for extensive statistical information, whereas VS models based on separate search and analysis processes may operate with reduced knowledge. A localization ROC (LROC) study involved the detection and localization of solitary pulmonary nodules in Tc-99m lung images. The study was aimed at optimizing the number of iterations and the postfiltering of four rescaled block-iterative reconstruction strategies. These strategies implemented different combinations of attenuation correction, scatter correction, and detector resolution correction. For a VS observer in this study, the search and analysis processes were guided by a single set of base morphological features derived from knowledge of the lesion profile. One base set used difference-of-Gaussian channels while a second base set implemented spatial derivatives in combination with the Burgess eye filter. A feature-adaptive VS observer selected features of interest for a given image set on the basis of training-set performance. A comparison of the feature-adaptive observer results against previously acquired human-observer data is presented.

  13. Toward unsupervised outbreak detection through visual perception of new patterns

    PubMed Central

    Lévy, Pierre P; Valleron, Alain-Jacques

    2009-01-01

    Background Statistical algorithms are routinely used to detect outbreaks of well-defined syndromes, such as influenza-like illness. These methods cannot be applied to the detection of emerging diseases for which no preexisting information is available. This paper presents a method aimed at facilitating the detection of outbreaks, when there is no a priori knowledge of the clinical presentation of cases. Methods The method uses a visual representation of the symptoms and diseases coded during a patient consultation according to the International Classification of Primary Care 2nd version (ICPC-2). The surveillance data are transformed into color-coded cells, ranging from white to red, reflecting the increasing frequency of observed signs. They are placed in a graphic reference frame mimicking body anatomy. Simple visual observation of color-change patterns over time, concerning a single code or a combination of codes, enables detection in the setting of interest. Results The method is demonstrated through retrospective analyses of two data sets: description of the patients referred to the hospital by their general practitioners (GPs) participating in the French Sentinel Network and description of patients directly consulting at a hospital emergency department (HED). Informative image color-change alert patterns emerged in both cases: the health consequences of the August 2003 heat wave were visualized with GPs' data (but passed unnoticed with conventional surveillance systems), and the flu epidemics, which are routinely detected by standard statistical techniques, were recognized visually with HED data. Conclusion Using human visual pattern-recognition capacities to detect the onset of unexpected health events implies a convenient image representation of epidemiological surveillance and well-trained "epidemiology watchers". Once these two conditions are met, one could imagine that the epidemiology watchers could signal epidemiological alerts based on "image walls".
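
    The white-to-red colour coding described above can be sketched as a linear ramp from white (rare) to saturated red (frequent). The RGB endpoints and the linear scale are assumptions for illustration; the authors' exact colour scale is not specified here.

```python
def frequency_to_rgb(count, max_count):
    """Map an observed sign frequency to a white-to-red cell colour.
    A minimal sketch of the colour coding described above; the linear
    ramp and RGB endpoints are assumptions, not the authors' scale."""
    if max_count == 0:
        return (255, 255, 255)  # no data: leave the cell white
    ratio = min(count / max_count, 1.0)
    # White (255, 255, 255) fades to pure red (255, 0, 0) as frequency rises.
    channel = round(255 * (1.0 - ratio))
    return (255, channel, channel)
```

    A grid of such cells, arranged in the anatomical reference frame, would let an "epidemiology watcher" spot an unusual cluster of reddening codes at a glance.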

  14. Pattern-visual evoked potentials in thinner abusers.

    PubMed

    Poblano, A; Lope Huerta, M; Martínez, J M; Falcón, H D

    1996-01-01

    Organic solvents cause injury to lipids of neuronal and glial membranes. A well-known characteristic of workers exposed to thinner is optic neuropathy. We decided to look for neurophysiologic signs of visual damage in patients identified as thinner abusers. Pattern-reversal visual evoked potential testing was performed on 34 thinner abuser patients and 30 controls. P-100 wave latency was found to be longer in abusers than in control subjects. The results show the possibility of central alterations in thinner abusers despite the absence of clinical symptoms. PMID:8987190

  15. Relationships among balance, visual search, and lacrosse-shot accuracy.

    PubMed

    Marsh, Darrin W; Richard, Leon A; Verre, Arlene B; Myers, Jay

    2010-06-01

    The purpose of this study was to examine variables that may contribute to shot accuracy in women's college lacrosse. A convenience sample of 15 healthy women's National Collegiate Athletic Association Division III College lacrosse players aged 18-23 (mean+/-SD, 20.27+/-1.67) participated in the study. Four experimental variables were examined: balance, visual search, hand grip strength, and shoulder joint position sense. Balance was measured by the Biodex Stability System (BSS), and visual search was measured by the Trail-Making Test Part A (TMTA) and Trail-Making Test Part B (TMTB). Hand-grip strength was measured by a standard hand dynamometer, and shoulder joint position sense was measured using a modified inclinometer. All measures were taken in an indoor setting. These experimental variables were then compared with lacrosse-shot error that was measured indoors using a high-speed video camera recorder and a specialized L-shaped apparatus. A Stalker radar gun measured lacrosse-shot velocity. The mean lacrosse-shot error was 15.17 cm with a mean lacrosse-shot velocity of 17.14 m/s (38.35 mph). Lower scores on the BSS level 8 eyes open (BSS L8 E/O) test and TMTB were positively related to less lacrosse-shot error (r=0.760, p=0.011) and (r=0.519, p=0.048), respectively. Relations were not significant between lacrosse-shot error and grip strength (r=0.191, p = 0.496), lacrosse-shot error and BSS level 8 eyes closed (BSS L8 E/C) (r=0.501, p=0.102), lacrosse-shot error and BSS level 4 eyes open (BSS L4 E/O) (r=0.313, p=0.378), lacrosse-shot error and BSS level 4 eyes closed (BSS L4 E/C) (r=-0.029, p=0.936), lacrosse-shot error and shoulder joint position sense (r=-0.509, p=0.055), and between lacrosse-shot error and TMTA (r=0.375, p=0.168). The results reveal that greater levels of shot accuracy may be related to greater levels of visual search and balance ability in women's college lacrosse athletes.

  16. Age-related changes in conjunctive visual search in children with and without ASD.

    PubMed

    Iarocci, Grace; Armstrong, Kimberly

    2014-04-01

    Visual-spatial strengths observed among people with autism spectrum disorder (ASD) may be associated with increased efficiency of selective attention mechanisms such as visual search. In a series of studies, researchers examined the visual search of targets that share features with distractors in a visual array and concluded that people with ASD showed enhanced performance on visual search tasks. However, methodological limitations, the small sample sizes, and the lack of developmental analysis have tempered the interpretations of these results. In this study, we specifically addressed age-related changes in visual search. We examined conjunctive visual search in groups of children with (n = 34) and without ASD (n = 35) at 7-9 years of age when visual search performance is beginning to improve, and later, at 10-12 years, when performance has improved. The results were consistent with previous developmental findings; 10- to 12-year-old children were significantly faster visual searchers than their 7- to 9-year-old counterparts. However, we found no evidence of enhanced search performance among the children with ASD at either the younger or older ages. More research is needed to understand the development of visual search in both children with and without ASD.

  17. Is a search template an ordinary working memory? Comparing electrophysiological markers of working memory maintenance for visual search and recognition.

    PubMed

    Gunseli, Eren; Meeter, Martijn; Olivers, Christian N L

    2014-07-01

    Visual search requires the maintenance of a search template in visual working memory in order to guide attention towards the target. This raises the question whether a search template is essentially the same as a visual working memory representation used in tasks that do not require attentional guidance, or whether it is a qualitatively different representation. Two experiments tested this by comparing electrophysiological markers of visual working memory maintenance between simple recognition and search tasks. For both experiments, responses were less rapid and less accurate in the search task than in simple recognition. Nevertheless, the contralateral delay activity (CDA), an index of quantity and quality of visual working memory representations, was equal across tasks. On the other hand, the late positive complex (LPC), which is sensitive to the effort invested in visual working memory maintenance, was greater for the search task than the recognition task. Additionally, when the same target cue was repeated across trials (Experiment 2), the amplitude of visual working memory markers (both CDA and LPC) decreased, demonstrating learning of the target at an equal rate for both tasks. Our results suggest that a search template is qualitatively the same as a representation used for simple recognition, but greater effort is invested in its maintenance.

  18. Task Specificity and the Influence of Memory on Visual Search: Comment on Vo and Wolfe (2012)

    ERIC Educational Resources Information Center

    Hollingworth, Andrew

    2012-01-01

    Recent results from Vo and Wolfe (2012b) suggest that the application of memory to visual search may be task specific: Previous experience searching for an object facilitated later search for that object, but object information acquired during a different task did not appear to transfer to search. The latter inference depended on evidence that a…

  19. The pattern visual evoked potential and pattern electroretinogram in drusen-associated optic neuropathy.

    PubMed

    Scholl, G B; Song, H S; Winkler, D E; Wray, S H

    1992-01-01

    Sixteen patients (29 eyes) with optic disc drusen were studied prospectively for clinical and electrophysiologic evidence of impaired optic nerve conduction. Abnormalities were detected in the following areas: visual acuity, eight (28%) of 29 eyes; kinetic visual field, 22 (76%) of 29 eyes; results of Farnsworth-Munsell 100-Hue test, 12 (41%) of 29 eyes; and flash visual evoked potential, 13 (54%) of 24 eyes. Simultaneous pattern visual evoked potentials and results of pattern electroretinograms were recorded. The P100 latency of the pattern visual evoked potential was prolonged in 41% of eyes. The P50 and N95 components of the pattern electroretinogram were also analyzed. The P50 amplitude was reduced in only four (17%) of 24 eyes. The most common abnormality was a reduction in amplitude or the absence of the N95 component in 19 (79%) of 24 eyes, reflecting ganglion cell dysfunction. The data support mounting evidence that the P50 and N95 components of the pattern electroretinogram have different retinal origins. PMID:1731726

  20. Searching for the right word: Hybrid visual and memory search for words.

    PubMed

    Boettcher, Sage E P; Wolfe, Jeremy M

    2015-05-01

    In "hybrid search" (Wolfe Psychological Science, 23(7), 698-703, 2012), observers search through visual space for any of multiple targets held in memory. With photorealistic objects as the stimuli, response times (RTs) increase linearly with the visual set size and logarithmically with the memory set size, even when over 100 items are committed to memory. It is well-established that pictures of objects are particularly easy to memorize (Brady, Konkle, Alvarez, & Oliva Proceedings of the National Academy of Sciences, 105, 14325-14329, 2008). Would hybrid-search performance be similar if the targets were words or phrases, in which word order can be important and the processes of memorization might be different? In Experiment 1, observers memorized 2, 4, 8, or 16 words in four different blocks. After passing a memory test, confirming their memorization of the list, the observers searched for these words in visual displays containing two to 16 words. Replicating Wolfe (Psychological Science, 23(7), 698-703, 2012), the RTs increased linearly with the visual set size and logarithmically with the length of the word list. The word lists of Experiment 1 were random. In Experiment 2, words were drawn from phrases that observers reported knowing by heart (e.g., "London Bridge is falling down"). Observers were asked to provide four phrases, ranging in length from two words to no less than 20 words (range 21-86). All words longer than two characters from the phrase constituted the target list. Distractor words were matched for length and frequency. Even with these strongly ordered lists, the results again replicated the curvilinear function of memory set size seen in hybrid search. One might expect to find serial position effects, perhaps reducing the RTs for the first (primacy) and/or the last (recency) members of a list (Atkinson & Shiffrin, 1968; Murdock Journal of Experimental Psychology, 64, 482-488, 1962). Surprisingly, we showed no reliable effects of word order.

  1. Patterns in the sky: Natural visualization of aircraft flow fields

    NASA Technical Reports Server (NTRS)

    Campbell, James F.; Chambers, Joseph R.

    1994-01-01

    The objective of the current publication is to present the collection of flight photographs to illustrate the types of flow patterns that were visualized and to present qualitative correlations with computational and wind tunnel results. Initially in section 2, the condensation process is discussed, including a review of relative humidity, vapor pressure, and factors which determine the presence of visible condensate. Next, outputs from computer code calculations are postprocessed by using water-vapor relationships to determine if computed values of relative humidity in the local flow field correlate with the qualitative features of the in-flight condensation patterns. The photographs are then presented in section 3 by flow type and subsequently in section 4 by aircraft type to demonstrate the variety of condensed flow fields that was visualized for a wide range of aircraft and flight maneuvers.

  2. Enhanced Visual Search in Infancy Predicts Emerging Autism Symptoms

    PubMed Central

    Gliga, Teodora; Bedford, Rachael; Charman, Tony; Johnson, Mark H.; Baron-Cohen, Simon; Bolton, Patrick; Cheung, Celeste; Davies, Kim; Liew, Michelle; Fernandes, Janice; Gammer, Issy; Maris, Helen; Salomone, Erica; Pasco, Greg; Pickles, Andrew; Ribeiro, Helena; Tucker, Leslie

    2015-01-01

    Summary In addition to core symptoms, i.e., social interaction and communication difficulties and restricted and repetitive behaviors, autism is also characterized by aspects of superior perception [1]. One well-replicated finding is that of superior performance in visual search tasks, in which participants have to indicate the presence of an odd-one-out element among a number of foils [2–5]. Whether these aspects of superior perception contribute to the emergence of core autism symptoms remains debated [4, 6]. Perceptual and social interaction atypicalities could reflect co-expressed but biologically independent pathologies, as suggested by a “fractionable” phenotype model of autism [7]. A developmental test of this hypothesis is now made possible by longitudinal cohorts of infants at high risk, such as those of younger siblings of children with autism spectrum disorder (ASD). Around 20% of younger siblings are diagnosed with autism themselves [8], and up to another 30% manifest elevated levels of autism symptoms [9]. We used eye tracking to measure spontaneous orienting to letter targets (O, S, V, and +) presented among distractors (the letter X; Figure 1). At 9 and 15 months, emerging autism symptoms were assessed using the Autism Observation Scale for Infants (AOSI; [10]), and at 2 years of age, they were assessed using the Autism Diagnostic Observation Schedule (ADOS; [11]). Enhanced visual search performance at 9 months predicted a higher level of autism symptoms at 15 months and at 2 years. Infant perceptual atypicalities are thus intrinsically linked to the emerging autism phenotype. PMID:26073135

  3. Recovery of Visual Search following Moderate to Severe Traumatic Brain Injury

    PubMed Central

    Schmitter-Edgecombe, Maureen; Robertson, Kayela

    2015-01-01

    Introduction Deficits in attentional abilities can significantly impact rehabilitation and recovery from traumatic brain injury (TBI). This study investigated the nature and recovery of pre-attentive (parallel) and attentive (serial) visual search abilities after TBI. Methods Participants were 40 individuals with moderate to severe TBI who were tested following emergence from post-traumatic amnesia and approximately 8-months post-injury, as well as 40 age- and education-matched controls. Pre-attentive (automatic) and attentive (controlled) visual search situations were created by manipulating the saliency of the target item amongst distractor items in visual displays. The relationship between pre-attentive and attentive visual search rates and follow-up community integration were also explored. Results The results revealed intact parallel (automatic) processing skills in the TBI group both post-acutely and at follow-up. In contrast, when attentional demands on visual search were increased by reducing the saliency of the target, the TBI group demonstrated poorer performances compared to the control group both post-acutely and 8-months post-injury. Neither pre-attentive nor attentive visual search slope values correlated with follow-up community integration. Conclusions These results suggest that utilizing intact pre-attentive visual search skills during rehabilitation may help to reduce high mental workload situations, thereby improving the rehabilitation process. For example, making commonly used objects more salient in the environment should increase reliance on more automatic visual search processes and reduce visual search time for individuals with TBI. PMID:25671675

  4. SNP ID-info: SNP ID searching and visualization platform.

    PubMed

    Yang, Cheng-Hong; Chuang, Li-Yeh; Cheng, Yu-Huei; Wen, Cheng-Hao; Chang, Phei-Lang; Chang, Hsueh-Wei

    2008-09-01

    Many association studies report relationships between single nucleotide polymorphisms (SNPs), diseases, and cancers, but often without giving a SNP ID. Here, we developed the SNP ID-info freeware to provide SNP IDs from input genetic and physical genome information. The program provides an "SNP-ePCR" function to generate the full sequence using primer and template inputs. In "SNPosition," the sequence from SNP-ePCR or direct input is matched against SNP IDs from SNP fasta sequences. In the "SNP search" and "SNP fasta" functions, cytogenetic band, contig position, and keyword inputs are accepted. Finally, the SNP ID neighboring environment for the inputs is fully visualized in order of contig position and marked with SNP and flanking hits. The SNP identification problems inherent in NCBI SNP BLAST are also avoided. In conclusion, SNP ID-info provides a visualized SNP ID environment for multiple inputs and assists systematic SNP association studies. The server and user manual are available at http://bio.kuas.edu.tw/snpid-info.

  5. Role of computer-assisted visual search in mammographic interpretation

    NASA Astrophysics Data System (ADS)

    Nodine, Calvin F.; Kundel, Harold L.; Mello-Thoms, Claudia; Weinstein, Susan P.

    2001-06-01

    We used eye-position data to develop Computer-Assisted Visual Search (CAVS) as an aid to mammographic interpretation. CAVS feeds back regions of interest that receive prolonged visual dwell (greater than or equal to 1000 ms) by highlighting them on the mammogram. These regions are then reevaluated for possible missed breast cancers. Six radiology residents and fellows interpreted a test set of 40 mammograms twice, once with CAVS feedback (FB), and once without CAVS FB in a crossover, repeated-measures design. Eye position was monitored. LROC performance (area) was compared with and without CAVS FB. Detection and localization of malignant lesions improved 12% with CAVS FB. This was not significant. The test set contained subtle malignant lesions. 65% (176/272) of true lesions were fixated. Of those fixated, 49% (87/176) received prolonged attention resulting in CAVS FB, and 54% (47/87) of FBs resulted in TPs. Test-set difficulty and the lack of reading experience of the readers may have contributed to the relatively low overall performance, and may have also limited the effectiveness of CAVS FB which could only play a role in localizing potential lesions if the reader fixated and dwelled on them.
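
    The dwell-based feedback rule above (highlight regions that accumulate at least 1000 ms of visual dwell) can be sketched as follows. Representing fixations as (region, duration) pairs is a simplifying assumption; the actual CAVS system works from raw eye-position data clustered on the mammogram.

```python
def prolonged_dwell_regions(fixations, threshold_ms=1000):
    """Aggregate fixation durations per image region and return the
    regions whose cumulative dwell meets the feedback threshold
    (>= 1000 ms in the study). The (region, duration) representation
    is a sketch, not the CAVS implementation."""
    dwell = {}
    for region, duration_ms in fixations:
        dwell[region] = dwell.get(region, 0) + duration_ms
    return sorted(r for r, total in dwell.items() if total >= threshold_ms)

# Regions "A" (400 + 700 ms) and "C" (1200 ms) would be highlighted
# for re-evaluation; "B" (300 ms) would not.
flagged = prolonged_dwell_regions(
    [("A", 400), ("B", 300), ("A", 700), ("C", 1200)])
```

    Note that dwell is accumulated across repeat visits to a region, which matches the idea that prolonged attention, even if interrupted, marks a candidate lesion.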

  6. Case role filling as a side effect of visual search

    SciTech Connect

    Marburger, H.; Wahlster, W.

    1983-01-01

    This paper addresses the problem of generating communicatively adequate extended responses in the absence of specific knowledge concerning the intentions of the questioner. The authors formulate and justify a heuristic for the selection of optional deep case slots not contained in the question as candidates for the additional information contained in an extended response. It is shown that, in a visually present domain of discourse, case role filling for the construction of an extended response can be regarded as a side effect of the visual search necessary to answer a question containing a locomotion verb. The paper describes the various representation constructs used in the German language dialog system HAM-ANS for dealing with the semantics of locomotion verbs and illustrates their use in generating extended responses. In particular, it outlines the structure of the geometrical scene description, the representation of events in a logic-oriented semantic representation language, the case-frame lexicon and the representation of the referential semantics based on the flavor system. The emphasis is on a detailed presentation of the application of object-oriented programming methods for coping with the semantics of locomotion verbs. The process of generating an extended response is illustrated by an extensively annotated trace.

  7. Expectations developed over multiple timescales facilitate visual search performance

    PubMed Central

    Gekas, Nikos; Seitz, Aaron R.; Seriès, Peggy

    2015-01-01

    Our perception of the world is strongly influenced by our expectations, and a question of key importance is how the visual system develops and updates its expectations through interaction with the environment. We used a visual search task to investigate how expectations of different timescales (from the last few trials to hours to long-term statistics of natural scenes) interact to alter perception. We presented human observers with low-contrast white dots at 12 possible locations equally spaced on a circle, and we asked them to simultaneously identify the presence and location of the dots while manipulating their expectations by presenting stimuli at some locations more frequently than others. Our findings suggest that there are strong acuity differences between absolute target locations (e.g., horizontal vs. vertical) and preexisting long-term biases influencing observers' detection and localization performance, respectively. On top of these, subjects quickly learned about the stimulus distribution, which improved their detection performance but caused increased false alarms at the most frequently presented stimulus locations. Recent exposure to a stimulus resulted in significantly improved detection performance and significantly more false alarms but only at locations at which it was more probable that a stimulus would be presented. Our results can be modeled and understood within a Bayesian framework in terms of a near-optimal integration of sensory evidence with rapidly learned statistical priors, which are skewed toward the very recent history of trials and may help in understanding the time scale over which expectations develop at the neural level. PMID:26200891
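
    The Bayesian account described above (near-optimal integration of sensory evidence with learned statistical priors over the 12 display locations) can be illustrated with Bayes' rule. The numbers below are illustrative assumptions, not the study's data or model parameters.

```python
def posterior_over_locations(likelihoods, prior):
    """Bayes' rule over candidate target locations: combine sensory
    evidence (likelihoods) with a learned prior. A minimal sketch of
    the near-optimal integration the authors describe."""
    unnormalised = [l * p for l, p in zip(likelihoods, prior)]
    total = sum(unnormalised)
    return [u / total for u in unnormalised]

# Weak sensory evidence at location 0 among 12 locations (as in the display).
likelihoods = [2.0] + [1.0] * 11
uniform = [1 / 12] * 12
skewed = [0.5] + [0.5 / 11] * 11   # location 0 presented more often

post_uniform = posterior_over_locations(likelihoods, uniform)
post_skewed = posterior_over_locations(likelihoods, skewed)
```

    The same weak evidence yields a much higher posterior at the frequently presented location under the skewed prior, mirroring the paper's finding that learned priors improve detection there while also inflating false alarms.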

  8. Characterization of Visual Scanning Patterns in Air Traffic Control.

    PubMed

    McClung, Sarah N; Kang, Ziho

    2016-01-01

    Characterization of air traffic controllers' (ATCs') visual scanning strategies is a challenging issue due to the dynamic movement of multiple aircraft and increasing complexity of scanpaths (order of eye fixations and saccades) over time. Additionally, terminologies and methods are lacking to accurately characterize the eye tracking data into simplified visual scanning strategies linguistically expressed by ATCs. As an intermediate step to automate the characterization classification process, we (1) defined and developed new concepts to systematically filter complex visual scanpaths into simpler and more manageable forms and (2) developed procedures to map visual scanpaths with linguistic inputs to reduce the human judgement bias during interrater agreement. The developed concepts and procedures were applied to investigating the visual scanpaths of expert ATCs using scenarios with different aircraft congestion levels. Furthermore, oculomotor trends were analyzed to identify the influence of aircraft congestion on scan time and number of comparisons among aircraft. The findings show that (1) the scanpaths filtered at the highest intensity led to more consistent mapping with the ATCs' linguistic inputs, (2) the pattern classification occurrences differed between scenarios, and (3) increasing aircraft congestion caused increased scan times and aircraft pairwise comparisons. The results provide a foundation for better characterizing complex scanpaths in a dynamic task and automating the analysis process. PMID:27239190
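
    One elementary example of the kind of scanpath filtering described above is collapsing consecutive fixations on the same area of interest (e.g., the same aircraft) into a single visit. This particular rule is an illustration of the idea only; the paper's filtering procedure operates at multiple intensities and is more elaborate.

```python
def collapse_scanpath(scanpath):
    """Collapse consecutive fixations on the same area of interest (AOI)
    into one visit, reducing a long scanpath to a simpler, more
    manageable form. A sketch of one plausible filtering step, not the
    authors' exact algorithm."""
    simplified = []
    for aoi in scanpath:
        if not simplified or simplified[-1] != aoi:
            simplified.append(aoi)
    return simplified

# Five raw fixations reduce to three AOI visits, preserving the
# order of comparisons among aircraft.
visits = collapse_scanpath(["AC1", "AC1", "AC2", "AC2", "AC1"])
```

    Simplified visit sequences like this are easier to map onto the linguistic scanning-strategy descriptions given by the controllers.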

  9. Effect of verbal instructions and image size on visual search strategies in basketball free throw shooting.

    PubMed

    Al-Abood, Saleh A; Bennett, Simon J; Hernandez, Francisco Moreno; Ashford, Derek; Davids, Keith

    2002-03-01

    We assessed the effects on basketball free throw performance of two types of verbal directions with an external attentional focus. Novices (n = 16) were pre-tested on free throw performance and assigned to two groups of similar ability (n = 8 in each). Both groups received verbal instructions with an external focus on either movement dynamics (movement form) or movement effects (e.g. ball trajectory relative to basket). The participants also observed a skilled model performing the task on either a small or large screen monitor, to ascertain the effects of visual presentation mode on task performance. After observation of six videotaped trials, all participants were given a post-test. Visual search patterns were monitored during observation and cross-referenced with performance on the pre- and post-test. Group effects were noted for verbal instructions and image size on visual search strategies and free throw performance. The 'movement effects' group saw a significant improvement in outcome scores between the pre-test and post-test. These results were consistent with evidence that this group spent more viewing time on information outside the body than the 'movement dynamics' group. Image size affected both groups equally, with more fixations of shorter duration when viewing the small screen. The results support the benefits of instructions when observing a model with an external focus on movement effects, not dynamics.

  10. Efficient visual-search model observers for PET

    PubMed Central

    2014-01-01

    Objective: Scanning model observers have been efficiently applied as a research tool to predict human-observer performance in F-18 positron emission tomography (PET). We investigated whether a visual-search (VS) observer could provide more reliable predictions with comparable efficiency. Methods: Simulated two-dimensional images of a digital phantom featuring tumours in the liver, lungs and background soft tissue were prepared in coronal, sagittal and transverse display formats. A localization receiver operating characteristic (LROC) study quantified tumour detectability as a function of organ and format for two human observers, a channelized non-prewhitening (CNPW) scanning observer and two versions of a basic VS observer. The VS observers compared watershed (WS) and gradient-based search processes that identified focal uptake points for subsequent analysis with the CNPW observer. The model observers treated “background-known-exactly” (BKE) and “background-assumed-homogeneous” assumptions, either searching the entire organ of interest (Task A) or a reduced area that helped limit false positives (Task B). Performance was indicated by area under the LROC curve. Concordance in the localizations between observers was also analysed. Results: With the BKE assumption, both VS observers demonstrated consistent Pearson correlation with humans (Task A: 0.92 and Task B: 0.93) compared with the scanning observer (Task A: 0.77 and Task B: 0.92). The WS VS observer read 624 study test images in 2.0 min. The scanning observer required 0.7 min. Conclusion: Computationally efficient VS can enhance the stability of statistical model observers with regard to uncertainties in PET tumour detection tasks. Advances in knowledge: VS models improve concordance with human observers. PMID:24837105

  11. Spatial and temporal dynamics of visual search tasks distinguish subtypes of unilateral spatial neglect: Comparison of two cases with viewer-centered and stimulus-centered neglect.

    PubMed

    Mizuno, Katsuhiro; Kato, Kenji; Tsuji, Tetsuya; Shindo, Keiichiro; Kobayashi, Yukiko; Liu, Meigen

    2016-08-01

    We developed a computerised test to evaluate unilateral spatial neglect (USN) using a touchscreen display, and estimated the spatial and temporal patterns of visual search in USN patients. The results between a viewer-centered USN patient and a stimulus-centered USN patient were compared. Two right-brain-damaged patients with USN, a patient without USN, and 16 healthy subjects performed a simple cancellation test, the circle test, a visuomotor search test, and a visual search test. According to the results of the circle test, one USN patient had stimulus-centered neglect and one had viewer-centered neglect. The spatial and temporal patterns of these two USN patients were compared. The spatial and temporal patterns of cancellation were different in the stimulus-centered USN patient and the viewer-centered USN patient. The viewer-centered USN patient completed the simple cancellation task, but paused when transferring from the right side to the left side of the display. Unexpectedly, this patient did not exhibit rightward attention bias on the visuomotor and visual search tests, but the stimulus-centered USN patient did. The computer-based assessment system provided information on the dynamic visual search strategy of patients with USN. The spatial and temporal pattern of cancellation and visual search were different across the two patients with different subtypes of neglect.

  12. Spontaneous pattern formation and pinning in the visual cortex

    NASA Astrophysics Data System (ADS)

    Baker, Tanya I.

    Bifurcation theory and perturbation theory can be combined with a knowledge of the underlying circuitry of the visual cortex to produce an elegant story explaining the phenomenon of visual hallucinations. A key insight is the application of an important set of ideas concerning spontaneous pattern formation introduced by Turing in 1952. The basic mechanism is a diffusion driven linear instability favoring a particular wavelength that determines the size of the ensuing stripe or spot periodicity of the emerging spatial pattern. Competition between short range excitation and longer range inhibition in the connectivity profile of cortical neurons provides the difference in diffusion length scales necessary for the Turing mechanism to occur and has been proven by Ermentrout and Cowan to be sufficient to explain the generation of a subset of reported geometric hallucinations. Incorporating further details of the cortical circuitry, namely that neurons are also weakly connected to other neurons sharing a particular stimulus orientation or spatial frequency preference at even longer ranges and the resulting shift-twist symmetry of the neuronal connectivity, improves the story. We expand this approach in order to be able to include the tuned responses of cortical neurons to additional visual stimulus features such as motion, color and disparity. We apply a study of nonlinear dynamics similar to the analysis of wave propagation in a crystalline lattice to demonstrate how a spatial pattern formed through the Turing instability can be pinned to the geometric layout of various feature preferences. The perturbation analysis is analogous to solving the Schrödinger equation in a weak periodic potential. Competition between the local isotropic connections which produce patterns of activity via the Turing mechanism and the weaker patchy lateral connections that depend on a neuron's particular set of feature preferences create long-wavelength effects analogous to commensurate pinning.
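
    The diffusion-driven (Turing) instability described above can be illustrated with the dispersion relation of a linearised two-component reaction-diffusion system: the homogeneous state is stable to uniform perturbations, yet an intermediate band of finite wavelengths grows. The parameter values below are a generic activator-inhibitor example chosen for illustration, not a model of cortical circuitry; the slowly diffusing activator and rapidly diffusing inhibitor play the role of short-range excitation and longer-range inhibition.

```python
import math

def growth_rate(k, a=1.0, b=-1.0, c=2.0, d=-1.5, du=1.0, dv=10.0):
    """Real part of the largest eigenvalue of J - k^2 D, the linearised
    reaction-diffusion operator for spatial mode k, where J = [[a, b],
    [c, d]] is the reaction Jacobian and D = diag(du, dv). Illustrative
    activator-inhibitor parameters, not a fitted cortical model."""
    a_k = a - du * k**2
    d_k = d - dv * k**2
    tr = a_k + d_k
    det = a_k * d_k - b * c
    disc = tr**2 - 4.0 * det
    if disc >= 0:
        return 0.5 * (tr + math.sqrt(disc))
    return 0.5 * tr  # complex-conjugate pair: common real part

# The uniform mode (k = 0) decays, an intermediate band of wavenumbers
# grows, and very short wavelengths decay again -- the Turing mechanism
# selecting a finite pattern wavelength.
```

    The fastest-growing wavenumber in the unstable band sets the stripe or spot spacing of the emerging pattern, which is the wavelength-selection step the abstract refers to.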

  13. Task specificity and the influence of memory on visual search: comment on Võ and Wolfe (2012).

    PubMed

    Hollingworth, Andrew

    2012-12-01

    Recent results from Võ and Wolfe (2012b) suggest that the application of memory to visual search may be task specific: Previous experience searching for an object facilitated later search for that object, but object information acquired during a different task did not appear to transfer to search. The latter inference depended on evidence that a preview task did not improve later search, but Võ and Wolfe used a relatively insensitive, between-subjects design. Here, we replicated the Võ and Wolfe study using a within-subject manipulation of scene preview. A preview session (focused either on object location memory or on the assessment of object semantics) reliably facilitated later search. In addition, information acquired from distractors in a scene facilitated search when the distractor later became the target. Instead of being strongly constrained by task, visual memory is applied flexibly to guide attention and gaze during visual search.

  14. Is There a Limit to the Superiority of Individuals with ASD in Visual Search?

    ERIC Educational Resources Information Center

    Hessels, Roy S.; Hooge, Ignace T. C.; Snijders, Tineke M.; Kemner, Chantal

    2014-01-01

    Superiority in visual search for individuals diagnosed with autism spectrum disorder (ASD) is a well-reported finding. We administered two visual search tasks to individuals with ASD and matched controls. One showed no difference between the groups, and one did show the expected superior performance for individuals with ASD. These results offer an…

  15. Toddlers with Autism Spectrum Disorder Are More Successful at Visual Search than Typically Developing Toddlers

    ERIC Educational Resources Information Center

    Kaldy, Zsuzsa; Kraper, Catherine; Carter, Alice S.; Blaser, Erik

    2011-01-01

    Plaisted, O'Riordan and colleagues (Plaisted, O'Riordan & Baron-Cohen, 1998; O'Riordan, 2004) showed that school-age children and adults with Autism Spectrum Disorder (ASD) are faster at finding targets in certain types of visual search tasks than typical controls. Currently though, there is very little known about the visual search skills of very…

  16. Visual search disorders in acute and chronic homonymous hemianopia: lesion effects and adaptive strategies.

    PubMed

    Machner, Björn; Sprenger, Andreas; Sander, Thurid; Heide, Wolfgang; Kimmig, Hubert; Helmchen, Christoph; Kömpf, Detlef

    2009-05-01

    Patients with homonymous hemianopia due to occipital brain lesions show disorders of visual search. In everyday life this leads to difficulties in reading and spatial orientation. It is a matter of debate whether these disorders are due to the brain lesion or rather reflect compensatory eye movement strategies developing over time. For the first time, eye movements of acute hemianopic patients (n = 9) were recorded during the first days following stroke while they performed an exploratory visual-search task. Compared to age-matched control subjects, their search duration was prolonged due to increased fixations and refixations, that is, repeated scanning of previously searched locations. Saccadic amplitudes were smaller in patients. Right hemianopic patients were more impaired than left hemianopic patients. The number of fixations and refixations did not differ significantly between the two hemifields in the patients. Follow-up of one patient revealed changes in visual search over 18 months. By using more structured scanpaths with fewer saccades, his search duration decreased. Furthermore, he developed a more efficient eye-movement strategy by making larger but less frequent saccades toward his blind side. In summary, visual-search behavior of acute hemianopic patients differs from that of healthy control subjects and of chronic hemianopic patients. We conclude that abnormal visual search in acute hemianopic patients is related to the brain lesion. We provide some evidence for adaptive eye-movement strategies developed over time. These adaptive strategies make visual search more efficient and may help to compensate for the persisting visual-field loss.

  17. Visualization of oxygen distribution patterns caused by coral and algae.

    PubMed

    Haas, Andreas F; Gregg, Allison K; Smith, Jennifer E; Abieri, Maria L; Hatay, Mark; Rohwer, Forest

    2013-01-01

    Planar optodes were used to visualize oxygen distribution patterns associated with a coral-reef-associated green alga (Chaetomorpha sp.) and a hermatypic coral (Favia sp.) separately, as standalone organisms, and placed in close proximity mimicking coral-algal interactions. Oxygen patterns were assessed in light and dark conditions and under varying flow regimes. The images show discrete high oxygen concentration regions above the organisms during lighted periods and low oxygen in the dark. Size and orientation of these areas were dependent on flow regime. For corals and algae in close proximity, the 2D optodes show areas of extremely low oxygen concentration at the interaction interfaces under both dark (18.4 ± 7.7 µmol O2 L(-1)) and daylight (97.9 ± 27.5 µmol O2 L(-1)) conditions. These images present the first two-dimensional visualization of oxygen gradients generated by benthic reef algae and corals under varying flow conditions and provide a 2D depiction of previously observed hypoxic zones at coral-algae interfaces. This approach allows for visualization of locally confined, distinctive alterations of oxygen concentrations facilitated by benthic organisms and provides compelling evidence for hypoxic conditions at coral-algae interaction zones.

  18. The effect of search condition and advertising type on visual attention to Internet advertising.

    PubMed

    Kim, Gho; Lee, Jang-Han

    2011-05-01

    This research was conducted to examine the level of consumers' visual attention to Internet advertising. It was predicted that consumers' search type would influence visual attention to advertising. Specifically, it was predicted that more attention to advertising would be attracted in the exploratory search condition than in the goal-directed search condition. It was also predicted that there would be a difference in visual attention depending on the advertisement type (advertising type: text vs. pictorial advertising). An eye tracker was used for measurement. Results revealed that search condition and advertising type influenced advertising effectiveness. PMID:20973730

  20. Visual Search Revived: The Slopes Are Not That Slippery: A Reply to Kristjansson (2015)

    PubMed Central

    2016-01-01

    Kristjansson (2015) suggests that standard research methods in the study of visual search should be “reconsidered.” He reiterates a useful warning against treating reaction time × set size functions as simple metrics that can be used to label search tasks as “serial” or “parallel.” However, I argue that he goes too far with a broad attack on the use of slopes in the study of visual search. Used wisely, slopes do provide us with insight into the mechanisms of visual search. PMID:27433330

  1. CiteRivers: Visual Analytics of Citation Patterns.

    PubMed

    Heimerl, Florian; Han, Qi; Koch, Steffen; Ertl, Thomas

    2016-01-01

    The exploration and analysis of scientific literature collections is an important task for effective knowledge management. Past interest in such document sets has spurred the development of numerous visualization approaches for their interactive analysis. They either focus on the textual content of publications, or on document metadata including authors and citations. Previously presented approaches for citation analysis aim primarily at the visualization of the structure of citation networks and their exploration. We extend the state-of-the-art by presenting an approach for the interactive visual analysis of the contents of scientific documents, and combine it with a new and flexible technique to analyze their citations. This technique facilitates user-steered aggregation of citations which are linked to the content of the citing publications using a highly interactive visualization approach. Through enriching the approach with additional interactive views of other important aspects of the data, we support the exploration of the dataset over time and enable users to analyze citation patterns, spot trends, and track long-term developments. We demonstrate the strengths of our approach through a use case and discuss it based on expert user feedback.

  2. The role of object categories in hybrid visual and memory search

    PubMed Central

    Cunningham, Corbin A.; Wolfe, Jeremy M.

    2014-01-01

    In hybrid search, observers (Os) search for any of several possible targets in a visual display containing distracting items and, perhaps, a target. Wolfe (2012) found that response times (RT) in such tasks increased linearly with the number of items in the display. However, RT increased linearly with the log of the number of items in the memory set. In earlier work, all items in the memory set were unique instances (e.g. this apple in this pose). Typical real world tasks involve more broadly defined sets of stimuli (e.g. any “apple” or, perhaps, “fruit”). The present experiments show how sets or categories of targets are handled in joint visual and memory search. In Experiment 1, searching for a digit among letters was not like searching for targets from a 10-item memory set, though searching for targets from an N-item memory set of arbitrary alphanumeric characters was like searching for targets from an N-item memory set of arbitrary objects. In Experiment 2, Os searched for any instance of N sets or categories held in memory. This hybrid search was harder than search for specific objects. However, memory search remained logarithmic. Experiment 3 illustrates the interaction of visual guidance and memory search when a subset of visual stimuli is drawn from a target category. Furthermore, we outline a conceptual model, supported by our results, defining the core components that would be necessary to support such categorical hybrid searches. PMID:24661054
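    The abstract's core regularity (RT linear in the number of display items, but linear in the log of the memory set size) can be written as a small descriptive model. The parameter values below are illustrative assumptions, and the two costs are treated as additive for simplicity; the actual data may well involve an interaction between them:

    ```python
    import math

    def hybrid_search_rt(n_display, n_memory,
                         base=400.0, visual_slope=40.0, memory_slope=25.0):
        """Descriptive RT model for hybrid search (ms): cost grows linearly
        with the number of display items and logarithmically with the size
        of the memory set."""
        return base + visual_slope * n_display + memory_slope * math.log2(n_memory)
    ```

    Under this sketch, each doubling of the memory set adds a constant increment (here 25 ms), while each added display item adds a fixed per-item cost.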

  3. Overcoming hurdles in translating visual search research between the lab and the field.

    PubMed

    Clark, Kait; Cain, Matthew S; Adamo, Stephen H; Mitroff, Stephen R

    2012-01-01

    Research in visual search can be vital to improving performance in careers such as radiology and airport security screening. In these applied, or "field," searches, accuracy is critical, and misses are potentially fatal; however, despite the importance of performing optimally, radiological and airport security searches are nevertheless flawed. Extensive basic research in visual search has revealed cognitive mechanisms responsible for successful visual search as well as a variety of factors that tend to inhibit or improve performance. Ideally, the knowledge gained from such laboratory-based research could be directly applied to field searches, but several obstacles stand in the way of straightforward translation; the tightly controlled visual searches performed in the lab can be drastically different from field searches. For example, they can differ in terms of the nature of the stimuli, the environment in which the search is taking place, and the experience and characteristics of the searchers themselves. The goal of this chapter is to discuss these differences and how they can present hurdles to translating lab-based research to field-based searches. Specifically, most search tasks in the lab entail searching for only one target per trial, and the targets occur relatively frequently, but field searches may contain an unknown and unlimited number of targets, and the occurrence of targets can be rare. Additionally, participants in lab-based search experiments often perform under neutral conditions and have no formal training or experience in search tasks; conversely, career searchers may be influenced by the motivation to perform well or anxiety about missing a target, and they have undergone formal training and accumulated significant experience searching. This chapter discusses recent work that has investigated the impacts of these differences to determine how each factor can influence search performance. Knowledge gained from the scientific exploration of search

  4. Visual-auditory integration for visual search: a behavioral study in barn owls.

    PubMed

    Hazan, Yael; Kra, Yonatan; Yarin, Inna; Wagner, Hermann; Gutfreund, Yoram

    2015-01-01

    Barn owls are nocturnal predators that rely on both vision and hearing for survival. The optic tectum of barn owls, a midbrain structure involved in selective attention, has been used as a model for studying visual-auditory integration at the neuronal level. However, behavioral data on visual-auditory integration in barn owls are lacking. The goal of this study was to examine whether the integration of visual and auditory signals contributes to the process of guiding attention toward salient stimuli. We attached miniature wireless video cameras to barn owls' heads (OwlCam) to track their target of gaze. We first provide evidence that the area centralis (a retinal area with a maximal density of photoreceptors) is used as a functional fovea in barn owls. Thus, by mapping the projection of the area centralis onto the OwlCam's video frame, it is possible to extract the target of gaze. For the experiment, owls were positioned on a high perch and four food items were scattered in a large arena on the floor. In addition, a hidden loudspeaker was positioned in the arena. The positions of the food items and speaker were changed every session. Video sequences from the OwlCam were saved for offline analysis while the owls spontaneously scanned the room and the food items with abrupt gaze shifts (head saccades). From time to time during the experiment, a brief sound was emitted from the speaker. The fixation points immediately following the sounds were extracted and the distances between the gaze position and the nearest items and loudspeaker were measured. The head saccades were rarely toward the location of the sound source but to salient visual features in the room, such as the door knob or the food items. However, among the food items, the one closest to the loudspeaker had the highest probability of attracting a gaze shift. This result supports the notion that auditory signals are integrated with visual information for the selection of the next visual search target.
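    The analysis step described above (measuring which item lies closest to each post-sound fixation) amounts to a nearest-neighbour lookup over the arena layout. A minimal sketch; the item names and coordinates are hypothetical, not taken from the study:

    ```python
    import math

    def nearest_item(gaze, items):
        """Return the label of the item closest (Euclidean) to a fixation point."""
        return min(items, key=lambda name: math.dist(gaze, items[name]))

    # Hypothetical arena layout: four food items scattered on the floor.
    items = {"food1": (0.5, 1.0), "food2": (2.0, 2.5),
             "food3": (3.5, 0.5), "food4": (1.0, 3.0)}
    closest = nearest_item((1.9, 2.4), items)  # this fixation lands nearest food2
    ```

    In the study's terms, one would run this per fixation and compare the frequency with which the returned item is the one nearest the loudspeaker.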

  5. Overlapping multivoxel patterns for two levels of visual expectation

    PubMed Central

    de Gardelle, Vincent; Stokes, Mark; Johnen, Vanessa M.; Wyart, Valentin; Summerfield, Christopher

    2013-01-01

    According to predictive accounts of perception, visual cortical regions encode sensory expectations about the external world, and the violation of those expectations by inputs (surprise). Here, using multi-voxel pattern analysis (MVPA) of functional magnetic resonance imaging (fMRI) data, we asked whether expectations and surprise activate the same pattern of voxels in face-sensitive regions of the extra-striate visual cortex (the fusiform face area or FFA). Participants viewed pairs of repeating or alternating faces, with high or low probability of repetitions. As in previous studies, we found that repetition suppression (the attenuated BOLD response to repeated stimuli) in the FFA was more pronounced for probable repetitions, consistent with it reflecting reduced surprise to anticipated inputs. Secondly, we observed that repetition suppression and repetition enhancement responses were both consistent across scanner runs, suggesting that both have functional significance, with repetition enhancement possibly indicating the build-up of sensory expectation. Critically, we also report that multi-voxel patterns associated with probability and repetition effects were significantly correlated within the left FFA. We argue that repetition enhancement responses and repetition probability effects can be seen as two types of expectation signals, occurring simultaneously, although at different processing levels (lower vs. higher) and different time scales (immediate vs. long term). PMID:23630488

  6. Transformation of an uncertain video search pipeline to a sketch-based visual analytics loop.

    PubMed

    Legg, Philip A; Chung, David H S; Parry, Matthew L; Bown, Rhodri; Jones, Mark W; Griffiths, Iwan W; Chen, Min

    2013-12-01

    Traditional sketch-based image or video search systems rely on machine learning concepts as their core technology. However, in many applications, machine learning alone is impractical since videos may not be semantically annotated sufficiently, there may be a lack of suitable training data, and the search requirements of the user may frequently change for different tasks. In this work, we develop a visual analytics system that overcomes the shortcomings of the traditional approach. We make use of a sketch-based interface to enable users to specify search requirements in a flexible manner without depending on semantic annotation. We employ active machine learning to train different analytical models for different types of search requirements. We use visualization to facilitate knowledge discovery at the different stages of visual analytics. This includes visualizing the parameter space of the trained model, visualizing the search space to support interactive browsing, visualizing candidate search results to support rapid interaction for active learning while minimizing time spent watching videos, and visualizing aggregated information about the search results. We demonstrate the system by searching spatiotemporal attributes in sports video to identify key instances of team and player performance.

  7. Flow pattern visualization in a mimic anaerobic digester using CFD.

    PubMed

    Vesvikar, Mehul S; Al-Dahhan, Muthanna

    2005-03-20

    Three-dimensional steady-state computational fluid dynamics (CFD) simulations were performed in mimic anaerobic digesters to visualize their flow pattern and obtain hydrodynamic parameters. The mixing in the digester was provided by sparging gas at three different flow rates. The gas phase was simulated with air and the liquid phase with water. The CFD results were first evaluated using experimental data obtained by computer automated radioactive particle tracking (CARPT). The simulation results in terms of overall flow pattern, location of circulation cells and stagnant regions, trends of liquid velocity profiles, and volume of dead zones agree reasonably well with the experimental data. CFD simulations were also performed on different digester configurations. The effects of changing draft tube size, clearance, and shape of the tank bottoms were calculated to evaluate the effect of digester design on its flow pattern. Changing the draft tube clearance and height had no influence on the flow pattern or dead regions volume. However, increasing the draft tube diameter or incorporating a conical bottom design helped in reducing the volume of the dead zones as compared to a flat-bottom digester. The simulations showed that the gas flow rate sparged by a single point (0.5 cm diameter) sparger does not have an appreciable effect on the flow pattern of the digesters at the range of gas flow rates used. PMID:15685599

  8. High or Low Target Prevalence Increases the Dual-Target Cost in Visual Search

    ERIC Educational Resources Information Center

    Menneer, Tamaryn; Donnelly, Nick; Godwin, Hayward J.; Cave, Kyle R.

    2010-01-01

    Previous studies have demonstrated a dual-target cost in visual search. In the current study, the relationship between search for one and search for two targets was investigated to examine the effects of target prevalence and practice. Color-shape conjunction stimuli were used with response time, accuracy and signal detection measures. Performance…

  9. Searching for Signs, Symbols, and Icons: Effects of Time of Day, Visual Complexity, and Grouping

    ERIC Educational Resources Information Center

    McDougall, Sine; Tyrer, Victoria; Folkard, Simon

    2006-01-01

    Searching for icons, symbols, or signs is an integral part of tasks involving computer or radar displays, head-up displays in aircraft, or attending to road traffic signs. Icons therefore need to be designed to optimize search times, taking into account the factors likely to slow down visual search. Three factors likely to adversely affect visual…

  10. Visual Search Is Postponed during the Attentional Blink until the System Is Suitably Reconfigured

    ERIC Educational Resources Information Center

    Ghorashi, S. M. Shahab; Smilek, Daniel; Di Lollo, Vincent

    2007-01-01

    J. S. Joseph, M. M. Chun, and K. Nakayama (1997) found that pop-out visual search was impaired as a function of intertarget lag in an attentional blink (AB) paradigm in which the 1st target was a letter and the 2nd target was a search display. In 4 experiments, the present authors tested the implication that search efficiency should be similarly…

  11. The effects of visual search efficiency on object-based attention.

    PubMed

    Greenberg, Adam S; Rosen, Maya; Cutrone, Elizabeth; Behrmann, Marlene

    2015-07-01

    The attentional prioritization hypothesis of object-based attention (Shomstein & Yantis in Perception & Psychophysics, 64, 41-51, 2002) suggests a two-stage selection process comprising an automatic spatial gradient and flexible strategic (prioritization) selection. The combined attentional priorities of these two stages of object-based selection determine the order in which participants will search the display for the presence of a target. The strategic process has often been likened to a prioritized visual search. By modifying the double-rectangle cueing paradigm (Egly, Driver, & Rafal in Journal of Experimental Psychology: General, 123, 161-177, 1994) and placing it in the context of a larger-scale visual search, we examined how the prioritization search is affected by search efficiency. By probing both targets located on the cued object and targets external to the cued object, we found that the attentional priority surrounding a selected object is strongly modulated by search mode. However, the ordering of the prioritization search is unaffected by search mode. The data also provide evidence that standard spatial visual search and object-based prioritization search may rely on distinct mechanisms. These results provide insight into the interactions between the mode of visual search and object-based selection, and help define the modulatory consequences of search efficiency for object-based attention.

  12. Bicycle accidents and drivers' visual search at left and right turns.

    PubMed

    Summala, H; Pasanen, E; Räsänen, M; Sievänen, J

    1996-03-01

    The accident data base of the City of Helsinki shows that when drivers cross a cycle path as they enter a non-signalized intersection, the clearly dominant type of car-cycle crash is one in which a cyclist comes from the right and the driver is turning right, in marked contrast to cases with drivers turning left (Pasanen 1992; City of Helsinki, Traffic Planning Department, Report L4). This study first tested the explanation that drivers turning right simply focus their attention on the cars coming from the left (those coming from the right posing no threat to them) and fail to see the cyclist from the right early enough. Drivers' scanning behavior was studied at two T-intersections. Two well-hidden video cameras were used, one to measure the head movements of the approaching drivers and the other to measure speed and distance from the cycle crossroad. The results supported the hypothesis: the drivers turning right scanned the right leg of the T-intersection less frequently and later than those turning left. Thus, it appears that drivers develop a visual scanning strategy that concentrates on detecting the more frequent and major dangers but ignores, and may even mask, visual information about less frequent dangers. The second part of the study evaluated different countermeasures, including speed humps, in terms of drivers' visual search behavior. The results suggested that speed-reducing countermeasures changed drivers' visual search patterns in favor of the cyclists coming from the right, presumably at least in part because drivers were simply provided with more time to focus on each direction. PMID:8703272

  13. Exploiting visual search theory to infer social interactions

    NASA Astrophysics Data System (ADS)

    Rota, Paolo; Dang-Nguyen, Duc-Tien; Conci, Nicola; Sebe, Nicu

    2013-03-01

    In this paper we propose a new method to infer human social interactions using techniques typically adopted in the literature for visual search and information retrieval. The main information used to discriminate among different types of interaction consists of proxemics cues acquired by a tracker, which allow us to distinguish intentional from casual interactions. The proxemics information is derived from two metrics: on the one hand, the current distance between subjects, and on the other, the O-space synergy between subjects. The values obtained at every time step over a temporal sliding window are processed in the Discrete Fourier Transform (DFT) domain. The features are then merged into a single array and clustered using the K-means algorithm. The clusters are reorganized, using a second, larger temporal window, into a Bag-of-Words representation, so as to build the feature vector that feeds the SVM classifier.
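    The feature-extraction step of this pipeline (proxemics values over a temporal sliding window, mapped into the frequency domain) can be sketched as follows. The naive DFT below stands in for whatever FFT implementation the authors used, and the window values are hypothetical:

    ```python
    import cmath

    def dft_magnitudes(window):
        """Magnitude spectrum of one sliding window of proxemics values
        (e.g., inter-subject distances sampled at each time step)."""
        n = len(window)
        return [abs(sum(x * cmath.exp(-2j * cmath.pi * k * t / n)
                        for t, x in enumerate(window)))
                for k in range(n)]

    # A constant inter-subject distance puts all its energy in the DC bin:
    flat = dft_magnitudes([1.5, 1.5, 1.5, 1.5])
    ```

    In the full pipeline, the magnitudes computed for the distance and O-space windows would be concatenated into one feature array before K-means clustering.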

  14. Electrophysiological measurement of information flow during visual search

    PubMed Central

    Cosman, Joshua D.; Arita, Jason T.; Ianni, Julianna D.; Woodman, Geoffrey F.

    2016-01-01

    The temporal relationship between different stages of cognitive processing has long been debated. This debate is ongoing primarily because it is often difficult to measure the time course of multiple cognitive processes simultaneously. We employed a manipulation that allowed us to isolate ERP components related to perceptual processing, working memory, and response preparation, and then examined the temporal relationship between these components while observers performed a visual search task. We found that when response speed and accuracy were equally stressed, our index of perceptual processing ended before both the transfer of information into working memory and response preparation began. However, when we stressed speed over accuracy, response preparation began before the completion of perceptual processing or the transfer of information into working memory on trials with the fastest reaction times. These findings show that individuals can control the flow of information transmission between stages, either waiting for perceptual processing to be completed before preparing a response or configuring these stages to overlap in time. PMID:26669285

  15. Immaturity of the Oculomotor Saccade and Vergence Interaction in Dyslexic Children: Evidence from a Reading and Visual Search Study

    PubMed Central

    Bucci, Maria Pia; Nassibi, Naziha; Gerard, Christophe-Loic; Bui-Quoc, Emmanuel; Seassau, Magali

    2012-01-01

    Studies comparing binocular eye movements during reading and visual search in dyslexic children are, to our knowledge, nonexistent. In the present study we examined ocular motor characteristics in dyslexic children versus two groups of non-dyslexic children matched on chronological age or reading age. Binocular eye movements were recorded by an infrared system (mobileEBT®, e(ye)BRAIN) in twelve dyslexic children (mean age 11 years) and groups of chronological age-matched (N = 9) and reading age-matched (N = 10) non-dyslexic children. Two visual tasks were used: text reading and visual search. Independently of the task, the ocular motor behavior of dyslexic children is similar to that reported in reading age-matched non-dyslexic children: more numerous and longer fixations, as well as poor binocular coordination during and after saccades. In contrast, chronological age-matched non-dyslexic children showed fewer and shorter fixations in the reading task than in the visual search task; furthermore, their saccades were well yoked in both tasks. The atypical eye movement patterns observed in dyslexic children suggest a deficiency in visual attentional processing as well as an immaturity of the interaction between the ocular motor saccade and vergence systems. PMID:22438934

  17. Polygon cluster pattern recognition based on new visual distance

    NASA Astrophysics Data System (ADS)

    Shuai, Yun; Shuai, Haiyan; Ni, Lin

    2007-06-01

    The pattern recognition of polygon clusters is one of the most actively studied problems in spatial data mining. This paper investigates the problem on the basis of spatial cognition principles and the Gestalt principles of visual recognition, combined with spatial clustering methods, and makes two contributions. First, it substantially improves the concept of "visual distance": the definition takes into account not only Euclidean distance, orientation difference, and size discrepancy, but also, crucially, the similarity of object shapes, and the distance is computed over a Delaunay triangulation structure. Second, the study adopts spatial clustering analysis based on a minimum spanning tree (MST). The pruning algorithm introduces an automatic data-layering mechanism and a simulated annealing optimization. This work suggests a new research thread for GIS: GIS is an interdisciplinary field whose research methods should be open and diverse, and mature techniques from related disciplines can be introduced into GIS, provided they are adapted to the principles of GIS as a science of spatial cognition.
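    The clustering stage described above (build a minimum spanning tree over pairwise visual distances, then prune its longest edges to separate clusters) can be sketched with a plain Kruskal construction. The cut rule and the toy distances below are illustrative assumptions; the paper's own pruning uses its layering and simulated annealing scheme:

    ```python
    def mst_edges(n, dist):
        """Kruskal's algorithm: return MST edges (d, i, j) over n items,
        given a dict of pairwise distances {(i, j): d} with i < j."""
        parent = list(range(n))

        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]  # path halving
                x = parent[x]
            return x

        tree = []
        for (i, j), d in sorted(dist.items(), key=lambda kv: kv[1]):
            ri, rj = find(i), find(j)
            if ri != rj:          # edge joins two components: keep it
                parent[ri] = rj
                tree.append((d, i, j))
        return tree

    # Toy "visual distances" among 4 polygons: two tight pairs far apart.
    dist = {(0, 1): 1.0, (2, 3): 1.2, (0, 2): 9.0,
            (0, 3): 9.5, (1, 2): 8.7, (1, 3): 9.1}
    tree = mst_edges(4, dist)
    # Pruning the single longest MST edge leaves two clusters: {0, 1} and {2, 3}.
    longest = max(tree)
    ```

    Cutting the k longest MST edges yields k+1 clusters; the interesting part of the paper is choosing which edges to cut, which the sketch deliberately leaves simple.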

  18. Behavioural coping patterns in Parkinson's patients with visual hallucinations.

    PubMed

    Barnes, Jim; Connelly, Vince; Boubert, Laura; Maravic, Ksenija

    2013-09-01

    Visual hallucinations are considered to affect about 20%-40% of patients with Parkinson's disease. They are generally seen as a side effect of this long-term illness and can severely affect patients' daily quality of life. The aim of this study was to determine the coping patterns or strategies used by patients and to establish whether the phenomenology and the behaviours patients adopted enabled control of the phenomenon. Demographic and clinical variables were recorded, including motor measures, cognitive status, and depressive symptoms. Patients with hallucinations were at a more advanced stage of the disease and displayed more depressive symptoms than their non-hallucinating counterparts. Most patients used more than one constructive coping strategy, the most common being simple behavioural strategies based around motor action or cognitive approaches resulting in visual modification. In addition, humour was a common technique used by the patients to deal with the phenomenon. Emotional responses varied between patients, but the actual content of a hallucination was not directly associated with whether it caused trouble to the patient, whereas perceived stress was strongly correlated with the subjective disturbing nature of visual hallucinations (VHs). This study gives insight into the role of cognitive-behavioural approaches when dealing with VHs and opens up avenues for future studies in helping patients to deal with hallucinations.

  19. Eye movements and attention in reading, scene perception, and visual search.

    PubMed

    Rayner, Keith

    2009-08-01

    Eye movements are now widely used to investigate cognitive processes during reading, scene perception, and visual search. In this article, research on the following topics is reviewed with respect to reading: (a) the perceptual span (or span of effective vision), (b) preview benefit, (c) eye movement control, and (d) models of eye movements. Related issues with respect to eye movements during scene perception and visual search are also reviewed. It is argued that research on eye movements during reading has been somewhat advanced over research on eye movements in scene perception and visual search and that some of the paradigms developed to study reading should be more widely adopted in the study of scene perception and visual search. Research dealing with "real-world" tasks and research utilizing the visual-world paradigm are also briefly discussed.

  20. Finding an emotional face in a crowd: emotional and perceptual stimulus factors influence visual search efficiency.

    PubMed

    Lundqvist, Daniel; Bruce, Neil; Öhman, Arne

    2015-01-01

    In this article, we examine how emotional and perceptual stimulus factors influence visual search efficiency. In an initial experiment, we ran a visual search task using a large number of target/distractor emotion combinations. In two subsequent tasks, we then assessed measures of perceptual (rated and computational distances) and emotional (rated valence, arousal and potency) stimulus properties. In a series of regression analyses, we then explored the degree to which target salience (the size of target/distractor dissimilarities) on these emotional and perceptual measures predicts the outcome on search efficiency measures (response times and accuracy) from the visual search task. The results show that both emotional and perceptual stimulus salience contribute to visual search efficiency, and that among the emotional measures, arousal salience was more influential than valence salience. The importance of the arousal factor may be a contributing factor to the contradictory history of results within this field.

  1. Full-search-equivalent pattern matching with incremental dissimilarity approximations.

    PubMed

    Tombari, Federico; Mattoccia, Stefano; Di Stefano, Luigi

    2009-01-01

    This paper proposes a novel method for fast pattern matching based on dissimilarity functions derived from the Lp norm, such as the Sum of Squared Differences (SSD) and the Sum of Absolute Differences (SAD). The proposed method is full-search equivalent, i.e., it yields the same results as the Full Search (FS) algorithm. To achieve computational savings, the method deploys a succession of increasingly tighter lower bounds of the adopted Lp norm-based dissimilarity function. These bounding functions establish a hierarchy of pruning conditions aimed at rapidly skipping those candidates that cannot satisfy the matching criterion. The paper includes an experimental comparison between the proposed method and other full-search-equivalent approaches known in the literature, which demonstrates the remarkable computational efficiency of our proposal. PMID:19029551
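    The pruning idea can be sketched for a 1-D SAD match: by the triangle inequality, |sum(W) - sum(T)| is a cheap lower bound on SAD(W, T), so any candidate whose bound already exceeds the best SAD found so far can be skipped without changing the final result. This is a simplified stand-in for the paper's hierarchy of incremental bounds, and the function names are invented for illustration.

```python
# Full-search-equivalent SAD matching with a cheap lower bound:
# |sum(W) - sum(T)| <= SAD(W, T) by the triangle inequality, so candidates
# whose bound exceeds the best SAD so far are skipped without computing
# the full SAD, yet the result equals exhaustive search.
def sad(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

def match_1d(signal, template):
    m = len(template)
    t_sum = sum(template)
    best_pos, best_sad = -1, float("inf")
    win_sum = sum(signal[:m])
    for pos in range(len(signal) - m + 1):
        if pos > 0:  # sliding-window sum, O(1) per step
            win_sum += signal[pos + m - 1] - signal[pos - 1]
        if abs(win_sum - t_sum) < best_sad:          # lower bound passes
            s = sad(signal[pos:pos + m], template)   # full check
            if s < best_sad:
                best_pos, best_sad = pos, s
    return best_pos, best_sad
```

    The paper's method generalizes this to 2-D images with a succession of progressively tighter bounds rather than a single one.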

  2. Dynamic analysis and pattern visualization of forest fires.

    PubMed

    Lopes, António M; Tenreiro Machado, J A

    2014-01-01

    This paper analyses forest fires from the perspective of dynamical systems. Forest fires exhibit complex correlations in size, space and time, revealing features often present in complex systems, such as the absence of a characteristic length-scale and the emergence of long-range correlations and persistent memory. This study addresses a public-domain forest fires catalogue containing information on events in Portugal during the period from 1980 up to 2012. The data is analysed on an annual basis, modelling the occurrences as sequences of Dirac impulses with amplitudes proportional to the burnt area. First, we use mutual information to correlate annual patterns, together with visualization trees generated by hierarchical clustering algorithms, in order to compare the data and extract relationships among them. Second, we adopt the multidimensional scaling (MDS) visualization tool. MDS generates maps in which each object corresponds to a point, and objects perceived as similar to each other form clusters. The results are analysed in order to extract relationships among the data and to identify forest fire patterns.
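    The mutual-information step can be sketched as follows, assuming a simple histogram-based estimator over coarsely discretized annual series (the paper does not specify its estimator, and the bin count here is an arbitrary choice):

```python
# Hedged sketch: mutual information (in bits) between two annual
# burnt-area series, estimated from a coarse joint histogram.
import math
from collections import Counter

def mutual_information(x, y, bins=4):
    def discretize(v):
        lo, hi = min(v), max(v)
        w = (hi - lo) / bins or 1.0  # guard against constant series
        return [min(int((s - lo) / w), bins - 1) for s in v]
    dx, dy = discretize(x), discretize(y)
    n = len(x)
    pxy = Counter(zip(dx, dy))           # joint histogram
    px, py = Counter(dx), Counter(dy)    # marginals
    return sum((c / n) * math.log2((c / n) / ((px[a] / n) * (py[b] / n)))
               for (a, b), c in pxy.items())
```

    Pairwise MI values between years would then feed the hierarchical clustering that generates the visualization trees.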

  3. Animating streamlines with repeated asymmetric patterns for steady flow visualization

    NASA Astrophysics Data System (ADS)

    Yeh, Chih-Kuo; Liu, Zhanping; Lee, Tong-Yee

    2012-01-01

    Animation provides intuitive cueing for revealing essential spatial-temporal features of data in scientific visualization. This paper explores the design of Repeated Asymmetric Patterns (RAPs) in animating evenly-spaced color-mapped streamlines for dense accurate visualization of complex steady flows. We present a smooth cyclic variable-speed RAP animation model that performs velocity (magnitude) integral luminance transition on streamlines. This model is extended with inter-streamline synchronization in luminance varying along the tangential direction to emulate orthogonal advancing waves from a geometry-based flow representation, and then with evenly-spaced hue differing in the orthogonal direction to construct tangential flow streaks. To weave these two mutually dual sets of patterns, we propose an energy-decreasing strategy that adopts an iterative yet efficient procedure for determining the luminance phase and hue of each streamline in HSL color space. We also employ adaptive luminance interleaving in the direction perpendicular to the flow to increase the contrast between streamlines.

  4. Evolutionary pattern search algorithms for unconstrained and linearly constrained optimization

    SciTech Connect

    HART,WILLIAM E.

    2000-06-01

    The authors describe a convergence theory for evolutionary pattern search algorithms (EPSAs) on a broad class of unconstrained and linearly constrained problems. EPSAs adaptively modify the step size of the mutation operator in response to the success of previous optimization steps. The design of EPSAs is inspired by recent analyses of pattern search methods. The analysis significantly extends the previous convergence theory for EPSAs: it applies to a broader class of EPSAs, and to problems that are nonsmooth, have unbounded objective functions, or are linearly constrained. Further, the authors describe a modest change to the algorithmic framework of EPSAs for which a non-probabilistic convergence theory applies. These analyses are also noteworthy because they are considerably simpler than previous analyses of EPSAs.

  5. Task-Dependent Changes in Frontal-Parietal Activation and Connectivity During Visual Search.

    PubMed

    Maximo, Jose O; Neupane, Ajaya; Saxena, Nitesh; Joseph, Robert M; Kana, Rajesh K

    2016-05-01

    Visual search, an important skill for navigating our environment and locating objects (a target) among distractors, involves reciprocal interaction between a viewer's attentional resources and salient target characteristics. The neural correlates of visual search have been extensively investigated over the last decades, suggesting the involvement of a frontal-parietal network comprising the frontal eye fields (FEFs) and intraparietal sulcus (IPS). In addition, the activity and connectivity of this network change as visual search becomes more complex and demanding. The current functional magnetic resonance imaging study examined the modulation of the frontal-parietal network in response to cognitive demand in 22 healthy adult participants. In addition to brain activity, changes in functional and effective connectivity in this network were examined in response to easy and difficult visual search. Results revealed significantly increased activation in FEF, IPS, and the supplementary motor area, more so in difficult than in easy search. Functional and effective connectivity analyses showed enhanced connectivity in the frontal-parietal network during difficult search, as well as enhanced information transfer from the left to the right hemisphere during the difficult search process. Our overall findings suggest that cognitive demand significantly increases brain resources across all three measures of brain processing. In sum, goal-directed visual search engages a network of frontal-parietal areas that are modulated in relation to cognitive demand.

  6. Prediction of shot success for basketball free throws: visual search strategy.

    PubMed

    Uchida, Yusuke; Mizuguchi, Nobuaki; Honda, Masaaki; Kanosue, Kazuyuki

    2014-01-01

    In ball games, players have to pay close attention to visual information in order to predict the movements of both the opponents and the ball. Previous studies have indicated that players primarily utilise cues concerning the ball and the opponents' body motion, and that the acquired information must enable the observing player to select the subsequent action. The present study evaluated the effects of changes in video replay speed on the spatial visual search strategy and the ability to predict free throw success. We compared eye movements made while observing a basketball free throw by novices and experienced basketball players. Correct response rates were close to chance (50%) at all video speeds for the novices. The correct response rate of experienced players was significantly above chance (and significantly above that of the novices) at normal speed, but did not differ from chance at slow and fast speeds. Experienced players gazed more at the lower part of the shooter's body when viewing the normal-speed video than the novices did. The experienced players likely detected the critical visual information needed to predict shot success by properly moving their gaze according to the shooter's movements. This pattern did not change when the video speed was decreased, but changed when it was increased. These findings suggest that temporal information is important for predicting action outcomes and that such predictions are sensitive to video speed.

  7. Visual pattern recognition network: its training algorithm and its optoelectronic architecture

    NASA Astrophysics Data System (ADS)

    Wang, Ning; Liu, Liren

    1996-07-01

    A visual pattern recognition network and its training algorithm are proposed. The network consists of a one-layer morphology network and a two-layer modified Hamming net. This visual network can implement pattern recognition invariant to image translation and size. After supervised learning takes place, the visual network extracts image features and classifies patterns much as living beings do. Moreover, we present its optoelectronic architecture for real-time pattern recognition.

  8. Investigating the role of visual and auditory search in reading and developmental dyslexia.

    PubMed

    Lallier, Marie; Donnadieu, Sophie; Valdois, Sylviane

    2013-01-01

    It has been suggested that auditory and visual sequential processing deficits contribute to phonological disorders in developmental dyslexia. As an alternative to a phonological deficit as the proximal cause of reading disorders, the visual attention span (VA Span) hypothesis holds that difficulty in processing visual elements simultaneously leads to dyslexia, regardless of the presence of a phonological disorder. In this study, we assessed whether deficits in processing simultaneously displayed visual or auditory elements are linked to dyslexia associated with a VA Span impairment. Sixteen children with developmental dyslexia and 16 age-matched skilled readers were assessed on visual and auditory search tasks. Participants were asked to detect a target presented simultaneously with 3, 9, or 15 distracters. In the visual modality, target detection was slower in the dyslexic children than in the control group in a "serial" search condition only: the intercepts (but not the slopes) of the search functions were higher in the dyslexic group than in the control group. In the auditory modality, although no group difference was observed, search performance was influenced by the number of distracters in the control group only. Within the dyslexic group, not only poor visual search (high reaction times and intercepts) but also low auditory search performance (d') correlated strongly with poor irregular-word reading accuracy. Moreover, both visual and auditory search performance was associated with the VA Span abilities of the dyslexic participants but not with their phonological skills. The present data suggest that some visual mechanisms engaged in "serial" search contribute to reading and orthographic knowledge via VA Span skills, regardless of phonological skills. The results also raise the question of the role of auditory simultaneous processing in reading and its link with VA Span skills.

  9. Threat modulation of visual search efficiency in PTSD: A comparison of distinct stimulus categories.

    PubMed

    Olatunji, Bunmi O; Armstrong, Thomas; Bilsky, Sarah A; Zhao, Mimi

    2015-10-30

    Although an attentional bias for threat has been implicated in posttraumatic stress disorder (PTSD), the cues that best elicit this bias are unclear. Some studies utilize images and others utilize facial expressions that communicate threat, but the comparability of these two types of stimuli in PTSD is unclear. The present study contrasted the effects of images and expressions of the same valence on visual search among veterans with PTSD and controls. Overall, PTSD patients had slower visual search speeds than controls. Images caused greater disruption in visual search than expressions, and emotional content modulated this effect, with larger differences between images and expressions arising for more negatively valenced stimuli. However, this effect was not observed with the maximum number of items in the search array. Differences in visual search speed between images and expressions varied significantly between PTSD patients and controls only for anger, and only at the moderate level of task difficulty. Specifically, visual search speed did not differ significantly between PTSD patients and controls when they were exposed to angry expressions, but PTSD patients displayed significantly slower visual search than controls when exposed to anger images. The implications of these findings for better understanding emotion-modulated attention in PTSD are discussed.

  10. Dementia alters standing postural adaptation during a visual search task in older adult men.

    PubMed

    Jor'dan, Azizah J; McCarten, J Riley; Rottunda, Susan; Stoffregen, Thomas A; Manor, Brad; Wade, Michael G

    2015-04-23

    This study investigated the effects of dementia on standing postural adaptation during performance of a visual search task. We recruited 16 older adults with dementia and 15 without dementia. Postural sway was assessed by recording medial-lateral (ML) and anterior-posterior (AP) center-of-pressure displacement when standing with and without a visual search task, i.e., counting target-letter frequency within a block of displayed randomized letters. ML sway variability was significantly higher in those with dementia during visual search as compared to those without dementia, and compared to both groups during the control condition. AP sway variability was significantly greater in those with dementia as compared to those without dementia, irrespective of task condition. In the ML direction, the absolute and percent change in sway variability between the control condition and visual search (i.e., postural adaptation) was greater in those with dementia than in those without. In contrast, postural adaptation to visual search was similar between groups in the AP direction. Those with dementia identified fewer letters on the visual task than those without. In the non-dementia group only, greater increases in postural adaptation in both the ML and AP directions correlated with lower performance on the visual task. This relationship between postural adaptation and visual search performance, observed in the non-dementia group only, suggests a critical link between perception and action. Dementia reduces the capacity to perform a visual-based task while standing and thus appears to disrupt this perception-action synergy.

  11. Simulated loss of foveal vision eliminates visual search advantage in repeated displays.

    PubMed

    Geringswald, Franziska; Baumgartner, Florian; Pollmann, Stefan

    2012-01-01

    In the contextual cueing paradigm, incidental visual learning of repeated distractor configurations leads to faster search times in repeated compared to new displays. This contextual cueing is closely linked to the visual exploration of the search arrays, as indicated by fewer fixations and more efficient scan paths in repeated search arrays. Here, we examined contextual cueing under impaired visual exploration induced by a simulated central scotoma that causes the participant to rely on extrafoveal vision. We let normal-sighted participants search for the target either under unimpaired viewing conditions or with a gaze-contingent central scotoma masking the currently fixated area. Under unimpaired viewing conditions, participants showed shorter search times and more efficient exploration of the display for repeated compared to novel search arrays and thus exhibited contextual cueing. When visual search was impaired by the central scotoma, search facilitation for repeated displays was eliminated. These results indicate that a loss of foveal sight, as commonly observed in maculopathies, may lead to deficits in high-level visual functions well beyond the immediate consequences of a scotoma.

  12. Timing of speech and display affects the linguistic mediation of visual search.

    PubMed

    Chiu, Eric M; Spivey, Michael J

    2014-01-01

    Recent studies have shown that, instead of a dichotomy between parallel and serial search strategies, in many instances we see a combination of both strategies at work. Consequently, computational models and theoretical accounts of visual search have evolved from traditional serial/parallel descriptions to a continuum from 'efficient' to 'inefficient' search. One finding consistent with this blurring of the serial-parallel distinction is that concurrent spoken linguistic input influences the efficiency of visual search. In our first experiment we replicate those findings using a between-subjects design. Next, we utilize a localist attractor network to simulate the results from the first experiment, and then employ the network to make quantitative predictions about the influence of subtle timing differences in real-time language processing on visual search. These model predictions are then tested and confirmed in our second experiment. The results provide further evidence toward understanding linguistically mediated influences on real-time visual search processing and support an interactive processing account of visual search and language comprehension.

  13. Visual search in scenes involves selective and non-selective pathways

    PubMed Central

    Wolfe, Jeremy M; Vo, Melissa L-H; Evans, Karla K; Greene, Michelle R

    2010-01-01

    How do we find objects in scenes? For decades, visual search models have been built on experiments in which observers search for targets, presented among distractor items, isolated and randomly arranged on blank backgrounds. Are these models relevant to search in continuous scenes? This paper argues that the mechanisms that govern artificial, laboratory search tasks do play a role in visual search in scenes. However, scene-based information is used to guide search in ways that had no place in earlier models. Search in scenes may be best explained by a dual-path model: A “selective” path in which candidate objects must be individually selected for recognition and a “non-selective” path in which information can be extracted from global / statistical information. PMID:21227734

  14. The effects of task difficulty on visual search strategy in virtual 3D displays.

    PubMed

    Pomplun, Marc; Garaas, Tyler W; Carrasco, Marisa

    2013-01-01

    Analyzing the factors that determine our choice of visual search strategy may shed light on visual behavior in everyday situations. Previous results suggest that increasing task difficulty leads to more systematic search paths. Here we analyze observers' eye movements in an "easy" conjunction search task and a "difficult" shape search task to study visual search strategies in stereoscopic search displays with virtual depth induced by binocular disparity. Standard eye-movement variables, such as fixation duration and initial saccade latency, as well as new measures proposed here, such as saccadic step size, relative saccadic selectivity, and x-y target distance, revealed systematic effects on search dynamics in the horizontal-vertical plane throughout the search process. We found that in the "easy" task, observers start with the processing of display items in the display center immediately after stimulus onset and subsequently move their gaze outwards, guided by extrafoveally perceived stimulus color. In contrast, the "difficult" task induced an initial gaze shift to the upper-left display corner, followed by a systematic left-right and top-down search process. The only consistent depth effect was a trend of initial saccades in the easy task with smallest displays to the items closest to the observer. The results demonstrate the utility of eye-movement analysis for understanding search strategies and provide a first step toward studying search strategies in actual 3D scenarios. PMID:23986539

  16. Different predictors of multiple-target search accuracy between nonprofessional and professional visual searchers.

    PubMed

    Biggs, Adam T; Mitroff, Stephen R

    2014-01-01

    Visual search, locating target items among distractors, underlies daily activities ranging from critical tasks (e.g., looking for dangerous objects during security screening) to commonplace ones (e.g., finding your friends in a crowded bar). Both professional and nonprofessional individuals conduct visual searches, and the present investigation is aimed at understanding how they perform similarly and differently. We administered a multiple-target visual search task to both professional (airport security officers) and nonprofessional participants (members of the Duke University community) to determine how search abilities differ between these populations and what factors might predict accuracy. There were minimal overall accuracy differences, although the professionals were generally slower to respond. However, the factors that predicted accuracy varied drastically between groups: variability in search consistency (how similarly an individual searched from trial to trial in terms of speed) best explained accuracy for professional searchers, with more consistent professionals being more accurate, whereas search speed (how long an individual took to complete a search when no targets were present) best explained accuracy for nonprofessional searchers, with slower nonprofessionals being more accurate. These findings suggest that professional searchers may utilize different search strategies from those of nonprofessionals, and that search consistency, in particular, may provide a valuable tool for enhancing professional search accuracy.

  17. Naming speed and visual search deficits in readers with disabilities: evidence from an orthographically regular language (Italian).

    PubMed

    Di Filippo, Gloria; Brizzolara, Daniela; Chilosi, Anna; De Luca, Maria; Judica, Anna; Pecini, Chiara; Spinelli, Donatella; Zoccolotti, Pierluigi

    2006-01-01

    The study examined rapid automatized naming (RAN) in 42 children with reading disabilities and 101 control children, all native speakers of Italian, a language with a shallow orthography. Third-, fifth-, and sixth-grade children were given a RAN test that required rapid naming of color, object, or digit matrices. A visual search test using the same stimulus material (but not requiring a verbal response) and an oral articulation test were also given. Readers with disabilities performed worse than controls on the RAN test, and this effect was larger in higher grades than in lower ones. Readers with disabilities were also slower than controls in performing the visual search test. The pattern of results for the RAN test held constant when visual search performance was partialed out by covariance analysis, indicating the independence of the two deficits. The two groups did not differ in articulation rate. Finally, analysis of the pattern of intercorrelations indicated that reading speed was most clearly related to RAN, particularly in the group with reading disabilities. The results extend observations of RAN effects on reading deficits to Italian, an orthographically shallow language. PMID:17083298

  18. Computer vision enhances mobile eye-tracking to expose expert cognition in natural-scene visual-search tasks

    NASA Astrophysics Data System (ADS)

    Keane, Tommy P.; Cahill, Nathan D.; Tarduno, John A.; Jacobs, Robert A.; Pelz, Jeff B.

    2014-02-01

    Mobile eye-tracking provides the fairly unique opportunity to record and elucidate cognition in action. In our research, we are searching for patterns in, and distinctions between, the visual-search performance of experts and novices in the geosciences. Traveling to regions formed by various geological processes as part of an introductory field studies course in geology, we record the gaze patterns of experts and novices when they are asked to determine the modes of geological activity that formed the scene presented to them. Recording eye video and scene video in natural settings generates complex imagery that requires advanced computer vision methods to generate registrations and mappings between the views of separate observers. By developing such mappings, we can place many observers into a single mathematical space in which we can spatio-temporally analyze inter- and intra-subject fixations, saccades, and head motions. While working towards perfecting these mappings, we developed an updated experimental setup that allowed us to statistically analyze intra-subject eye-movement events without the need for a common domain. Through such analyses we are finding statistical differences between novices and experts in these visual-search tasks. In the course of this research we have developed a unified, open-source software framework for the processing, visualization, and interaction of mobile eye-tracking data and high-resolution panoramic imagery.

  19. Pattern-reversal visual evoked potentials in phenylketonuric children.

    PubMed

    Landi, A; Ducati, A; Villani, R; Longhi, R; Riva, E; Rodocanachi, C; Giovannini, M

    1987-01-01

    Pattern-reversal visual evoked potentials (PR-VEPs) and EEG were recorded in 14 phenylketonuric (PKU) children on a low-phenylalanine (phe) diet; the data obtained were correlated with metabolic parameters, namely the current phe plasma level, the mean phe plasma level over the last year, and the age at which the diet began. PR-VEPs seem to be more sensitive than EEG in detecting neurophysiological derangements in these subjects: PR-VEPs were pathological in six patients, whereas EEG was abnormal in three. No significant alterations were found in the neurophysiological tests among the children with good metabolic control, and only one of the six children on an early dietary regimen was abnormal; in contrast, six of the nine subjects presenting with high mean phe plasma levels (greater than 10 mg/100 ml), and five of the eight whose diet started after the 2nd month of life, showed pathological PR-VEPs.

  20. Active sensing in the categorization of visual patterns

    PubMed Central

    Yang, Scott Cheng-Hsin; Lengyel, Máté; Wolpert, Daniel M

    2016-01-01

    Interpreting visual scenes typically requires us to accumulate information from multiple locations in a scene. Using a novel gaze-contingent paradigm in a visual categorization task, we show that participants' scan paths follow an active sensing strategy that incorporates information already acquired about the scene and knowledge of the statistical structure of patterns. Intriguingly, categorization performance was markedly improved when locations were revealed to participants by an optimal Bayesian active sensor algorithm. By using a combination of a Bayesian ideal observer and the active sensor algorithm, we estimate that a major portion of this apparent suboptimality of fixation locations arises from prior biases, perceptual noise and inaccuracies in eye movements, and the central process of selecting fixation locations is around 70% efficient in our task. Our results suggest that participants select eye movements with the goal of maximizing information about abstract categories that require the integration of information from multiple locations. DOI: http://dx.doi.org/10.7554/eLife.12215.001 PMID:26880546
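    The idea of choosing fixations to maximize information about the category can be illustrated with a toy Bayesian active sensor: given a prior over categories and per-location likelihoods, the next location to reveal is the one minimizing expected posterior entropy. The two-category setup and all names here are invented for illustration, not taken from the paper's algorithm.

```python
# Toy Bayesian active sensing for binary-pattern categorization:
# pick the next "fixation" (location to reveal) that maximizes expected
# information gain, i.e. minimizes expected posterior entropy.
import math

def entropy(p):
    return -sum(q * math.log2(q) for q in p if q > 0)

def posterior(prior, likelihoods, obs):
    # obs: {location: value}; likelihoods[c][loc] = P(value = 1 | category c)
    post = []
    for c, pc in enumerate(prior):
        l = pc
        for loc, v in obs.items():
            p1 = likelihoods[c][loc]
            l *= p1 if v == 1 else (1 - p1)
        post.append(l)
    z = sum(post)
    return [p / z for p in post]

def best_fixation(prior, likelihoods, obs, locations):
    def expected_entropy(loc):
        post = posterior(prior, likelihoods, obs)
        p1 = sum(pc * likelihoods[c][loc] for c, pc in enumerate(post))
        h = 0.0
        for v, pv in ((1, p1), (0, 1 - p1)):
            if pv > 0:
                h += pv * entropy(posterior(prior, likelihoods, {**obs, loc: v}))
        return h
    unseen = [l for l in locations if l not in obs]
    return min(unseen, key=expected_entropy)
```

    In this sketch a diagnostic location (very different likelihoods across categories) is preferred over an uninformative one, which is the qualitative behavior the optimal Bayesian active sensor exploits.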

  1. A Study of Temporal Aspect of Posterior Parietal Cortex in Visual Search Using Transcranial Magnetic Stimulation

    NASA Astrophysics Data System (ADS)

    Ge, Sheng; Matsuoka, Akira; Ueno, Shoogo; Iramina, Keiji

    It is known that the posterior parietal cortex (PPC) plays a dominant role in spatial processing during visual search. However, the temporal aspect of the PPC is unclear. In the present study, to investigate the temporal aspects of the PPC in feature search, we applied Transcranial Magnetic Stimulation (TMS) over the right PPC with the TMS stimulus onset asynchronies (SOAs) set at 100, 150, 200 and 250 ms after visual search stimulation. We found that when SOA was set at 150 ms, compared to the sham TMS condition, there was a significant elevation in response time when TMS pulses were applied. However, there was no significant difference between the TMS and sham TMS conditions for the other SOA settings. Therefore, we suggest that the spatial processing of feature search is probably processed in the posterior parietal cortex at about 150-170 ms after visual search stimuli presentation.

  2. Visual search disorders in acute and chronic homonymous hemianopia: lesion effects and adaptive strategies.

    PubMed

    Machner, Björn; Sprenger, Andreas; Sander, Thurid; Heide, Wolfgang; Kimmig, Hubert; Helmchen, Christoph; Kömpf, Detlef

    2009-05-01

    Patients with homonymous hemianopia due to occipital brain lesions show disorders of visual search. In everyday life this leads to difficulties in reading and spatial orientation. It is a matter of debate whether these disorders are due to the brain lesion or rather reflect compensatory eye movement strategies developing over time. For the first time, eye movements of acute hemianopic patients (n= 9) were recorded during the first days following stroke while they performed an exploratory visual-search task. Compared to age-matched control subjects their search duration was prolonged due to increased fixations and refixations, that is, repeated scanning of previously searched locations. Saccadic amplitudes were smaller in patients. Right hemianopic patients were more impaired than left hemianopic patients. The number of fixations and refixations did not differ significantly between both hemifields in the patients. Follow-up of one patient revealed changes of visual search over 18 months. By using more structured scanpaths with fewer saccades his search duration decreased. Furthermore, he developed a more efficient eye-movement strategy by making larger but less frequent saccades toward his blind side. In summary, visual-search behavior of acute hemianopic patients differs from healthy control subjects and from chronic hemianopic patients. We conclude that abnormal visual search in acute hemianopic patients is related to the brain lesion. We provide some evidence for adaptive eye-movement strategies developed over time. These adaptive strategies make the visual search more efficient and may help to compensate for the persisting visual-field loss. PMID:19645941

  3. Dynamic Modulation of Local Population Activity by Rhythm Phase in Human Occipital Cortex During a Visual Search Task

    PubMed Central

    Miller, Kai J.; Hermes, Dora; Honey, Christopher J.; Sharma, Mohit; Rao, Rajesh P. N.; den Nijs, Marcel; Fetz, Eberhard E.; Sejnowski, Terrence J.; Hebb, Adam O.; Ojemann, Jeffrey G.; Makeig, Scott; Leuthardt, Eric C.

    2010-01-01

    Brain rhythms are more than just passive phenomena in visual cortex. For the first time, we show that the physiology underlying brain rhythms actively suppresses and releases cortical areas on a second-to-second basis during visual processing. Furthermore, their influence is specific at the scale of individual gyri. We quantified the interaction between broadband spectral change and brain rhythms on a second-to-second basis in electrocorticographic (ECoG) measurement of brain surface potentials in five human subjects during a visual search task. Comparison of visual search epochs with a blank screen baseline revealed changes in the raw potential, the amplitude of rhythmic activity, and in the decoupled broadband spectral amplitude. We present new methods to characterize the intensity and preferred phase of coupling between broadband power and band-limited rhythms, and to estimate the magnitude of rhythm-to-broadband modulation on a trial-by-trial basis. These tools revealed numerous coupling motifs between the phase of low-frequency (δ, θ, α, β, and γ band) rhythms and the amplitude of broadband spectral change. In the θ and β ranges, the coupling of phase to broadband change is dynamic during visual processing, decreasing in some occipital areas and increasing in others, in a gyrally specific pattern. Finally, we demonstrate that the rhythms interact with one another across frequency ranges, and across cortical sites. PMID:21119778

  4. Central and peripheral vision loss differentially affects contextual cueing in visual search.

    PubMed

    Geringswald, Franziska; Pollmann, Stefan

    2015-09-01

    Visual search for targets in repeated displays is more efficient than search for the same targets in random distractor layouts. Previous work has shown that this contextual cueing is severely impaired under central vision loss. Here, we investigated whether central vision loss, simulated with gaze-contingent displays, prevents the incidental learning of contextual cues or the expression of learning, that is, the guidance of search by learned target-distractor configurations. Visual search with a central scotoma reduced contextual cueing both with respect to search times and gaze parameters. However, when the scotoma was subsequently removed, contextual cueing was observed at a magnitude comparable to that of controls who had searched without scotoma simulation throughout the experiment. This indicated that search with a central scotoma did not prevent incidental context learning, but interfered with search guidance by learned contexts. We discuss the role of visuospatial working memory load as a source of this interference. In contrast to central vision loss, peripheral vision loss was expected to prevent spatial configuration learning itself, because the restricted search window did not allow the integration of invariant local configurations with the global display layout. This expectation was confirmed in that visual search with a simulated peripheral scotoma eliminated contextual cueing not only in the initial learning phase with scotoma, but also in the subsequent test phase without scotoma.

  5. The Effects of Presentation Method and Information Density on Visual Search Ability and Working Memory Load

    ERIC Educational Resources Information Center

    Chang, Ting-Wen; Kinshuk; Chen, Nian-Shing; Yu, Pao-Ta

    2012-01-01

    This study investigates the effects of successive and simultaneous information presentation methods on learner's visual search ability and working memory load for different information densities. Since the processing of information in the brain depends on the capacity of visual short-term memory (VSTM), the limited information processing capacity…

  6. Hand Movement Deviations in a Visual Search Task with Cross Modal Cuing

    ERIC Educational Resources Information Center

    Aslan, Asli; Aslan, Hurol

    2007-01-01

    The purpose of this study is to demonstrate the cross-modal effects of an auditory organization on a visual search task and to investigate the influence of the level of detail in instructions describing or hinting at the associations between auditory stimuli and the possible locations of a visual target. In addition to measuring the participants'…

  7. Detection of Emotional Faces: Salient Physical Features Guide Effective Visual Search

    ERIC Educational Resources Information Center

    Calvo, Manuel G.; Nummenmaa, Lauri

    2008-01-01

    In this study, the authors investigated how salient visual features capture attention and facilitate detection of emotional facial expressions. In a visual search task, a target emotional face (happy, disgusted, fearful, angry, sad, or surprised) was presented in an array of neutral faces. Faster detection of happy and, to a lesser extent,…

  8. The Role of Target-Distractor Relationships in Guiding Attention and the Eyes in Visual Search

    ERIC Educational Resources Information Center

    Becker, Stefanie I.

    2010-01-01

    Current models of visual search assume that visual attention can be guided by tuning attention toward specific feature values (e.g., particular size, color) or by inhibiting the features of the irrelevant nontargets. The present study demonstrates that attention and eye movements can also be guided by a relational specification of how the target…

  9. Learning by Selection: Visual Search and Object Perception in Young Infants

    ERIC Educational Resources Information Center

    Amso, Dima; Johnson, Scott P.

    2006-01-01

    The authors examined how visual selection mechanisms may relate to developing cognitive functions in infancy. Twenty-two 3-month-old infants were tested in 2 tasks on the same day: perceptual completion and visual search. In the perceptual completion task, infants were habituated to a partly occluded moving rod and subsequently presented with …

  10. The preview benefit in single-feature and conjunction search: Constraints of visual marking.

    PubMed

    Meinhardt, Günter; Persike, Malte

    2015-01-01

    Previewing distracters enhances the efficiency of visual search. Watson and Humphreys (1997) proposed that the preview benefit rests on visual marking, a mechanism which actively encodes distracter locations at preview and inhibits them afterwards at search. As Watson and Humphreys did, we used a letter-color search task to study constraints of visual marking in conjunction search and near-efficient single-feature search with single-colored and homogeneous distracter letters. Search performance was measured for fixed target and distracter features (block design) and for randomly changed features across trials (random design). In single-feature search there was a full preview benefit for both block and random designs. In conjunction search a full preview benefit was obtained only for the block design; randomly changing target and distracter features disrupted the preview benefit. However, the preview benefit was restored when the distracters were organized in spatially coherent blocks. These findings imply that the temporal segregation of old and new items is sufficient for visual marking in near-efficient single-feature search, while in conjunction search it is not. We propose a supplanting grouping principle for the preview benefit: When the new items add a new color, conjunction search is initialized and attentional resources are withdrawn from the marking mechanism. Visual marking can be restored by a second grouping principle that joins with temporal asynchrony. This principle can be either spatial or feature based. In the case of the latter, repetition priming is necessary to establish joint grouping by color and temporal asynchrony.

  11. Long-Term Memory Search across the Visual Brain

    PubMed Central

    Fedurco, Milan

    2012-01-01

    Signal transmission from the human retina to visual cortex and connectivity of visual brain areas are relatively well understood. How specific visual perceptions transform into corresponding long-term memories remains unknown. Here, I will review recent Blood Oxygenation Level-Dependent functional Magnetic Resonance Imaging (BOLD fMRI) in humans together with molecular biology studies (animal models) aiming to understand how the retinal image gets transformed into so-called visual (retinotopic) maps. The broken object paradigm has been chosen in order to illustrate the complexity of multisensory perception of simple objects subject to a visual, rather than semantic, type of memory encoding. The author explores how amygdala projections to the visual cortex affect the memory formation and proposes the choice of experimental techniques needed to explain our massive visual memory capacity. Maintenance of the visual long-term memories is suggested to require recycling of GluR2-containing α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid receptors (AMPAR) and β2-adrenoreceptors at the postsynaptic membrane, which critically depends on the catalytic activity of the N-ethylmaleimide-sensitive factor (NSF) and protein kinase PKMζ. PMID:22900206

  12. Generalized pattern search algorithms with adaptive precision function evaluations

    SciTech Connect

    Polak, Elijah; Wetter, Michael

    2003-05-14

    In the literature on generalized pattern search algorithms, convergence to a stationary point of a once continuously differentiable cost function is established under the assumption that the cost function can be evaluated exactly. However, there is a large class of engineering problems where the numerical evaluation of the cost function involves the solution of systems of differential algebraic equations. Since the termination criteria of the numerical solvers often depend on the design parameters, computer code for solving these systems usually defines a numerical approximation to the cost function that is discontinuous with respect to the design parameters. Standard generalized pattern search algorithms have been applied heuristically to such problems, but no convergence properties have been stated. In this paper we extend a class of generalized pattern search algorithms to a form that uses adaptive precision approximations to the cost function. These numerical approximations need not define a continuous function. Our algorithms can be used for solving linearly constrained problems with cost functions that are at least locally Lipschitz continuous. Assuming that the cost function is smooth, we prove that our algorithms converge to a stationary point. Under the weaker assumption that the cost function is only locally Lipschitz continuous, we show that our algorithms converge to points at which the Clarke generalized directional derivatives are nonnegative in predefined directions. An important feature of our adaptive precision scheme is the use of coarse approximations in the early iterations, with the approximation precision controlled by a test. Such an approach leads to substantial time savings in minimizing computationally expensive functions.
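    The coarse-to-fine idea above can be conveyed with a toy compass (pattern) search in which the evaluation precision is tied to the mesh size. The `pattern_search` function and its precision rule are illustrative assumptions, not the authors' algorithm.

```python
import itertools

def pattern_search(f, x0, step=1.0, tol=1e-6, max_iter=10_000):
    """Minimal compass (generalized pattern) search with adaptive precision.

    `f(x, eps)` returns an approximation of the cost at `x` whose error is
    controlled by `eps`; precision is tightened as the mesh shrinks,
    mimicking a coarse-to-fine evaluation scheme.
    """
    x = list(x0)
    n = len(x)
    for _ in range(max_iter):
        eps = step * 1e-2          # coarse precision on coarse meshes
        fx = f(x, eps)
        improved = False
        # Poll the 2n compass directions +/- e_i scaled by the mesh size
        for i, s in itertools.product(range(n), (+1.0, -1.0)):
            y = list(x)
            y[i] += s * step
            if f(y, eps) < fx:
                x, improved = y, True
                break
        if not improved:
            step *= 0.5            # refine the mesh when the poll fails...
            if step < tol:         # ...until the mesh is fine enough
                break
    return x
```

    On a smooth quadratic this sketch walks to the minimizer on the coarse mesh and then only refines, which is where the time savings of coarse early evaluations come from.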

  13. Central and Peripheral Vision Loss Differentially Affects Contextual Cueing in Visual Search

    ERIC Educational Resources Information Center

    Geringswald, Franziska; Pollmann, Stefan

    2015-01-01

    Visual search for targets in repeated displays is more efficient than search for the same targets in random distractor layouts. Previous work has shown that this contextual cueing is severely impaired under central vision loss. Here, we investigated whether central vision loss, simulated with gaze-contingent displays, prevents the incidental…

  14. Contextual Cueing in Multiconjunction Visual Search Is Dependent on Color- and Configuration-Based Intertrial Contingencies

    ERIC Educational Resources Information Center

    Geyer, Thomas; Shi, Zhuanghua; Muller, Hermann J.

    2010-01-01

    Three experiments examined memory-based guidance of visual search using a modified version of the contextual-cueing paradigm (Jiang & Chun, 2001). The target, if present, was a conjunction of color and orientation, with target (and distractor) features randomly varying across trials (multiconjunction search). Under these conditions, reaction times…

  15. Brief Report: Eye Movements during Visual Search Tasks Indicate Enhanced Stimulus Discriminability in Subjects with PDD

    ERIC Educational Resources Information Center

    Kemner, Chantal; van Ewijk, Lizet; van Engeland, Herman; Hooge, Ignace

    2008-01-01

    Subjects with PDD excel on certain visuo-spatial tasks, amongst which visual search tasks, and this has been attributed to enhanced perceptual discrimination. However, an alternative explanation is that subjects with PDD show a different, more effective search strategy. The present study aimed to test both hypotheses, by measuring eye movements…

  16. Cortical Dynamics of Contextually Cued Attentive Visual Learning and Search: Spatial and Object Evidence Accumulation

    ERIC Educational Resources Information Center

    Huang, Tsung-Ren; Grossberg, Stephen

    2010-01-01

    How do humans use target-predictive contextual information to facilitate visual search? How are consistently paired scenic objects and positions learned and used to more efficiently guide search in familiar scenes? For example, humans can learn that a certain combination of objects may define a context for a kitchen and trigger a more efficient…

  17. Visual Search in Learning Disabled and Hyperactive Boys.

    ERIC Educational Resources Information Center

    McIntyre, Curtis W.; And Others

    Twelve learning disabled (LD), 12 learning disabled hyperactive (LDH) and 12 hyperactive (H) boys (6-11 years old) participated in an investigation of selective attention. Ss were asked to search for a target letter embedded within an array of noise letters. Two variations were included: one involving a simultaneous search for four possible target…

  18. Visual search and line bisection in hemianopia: computational modelling of cortical compensatory mechanisms and comparison with hemineglect.

    PubMed

    Lanyon, Linda J; Barton, Jason J S

    2013-01-01

    Hemianopia patients have lost vision from the contralateral hemifield, but make behavioural adjustments to compensate for this field loss. As a result, their visual performance and behaviour contrast with those of hemineglect patients who fail to attend to objects contralateral to their lesion. These conditions differ in their ocular fixations and perceptual judgments. During visual search, hemianopic patients make more fixations in contralesional space while hemineglect patients make fewer. During line bisection, hemianopic patients fixate the contralesional line segment more and make a small contralesional bisection error, while hemineglect patients make few contralesional fixations and a larger ipsilesional bisection error. Hence, there is an attentional failure for contralesional space in hemineglect but a compensatory adaptation to attend more to the blind side in hemianopia. A challenge for models of visual attentional processes is to show how compensation is achieved in hemianopia, and why such processes are hindered or inaccessible in hemineglect. We used a neurophysiology-derived computational model to examine possible cortical compensatory processes in simulated hemianopia from a V1 lesion and compared results with those obtained with the same processes under conditions of simulated hemineglect from a parietal lesion. A spatial compensatory bias to increase attention contralesionally replicated hemianopic scanning patterns during visual search but not during line bisection. To reproduce the latter required a second process, an extrastriate lateral connectivity facilitating form completion into the blind field: this allowed accurate placement of fixations on contralesional stimuli and reproduced fixation patterns and the contralesional bisection error of hemianopia. Neither of these two cortical compensatory processes was effective in ameliorating the ipsilesional bias in the hemineglect model. Our results replicate normal and pathological patterns of

  19. Patterns of non-embolic transient monocular visual field loss.

    PubMed

    Petzold, Axel; Islam, Niaz; Plant, G T

    2013-07-01

    The aim of this study was to systematically describe the semiology of non-embolic transient monocular visual field loss (neTMVL). We conducted a retrospective case note analysis of patients from Moorfields Eye Hospital (1995-2007). The variables analysed were age, age of onset, gender, past medical history or family history of migraine, eye affected, onset, duration and offset, perception (pattern, positive and negative symptoms), associated headache and autonomic symptoms, attack frequency, and treatment response to nifedipine. We identified 77 patients (28 male and 49 female). Mean age of onset was 37 years (range 14-77 years). The neTMVL was limited to the right eye in 36%, to the left in 47%, and occurred independently in either eye in 5% of cases. A past medical history of migraine was present in 12% and a family history in 8%. Headache followed neTMVL in 14% and was associated with autonomic features in 3%. The neTMVL was perceived as grey in 35%, white in 21%, black in 16% and as phosphenes in 9%; most frequently the pattern was patchy (20%). Recovery of vision frequently resembled attack onset in reverse. In 3 patients without associated headache the loss of vision was permanent. Treatment with nifedipine was initiated in 13 patients with an attack frequency of more than one per week and reduced the attack frequency in all. In conclusion, this large series of patients with neTMVL permits classification into five types of reversible visual field loss (grey, white, black, phosphenes, patchy). The treatment response to nifedipine suggests that some attacks are caused by vasospasm.

  20. Parametric Modeling of Visual Search Efficiency in Real Scenes

    PubMed Central

    Zhang, Xing; Li, Qingquan; Zou, Qin; Fang, Zhixiang; Zhou, Baoding

    2015-01-01

    How should the efficiency of searching for real objects in real scenes be measured? Traditionally, when searching for artificial targets, e.g., letters or rectangles, among distractors, efficiency is measured by a reaction time (RT) × Set Size function. However, it is not clear whether the set size of real scenes is as effective a parameter for measuring search efficiency as the set size of artificial scenes. The present study investigated search efficiency in real scenes based on a combination of low-level features, e.g., visible size and target-flanker separation factors, and high-level features, e.g., category effect and target template. Visible size refers to the pixel number of visible parts of an object in a scene, whereas separation is defined as the sum of the flank distances from a target to the nearest distractors. During the experiment, observers searched for targets in various urban scenes, using pictures as the target templates. The results indicated that the effect of the set size in real scenes decreased according to the variances of other factors, e.g., visible size and separation. Increasing visible size and separation factors increased search efficiency. Based on these results, an RT × Visible Size × Separation function was proposed. These results suggest that the proposed function is a practicable predictor of search efficiency in real scenes. PMID:26030908
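    One simple way to instantiate an "RT × Visible Size × Separation" function is an ordinary least-squares fit of reaction time on log-transformed predictors. The numbers and the log-linear form below are illustrative assumptions, not the study's data or model.

```python
import numpy as np

# Hypothetical per-trial data (illustrative, not the study's measurements):
# reaction time (s), visible size (pixels), target-flanker separation (pixels)
rt         = np.array([2.9, 2.1, 1.6, 1.3, 1.1, 0.9])
vis_size   = np.array([200.0, 400.0, 800.0, 1600.0, 3200.0, 6400.0])
separation = np.array([10.0, 40.0, 20.0, 160.0, 80.0, 320.0])

# Model RT as linear in log(visible size) and log(separation):
#   RT = b0 + b1*log(size) + b2*log(separation)
X = np.column_stack([np.ones_like(rt), np.log(vis_size), np.log(separation)])
beta, *_ = np.linalg.lstsq(X, rt, rcond=None)
pred = X @ beta

# A negative slope means faster search as that factor grows,
# matching the reported direction of the visible-size and separation effects.
print("coefficients:", beta)
```

    Comparing the residuals of this fit against a plain RT × Set Size fit would be one way to check the paper's claim that visible size and separation predict search efficiency in real scenes better than set size alone.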

  1. Visual search improvement in hemianopic patients after audio-visual stimulation.

    PubMed

    Bolognini, Nadia; Rasi, Fabrizio; Coccia, Michela; Làdavas, Elisabetta

    2005-12-01

    One of the most effective techniques in the rehabilitation of visual field defects is based on implementation of oculomotor strategies to compensate for visual field loss. In the present study we develop a new rehabilitation approach based on audio-visual stimulation of the visual field. Since it has been demonstrated that audio-visual interaction in multisensory neurons can transiently improve visual perception in patients with hemianopia, the aim of the present study was to verify whether a systematic audio-visual stimulation might induce a long-lasting amelioration of visual field disorders. Eight patients with chronic visual field defects were trained to detect the presence of visual targets. During the training, the visual stimulus could be presented alone, i.e. the unimodal condition, or together with an acoustic stimulus, i.e. the crossmodal conditions. In the crossmodal conditions, the spatial disparity between the visual and the acoustic stimuli was systematically varied (0, 16 and 32 degrees of disparity). Furthermore, the temporal interval between the acoustic stimulus and the visual target in the crossmodal conditions was gradually reduced from 500 to 0 ms. Patients underwent the treatment for 4 h daily, over a period of nearly 2 weeks. The results showed a progressive improvement of visual detections during the training and an improvement of visual oculomotor exploration that allowed patients to efficiently compensate for the loss of vision. More interestingly, there was a transfer of treatment gains to functional measures assessing visual field exploration and to daily-life activities, which was found stable at the 1-month follow-up control session. These findings are very promising with respect to the possibility of taking advantage of human multisensory capabilities to recover from unimodal sensory impairments.

  2. Multisensory brand search: How the meaning of sounds guides consumers' visual attention.

    PubMed

    Knoeferle, Klemens M; Knoeferle, Pia; Velasco, Carlos; Spence, Charles

    2016-06-01

    Building on models of crossmodal attention, the present research proposes that brand search is inherently multisensory, in that the consumers' visual search for a specific brand can be facilitated by semantically related stimuli that are presented in another sensory modality. A series of 5 experiments demonstrates that the presentation of spatially nonpredictive auditory stimuli associated with products (e.g., usage sounds or product-related jingles) can crossmodally facilitate consumers' visual search for, and selection of, products. Eye-tracking data (Experiment 2) revealed that the crossmodal effect of auditory cues on visual search manifested itself not only in RTs, but also in the earliest stages of visual attentional processing, thus suggesting that the semantic information embedded within sounds can modulate the perceptual saliency of the target products' visual representations. Crossmodal facilitation was even observed for newly learnt associations between unfamiliar brands and sonic logos, implicating multisensory short-term learning in establishing audiovisual semantic associations. The facilitation effect was stronger when searching complex rather than simple visual displays, thus suggesting a modulatory role of perceptual load. PMID:27295466

  4. Activation of new attentional templates for real-world objects in visual search.

    PubMed

    Nako, Rebecca; Smith, Tim J; Eimer, Martin

    2015-05-01

    Visual search is controlled by representations of target objects (attentional templates). Such templates are often activated in response to verbal descriptions of search targets, but it is unclear whether search can be guided effectively by such verbal cues. We measured ERPs to track the activation of attentional templates for new target objects defined by word cues. On each trial run, a word cue was followed by three search displays that contained the cued target object among three distractors. Targets were detected more slowly in the first display of each trial run, and the N2pc component (an ERP marker of attentional target selection) was attenuated and delayed for the first relative to the two successive presentations of a particular target object, demonstrating limitations in the ability of word cues to activate effective attentional templates. N2pc components to target objects in the first display were strongly affected by differences in object imageability (i.e., the ability of word cues to activate a target-matching visual representation). These differences were no longer present for the second presentation of the same target objects, indicating that a single perceptual encounter is sufficient to activate a precise attentional template. Our results demonstrate the superiority of visual over verbal target specifications in the control of visual search, highlight the fact that verbal descriptions are more effective for some objects than others, and suggest that the attentional templates that guide search for particular real-world target objects are analog visual representations.

  5. Effects of targets embedded within words in a visual search task

    PubMed Central

    Grabbe, Jeremy W.

    2014-01-01

    Visual search performance can be negatively affected when both targets and distracters share a dimension relevant to the task. This study examined whether visual search performance would be influenced by distracters that affect a dimension irrelevant to the task. In Experiment 1, target letters were embedded within a word inside the letter string of a letter search task. Experiment 2 compared targets embedded in words to targets embedded in nonwords. Experiment 3 compared targets embedded in words to a condition in which a word was present in a letter string, but the target letter, although in the letter string, was not embedded within the word. The results showed that visual search performance was negatively affected when a target appeared within a high-frequency word. These results suggest that the interaction and effectiveness of distracters are not merely dependent upon common features of the target and distracters, but can be affected by word frequency (a dimension not related to the task demands). PMID:24855497

  6. Visual height intolerance and acrophobia: clinical characteristics and comorbidity patterns.

    PubMed

    Kapfhammer, Hans-Peter; Huppert, Doreen; Grill, Eva; Fitz, Werner; Brandt, Thomas

    2015-08-01

    The purpose of this study was to estimate the general population lifetime and point prevalence of visual height intolerance (vHI) and acrophobia, to define their clinical characteristics, and to determine their anxious and depressive comorbidities. A case-control study was conducted within a German population-based cross-sectional telephone survey. A representative sample of 2,012 individuals aged 14 and above was selected. Defined neurological conditions (migraine, Menière's disease, motion sickness), symptom pattern, age of first manifestation, precipitating height stimuli, course of illness, psychosocial impairment, and comorbidity patterns (anxiety conditions, depressive disorders according to DSM-IV-TR) for vHI and acrophobia were assessed. The lifetime prevalence of vHI was 28.5% (women 32.4%, men 24.5%). Initial attacks occurred predominantly (36%) in the second decade. A rapid generalization to other height stimuli and a chronic course of illness with at least moderate impairment were observed. A total of 22.5% of individuals with vHI experienced attacks reaching the intensity of panic attacks. The lifetime prevalence of acrophobia was 6.4% (women 8.6%, men 4.1%), and point prevalence was 2.0% (women 2.8%, men 1.1%). VHI, and even more so acrophobia, were associated with high rates of comorbid anxious and depressive conditions. Migraine was both a significant predictor of later acrophobia and a significant consequence of previous acrophobia. VHI affects nearly a third of the general population; in more than 20% of these persons, vHI occasionally develops into panic attacks, and in 6.4% it escalates to acrophobia. Symptoms and degree of social impairment form a continuum of mild to seriously distressing conditions in susceptible subjects.

  7. Performance in a Visual Search Task Uniquely Predicts Reading Abilities in Third-Grade Hong Kong Chinese Children

    ERIC Educational Resources Information Center

    Liu, Duo; Chen, Xi; Chung, Kevin K. H.

    2015-01-01

    This study examined the relation between the performance in a visual search task and reading ability in 92 third-grade Hong Kong Chinese children. The visual search task, which is considered a measure of visual-spatial attention, accounted for unique variance in Chinese character reading after controlling for age, nonverbal intelligence,…

  8. Visual Search in ASD: Instructed Versus Spontaneous Local and Global Processing.

    PubMed

    Van der Hallen, Ruth; Evers, Kris; Boets, Bart; Steyaert, Jean; Noens, Ilse; Wagemans, Johan

    2016-09-01

    Visual search has been used extensively to investigate differences in mid-level visual processing between individuals with ASD and TD individuals. The current study employed two visual search paradigms with Gaborized stimuli to assess the impact of task distractors (Experiment 1) and task instruction (Experiment 2) on local-global visual processing in ASD versus TD children. Experiment 1 revealed both groups to be equally sensitive to the absence or presence of a distractor, regardless of the type of target or type of distractor. Experiment 2 revealed a differential effect of task instruction for ASD compared to TD, regardless of the type of target. Taken together, these results stress the importance of task factors in the study of local-global visual processing in ASD.

  9. Common Visual Pattern Discovery via Nonlinear Mean Shift Clustering.

    PubMed

    Wang, Linbo; Tang, Dong; Guo, Yanwen; Do, Minh N

    2015-12-01

    Discovering common visual patterns (CVPs) from two images is a challenging task due to geometric and photometric deformations as well as noise and clutter. The problem generally boils down to recovering correspondences of local invariant features, and is conventionally addressed by graph-based quadratic optimization approaches, which often suffer from high computational cost. In this paper, we propose an efficient approach that views the problem from a novel perspective. In particular, we consider each CVP as a common object in two images with a group of coherently deformed local regions. A geometric space with matrix Lie group structure is constructed by stacking up transformations estimated from initially appearance-matched local interest region pairs. This is followed by a mean shift clustering stage to group together those close transformations in the space. Joining the regions associated with transformations of the same group within each input image forms two large regions sharing a similar geometric configuration, which naturally leads to a CVP. To account for the non-Euclidean nature of the matrix Lie group, mean shift vectors are derived in the corresponding Lie algebra vector space with a newly provided effective distance measure. Extensive experiments on single and multiple common object discovery tasks as well as near-duplicate image retrieval verify the robustness and efficiency of the proposed approach. PMID:26415176
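
    The transformation-space clustering described above can be illustrated with a simplified mean shift sketch. As an assumption for illustration, transformations are reduced to flat Euclidean parameter vectors; the paper instead derives the shift vectors in a Lie algebra vector space with its own distance measure.

```python
import numpy as np

def mean_shift(points, bandwidth=1.0, n_iter=50, tol=1e-5):
    """Shift every point toward the local density maximum of the set.

    `points` is an (n, d) array of transformation parameters.  In the
    paper these live in a matrix Lie group; here a plain Euclidean
    space is used as a simplification.
    """
    modes = points.astype(float)
    for _ in range(n_iter):
        shifted = np.empty_like(modes)
        for i, m in enumerate(modes):
            d2 = np.sum((points - m) ** 2, axis=1)
            w = np.exp(-d2 / (2 * bandwidth ** 2))  # Gaussian kernel weights
            shifted[i] = (w[:, None] * points).sum(axis=0) / w.sum()
        if np.max(np.abs(shifted - modes)) < tol:
            modes = shifted
            break
        modes = shifted
    return modes

# Two tight clusters of similar transformations plus one outlier:
pts = np.array([[0.0, 0.0], [0.1, -0.1], [0.05, 0.05],
                [3.0, 3.0], [3.1, 2.9], [10.0, 10.0]])
modes = mean_shift(pts, bandwidth=0.5)
# Points converging to the same mode form one candidate CVP.
```

    Region pairs whose estimated transformations converge to the same mode are grouped, and the union of their regions in each image yields the common visual pattern.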

  10. Playing shooter and driving videogames improves top-down guidance in visual search.

    PubMed

    Wu, Sijing; Spence, Ian

    2013-05-01

    Playing action videogames is known to improve visual spatial attention and related skills. Here, we showed that playing action videogames also improves classic visual search, as well as the ability to locate targets in a dual search that mimics certain aspects of an action videogame. In Experiment 1A, first-person shooter (FPS) videogame players were faster than nonplayers in both feature search and conjunction search, and in Experiment 1B, they were faster and more accurate in a peripheral search and identification task while simultaneously performing a central search. In Experiment 2, we showed that 10 h of play could improve the performance of nonplayers on each of these tasks. Three different genres of videogames were used for training: two action games and a 3-D puzzle game. Participants who played an action game (either an FPS or a driving game) achieved greater gains on all search tasks than did those who trained using the puzzle game. Feature searches were faster after playing an action videogame, suggesting that players developed a better target template to guide search in a top-down manner. The results of the dual search suggest that, in addition to enhancing the ability to divide attention, playing an action game improves the top-down guidance of attention to possible target locations. The results have practical implications for the development of training tools to improve perceptual and cognitive skills.

  11. Spacing affects some but not all visual searches: implications for theories of attention and crowding.

    PubMed

    Reddy, Lavanya; VanRullen, Rufin

    2007-02-02

    We investigated the effect of varying interstimulus spacing on an upright among inverted face search and a red-green among green-red bisected disk search. Both tasks are classic examples of serial search; however, spacing affects them very differently: As spacing increased, face discrimination performance improved significantly, whereas performance on the bisected disks remained poor. (No effect of spacing was observed for either a red among green or an L among + search tasks, two classic examples of parallel search.) In a second experiment, we precued the target location so that attention was no longer a limiting factor: Both serial search tasks were now equally affected by spacing, a result we attribute to a more classical form of crowding. The observed spacing effect in visual search suggests that for certain tasks, serial search may result from local neuronal competition between target and distractors, soliciting attentional resources; in other cases, serial search must occur for another reason, for example, because an item-by-item, attention-mediated recognition must take place. We speculate that this distinction may be based on whether or not there exist neuronal populations tuned to the relevant target-distractor distinction, and we discuss the possible relations between this spacing effect in visual search and other forms of crowding.

  13. How Temporal and Spatial Aspects of Presenting Visualizations Affect Learning about Locomotion Patterns

    ERIC Educational Resources Information Center

    Imhof, Birgit; Scheiter, Katharina; Edelmann, Jorg; Gerjets, Peter

    2012-01-01

    Two studies investigated the effectiveness of dynamic and static visualizations for a perceptual learning task (locomotion pattern classification). In Study 1, seventy-five students viewed either dynamic, static-sequential, or static-simultaneous visualizations. For tasks of intermediate difficulty, dynamic visualizations led to better…

  14. Acute exercise and aerobic fitness influence selective attention during visual search.

    PubMed

    Bullock, Tom; Giesbrecht, Barry

    2014-01-01

    Successful goal-directed behavior relies on a human attention system that is flexible and able to adapt to different conditions of physiological stress. However, the effects of physical activity on multiple aspects of selective attention, and whether these effects are mediated by aerobic capacity, remain unclear. The aim of the present study was to investigate the effects of a prolonged bout of physical activity on visual search performance and perceptual distraction. Two groups of participants completed a hybrid visual search flanker/response-competition task in an initial baseline session and then at 17-min intervals over a 2 h 16 min test period. Participants assigned to the exercise group engaged in steady-state aerobic exercise between blocks of the visual task, whereas participants assigned to the control group rested between blocks. The key result was a correlation between individual differences in aerobic capacity and visual search performance, such that more fit individuals performed the search task more quickly. Critically, this relationship emerged only in the exercise group after the physical activity had begun. The relationship was not present in either group at baseline and never emerged in the control group during the test period, suggesting that under these task demands, aerobic capacity may be an important determinant of visual search performance under physical stress. The results enhance current understanding of the relationship between exercise and cognition, and also inform current models of selective attention.

  15. Effect of stimulus contrast on performance and eye movements in visual search.

    PubMed

    Näsänen, R; Ojanpää, H; Kojo, I

    2001-06-01

    According to the visual span control hypothesis, eye movements are controlled in relation to the size of the visual span. In reading, decreased contrast reduces visual span, saccade sizes, and reading speed. The purpose of the present study was to determine how stimulus contrast affects the speed of two-dimensional visual search and how changes in eye movements and visual span could explain changes in performance. The task of the observer was to search for, and identify, an uppercase letter in a rectangular array of characters in which the other items were numerals. Threshold search time, i.e., the duration of stimulus presentation required for search to succeed with a given probability, was determined using a multiple-alternative staircase method. Eye movements were recorded simultaneously with a video eye tracker. Four set sizes (stimulus arrays from 3x3 to 10x10) and five contrasts (0.0186 to 0.412) were used. At all set sizes, threshold search time decreased with increasing contrast. The average number of fixations per search also decreased with increasing contrast. At the smallest set size (3x3), only one fixation was needed except at the lowest contrast. Average fixation duration decreased and saccade amplitudes increased slightly with increasing contrast. The reduction in the number of fixations with increasing contrast suggests that the visual span, i.e., the area from which information can be collected in one fixation, increases with contrast. The reduction in the number of fixations, together with reduced fixation duration, results in reduced search times as contrast increases.
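
    The adaptive thresholding procedure can be sketched as follows. This is a simplified one-down/one-up staircase that converges near 50% correct; the study used a multiple-alternative staircase whose exact rule is not given in the abstract, and the simulated observer below is an illustrative assumption.

```python
import random

def staircase_threshold(trial_fn, start_ms=1000, step=0.1,
                        n_trials=60, floor=50, ceiling=4000):
    """Estimate threshold search time with a simple staircase (a sketch).

    `trial_fn(duration_ms)` runs one search trial and returns True when
    the observer identifies the target correctly.  After a correct
    response the presentation time shrinks; after an error it grows.
    The threshold is estimated from the durations at reversal points.
    """
    duration = start_ms
    reversals, going_down = [], None
    for _ in range(n_trials):
        correct = trial_fn(duration)
        if going_down is not None and correct != going_down:
            reversals.append(duration)  # direction changed: a reversal
        going_down = correct
        factor = (1 - step) if correct else (1 + step)
        duration = min(max(duration * factor, floor), ceiling)
    tail = reversals[-6:] if reversals else [duration]
    return sum(tail) / len(tail)

# Simulated observer whose true threshold is around 400 ms (assumed):
random.seed(1)
def observer(d):
    p = 1 / (1 + (400 / d) ** 4)  # illustrative psychometric function
    return random.random() < p

estimate = staircase_threshold(observer)
```

    Averaging the last few reversal durations gives a stable estimate near the duration at which the observer's accuracy crosses the staircase's target level.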

  16. Attention to quantitative and configural properties of abstract visual patterns by children and adults.

    PubMed

    Mendelson, M J

    1984-10-01

    Students in grades 2, 4, 6, and college sorted abstract visual patterns that varied both in amount of contour and in the type of visual organization (unstructured, simple symmetries, multiple symmetries, and rotational organization). The subjects were told to put the patterns into rows so that all the patterns in a row were "alike in some way," with no limits placed either on the number of rows or on the number of patterns in a row. Second graders sorted mainly on the basis of amount of contour and less so with reference to multiple types of visual organization. Fourth and sixth graders used contour as a sorting criterion less than second graders; moreover, they sorted with reference to all types of visual structure. As a group, college students sorted exclusively on the basis of structure. The data were taken as evidence that children attend to both amount of contour and visual organization, but that attention to visual structure increases with age. PMID:6510055

  17. The Nature and Process of Development in Averaged Visually Evoked Potentials: Discussion on Pattern Structure.

    ERIC Educational Resources Information Center

    Izawa, Shuji; Mizutani, Tohru

    This paper examines the development of visually evoked EEG patterns in retarded and normal subjects. The paper focuses on the averaged visually evoked potentials (AVEP) in the central and occipital regions of the brain in eyes closed and eyes open conditions. Wave pattern, amplitude, and latency are examined. The first section of the paper reviews…

  18. Distractor Dwelling, Skipping, and Revisiting Determine Target Absent Performance in Difficult Visual Search.

    PubMed

    Horstmann, Gernot; Herwig, Arvid; Becker, Stefanie I

    2016-01-01

    Some targets in visual search are more difficult to find than others. In particular, a target that is similar to the distractors is more difficult to find than a target that is dissimilar to the distractors. Efficiency differences between easy and difficult searches are manifest not only in target-present trials but also in target-absent trials. In fact, even physically identical displays are searched through with different efficiency depending on the searched-for target. Here, we monitored eye movements during search for a target similar to the distractors (difficult search) versus a target dissimilar to the distractors (easy search). We aimed to examine three hypotheses concerning the causes of differential search efficiencies in target-absent trials: (a) distractor dwelling, (b) distractor skipping, and (c) distractor revisiting. Reaction times increased with target similarity, which is consistent with existing theories and replicates earlier results. Eye movement data indicated guidance in target trials, even though search was very slow. Dwelling, skipping, and revisiting all contributed to low search efficiency in difficult search, with dwelling being the strongest factor. It is argued that differences in dwell time account for a large amount of the total search time differences. PMID:27574510

  19. When is it time to move to the next map? Optimal foraging in guided visual search.

    PubMed

    Ehinger, Krista A; Wolfe, Jeremy M

    2016-10-01

    Suppose that you are looking for visual targets in a set of images, each containing an unknown number of targets. How do you perform that search, and how do you decide when to move from the current image to the next? Optimal foraging theory predicts that foragers should leave the current image when the expected value from staying falls below the expected value from leaving. Here, we describe how to apply these models to more complex tasks, like search for objects in natural scenes where people have prior beliefs about the number and locations of targets in each image, and search is guided by target features and scene context. We model these factors in a guided search task and predict the optimal time to quit search. The data come from a satellite image search task. Participants searched for small gas stations in large satellite images. We model quitting times with a Bayesian model that incorporates prior beliefs about the number of targets in each map, average search efficiency (guidance), and actual search history in the image. Clicks deploying local magnification were used as surrogates for deployments of attention and, thus, for time. Leaving times (measured in mouse clicks) were well-predicted by the model. People terminated search when their expected rate of target collection fell to the average rate for the task. Apparently, people follow a rate-optimizing strategy in this task and use both their prior knowledge and search history in the image to decide when to quit searching.
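
    The rate-optimizing quitting rule can be sketched minimally: leave the current image when the instantaneous yield falls to the task-wide average. This is a bare-bones illustration; the paper's model instead forms a Bayesian estimate that combines prior beliefs about target counts, search guidance, and the search history in the image. The per-click rates below are assumed numbers.

```python
def should_quit(targets_found, clicks_spent, avg_rate):
    """Optimal-foraging quitting rule (a simplified sketch).

    Quit the current image when the current rate of target collection
    (targets per click, clicks standing in for deployments of
    attention) drops to the average rate for the whole task.
    """
    if clicks_spent == 0:
        return False  # nothing observed yet; keep searching
    current_rate = targets_found / clicks_spent
    return current_rate <= avg_rate

# Average yield across all maps: 0.05 targets per click (assumed).
# Early in a rich patch the yield beats the average, so keep searching;
# once it falls to the task average, move to the next map.
assert should_quit(2, 10, 0.05) is False  # 0.2 per click > average
assert should_quit(2, 40, 0.05) is True   # 0.05 per click = average
```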

  2. The Mechanisms Underlying the ASD Advantage in Visual Search

    ERIC Educational Resources Information Center

    Kaldy, Zsuzsa; Giserman, Ivy; Carter, Alice S.; Blaser, Erik

    2016-01-01

    A number of studies have demonstrated that individuals with autism spectrum disorders (ASDs) are faster or more successful than typically developing control participants at various visual-attentional tasks (for reviews, see Dakin and Frith in "Neuron" 48:497-507, 2005; Simmons et al. in "Vis Res" 49:2705-2739, 2009). This…

  3. Rare, but obviously there: effects of target frequency and salience on visual search accuracy.

    PubMed

    Biggs, Adam T; Adamo, Stephen H; Mitroff, Stephen R

    2014-10-01

    Accuracy can be extremely important for many visual search tasks. However, numerous factors work to undermine successful search. Several negative influences on search have been well studied, yet one potentially influential factor has gone almost entirely unexplored: namely, how is search performance affected by the likelihood that a specific target might appear? A recent study demonstrated that specific targets appearing infrequently (i.e., once in every thousand trials) were, on average, not often found. Even so, some infrequently appearing targets were actually found quite often, suggesting that target frequency is not the only factor at play. Here, we investigated whether salience (i.e., the extent to which an item stands out during search) could explain why some infrequent targets are easily found whereas others are almost never found. Using the mobile application Airport Scanner, we assessed how individual target frequency and salience interacted in a visual search task that included a wide array of targets and millions of trials. Target frequency and salience were both significant predictors of search accuracy, although target frequency explained more of the accuracy variance. Further, when examining only the rarest target items (those that appeared on less than 0.15% of all trials), there was a significant relationship between salience and accuracy such that less salient items were less likely to be found. Beyond implications for search theory, these data suggest significant vulnerability for real-world searches that involve targets that are both infrequent and hard to spot.

  4. Mapping the Color Space of Saccadic Selectivity in Visual Search

    ERIC Educational Resources Information Center

    Xu, Yun; Higgins, Emily C.; Xiao, Mei; Pomplun, Marc

    2007-01-01

    Color coding is used to guide attention in computer displays for such critical tasks as baggage screening or air traffic control. It has been shown that a display object attracts more attention if its color is more similar to the color for which one is searching. However, what does "similar" precisely mean? Can we predict the amount of attention…

  5. Searching the Visual Arts: An Analysis of Online Information Access.

    ERIC Educational Resources Information Center

    Brady, Darlene; Serban, William

    1981-01-01

    A search for stained glass bibliographic information using DIALINDEX identified 57 DIALOG files from a variety of subject categories and 646 citations as relevant. Files include applied science, biological sciences, chemistry, engineering, environment/pollution, people, business research, and public affairs. Eleven figures illustrate the search…

  6. Differential roles of the dorsal prefrontal and posterior parietal cortices in visual search: a TMS study.

    PubMed

    Yan, Yulong; Wei, Rizhen; Zhang, Qian; Jin, Zhenlan; Li, Ling

    2016-01-01

    Although previous studies have shown that fronto-parietal attentional networks play a crucial role in bottom-up and top-down processes, the relative contribution of the frontal and parietal cortices to these processes remains elusive. Here we used transcranial magnetic stimulation (TMS) to interfere with the activity of the right dorsal prefrontal cortex (DLPFC) or the right posterior parietal cortex (PPC) immediately prior to the onset of the visual search display. Participants searched for a target defined by color and orientation in a "pop-out" or "search" condition. Repetitive TMS was applied to either the right DLPFC or the right PPC on different days. Performance was evaluated at baseline (no TMS), during TMS, and after TMS (post-session). RTs were prolonged when TMS was applied over the DLPFC in the search condition, but not in the pop-out condition, relative to the baseline session. In comparison, TMS over the PPC prolonged RTs in the pop-out condition, and in the search condition when the target appeared in the left visual field. Taken together, these findings provide evidence for a differential role of the DLPFC and PPC in visual search, indicating that the DLPFC is specifically involved in the "search" condition, while the PPC is mainly involved in detecting "pop-out" targets. PMID:27452715

  8. Working memory load predicts visual search efficiency: Evidence from a novel pupillary response paradigm.

    PubMed

    Attar, Nada; Schneps, Matthew H; Pomplun, Marc

    2016-10-01

    An observer's pupil dilates and constricts in response to variables such as ambient and focal luminance, cognitive effort, the emotional stimulus content, and working memory load. The pupil's memory load response is of particular interest, as it might be used for estimating observers' memory load while they are performing a complex task, without adding an interruptive and confounding memory test to the protocol. One important task in which working memory's involvement is still being debated is visual search, and indeed a previous experiment by Porter, Troscianko, and Gilchrist (Quarterly Journal of Experimental Psychology, 60, 211-229, 2007) analyzed observers' pupil sizes during search to study this issue. These authors found that pupil size increased over the course of the search, and they attributed this finding to accumulating working memory load. However, since the pupil response is slow and does not depend on memory load alone, this conclusion is rather speculative. In the present study, we estimated working memory load in visual search during the presentation of intermittent fixation screens, thought to induce a low, stable level of arousal and cognitive effort. Using standard visual search and control tasks, we showed that this paradigm reduces the influence of non-memory-related factors on pupil size. Furthermore, we found an early increase in working memory load to be associated with more efficient search, indicating a significant role of working memory in the search process.

  9. Fruitful visual search: inhibition of return in a virtual foraging task.

    PubMed

    Thomas, Laura E; Ambinder, Michael S; Hsieh, Brendon; Levinthal, Brian; Crowell, James A; Irwin, David E; Kramer, Arthur F; Lleras, Alejandro; Simons, Daniel J; Wang, Ranxiao Frances

    2006-10-01

    Inhibition of return (IOR) has long been viewed as a foraging facilitator in visual search. We investigated the contribution of IOR in a task that approximates natural foraging more closely than typical visual search tasks. Participants in a fully immersive virtual reality environment manually searched an array of leaves for a hidden piece of fruit, using a wand to select and examine each leaf location. Search was slower than in typical IOR paradigms, taking seconds instead of a few hundred milliseconds. Participants also made a speeded response when they detected a flashing leaf that either was or was not in a previously searched location. Responses were slower when the flashing leaf was in a previously searched location than when it was in an unvisited location. These results generalize IOR to an approximation of a naturalistic visual search setting and support the hypothesis that IOR can facilitate foraging. The experiment also constitutes the first use of a fully immersive virtual reality display in the study of IOR. PMID:17328391

  10. Learning from demonstrations: the role of visual search during observational learning from video and point-light models.

    PubMed

    Horn, Robert R; Williams, A Mark; Scott, Mark A

    2002-03-01

    In this study, we examined the visual search strategies used during observation of video and point-light display models. We also assessed the relative effectiveness of video and point-light models in facilitating the learning of task outcomes and movement patterns. Twenty-one female novice soccer players were divided equally into video, point-light display and no-model (control) groups. Participants chipped a soccer ball onto a target area from which radial and variable error scores were taken. Kinematic data were also recorded using an opto-electrical system. Both a pre- and post-test were performed, interspersed with three periods of acquisition and observation of the model. A retention test was completed 2 days after the post-test. There was a significant main effect for test period for outcome accuracy and variability, but observation of a model did not facilitate outcome-based learning. Participants observing the models acquired a global movement pattern that was closer to that of the model than the controls, although they did not acquire the local relations in the movement pattern, evidenced by joint range of motion and angle-angle plots. There were no significant differences in learning between the point-light display and video groups. The point-light display model group used a more selective visual search pattern than the video model group, while both groups became more selective with successive trials and observation periods. The results are discussed in the context of Newell's hierarchy of coordination and control and Scully and Newell's visual perception perspective.

  11. Disturbance of visual search by stimulating to posterior parietal cortex in the brain using transcranial magnetic stimulation

    NASA Astrophysics Data System (ADS)

    Iramina, Keiji; Ge, Sheng; Hyodo, Akira; Hayami, Takehito; Ueno, Shoogo

    2009-04-01

    In this study, we applied transcranial magnetic stimulation (TMS) to investigate the temporal aspect of the functional processing of visual attention. Although the right posterior parietal cortex (PPC) is known to play a role in certain visual search tasks, little is known about the temporal aspect of this area. Three visual search tasks of differing difficulty were carried out: the "easy feature task," the "hard feature task," and the "conjunction task." To investigate the temporal aspect of PPC involvement in visual search, we applied various stimulus onset asynchronies (SOAs) and measured visual search reaction times. Magnetic stimulation was applied over the right or the left PPC with a figure-eight coil. The results show that reaction times in the hard feature task were longer than those in the easy feature task. At SOA = 150 ms, target-present reaction times increased significantly when TMS pulses were applied, compared with the no-TMS condition. We infer that the right PPC is involved in visual search at about 150 ms after visual stimulus presentation: magnetic stimulation of the right PPC disturbed the processing of visual search, whereas stimulation of the left PPC had no effect.

  12. Visual Servoing: A technology in search of an application

    SciTech Connect

    Feddema, J.T.

    1994-05-01

    Considerable research has been performed on Robotic Visual Servoing (RVS) over the past decade. Using real-time visual feedback, researchers have demonstrated that robotic systems can pick up moving parts, insert bolts, apply sealant, and guide vehicles. With the rapid improvements being made in computing and image processing hardware, one would expect that every robot manufacturer would have an RVS option by the end of the 1990s. So why aren't the Fanucs, ABBs, Adepts, and Motomans of the world investing heavily in RVS? I would suggest four reasons: cost, complexity, reliability, and lack of demand. Solutions to the first three are approaching the point where RVS could be commercially available; however, the lack of demand is keeping RVS from becoming a reality in the near future. A new set of applications is needed to focus near-term RVS development. These must be applications which currently do not have solutions. Once developed and working in one application area, the technology is more likely to quickly spread to other areas. DOE has several applications that are looking for technological solutions, such as agile weapons production, weapons disassembly, decontamination and dismantlement of nuclear facilities, and hazardous waste remediation. This paper will examine a few of these areas and suggest directions for application-driven visual servoing research.

  13. Abnormal early brain responses during visual search are evident in schizophrenia but not bipolar affective disorder.

    PubMed

    VanMeerten, Nicolaas J; Dubke, Rachel E; Stanwyck, John J; Kang, Seung Suk; Sponheim, Scott R

    2016-01-01

    People with schizophrenia show deficits in processing visual stimuli but neural abnormalities underlying the deficits are unclear and it is unknown whether such functional brain abnormalities are present in other severe mental disorders or in individuals who carry genetic liability for schizophrenia. To better characterize brain responses underlying visual search deficits and test their specificity to schizophrenia we gathered behavioral and electrophysiological responses during visual search (i.e., Span of Apprehension [SOA] task) from 38 people with schizophrenia, 31 people with bipolar disorder, 58 biological relatives of people with schizophrenia, 37 biological relatives of people with bipolar disorder, and 65 non-psychiatric control participants. Through subtracting neural responses associated with purely sensory aspects of the stimuli we found that people with schizophrenia exhibited reduced early posterior task-related neural responses (i.e., Span Endogenous Negativity [SEN]) while other groups showed normative responses. People with schizophrenia exhibited longer reaction times than controls during visual search but nearly identical accuracy. Those individuals with schizophrenia who had larger SENs performed more efficiently (i.e., shorter reaction times) on the SOA task suggesting that modulation of early visual cortical responses facilitated their visual search. People with schizophrenia also exhibited a diminished P300 response compared to other groups. Unaffected first-degree relatives of people with bipolar disorder and schizophrenia showed an amplified N1 response over posterior brain regions in comparison to other groups. Diminished early posterior brain responses are associated with impaired visual search in schizophrenia and appear to be specifically associated with the neuropathology of schizophrenia.

  14. How You Use It Matters: Object Function Guides Attention During Visual Search in Scenes.

    PubMed

    Castelhano, Monica S; Witherspoon, Richelle L

    2016-05-01

    How does one know where to look for objects in scenes? Objects are seen in context daily, but also used for specific purposes. Here, we examined whether an object's function can guide attention during visual search in scenes. In Experiment 1, participants studied either the function (function group) or features (feature group) of a set of invented objects. In a subsequent search, the function group located studied objects faster than novel (unstudied) objects, whereas the feature group did not. In Experiment 2, invented objects were positioned in locations that were either congruent or incongruent with the objects' functions. Search for studied objects was faster for function-congruent locations and hampered for function-incongruent locations, relative to search for novel objects. These findings demonstrate that knowledge of object function can guide attention in scenes, and they have important implications for theories of visual cognition, cognitive neuroscience, and developmental and ecological psychology.

  15. Effect of previously fixated locations on saccade trajectory during free visual search.

    PubMed

    Sogo, Hiroyuki; Takeda, Yuji

    2006-10-01

    Recent studies have shown that the saccade trajectory often curved away from an object that was previously attended but irrelevant to the current saccade goal. We investigated whether such curved saccades occur during serial visual search, which requires sequential saccades possibly controlled by inhibition to multiple locations. The results show that the saccade trajectories were affected by at least three previous fixations. Furthermore, the effect of the previous fixations on saccade trajectories decreased exponentially with time or the number of intervening saccades. The relationship between the curved saccade trajectory and inhibition of return during serial visual search was discussed.

  16. Eye Movements, Visual Search and Scene Memory, in an Immersive Virtual Environment

    PubMed Central

    Sullivan, Brian; Snyder, Kat; Ballard, Dana; Hayhoe, Mary

    2014-01-01

    Visual memory has been demonstrated to play a role in both visual search and attentional prioritization in natural scenes. However, it has been studied predominantly in experimental paradigms using multiple two-dimensional images. Natural experience, by contrast, entails prolonged immersion in a limited number of three-dimensional environments. The goal of the present experiment was to recreate circumstances comparable to natural visual experience in order to evaluate the role of scene memory in guiding eye movements in a natural environment. Subjects performed a continuous visual-search task within an immersive virtual-reality environment over three days. We found that, similar to two-dimensional contexts, viewers rapidly learn the location of objects in the environment over time and use spatial memory to guide search. Incidental fixations did not provide obvious benefit to subsequent search, suggesting that semantic contextual cues may often be just as efficient, or that many incidentally fixated items are not held in memory in the absence of a specific task. On the third day of experience in the environment, previous search items changed in color. These items were fixated upon with increased probability relative to control objects, suggesting that memory-guided prioritization (or Surprise) may be a robust mechanism for attracting gaze to novel features of natural environments, in addition to task factors and simple spatial saliency. PMID:24759905

  17. Quantifying peripheral and foveal perceived differences in natural image patches to predict visual search performance.

    PubMed

    Hughes, Anna E; Southwell, Rosy V; Gilchrist, Iain D; Tolhurst, David J

    2016-08-01

    Duncan and Humphreys (1989) identified two key factors that affected performance in a visual search task for a target among distractors. The first was the similarity of the target to distractors (TD), and the second was the similarity of distractors to each other (DD). Here we investigate if it is the perceived similarity in foveal or peripheral vision that determines performance. We studied search using stimuli made from patches cut from colored images of natural objects; differences between targets and their modified distractors were estimated using a ratings task peripherally and foveally. We used search conditions in which the targets and distractors were easy to distinguish both foveally and peripherally ("high" stimuli), in which they were difficult to distinguish both foveally and peripherally ("low"), and in which they were easy to distinguish foveally but difficult to distinguish peripherally ("metamers"). In the critical metameric condition, search slopes (change of search time with number of distractors) were similar to the "low" condition, indicating a key role for peripheral information in visual search as both conditions have low perceived similarity peripherally. Furthermore, in all conditions, search slope was well described quantitatively from peripheral TD and DD but not foveal. However, some features of search, such as error rates, do indicate roles for foveal vision too.

  18. Memory and visual search in naturalistic 2D and 3D environments

    PubMed Central

    Li, Chia-Ling; Aivar, M. Pilar; Kit, Dmitry M.; Tong, Matthew H.; Hayhoe, Mary M.

    2016-01-01

    The role of memory in guiding attention allocation in daily behaviors is not well understood. In experiments with two-dimensional (2D) images, there is mixed evidence about the importance of memory. Because the stimulus context in laboratory experiments and daily behaviors differs extensively, we investigated the role of memory in visual search, in both two-dimensional (2D) and three-dimensional (3D) environments. A 3D immersive virtual apartment composed of two rooms was created, and a parallel 2D visual search experiment composed of snapshots from the 3D environment was developed. Eye movements were tracked in both experiments. Repeated searches for geometric objects were performed to assess the role of spatial memory. Subsequently, subjects searched for realistic context objects to test for incidental learning. Our results show that subjects learned the room-target associations in 3D but less so in 2D. Gaze was increasingly restricted to relevant regions of the room with experience in both settings. Search for local contextual objects, however, was not facilitated by early experience. Incidental fixations to context objects do not necessarily benefit search performance. Together, these results demonstrate that memory for global aspects of the environment guides search by restricting allocation of attention to likely regions, whereas task relevance determines what is learned from the active search experience. Behaviors in 2D and 3D environments are comparable, although there is greater use of memory in 3D. PMID:27299769

  20. Faster than the speed of rejection: Object identification processes during visual search for multiple targets

    PubMed Central

    Godwin, Hayward J.; Walenchok, Stephen C.; Houpt, Joseph W.; Hout, Michael C.; Goldinger, Stephen D.

    2015-01-01

    When engaged in a visual search for two targets, participants are slower and less accurate in their responses, relative to their performance when searching for singular targets. Previous work on this “dual-target cost” has primarily focused on the breakdown of attention guidance when looking for two items. Here, we investigated how object identification processes are affected by dual-target search. Our goal was to chart the speed at which distractors could be rejected, in order to assess whether dual-target search impairs object identification. To do so, we examined the capacity coefficient, which measures the speed at which decisions can be made, and provides a baseline of parallel performance against which to compare. We found that participants could search at or above this baseline, suggesting that dual-target search does not impair object identification abilities. We also found substantial differences in performance when participants were asked to search for simple versus complex images. Somewhat paradoxically, participants were able to reject complex images more rapidly than simple images. We suggest that this reflects the greater number of features that can be used to identify complex images, a finding that has important consequences for understanding object identification in visual search more generally. PMID:25938253
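    The capacity coefficient cited above is a standard measure from systems factorial technology (Townsend & Nozawa, 1995): the ratio of the cumulative hazard of the redundant-target response-time distribution to the sum of the single-target cumulative hazards, with H(t) = -log S(t). A minimal empirical estimator, offered only as a sketch (function names and example values are hypothetical, not the authors' analysis pipeline):

```python
import math

def cumulative_hazard(rts, t):
    """Empirical cumulative hazard H(t) = -log S(t) from a sample of reaction times."""
    surviving = sum(1 for rt in rts if rt > t) / len(rts)
    if surviving == 0.0:
        raise ValueError("t lies beyond the sample: S(t) = 0")
    return -math.log(surviving)

def capacity_coefficient(rt_dual, rt_single_a, rt_single_b, t):
    """OR-task capacity coefficient C(t) = H_dual(t) / (H_a(t) + H_b(t)).
    C(t) = 1 matches the unlimited-capacity parallel baseline;
    C(t) < 1 indicates limited capacity, C(t) > 1 super-capacity."""
    return cumulative_hazard(rt_dual, t) / (
        cumulative_hazard(rt_single_a, t) + cumulative_hazard(rt_single_b, t))
```

    Values near 1 indicate performance at the unlimited-capacity parallel baseline; the finding of search "at or above this baseline" corresponds to C(t) >= 1.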

  1. Examining perceptual and conceptual set biases in multiple-target visual search.

    PubMed

    Biggs, Adam T; Adamo, Stephen H; Dowd, Emma Wu; Mitroff, Stephen R

    2015-04-01

    Visual search is a common practice conducted countless times every day, and one important aspect of visual search is that multiple targets can appear in a single search array. For example, an X-ray image of airport luggage could contain both a water bottle and a gun. Searchers are more likely to miss additional targets after locating a first target in multiple-target searches, which presents a potential problem: If airport security officers were to find a water bottle, would they then be more likely to miss a gun? One hypothetical cause of multiple-target search errors is that searchers become biased to detect additional targets that are similar to a found target, and therefore become less likely to find additional targets that are dissimilar to the first target. This particular hypothesis has received theoretical, but little empirical, support. In the present study, we tested the bounds of this idea by utilizing "big data" obtained from the mobile application Airport Scanner. Multiple-target search errors were substantially reduced when the two targets were identical, suggesting that the first-found target did indeed create biases during subsequent search. Further analyses delineated the nature of the biases, revealing both a perceptual set bias (i.e., a bias to find additional targets with features similar to those of the first-found target) and a conceptual set bias (i.e., a bias to find additional targets with a conceptual relationship to the first-found target). These biases are discussed in terms of the implications for visual-search theories and applications for professional visual searchers. PMID:25678271

  3. Correlation of pattern reversal visual evoked potential parameters with the pattern standard deviation in primary open angle glaucoma

    PubMed Central

    Kothari, Ruchi; Bokariya, Pradeep; Singh, Ramji; Singh, Smita; Narang, Purvasha

    2014-01-01

    AIM: To evaluate whether glaucomatous visual field defect, particularly the pattern standard deviation (PSD) of the Humphrey visual field, is associated with visual evoked potential (VEP) parameters in patients with primary open angle glaucoma (POAG). METHODS: Visual fields by Humphrey perimetry and simultaneous recordings of pattern reversal visual evoked potential (PRVEP) were assessed in 100 patients with POAG. The stimulus configuration for VEP recordings consisted of the transient pattern reversal method, in which a full-field black and white checkerboard pattern was generated and displayed on a 14″ colour VEP monitor by an electronic pattern regenerator built into an evoked potential recorder (RMS EMG EP MARK II). RESULTS: There is a highly significant (P<0.001) negative correlation of P100 amplitude and a statistically significant (P<0.05) positive correlation of N70, P100 and N155 latencies with the PSD of the Humphrey visual field in subjects with POAG across age groups, as evaluated by Student's t-test. CONCLUSION: Prolongation of VEP latencies was mirrored by a corresponding increase in PSD values; conversely, as PSD increased, the magnitude of the VEP excursions diminished. PMID:24790879

  4. Memory for found targets interferes with subsequent performance in multiple-target visual search.

    PubMed

    Cain, Matthew S; Mitroff, Stephen R

    2013-10-01

    Multiple-target visual searches--when more than 1 target can appear in a given search display--are commonplace in radiology, airport security screening, and the military. Whereas 1 target is often found accurately, additional targets are more likely to be missed in multiple-target searches. To better understand this decrement in 2nd-target detection, here we examined 2 potential forms of interference that can arise from finding a 1st target: interference from the perceptual salience of the 1st target (a now highly relevant distractor in a known location) and interference from a newly created memory representation for the 1st target. Here, we found that removing found targets from the display or making them salient and easily segregated color singletons improved subsequent search accuracy. However, replacing found targets with random distractor items did not improve subsequent search accuracy. Removing and highlighting found targets likely reduced both a target's visual salience and its memory load, whereas replacing a target removed its visual salience but not its representation in memory. Collectively, the current experiments suggest that the working memory load of a found target has a larger effect on subsequent search accuracy than does its perceptual salience.

  5. The Visual Hemifield Asymmetry in the Spatial Blink during Singleton Search and Feature Search

    ERIC Educational Resources Information Center

    Burnham, Bryan R.; Rozell, Cassandra A.; Kasper, Alex; Bianco, Nicole E.; Delliturri, Antony

    2011-01-01

    The present study examined a visual field asymmetry in the contingent capture of attention that was previously observed by Du and Abrams (2010). In our first experiment, color singleton distractors that matched the color of a to-be-detected target produced a stronger capture of attention when they appeared in the left visual hemifield than in the…

  6. Pattern electroretinogram (PERG) and pattern visual evoked potential (PVEP) in the early stages of Alzheimer's disease.

    PubMed

    Krasodomska, Kamila; Lubiński, Wojciech; Potemkowski, Andrzej; Honczarenko, Krystyna

    2010-10-01

    Alzheimer's disease (AD) is one of the most common causes of dementia in the world. Patients with AD frequently complain of vision disturbances that do not manifest as changes in routine ophthalmological examination findings. The main causes of these disturbances are neuropathological changes in the visual cortex, although abnormalities in the retina and optic nerve cannot be excluded. Pattern electroretinogram (PERG) and pattern visual evoked potential (PVEP) tests are commonly used in ophthalmology to estimate bioelectrical function of the retina and optic nerve. The aim of this study was to determine whether retinal and optic nerve function, measured by PERG and PVEP tests, is changed in individuals in the early stages of AD with normal routine ophthalmological examination results. Standard PERG and PVEP tests were performed in 30 eyes of 30 patients with the early stages of AD. The results were compared to 30 eyes of 30 normal healthy controls. PERG and PVEP tests were recorded in accordance with the International Society for Clinical Electrophysiology of Vision (ISCEV) standards. Additionally, neural conduction was measured using retinocortical time (RCT)--the difference between P100-wave latency in PVEP and P50-wave implicit time in PERG. In PERG test, PVEP test, and RCT, statistically significant changes were detected. In PERG examination, increased implicit time of P50-wave (P < 0.03) and amplitudes reductions in P50- and N95-waves (P < 0.0001) were observed. In PVEP examination, increased latency of P100-wave (P < 0.0001) was found. A significant increase in RCT (P < 0.0001) was observed. The most prevalent features were amplitude reduction in N95-wave and increased latency of P100-wave which were seen in 56.7% (17/30) of the AD eyes. In patients with the early stages of AD and normal routine ophthalmological examination results, dysfunction of the retinal ganglion cells as well as of the optic nerve is present, as detected by PERG and PVEP tests. These
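    The retinocortical time (RCT) measure defined above reduces to a difference of two latencies. A trivial sketch with hypothetical values (not taken from the study):

```python
def retinocortical_time(p100_latency_ms: float, p50_implicit_ms: float) -> float:
    """RCT: PVEP P100 latency minus PERG P50 implicit time, both in milliseconds.
    A proxy for neural conduction time between retina and visual cortex."""
    return p100_latency_ms - p50_implicit_ms

# Hypothetical example values (not from the study):
rct = retinocortical_time(p100_latency_ms=112.0, p50_implicit_ms=54.0)
print(rct)  # 58.0
```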

  7. The right hemisphere is dominant in organization of visual search-A study in stroke patients.

    PubMed

    Ten Brink, Antonia F; Matthijs Biesbroek, J; Kuijf, Hugo J; Van der Stigchel, Stefan; Oort, Quirien; Visser-Meily, Johanna M A; Nijboer, Tanja C W

    2016-05-01

    Cancellation tasks are widely used for diagnosis of lateralized attentional deficits in stroke patients. A disorganized fashion of target cancellation has been hypothesized to reflect disturbed spatial exploration. In the current study we aimed to examine which lesion locations result in disorganized visual search during cancellation tasks, in order to determine which brain areas are involved in search organization. A computerized shape cancellation task was administered in 78 stroke patients. As an index of search organization, the number of intersections between the paths linking consecutively crossed targets was computed (i.e., the intersections rate). This measure is known to accurately depict disorganized visual search in a stroke population. Ischemic lesions were delineated on CT or MRI images. Assumption-free voxel-based lesion-symptom mapping and region of interest-based analyses were used to determine the grey and white matter anatomical correlates of the intersections rate as a continuous measure. The right lateral occipital cortex, superior parietal lobule, postcentral gyrus, superior temporal gyrus, middle temporal gyrus, supramarginal gyrus, inferior longitudinal fasciculus, first branch of the superior longitudinal fasciculus (SLF I), and the inferior fronto-occipital fasciculus were related to search organization. To conclude, a clear right hemispheric dominance for search organization was revealed. Further, the correlates of disorganized search overlap with regions that have previously been associated with conjunctive search and spatial working memory. This suggests that disorganized visual search is caused by disturbed spatial processes, rather than deficits in high-level executive function or planning, which would be expected to be more related to frontal regions. PMID:26876010
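    The intersections measure described above can be computed geometrically: join consecutively cancelled targets into a polyline and count crossings between non-adjacent segments. A minimal sketch of that raw count (the published index may additionally normalize it; coordinates here are hypothetical):

```python
from itertools import combinations

def _ccw(a, b, c):
    # True if points a, b, c are in counter-clockwise order.
    return (c[1] - a[1]) * (b[0] - a[0]) > (b[1] - a[1]) * (c[0] - a[0])

def segments_intersect(p1, p2, p3, p4):
    # Proper-crossing test for segments p1-p2 and p3-p4 (collinear overlap excluded).
    return (_ccw(p1, p3, p4) != _ccw(p2, p3, p4)) and \
           (_ccw(p1, p2, p3) != _ccw(p1, p2, p4))

def intersections_rate(cancellations):
    """Count crossings among the path segments linking consecutively cancelled targets."""
    segs = list(zip(cancellations, cancellations[1:]))
    count = 0
    for (i, (a, b)), (j, (c, d)) in combinations(enumerate(segs), 2):
        if j - i > 1 and segments_intersect(a, b, c, d):  # skip segments sharing an endpoint
            count += 1
    return count
```

    An organized, systematic scan yields few crossings; an erratic, disorganized scan yields many.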

  8. How do magnitude and frequency of monetary reward guide visual search?

    PubMed

    Won, Bo-Yeong; Leber, Andrew B

    2016-07-01

    How does reward guide spatial attention during visual search? In the present study, we examine whether and how two types of reward information-magnitude and frequency-guide search behavior. Observers were asked to find a target among distractors in a search display to earn points. We manipulated multiple levels of value across the search display quadrants in two ways: For reward magnitude, targets appeared equally often in each quadrant, and the value of each quadrant was determined by the average points earned per target; for reward frequency, we varied how often the target appeared in each quadrant but held the average points earned per target constant across the quadrants. In Experiment 1, we found that observers were highly sensitive to the reward frequency information, and prioritized their search accordingly, whereas we did not find much prioritization based on magnitude information. In Experiment 2, we found that magnitude information for a nonspatial feature (color) could bias search performance, showing that the relative insensitivity to magnitude information during visual search is not generalized across all types of information. In Experiment 3, we replicated the negligible use of spatial magnitude information even when we used limited-exposure displays to incentivize the expression of learning. In Experiment 4, we found participants used the spatial magnitude information during a modified choice task-but again not during search. Taken together, these findings suggest that the visual search apparatus does not equally exploit all potential sources of spatial value information; instead, it favors spatial reward frequency information over spatial reward magnitude information. PMID:27270595
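    The two value manipulations described above can be summarized arithmetically: a quadrant's expected value is the probability that the target appears there times the points paid per target. A sketch with hypothetical schedules (the study's actual point values are not given here):

```python
def expected_points(target_frequency, points_per_target):
    """Expected points a quadrant contributes: P(target appears there) x payoff."""
    return target_frequency * points_per_target

# Hypothetical schedules (illustrative values, not the study's):
# magnitude manipulation: equal frequency per quadrant, payoffs differ
magnitude_schedule = [(0.25, p) for p in (2, 4, 8, 16)]
# frequency manipulation: frequencies differ, payoff per target held constant
frequency_schedule = [(f, 7.5) for f in (0.1, 0.2, 0.3, 0.4)]

magnitude_evs = [expected_points(f, p) for f, p in magnitude_schedule]
frequency_evs = [expected_points(f, p) for f, p in frequency_schedule]
```

    Both schedules create a spatial value gradient, yet the study found that search prioritization tracked only the frequency gradient.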

  11. Binocular saccade coordination in reading and visual search: a developmental study in typical reader and dyslexic children

    PubMed Central

    Seassau, Magali; Gérard, Christophe Loic; Bui-Quoc, Emmanuel; Bucci, Maria Pia

    2014-01-01

    Studies dealing with developmental aspects of binocular eye movement behavior during reading are scarce. In this study we explored binocular strategies during reading and visual search tasks in a large population of dyslexic and typical readers. Binocular eye movements were recorded using a video-oculography system in 43 dyslexic children (aged 8–13) and in a group of 42 age-matched typical readers. The main findings are: (i) the ocular motor characteristics of dyslexic children in the reading task are impaired in comparison to those reported for typical children; (ii) a developmental effect exists in reading in control children, whereas in dyslexic children a developmental effect was observed only for fixation durations; and (iii) ocular motor behavior in the visual search tasks is similar for dyslexic children and typical readers, except for disconjugacy during and after the saccade, on which dyslexic children are impaired in comparison to typical children. The data reported here confirm and extend previous studies of children's reading. Both reading skills and binocular saccade coordination improve with age in typical readers. The atypical eye movement patterns observed in dyslexic children suggest a deficiency in visual attentional processing as well as an impairment of the interaction between the ocular motor saccade and vergence systems. PMID:25400559

  12. Decision processes in visual search as a function of target prevalence.

    PubMed

    Peltier, Chad; Becker, Mark W

    2016-09-01

    The probability of missing a target increases in low target prevalence search tasks. Wolfe and Van Wert (2010) propose two causes of this effect: a reduction in the quitting threshold, and a conservative shift in the decision criterion used to evaluate each item. A reduced quitting threshold predicts that target-absent responses will be made without fully inspecting the display, increasing misses due to never inspecting the target (selection errors). The shift in decision criterion increases the likelihood of failing to recognize an inspected target (identification errors). Though there is robust evidence that target prevalence rates shift quitting thresholds, the proposed shift in decision criterion has had little support. In Experiment 1 we eye-tracked participants during searches at high, medium, and low prevalence. Eye movements were used to classify misses as selection or identification errors. Identification errors increased as prevalence decreased, supporting the claim that the decision criterion becomes more conservative as prevalence decreases. In addition, as prevalence decreased, dwell time on targets increased while dwell time on distractors decreased. We propose that the effect of prevalence on decision making for individual items is best modeled as a shift in criterion in a drift diffusion model, rather than in signal detection terms, as drift diffusion accounts for this pattern of decision times. In Experiment 2 we replicated these findings while presenting stimuli in a rapid serial visual presentation (RSVP) stream. Experiments 1 and 2 are consistent with the conclusion that prevalence rate influences the item-by-item decision criterion, and with a drift diffusion model of this decision process. PMID:27149294
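
    The eye-movement classification described above reduces to a simple rule (a hypothetical sketch for illustration, not the authors' analysis code): a miss counts as a selection error if the target was never fixated before the target-absent response, and as an identification error if it was fixated but still missed.

```python
# Sketch of classifying miss errors from fixation data:
# never fixating the target  -> "selection" error (quit too early);
# fixating but rejecting it  -> "identification" error (criterion too strict).

def classify_miss(fixated_items, target_id):
    """fixated_items: item IDs fixated before the target-absent response."""
    return "identification" if target_id in fixated_items else "selection"

print(classify_miss(["d1", "d2", "d3"], "t1"))   # selection
print(classify_miss(["d1", "t1", "d2"], "t1"))   # identification
```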

  13. Assessing the benefits of stereoscopic displays to visual search: methodology and initial findings

    NASA Astrophysics Data System (ADS)

    Godwin, Hayward J.; Holliman, Nick S.; Menneer, Tamaryn; Liversedge, Simon P.; Cave, Kyle R.; Donnelly, Nicholas

    2015-03-01

    Visual search is a task that is carried out in a number of important security and health related scenarios (e.g., X-ray baggage screening, radiography). With recent and ongoing developments in the technology available to present images to observers in stereoscopic depth, there has been increasing interest in assessing whether depth information can be used in complex search tasks to improve search performance. Here we outline the methodology that we developed, along with both software and hardware information, in order to assess visual search performance in complex, overlapping stimuli that also contained depth information. In doing so, our goal is to foster further research along these lines in the future. We also provide an overview with initial results of the experiments that we have conducted involving participants searching stimuli that contain overlapping objects presented on different depth planes to one another. Thus far, we have found that depth information does improve the speed (but not accuracy) of search, but only when the stimuli are highly complex and contain a significant degree of overlap. Depth information may therefore aid real-world search tasks that involve the examination of complex, overlapping stimuli.

  14. Earthdata Search: Methods for Improving Data Discovery, Visualization, and Access

    NASA Astrophysics Data System (ADS)

    Quinn, P.; Pilone, D.; Crouch, M.; Siarto, J.; Sun, B.

    2015-12-01

    In a landscape of heterogeneous data from diverse sources and disciplines, producing useful tools poses a significant challenge. NASA's Earthdata Search application tackles this challenge, enabling discovery and inter-comparison of data across the wide array of scientific disciplines that use NASA Earth observation data. During this talk, we will give a brief overview of the application, and then share our approach for understanding and satisfying the needs of users from several disparate scientific communities. Our approach involves:
    - Gathering fine-grained metrics to understand user behavior
    - Using metrics to quantify user success
    - Combining metrics, feedback, and user research to understand user needs
    - Applying professional design toward addressing user needs
    - Using metrics and A/B testing to evaluate the viability of changes
    - Providing enhanced features for services to promote adoption
    - Encouraging good metadata quality and soliciting feedback for metadata issues
    - Open sourcing the application and its components to allow it to serve more users

  15. The Development of Visual Search in Infancy: Attention to Faces versus Salience

    ERIC Educational Resources Information Center

    Kwon, Mee-Kyoung; Setoodehnia, Mielle; Baek, Jongsoo; Luck, Steven J.; Oakes, Lisa M.

    2016-01-01

    Four experiments examined how faces compete with physically salient stimuli for the control of attention in 4-, 6-, and 8-month-old infants (N = 117 total). Three computational models were used to quantify physical salience. We presented infants with visual search arrays containing a face and familiar object(s), such as shoes and flowers. Six- and…

  16. Visual Search Asymmetries within Color-Coded and Intensity-Coded Displays

    ERIC Educational Resources Information Center

    Yamani, Yusuke; McCarley, Jason S.

    2010-01-01

    Color and intensity coding provide perceptual cues to segregate categories of objects within a visual display, allowing operators to search more efficiently for needed information. Even within a perceptually distinct subset of display elements, however, it may often be useful to prioritize items representing urgent or task-critical information.…

  17. Visual Search for Object Orientation Can Be Modulated by Canonical Orientation

    ERIC Educational Resources Information Center

    Ballaz, Cecile; Boutsen, Luc; Peyrin, Carole; Humphreys, Glyn W.; Marendaz, Christian

    2005-01-01

    The authors studied the influence of canonical orientation on visual search for object orientation. Displays consisted of pictures of animals whose axis of elongation was either vertical or tilted in their canonical orientation. Target orientation could be either congruent or incongruent with the object's canonical orientation. In Experiment 1,…

  18. Eye Movement and Visual Search: Are There Elementary Abnormalities in Autism?

    ERIC Educational Resources Information Center

    Brenner, Laurie A.; Turner, Katherine C.; Muller, Ralph-Axel

    2007-01-01

    Although atypical eye gaze is commonly observed in autism, little is known about underlying oculomotor abnormalities. Our review of visual search and oculomotor systems in the healthy brain suggests that relevant networks may be partially impaired in autism, given regional abnormalities known from neuroimaging. However, direct oculomotor evidence…

  19. What Are the Shapes of Response Time Distributions in Visual Search?

    ERIC Educational Resources Information Center

    Palmer, Evan M.; Horowitz, Todd S.; Torralba, Antonio; Wolfe, Jeremy M.

    2011-01-01

    Many visual search experiments measure response time (RT) as their primary dependent variable. Analyses typically focus on mean (or median) RT. However, given enough data, the RT distribution can be a rich source of information. For this paper, we collected about 500 trials per cell per observer for both target-present and target-absent displays…

  20. Low Target Prevalence Is a Stubborn Source of Errors in Visual Search Tasks

    ERIC Educational Resources Information Center

    Wolfe, Jeremy M.; Horowitz, Todd S.; Van Wert, Michael J.; Kenner, Naomi M.; Place, Skyler S.; Kibbi, Nour

    2007-01-01

    In visual search tasks, observers look for targets in displays containing distractors. Likelihood that targets will be missed varies with target prevalence, the frequency with which targets are presented across trials. Miss error rates are much higher at low target prevalence (1%-2%) than at high prevalence (50%). Unfortunately, low prevalence is…

  1. From Foreground to Background: How Task-Neutral Context Influences Contextual Cueing of Visual Search.

    PubMed

    Zang, Xuelian; Geyer, Thomas; Assumpção, Leonardo; Müller, Hermann J; Shi, Zhuanghua

    2016-01-01

    Selective attention determines the effectiveness of implicit contextual learning (e.g., Jiang and Leung, 2005). Visual foreground-background segmentation, on the other hand, is a key process in the guidance of attention (Wolfe, 2003). In the present study, we examined the impact of foreground-background segmentation on contextual cueing of visual search in three experiments. A visual search display, consisting of distractor 'L's and a target 'T', was overlaid on a task-neutral cuboid on the same depth plane (Experiment 1), on stereoscopically separated depth planes (Experiment 2), or spread over the entire display on the same depth plane (Experiment 3). Half of the search displays contained repeated target-distractor arrangements, whereas the other half was always newly generated. The task-neutral cuboid was constant during an initial training session, but was either rotated by 90° or entirely removed in the subsequent test sessions. We found that the gains resulting from repeated presentation of display arrangements during training (i.e., contextual-cueing effects) were diminished when the cuboid was changed or removed in Experiment 1, but remained intact in Experiments 2 and 3 when the cuboid was placed in a different depth plane, or when the items were randomly spread over the whole display but not on the edges of the cuboid. These findings suggest that foreground-background segmentation occurs prior to contextual learning, and only objects/arrangements that are grouped as foreground are learned over the course of repeated visual search. PMID:27375530

  2. Visual Search and Emotion: How Children with Autism Spectrum Disorders Scan Emotional Scenes

    ERIC Educational Resources Information Center

    Maccari, Lisa; Pasini, Augusto; Caroli, Emanuela; Rosa, Caterina; Marotta, Andrea; Martella, Diana; Fuentes, Luis J.; Casagrande, Maria

    2014-01-01

    This study assessed visual search abilities, tested through the flicker task, in children diagnosed with autism spectrum disorders (ASDs). Twenty-two children diagnosed with ASD and 22 matched typically developing (TD) children were told to detect changes in objects of central interest or objects of marginal interest (MI) embedded in either…

  4. Implicit short- and long-term memory direct our gaze in visual search.

    PubMed

    Kruijne, Wouter; Meeter, Martijn

    2016-04-01

    Visual attention is strongly affected by the past: both by recent experience and by long-term regularities in the environment that are encoded in and retrieved from memory. In visual search, intertrial repetition of targets speeds response times (short-term priming). Similarly, targets that are presented more often than others may facilitate search, even long after the frequency bias is no longer present (long-term priming). In this study, we investigated whether short-term and long-term priming depend on dissociable mechanisms. By recording eye movements while participants searched for one of two conjunction targets, we explored at what stages of visual search different forms of priming manifest. We found both long- and short-term priming effects. Long-term priming persisted long after the bias had been removed, and was again found even in participants who were unaware of a color bias. Short- and long-term priming affected the same stage of the task; both biased eye movements towards targets with the primed color, starting with the first eye movement. Neither form of priming affected the response phase of a trial, but response repetition did. The results strongly suggest that both long- and short-term memory can implicitly modulate feedforward visual processing.

  5. How You Move Is What You See: Action Planning Biases Selection in Visual Search

    ERIC Educational Resources Information Center

    Wykowska, Agnieszka; Schubo, Anna; Hommel, Bernhard

    2009-01-01

    Three experiments investigated the impact of planning and preparing a manual grasping or pointing movement on feature detection in a visual search task. The authors hypothesized that action planning may prime perceptual dimensions that provide information for the open parameters of that action. Indeed, preparing for grasping facilitated detection…

  6. A Comparison of the Visual Attention Patterns of People with Aphasia and Adults without Neurological Conditions for Camera-Engaged and Task-Engaged Visual Scenes

    ERIC Educational Resources Information Center

    Thiessen, Amber; Beukelman, David; Hux, Karen; Longenecker, Maria

    2016-01-01

    Purpose: The purpose of the study was to compare the visual attention patterns of adults with aphasia and adults without neurological conditions when viewing visual scenes with 2 types of engagement. Method: Eye-tracking technology was used to measure the visual attention patterns of 10 adults with aphasia and 10 adults without neurological…

  7. I can see what you are saying: Auditory labels reduce visual search times.

    PubMed

    Cho, Kit W

    2016-10-01

    The present study explored the self-directed-speech effect, the finding that relative to silent reading of a label (e.g., DOG), saying it aloud reduces visual search reaction times (RTs) for locating a target picture among distractors. Experiment 1 examined whether this effect is due to a confound in the differences in the number of cues in self-directed speech (two) vs. silent reading (one) and tested whether self-articulation is required for the effect. The results showed that self-articulation is not required and that merely hearing the auditory label reduces visual search RTs relative to silent reading. This finding also rules out the number of cues confound. Experiment 2 examined whether hearing an auditory label activates more prototypical features of the label's referent and whether the auditory-label benefit is moderated by the target's imagery concordance (the degree to which the target picture matches the mental picture that is activated by a written label for the target). When the target imagery concordance was high, RTs following the presentation of a high prototypicality picture or auditory cue were comparable and shorter than RTs following a visual label or low prototypicality picture cue. However, when the target imagery concordance was low, RTs following an auditory cue were shorter than the comparable RTs following the picture cues and visual-label cue. The results suggest that an auditory label activates both prototypical and atypical features of a concept and can facilitate visual search RTs even when compared to picture primes.

  8. Earthdata Search: Combining New Services and Technologies for Earth Science Data Discovery, Visualization, and Access

    NASA Astrophysics Data System (ADS)

    Quinn, P.; Pilone, D.

    2014-12-01

    A host of new services are revolutionizing discovery, visualization, and access of NASA's Earth science data holdings. At the same time, web browsers have become far more capable and open source libraries have grown to take advantage of these capabilities. Earthdata Search is a web application which combines modern browser features with the latest Earthdata services from NASA to produce a cutting-edge search and access client with features far beyond what was possible only a couple of years ago. Earthdata Search provides data discovery through the Common Metadata Repository (CMR), which provides a high-speed REST API for searching across hundreds of millions of data granules using temporal, spatial, and other constraints. It produces data visualizations by combining CMR data with Global Imagery Browse Services (GIBS) image tiles. Earthdata Search renders its visualizations using custom plugins built on Leaflet.js, a lightweight mobile-friendly open source web mapping library. The client further features an SVG-based interactive timeline view of search results. For data access, Earthdata Search provides easy temporal and spatial subsetting as well as format conversion by making use of OPeNDAP. While the client hopes to drive adoption of these services and standards, it provides fallback behavior for working with data that has not yet adopted them. This allows the client to remain on the cutting-edge of service offerings while still boasting a catalog containing thousands of data collections. In this session, we will walk through Earthdata Search and explain how it incorporates these new technologies and service offerings.
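
    As a rough illustration of the kind of query Earthdata Search issues against the CMR: the endpoint and parameter names below follow the public CMR search API, but the collection ID and constraint values are invented placeholders.

```python
# Sketch of a granule search request to NASA's Common Metadata Repository (CMR).
# Endpoint and parameter names follow the public CMR search API; the
# collection_concept_id and constraint values are hypothetical placeholders.
from urllib.parse import urlencode

CMR_GRANULE_SEARCH = "https://cmr.earthdata.nasa.gov/search/granules.json"

params = {
    "collection_concept_id": "C0000000000-EXAMPLE",  # placeholder collection
    "temporal": "2015-01-01T00:00:00Z,2015-12-31T23:59:59Z",
    "bounding_box": "-10,-5,10,5",                   # west,south,east,north
    "page_size": 20,
}

url = f"{CMR_GRANULE_SEARCH}?{urlencode(params)}"
print(url)
```

    A client like Earthdata Search combines responses from requests of this shape with GIBS image tiles to drive its map and timeline views.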

  9. Temporal Binding and Segmentation in Visual Search: A Computational Neuroscience Analysis.

    PubMed

    Mavritsaki, Eirini; Humphreys, Glyn

    2016-10-01

    Human visual search operates not only over space but also over time, as old items remain in the visual field and new items appear. Preview search (where one set of distractors appears before the onset of a second set) has been used as a paradigm to study search over time and space [Watson, D. G., & Humphreys, G. W. Visual marking: Prioritizing selection for new objects by top-down attentional inhibition of old objects. Psychological Review, 104, 90-122, 1997], with participants showing efficient search when old distractors can be ignored and new targets prioritized. The benefits of preview search are lost, however, if a temporal gap is introduced between a first presentation of the old items and the re-presentation of all the items in the search display [Kunar, M. A., Humphreys, G. W., & Smith, K. J. History matters: The preview benefit in search is not onset capture. Psychological Science, 14, 181-185, 2003a], consistent with the old items being bound by temporal onset to the new stimuli. This effect of temporal binding can be eliminated if the old items reappear briefly before the new items, indicating also a role for the memory of the old items. Here we simulate these effects of temporal coding in search using the spiking search over time and space model [Mavritsaki, E., Heinke, D., Allen, H., Deco, G., & Humphreys, G. W. Bridging the gap between physiology and behavior: Evidence from the sSoTS model of human visual attention. Psychological Review, 118, 3-41, 2011]. We show that a form of temporal binding by new onsets has to be introduced to the model to simulate the effects of a temporal gap, but that effects of the memory of the old item can stem from continued neural suppression across a temporal gap. We also show that the model can capture the effects of brain lesions on preview search under the different temporal conditions. The study provides a proof-of-principle analysis that neural suppression and temporal binding can be sufficient to account for human

  10. Differential roles of the dorsal prefrontal and posterior parietal cortices in visual search: a TMS study

    PubMed Central

    Yan, Yulong; Wei, Rizhen; Zhang, Qian; Jin, Zhenlan; Li, Ling

    2016-01-01

    Although previous studies have shown that fronto-parietal attentional networks play a crucial role in bottom-up and top-down processes, the relative contribution of the frontal and parietal cortices to these processes remains elusive. Here we used transcranial magnetic stimulation (TMS) to interfere with the activity of the right dorsolateral prefrontal cortex (DLPFC) or the right posterior parietal cortex (PPC) immediately prior to the onset of the visual search display. Participants searched for a target defined by color and orientation under a "pop-out" or a "search" condition. Repetitive TMS was applied to either the right DLPFC or the right PPC on different days. Performance was evaluated at baseline (no TMS), during TMS, and after TMS (post-session). RTs were prolonged when TMS was applied over the DLPFC in the search condition, but not in the pop-out condition, relative to the baseline session. In comparison, TMS over the PPC prolonged RTs in the pop-out condition, and in the search condition when the target appeared in the left visual field. Taken together, these findings provide evidence for a differential role of the DLPFC and PPC in visual search, indicating that the DLPFC is specifically involved in the "search" condition, while the PPC is mainly involved in detecting "pop-out" targets. PMID:27452715

  11. Electrophysiological evidence that top-down knowledge controls working memory processing for subsequent visual search.

    PubMed

    Kawashima, Tomoya; Matsumoto, Eriko

    2016-03-23

    Items in working memory guide visual attention toward memory-matching objects. Recent studies have shown that when searching for an object this attentional guidance can be modulated by knowing the probability that the target will match an item in working memory. Here, we recorded the P3 and contralateral delay activity to investigate how top-down knowledge controls the processing of working memory items. Participants performed a memory task (recognition only) and a memory-or-search task (recognition or visual search) in which they were asked to maintain two colored oriented bars in working memory. For visual search, we manipulated the probability that the target had the same color as the memorized items (0, 50, or 100%). Participants knew the probabilities before the task. Target detection in the 100% match condition was faster than in the 50% match condition, indicating that participants used their knowledge of the probabilities. We found that the P3 amplitude in the 100% condition was larger than in the other conditions and that contralateral delay activity amplitude did not vary across conditions. These results suggest that more attention was allocated to the memory items when observers knew in advance that their color would likely match a target. This led to better search performance despite qualitatively equal working memory representations. PMID:26872100

  12. Cortical dynamics of contextually cued attentive visual learning and search: spatial and object evidence accumulation.

    PubMed

    Huang, Tsung-Ren; Grossberg, Stephen

    2010-10-01

    How do humans use target-predictive contextual information to facilitate visual search? How are consistently paired scenic objects and positions learned and used to more efficiently guide search in familiar scenes? For example, humans can learn that a certain combination of objects may define a context for a kitchen and trigger a more efficient search for a typical object, such as a sink, in that context. The ARTSCENE Search model is developed to illustrate the neural mechanisms of such memory-based context learning and guidance and to explain challenging behavioral data on positive-negative, spatial-object, and local-distant cueing effects during visual search, as well as related neuroanatomical, neurophysiological, and neuroimaging data. The model proposes how global scene layout at a first glance rapidly forms a hypothesis about the target location. This hypothesis is then incrementally refined as a scene is scanned with saccadic eye movements. The model simulates the interactive dynamics of object and spatial contextual cueing and attention in the cortical What and Where streams starting from early visual areas through medial temporal lobe to prefrontal cortex. After learning, model dorsolateral prefrontal cortex (area 46) primes possible target locations in posterior parietal cortex based on goal-modulated percepts of spatial scene gist that are represented in parahippocampal cortex. Model ventral prefrontal cortex (area 47/12) primes possible target identities in inferior temporal cortex based on the history of viewed objects represented in perirhinal cortex.

  13. The Importance of the Eye Area in Face Identification Abilities and Visual Search Strategies in Persons with Asperger Syndrome

    ERIC Educational Resources Information Center

    Falkmer, Marita; Larsson, Matilda; Bjallmark, Anna; Falkmer, Torbjorn

    2010-01-01

    Partly claimed to explain social difficulties observed in people with Asperger syndrome, face identification and visual search strategies become important. Previous research findings are, however, disparate. In order to explore face identification abilities and visual search strategies, with special focus on the importance of the eye area, 24…

  14. EPS Mid-Career Award 2014. The control of attention in visual search: Cognitive and neural mechanisms.

    PubMed

    Eimer, Martin

    2015-01-01

    In visual search, observers try to find known target objects among distractors in visual scenes where the location of the targets is uncertain. This review article discusses the attentional processes that are active during search and their neural basis. Four successive phases of visual search are described. During the initial preparatory phase, a representation of the current search goal is activated. Once visual input has arrived, information about the presence of target-matching features is accumulated in parallel across the visual field (guidance). This information is then used to allocate spatial attention to particular objects (selection), before representations of selected objects are activated in visual working memory (recognition). These four phases of attentional control in visual search are characterized both at the cognitive level and at the neural implementation level. It will become clear that search is a continuous process that unfolds in real time. Selective attention in visual search is described as the gradual emergence of spatially specific and temporally sustained biases for representations of task-relevant visual objects in cortical maps.

  15. The effects of haloperidol on visual search, eye movements and psychomotor performance.

    PubMed

    Lynch, G; King, D J; Green, J F; Byth, W; Wilson-Davis, K

    1997-10-01

    The effects of single doses of haloperidol (2, 4 and 6 mg) were compared with lorazepam 2.5 mg and placebo in 15 healthy subjects. Visual search strategy was measured, along with a range of psychomotor and eye movement tests. Patients with Parkinson's disease have been shown to exhibit a shift from parallel to serial processing in visual search, but we demonstrated that this does not occur following administration of either haloperidol or lorazepam. Haloperidol was detected by visual analogue rating scales and peak saccadic velocity, the latter being the more sensitive measure. Haloperidol had no statistically significant effect on smooth pursuit position error, velocity error or saccadic intrusions. Digit symbol substitution performance was clearly diminished by haloperidol, but there was no effect on the continuous attention test. Lorazepam decreased performance in all tests apart from saccadic latency.

  16. Happy with a difference, unhappy with an identity: observers' mood determines processing depth in visual search.

    PubMed

    Grubert, Anna; Schmid, Petra; Krummenacher, Joseph

    2013-01-01

    Visual search for feature targets was employed to investigate whether the mechanisms underlying visual selective attention are modulated by observers' mood. The effects of induced mood on overall mean reaction times and on changes and repetitions of target-defining features and dimensions across consecutive trials were measured. The results showed that reaction times were significantly slower in the negative than in the positive and neutral mood groups. Furthermore, the results demonstrated that the processing stage that is activated to select visual information in a feature search task is modulated by the observer's mood. In participants with positive or neutral moods, dimension-specific, but no feature-specific, intertrial transition effects were found, suggesting that these observers based their responses on a salience signal coding the most conspicuous display location. Conversely, intertrial effects in observers in a negative mood were feature-specific in nature, suggesting that these participants accessed the feature identity level before responding. PMID:23079893

  17. Active visual search in non-stationary scenes: coping with temporal variability and uncertainty

    NASA Astrophysics Data System (ADS)

    Ušćumlić, Marija; Blankertz, Benjamin

    2016-02-01

    Objective. State-of-the-art experiments for studying neural processes underlying visual cognition often constrain sensory inputs (e.g., static images) and our behavior (e.g., fixed eye-gaze, long eye fixations), isolating or simplifying the interaction of neural processes. Motivated by the non-stationarity of our natural visual environment, we investigated the electroencephalography (EEG) correlates of visual recognition while participants overtly performed visual search in non-stationary scenes. We hypothesized that visual effects (such as those typically used in human-computer interfaces) may increase temporal uncertainty (with reference to fixation onset) of cognition-related EEG activity in an active search task and therefore require novel techniques for single-trial detection. Approach. We addressed fixation-related EEG activity in an active search task with respect to stimulus-appearance styles and dynamics. Alongside popping-up stimuli, our experimental study embraces two composite appearance styles based on fading-in, enlarging, and motion effects. Additionally, we explored whether the knowledge obtained in the pop-up experimental setting can be exploited to boost the EEG-based intention-decoding performance when facing transitional changes of visual content. Main results. The results confirmed our initial hypothesis that the dynamic of visual content can increase temporal uncertainty of the cognition-related EEG activity in active search with respect to fixation onset. This temporal uncertainty challenges the pivotal aim to keep the decoding performance constant irrespective of visual effects. Importantly, the proposed approach for EEG decoding based on knowledge transfer between the different experimental settings gave a promising performance. Significance. Our study demonstrates that the non-stationarity of visual scenes is an important factor in the evolution of cognitive processes, as well as in the dynamic of ocular behavior (i.e., dwell time and

  18. Studying visual search using systems factorial methodology with target-distractor similarity as the factor.

    PubMed

    Fifić, Mario; Townsend, James T; Eidels, Ami

    2008-05-01

    Systems factorial technology (SFT) is a theory-driven set of methodologies oriented toward identification of basic mechanisms, such as parallel versus serial processing, of perception and cognition. Studies employing SFT in visual search with small display sizes have repeatedly shown decisive evidence for parallel processing. The first strong evidence for serial processing was recently found in short-term memory search, using target-distractor (T-D) similarity as a key experimental variable (Townsend & Fifić, 2004). One of the major goals of the present study was to employ T-D similarity in visual search to learn whether this mode of manipulating processing speed would affect the parallel versus serial issue in that domain. The result was a surprising and regular departure from ordinary parallel or serial processing. The most plausible account at present relies on the notion of positively interacting parallel channels.

  19. Hot spot detection based on feature space representation of visual search.

    PubMed

    Hu, Xiao-Peng; Dempere-Marco, Laura; Yang, Guang-Zhong

    2003-09-01

    This paper presents a new framework for capturing intrinsic visual search behavior of different observers in image understanding by analysing saccadic eye movements in feature space. The method is based on information theory for identifying the salient image features on which visual search is performed. We demonstrate how to obtain feature space fixation density functions that are normalized to the image content along the scan paths. This allows a reliable identification of salient image features that can be mapped back to spatial space for highlighting regions of interest and attention selection. A two-color conjunction search experiment has been implemented to illustrate the theoretical framework of the proposed method, including feature selection, hot spot detection, and back-projection. The practical value of the method is demonstrated with a computed tomography image of centrilobular emphysema, and we discuss how the proposed framework can be used as a basis for decision support in medical image understanding.
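
The feature-space normalization step lends itself to a compact illustration. The sketch below is a simplified, one-dimensional stand-in for the paper's multi-dimensional feature space (the function and parameter names are hypothetical): it divides a histogram of fixated feature values by a histogram of the feature values available in the image, so a feature range is flagged as salient only when it is fixated more often than its mere abundance in the image would predict.

```python
def feature_saliency(fixated, available, n_bins=8, lo=0.0, hi=1.0):
    """Ratio of the fixation histogram to the image-content histogram
    over one feature dimension: values > 1 mark feature ranges fixated
    more often than chance, i.e. candidate 'hot' features."""
    def hist(vals):
        counts = [0] * n_bins
        for v in vals:
            idx = min(int((v - lo) / (hi - lo) * n_bins), n_bins - 1)
            counts[idx] += 1
        total = sum(counts)
        return [c / total for c in counts] if total else counts

    fix_h, avail_h = hist(fixated), hist(available)
    # normalize fixation density by what the image actually offered
    return [f / a if a > 0 else 0.0 for f, a in zip(fix_h, avail_h)]
```

In the paper the same idea is applied jointly over several feature dimensions, and the salient bins are then back-projected to image space to highlight regions of interest.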

  20. Neural correlates of context-dependent feature conjunction learning in visual search tasks.

    PubMed

    Reavis, Eric A; Frank, Sebastian M; Greenlee, Mark W; Tse, Peter U

    2016-06-01

    Many perceptual learning experiments show that repeated exposure to a basic visual feature such as a specific orientation or spatial frequency can modify perception of that feature, and that those perceptual changes are associated with changes in neural tuning early in visual processing. Such perceptual learning effects thus exert a bottom-up influence on subsequent stimulus processing, independent of task-demands or endogenous influences (e.g., volitional attention). However, it is unclear whether such bottom-up changes in perception can occur as more complex stimuli such as conjunctions of visual features are learned. It is not known whether changes in the efficiency with which people learn to process feature conjunctions in a task (e.g., visual search) reflect true bottom-up perceptual learning versus top-down, task-related learning (e.g., learning better control of endogenous attention). Here we show that feature conjunction learning in visual search leads to bottom-up changes in stimulus processing. First, using fMRI, we demonstrate that conjunction learning in visual search has a distinct neural signature: an increase in target-evoked activity relative to distractor-evoked activity (i.e., a relative increase in target salience). Second, we demonstrate that after learning, this neural signature is still evident even when participants passively view learned stimuli while performing an unrelated, attention-demanding task. This suggests that conjunction learning results in altered bottom-up perceptual processing of the learned conjunction stimuli (i.e., a perceptual change independent of the task). We further show that the acquired change in target-evoked activity is contextually dependent on the presence of distractors, suggesting that search array Gestalts are learned. Hum Brain Mapp 37:2319-2330, 2016. © 2016 Wiley Periodicals, Inc.

  1. Visual Intelligence: Using the Deep Patterns of Visual Language to Build Cognitive Skills

    ERIC Educational Resources Information Center

    Sibbet, David

    2008-01-01

    Thirty years of work as a graphic facilitator listening visually to people in every kind of organization has convinced the author that visual intelligence is a key to navigating an information economy rich with multimedia. He also believes that theory and disciplines developed by practitioners in this new field hold special promise for educators…

  2. Is There a Weekly Pattern for Health Searches on Wikipedia and Is the Pattern Unique to Health Topics?

    PubMed Central

    Lau, Annie YS; Wynn, Rolf

    2015-01-01

    Background Online health information–seeking behaviors have been reported to be more common at the beginning of the workweek. This behavior pattern has been interpreted as a kind of “healthy new start” or “fresh start” due to regrets or attempts to compensate for unhealthy behavior or poor choices made during the weekend. However, the observations regarding the most common health information–seeking day were based only on the analyses of users’ behaviors with websites on health or on online health-related searches. We wanted to confirm if this pattern could be found in searches of Wikipedia on health-related topics and also if this search pattern was unique to health-related topics or if it could represent a more general pattern of online information searching—which could be of relevance even beyond the health sector. Objective The aim was to examine the degree to which the search pattern described previously was specific to health-related information seeking or whether similar patterns could be found in other types of information-seeking behavior. Methods We extracted the number of searches performed on Wikipedia in the Norwegian language for 911 days for the most common sexually transmitted diseases (chlamydia, gonorrhea, herpes, human immunodeficiency virus [HIV], and acquired immune deficiency syndrome [AIDS]), other health-related topics (influenza, diabetes, and menopause), and 2 nonhealth-related topics (footballer Lionel Messi and pop singer Justin Bieber). The search dates were classified according to the day of the week and ANOVA tests were used to compare the average number of hits per day of the week. Results The ANOVA tests showed that the sexually transmitted disease queries had their highest peaks on Tuesdays (P<.001) and the fewest searches on Saturdays. The other health topics also showed a weekly pattern, with the highest peaks early in the week and lower numbers on Saturdays (P<.001). Footballer Lionel Messi had the highest mean
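
The weekday comparison rests on a one-way ANOVA over groups of daily hit counts, one group per day of the week. A minimal pure-Python version of the F statistic such a test computes (illustrative only; the study's actual analysis covered 911 days of Norwegian-language Wikipedia queries) might look like:

```python
def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA: between-group mean square over
    within-group mean square, e.g. with one group of daily search
    counts per day of the week. Returns (F, df_between, df_within)."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    # variability of group means around the grand mean
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    # variability of observations around their own group mean
    ss_within = 0.0
    for g in groups:
        m = sum(g) / len(g)
        ss_within += sum((x - m) ** 2 for x in g)
    df_b, df_w = k - 1, n - k
    return (ss_between / df_b) / (ss_within / df_w), df_b, df_w
```

A large F relative to the F distribution with (df_between, df_within) degrees of freedom corresponds to the small P values reported above.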

  3. The deployment of visual spatial attention during visual search predicts response time: electrophysiological evidence from the N2pc.

    PubMed

    Drisdelle, Brandi L; West, Greg L; Jolicoeur, Pierre

    2016-11-01

    We tracked the deployment of visual spatial attention, as indexed by an electrophysiological event-related potential named the N2-posterior-contralateral (N2pc). We expected that a stronger and/or earlier deployment of attention would predict faster responses in a visual search task. We tested this hypothesis by sorting the electrophysiological segments into two categories (slow vs. fast) by trial-by-trial response times (RTs), for each participant, on the basis of the median RT within each condition of the experiment. We also classified participants on the basis of overall mean RTs into those faster than the group median and those slower than the group median. The N2pc was larger and earlier for fast responders compared with slow responders. Furthermore, within each of these groups, faster responses were associated with a larger and earlier N2pc. These results provide further evidence that the N2pc is a valid index of the deployment of visual attention, and suggest that a more effective deployment of visual spatial attention (larger and/or earlier N2pc) predicts a faster response, both within and between subjects. PMID:27648715
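
The trial-sorting step described above amounts to a median split computed within each participant-by-condition cell. A minimal sketch (the field names are hypothetical, not taken from the study's materials):

```python
from statistics import median

def label_by_median_rt(trials):
    """Tag each trial 'fast' or 'slow' relative to the median response
    time of its own (participant, condition) cell, mirroring the
    segment-sorting procedure in the abstract."""
    cells = {}
    for t in trials:
        cells.setdefault((t["participant"], t["condition"]), []).append(t["rt"])
    cell_median = {key: median(rts) for key, rts in cells.items()}
    return [
        dict(t, speed="fast"
             if t["rt"] <= cell_median[(t["participant"], t["condition"])]
             else "slow")
        for t in trials
    ]
```

Computing the median within each cell, rather than globally, keeps slow conditions from being labeled wholesale as slow trials.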

  4. Response Patterns of Children with Visual Impairments on Measures of Internalized Self-Responsibility.

    ERIC Educational Resources Information Center

    Spencer, Rebecca A.; Head, Daniel N.; Pysh, Margaret Van Dusen; Chalfant, James C.

    1997-01-01

    This study investigated the mastery-oriented and learned-helplessness response patterns of children (n=13) with visual impairments in grades 3 to 6 who were divided into two groups, low vision children who were visual learners and nonvisual learners. Subjects were given the Intellectual Achievement Responsibility Questionnaire. No significant…

  5. Use Patterns of Visual Cues in Computer-Mediated Communication

    ERIC Educational Resources Information Center

    Bolliger, Doris U.

    2009-01-01

    Communication in the virtual environment can be challenging for participants because it lacks physical presence and nonverbal elements. Participants may have difficulties expressing their intentions and emotions in a primarily text-based course. Therefore, the use of visual communication elements such as pictographic and typographic marks can be…

  6. A Globally Convergent Augmented Lagrangian Pattern Search Algorithm for Optimization with General Constraints and Simple Bounds

    NASA Technical Reports Server (NTRS)

    Lewis, Robert Michael; Torczon, Virginia

    1998-01-01

    We give a pattern search adaptation of an augmented Lagrangian method due to Conn, Gould, and Toint. The algorithm proceeds by successive bound constrained minimization of an augmented Lagrangian. In the pattern search adaptation we solve this subproblem approximately using a bound constrained pattern search method. The stopping criterion proposed by Conn, Gould, and Toint for the solution of this subproblem requires explicit knowledge of derivatives. Such information is presumed absent in pattern search methods; however, we show how we can replace this with a stopping criterion based on the pattern size in a way that preserves the convergence properties of the original algorithm. In this way we proceed by successive, inexact, bound constrained minimization without knowing exactly how inexact the minimization is. So far as we know, this is the first provably convergent direct search method for general nonlinear programming.
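
The inner subproblem, bound-constrained minimization driven only by function values and a shrinking pattern size, can be sketched as follows. This is a generic compass-direction pattern search, not the report's exact algorithm; the full method wraps such a solver in successive augmented-Lagrangian subproblems and uses the pattern size as the derivative-free stopping criterion.

```python
def pattern_search(f, x0, lower, upper, step=0.5, tol=1e-6, max_iter=1000):
    """Bound-constrained compass pattern search: poll the 2n coordinate
    directions, accept the first improving point (projected onto the
    bounds), and halve the pattern size when no poll point improves.
    The derivative-free stopping rule is simply `step < tol`."""
    x = [min(max(v, lo), hi) for v, lo, hi in zip(x0, lower, upper)]
    fx = f(x)
    for _ in range(max_iter):
        if step < tol:
            break
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                y = list(x)
                y[i] = min(max(y[i] + d, lower[i]), upper[i])  # project onto the box
                fy = f(y)
                if fy < fx:
                    x, fx, improved = y, fy, True
                    break
            if improved:
                break
        if not improved:
            step *= 0.5  # contract the pattern
    return x, fx
```

Because progress is certified by the pattern size alone, no derivative information is ever needed, which is the property the stopping-criterion replacement in the abstract relies on.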

  8. The downside of choice: Having a choice benefits enjoyment, but at a cost to efficiency and time in visual search.

    PubMed

    Kunar, Melina A; Ariyabandu, Surani; Jami, Zaffran

    2016-04-01

    The efficiency of how people search for an item in visual search has, traditionally, been thought to depend on bottom-up or top-down guidance cues. However, recent research has shown that the rate at which people visually search through a display is also affected by cognitive strategies. In this study, we investigated the role of choice in visual search, by asking whether giving people a choice alters both preference for a cognitively neutral task and search behavior. Two visual search conditions were examined: one in which participants were given a choice of visual search task (the choice condition), and one in which participants did not have a choice (the no-choice condition). The results showed that the participants in the choice condition rated the task as both more enjoyable and likeable than did the participants in the no-choice condition. However, despite their preferences, actual search performance was slower and less efficient in the choice condition than in the no-choice condition (Exp. 1). Experiment 2 showed that the difference in search performance between the choice and no-choice conditions disappeared when central executive processes became occupied with a task-switching task. These data concur with a choice-impaired hypothesis of search, in which having a choice leads to more motivated, active search involving executive processes. PMID:26892010

  9. Exploration on Building of Visualization Platform to Innovate Business Operation Pattern of Supply Chain Finance

    NASA Astrophysics Data System (ADS)

    He, Xiangjun; Tang, Lingyun

    Supply chain finance, as a new financing pattern, has attracted broad attention from scholars at home and abroad since its emergence. This paper describes the authors' understanding of supply chain finance, classifies its business patterns in China from different perspectives, analyzes the existing problems and deficiencies of those patterns, and finally proposes building a visualization platform to innovate the business operation patterns and risk control modes of domestic supply chain finance.

  10. Epistemic Beliefs, Online Search Strategies, and Behavioral Patterns While Exploring Socioscientific Issues

    NASA Astrophysics Data System (ADS)

    Hsu, Chung-Yuan; Tsai, Meng-Jung; Hou, Huei-Tse; Tsai, Chin-Chung

    2014-06-01

    Online information searching tasks are usually implemented in a technology-enhanced science curriculum or merged in an inquiry-based science curriculum. The purpose of this study was to examine the role students' different levels of scientific epistemic beliefs (SEBs) play in their online information searching strategies and behaviors. Based on the measurement of an SEB survey, 42 undergraduate and graduate students in Taiwan were recruited from a pool of 240 students and were divided into sophisticated and naïve SEB groups. The students' self-perceived online searching strategies were evaluated by the Online Information Searching Strategies Inventory, and their search behaviors were recorded by screen-capture videos. A sequential analysis was further used to analyze the students' searching behavioral patterns. The results showed that those students with more sophisticated SEBs tended to employ more advanced online searching strategies and to demonstrate a more metacognitive searching pattern.

  11. Tactile search for change has less memory than visual search for change.

    PubMed

    Yoshida, Takako; Yamaguchi, Ayumi; Tsutsui, Hideomi; Wake, Tenji

    2015-05-01

    Haptic perception of a 2D image is thought to make heavy demands on working memory. During active exploration, humans need to store the latest local sensory information and integrate it with kinesthetic information from hand and finger locations in order to generate a coherent perception. This tactile integration has not been studied as extensively as visual shape integration. In the current study, we compared working-memory capacity for tactile exploration to that of visual exploration as measured in change-detection tasks. We found smaller memory capacity during tactile exploration (approximately 1 item) compared with visual exploration (2-10 items). These differences generalized to position memory and could not be attributed to insufficient stimulus-exposure durations, acuity differences between modalities, or uncertainty over the position of items. This low capacity for tactile memory suggests that the haptic system is almost amnesic when outside the fingertips and that there is little or no cross-position integration.

  12. Masked target transform volume clutter metric for human observer visual search modeling

    NASA Astrophysics Data System (ADS)

    Moore, Richard Kirk

    The Night Vision and Electronic Sensors Directorate (NVESD) develops an imaging system performance model to aid in the design and comparison of imaging systems for military use. It is intended to approximate visual task performance for a typical human observer with an imaging system of specified optical, electrical, physical, and environmental parameters. When modeling search performance, the model currently uses only target size and target-to-background contrast to describe a scene. The presence or absence of other non-target objects and textures in the scene also affect search performance, but NVESD's targeting task performance metric based time limited search model (TTP/TLS) does not currently account for them explicitly. Non-target objects in a scene that impact search performance are referred to as clutter. A universally accepted mathematical definition of clutter does not yet exist. Researchers have proposed a number of clutter metrics based on very different methods, but none account for display geometry or the varying spatial frequency sensitivity of the human visual system. After a review of the NVESD search model, properties of the human visual system, and a literature review of clutter metrics, the new masked target transform volume clutter metric will be presented. Next the results of an experiment designed to show performance variation due to clutter alone will be presented. Then, the results of three separate perception experiments using real or realistic search imagery will be used to show that the new clutter metric better models human observer search performance than the current NVESD model or any of the reviewed clutter metrics.

  13. Visualizing a High Recall Search Strategy Output for Undergraduates in an Exploration Stage of Researching a Term Paper.

    ERIC Educational Resources Information Center

    Cole, Charles; Mandelblatt, Bertie; Stevenson, John

    2002-01-01

    Discusses high recall search strategies for undergraduates and how to overcome information overload that results. Highlights include word-based versus visual-based schemes; five summarization and visualization schemes for presenting information retrieval citation output; and results of a study that recommend visualization schemes geared toward…

  14. Basic visual function and cortical thickness patterns in posterior cortical atrophy.

    PubMed

    Lehmann, Manja; Barnes, Josephine; Ridgway, Gerard R; Wattam-Bell, John; Warrington, Elizabeth K; Fox, Nick C; Crutch, Sebastian J

    2011-09-01

    Posterior cortical atrophy (PCA) is characterized by a progressive decline in higher-visual object and space processing, but the extent to which these deficits are underpinned by basic visual impairments is unknown. This study aimed to assess basic and higher-order visual deficits in 21 PCA patients. Basic visual skills including form detection and discrimination, color discrimination, motion coherence, and point localization were measured, and associations and dissociations between specific basic visual functions and measures of higher-order object and space perception were identified. All participants showed impairment in at least one aspect of basic visual processing. However, a number of dissociations between basic visual skills indicated a heterogeneous pattern of visual impairment among the PCA patients. Furthermore, basic visual impairments were associated with particular higher-order object and space perception deficits, but not with nonvisual parietal tasks, suggesting the specific involvement of visual networks in PCA. Cortical thickness analysis revealed trends toward lower cortical thickness in occipitotemporal (ventral) and occipitoparietal (dorsal) regions in patients with visuoperceptual and visuospatial deficits, respectively. However, there was also a lot of overlap in their patterns of cortical thinning. These findings suggest that different presentations of PCA represent points in a continuum of phenotypical variation.

  15. The boundary conditions of priming of visual search: from passive viewing through task-relevant working memory load.

    PubMed

    Kristjánsson, Arni; Saevarsson, Styrmir; Driver, Jon

    2013-06-01

    Priming of visual search has a dominating effect upon attentional shifts and is thought to play a decisive role in visual stability. Despite this importance, the nature of the memory underlying priming remains controversial. To understand more fully the necessary conditions for priming, we contrasted passive versus active viewing of visual search arrays. There was no priming from passive viewing of search arrays, while it was strong for active search of the same displays. Displays requiring no search resulted in no priming, again showing that search is needed for priming to occur. Finally, we introduced working memory load during visual search in an effort to disrupt priming. The memorized items had either the same colors as or different colors from the visual search items. Retaining items in working memory inhibited priming of the working memory task-relevant colors, while little interference was observed for unrelated colors. The picture that emerges of priming is that it requires active attentional processing of the search items in addition to the operation of visual working memory, where the task relevance of the working memory load plays a key role.

  16. Incidental Learning Speeds Visual Search by Lowering Response Thresholds, Not by Improving Efficiency: Evidence from Eye Movements

    ERIC Educational Resources Information Center

    Hout, Michael C.; Goldinger, Stephen D.

    2012-01-01

    When observers search for a target object, they incidentally learn the identities and locations of "background" objects in the same display. This learning can facilitate search performance, eliciting faster reaction times for repeated displays. Despite these findings, visual search has been successfully modeled using architectures that maintain no…

  17. Visual search efficiency is greater for human faces compared to animal faces.

    PubMed

    Simpson, Elizabeth A; Husband, Haley L; Yee, Krysten; Fullerton, Alison; Jakobsen, Krisztina V

    2014-01-01

    The Animate Monitoring Hypothesis proposes that humans and animals were the most important categories of visual stimuli for ancestral humans to monitor, as they presented important challenges and opportunities for survival and reproduction; however, it remains unknown whether animal faces are located as efficiently as human faces. We tested this hypothesis by examining whether human, primate, and mammal faces elicit similar searches, or whether human faces are privileged. In the first three experiments, participants located a target (human, primate, or mammal face) among distractors (non-face objects). We found fixations on human faces were faster and more accurate than fixations on primate faces, even when controlling for search category specificity. A final experiment revealed that, even when task-irrelevant, human faces slowed searches for non-faces, suggesting some bottom-up processing may be responsible for the human face search efficiency advantage.

  18. Visual Search Efficiency is Greater for Human Faces Compared to Animal Faces

    PubMed Central

    Simpson, Elizabeth A.; Mertins, Haley L.; Yee, Krysten; Fullerton, Alison; Jakobsen, Krisztina V.

    2015-01-01

    The Animate Monitoring Hypothesis proposes that humans and animals were the most important categories of visual stimuli for ancestral humans to monitor, as they presented important challenges and opportunities for survival and reproduction; however, it remains unknown whether animal faces are located as efficiently as human faces. We tested this hypothesis by examining whether human, primate, and mammal faces elicit similarly efficient searches, or whether human faces are privileged. In the first three experiments, participants located a target (human, primate, or mammal face) among distractors (non-face objects). We found fixations on human faces were faster and more accurate than primate faces, even when controlling for search category specificity. A final experiment revealed that, even when task-irrelevant, human faces slowed searches for non-faces, suggesting some bottom-up processing may be responsible for the human face search efficiency advantage. PMID:24962122

  20. Visual Ability and Searching Behavior of Adult Laricobius nigrinus, a Hemlock Woolly Adelgid Predator

    PubMed Central

    Mausel, D.L.; Salom, S.M.; Kok, L.T.

    2011-01-01

    Very little is known about the searching behavior and sensory cues that Laricobius spp. (Coleoptera: Derodontidae) predators use to locate suitable habitats and prey, which limits our ability to collect and monitor them for classical biological control of adelgids (Hemiptera: Adelgidae). The aim of this study was to examine the visual ability and the searching behavior of newly emerged L. nigrinus Fender, a host-specific predator of the hemlock woolly adelgid, Adelges tsugae Annand (Hemiptera: Phylloxeroidea: Adelgidae). In a laboratory bioassay, individual adults attempting to locate an uninfested eastern hemlock seedling under either light or dark conditions were observed in an arena. In another bioassay, individual adults searching for prey on hemlock seedlings (infested or uninfested) were continuously video-recorded. Beetles located and began climbing the seedling stem in light significantly more than in dark, indicating that vision is an important sensory modality. Our primary finding was that searching behavior of L. nigrinus, as in most species, was related to food abundance. Beetles did not fly in the presence of high A. tsugae densities and flew when A. tsugae was absent, which agrees with observed aggregations of beetles on heavily infested trees in the field. At close range of prey, slow crawling and frequent turning suggest the use of non-visual cues such as olfaction and contact chemoreception. Based on the beetles' visual ability to locate tree stems and their climbing behavior, a bole trap may be an effective collection and monitoring tool. PMID:22220637

  1. Visual cluster analysis and pattern recognition template and methods

    SciTech Connect

    Osbourn, G.C.; Martinez, R.F.

    1993-12-31

    This invention comprises a method of clustering using a novel template to define a region of influence. Using neighboring approximation methods, computation times can be significantly reduced. The template and method are applicable to, and improve, pattern recognition techniques.

  2. Visual cluster analysis and pattern recognition template and methods

    DOEpatents

    Osbourn, Gordon Cecil; Martinez, Rubel Francisco

    1999-01-01

    A method of clustering using a novel template to define a region of influence. Using neighboring approximation methods, computation times can be significantly reduced. The template and method are applicable and improve pattern recognition techniques.

  3. Visual cluster analysis and pattern recognition template and methods

    DOEpatents

    Osbourn, G.C.; Martinez, R.F.

    1999-05-04

    A method of clustering using a novel template to define a region of influence is disclosed. Using neighboring approximation methods, computation times can be significantly reduced. The template and method are applicable and improve pattern recognition techniques. 30 figs.

  4. Hypothesis Support Mechanism for Mid-Level Visual Pattern Recognition

    NASA Technical Reports Server (NTRS)

    Amador, Jose J (Inventor)

    2007-01-01

    A method of mid-level pattern recognition provides for a pose invariant Hough Transform by parametrizing pairs of points in a pattern with respect to at least two reference points, thereby providing a parameter table that is scale- or rotation-invariant. A corresponding inverse transform may be applied to test hypothesized matches in an image and a distance transform utilized to quantify the level of match.
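
The invariance afforded by parametrizing with respect to two reference points can be illustrated directly: express every pattern point in a frame whose origin, orientation, and unit of length come from the two references, and the coordinates survive any translation, rotation, or uniform scaling of the pattern. The sketch below demonstrates that idea only; it is not the patented parameter-table construction.

```python
import math

def invariant_coords(points, ref1, ref2):
    """Express each point in the frame whose origin is ref1, whose x-axis
    points toward ref2, and whose unit of length is |ref2 - ref1|; the
    resulting coordinates are unchanged by translating, rotating, or
    uniformly scaling the whole pattern."""
    dx, dy = ref2[0] - ref1[0], ref2[1] - ref1[1]
    length = math.hypot(dx, dy)
    cos_t, sin_t = dx / length, dy / length
    coords = []
    for px, py in points:
        ux, uy = px - ref1[0], py - ref1[1]
        # rotate by -theta, then divide by the reference length
        coords.append(((ux * cos_t + uy * sin_t) / length,
                       (-ux * sin_t + uy * cos_t) / length))
    return coords
```

A parameter table built from such coordinates votes identically for the original and any similarity-transformed copy of the pattern, which is what makes the Hough accumulation pose-invariant.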

  5. Clarifying the role of pattern separation in schizophrenia: the role of recognition and visual discrimination deficits.

    PubMed

    Martinelli, Cristina; Shergill, Sukhwinder S

    2015-08-01

    Patients with schizophrenia show marked memory deficits which have a negative impact on their functioning and life quality. Recent models suggest that such deficits might be attributable to defective pattern separation (PS), a hippocampal-based computation involved in the differentiation of overlapping stimuli and their mnemonic representations. One previous study on the topic concluded in favour of pattern separation impairments in the illness. However, this study did not clarify whether more elementary recognition and/or visual discrimination deficits could explain observed group differences. To address this limitation we investigated pattern separation in 22 schizophrenic patients and 24 healthy controls with the use of a task requiring individuals to classify stimuli as repetitions, novel or similar compared to a previous familiarisation phase. In addition, we employed a visual discrimination task involving perceptual similarity judgments on the same images. Results revealed impaired performance in the patient group; both on baseline measure of pattern separation as well as an index of pattern separation rigidity. However, further analyses demonstrated that such differences could be fully explained by recognition and visual discrimination deficits. Our findings suggest that pattern separation in schizophrenia is predicated on earlier recognition and visual discrimination problems. Furthermore, we demonstrate that future studies on pattern separation should include appropriate measures of recognition and visual discrimination performance for the correct interpretation of their findings.

  6. Visual search and urban driving under the influence of marijuana and alcohol.

    PubMed

    Lamers, C. T. J.; Ramaekers, J. G.

    2001-07-01

    The purpose of the present study was to assess the effects of low doses of marijuana and alcohol, and their combination, on visual search at intersections and on general driving proficiency in the City Driving Test. Sixteen recreational users of alcohol and marijuana (eight males and eight females) were treated with these substances or placebo according to a balanced, 4-way, cross-over, observer- and subject-blind design. On separate evenings, subjects received weight-calibrated doses of THC, alcohol or placebo in each of the following treatment conditions: alcohol placebo + THC placebo, alcohol + THC placebo, THC 100 μg/kg + alcohol placebo, THC 100 μg/kg + alcohol. Alcohol doses administered were sufficient for achieving a blood alcohol concentration (BAC) of about 0.05 g/dl. Initial drinking preceded smoking by one hour. The City Driving Test commenced 15 minutes after smoking and lasted 45 minutes. The test was conducted over a fixed route within the city limits of Maastricht. An eye movement recording system was mounted on each subject's head for providing relative frequency measures of appropriate visual search at intersections. General driving quality was rated by a licensed driving instructor on a shortened version of the Royal Dutch Tourist Association's Driving Proficiency Test. After placebo treatment subjects searched for traffic approaching from side streets on the right in 84% of all cases. Visual search frequency in these subjects did not change when they were treated with alcohol or marijuana alone. However, when treated with the combination of alcohol and marijuana, the frequency of visual search dropped by 3%. Performance as rated on the Driving Proficiency Scale did not differ between treatments. It was concluded that the effects of low doses of THC (100 μg/kg) and alcohol (BAC < 0.05 g/dl) on higher-level driving skills as measured in the present study are minimal. Copyright 2001 John Wiley & Sons, Ltd. PMID:12404559

  7. Activity in V4 reflects the direction, but not the latency, of saccades during visual search.

    PubMed

    Gee, Angela L; Ipata, Anna E; Goldberg, Michael E

    2010-10-01

    We constantly make eye movements to bring objects of interest onto the fovea for more detailed processing. Activity in area V4, a prestriate visual area, is enhanced at the location corresponding to the target of an eye movement. However, the precise role of activity in V4 in relation to these saccades and the modulation of other cortical areas in the oculomotor system remains unknown. V4 could be a source of visual feature information used to select the eye movement, or alternatively, it could reflect the locus of spatial attention. To test these hypotheses, we trained monkeys on a visual search task in which they were free to move their eyes. We found that activity in area V4 reflected the direction of the upcoming saccade but did not predict the latency of the saccade, in contrast to activity in the lateral intraparietal area (LIP). We suggest that the signals in V4, unlike those in LIP, are not directly involved in the generation of the saccade itself but rather are more closely linked to visual perception and attention. Although V4 and LIP have different roles in spatial attention and preparing eye movements, they likely perform complementary processes during visual search. PMID:20610790

  8. Color is processed less efficiently than orientation in change detection but more efficiently in visual search.

    PubMed

    Huang, Liqiang

    2015-05-01

    Basic visual features (e.g., color, orientation) are assumed to be processed in the same general way across different visual tasks. Here, a significant deviation from this assumption was predicted on the basis of the analysis of stimulus spatial structure, as characterized by the Boolean-map notion. If a task requires memorizing the orientations of a set of bars, then the map consisting of those bars can be readily used to hold the overall structure in memory and will thus be especially useful. If the task requires visual search for a target, then the map, which contains only an overall structure, will be of little use. Supporting these predictions, the present study demonstrated that in comparison to stimulus colors, bar orientations were processed more efficiently in change-detection tasks but less efficiently in visual search tasks (Cohen's d = 4.24). In addition to offering support for the role of the Boolean map in conscious access, the present work also casts doubt on the assumption that visual features are processed in the same general way across tasks.

  9. Learning from data: recognizing glaucomatous defect patterns and detecting progression from visual field measurements.

    PubMed

    Yousefi, Siamak; Goldbaum, Michael H; Balasubramanian, Madhusudhanan; Medeiros, Felipe A; Zangwill, Linda M; Liebmann, Jeffrey M; Girkin, Christopher A; Weinreb, Robert N; Bowd, Christopher

    2014-07-01

    A hierarchical approach to learning from visual field data was adopted to identify glaucomatous visual field defect patterns and to detect glaucomatous progression. The analysis pipeline included three stages, namely, clustering, glaucoma boundary limit detection, and glaucoma progression detection testing. First, cross-sectional visual field tests collected from each subject were clustered using a mixture of Gaussians, and model parameters were estimated using expectation maximization. The visual field clusters were further analyzed to recognize glaucomatous visual field defect patterns by decomposing each cluster into several axes; the glaucoma visual field defect patterns along each axis were then identified. To derive a definition of progression, the longitudinal visual fields of stable glaucoma eyes were projected onto the abnormal cluster axes and the slope was approximated using linear regression (LR) to determine the confidence limit of each axis. For glaucoma progression detection, the longitudinal visual fields of each eye were projected onto the abnormal cluster axes and the slope was approximated by LR. Progression was assigned if the progression rate was greater than the boundary limit of the stable eyes; otherwise, stability was assumed. The proposed method was compared to a recently developed progression detection method and to clinically available glaucoma progression detection software. The clinical accuracy of the proposed pipeline was as good as or better than the currently available methods. PMID:24710816
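    The progression-detection stage described above can be sketched as follows. Here `axis` and `stable_limit` stand in for an abnormal-cluster axis and the confidence limit learned from stable eyes; both names are assumptions for illustration, not the authors' code:

    ```python
    import numpy as np

    def progression_slope(times, fields, axis):
        """Slope of the projection of longitudinal visual fields onto one
        abnormal-cluster axis, fit by ordinary linear regression."""
        proj = np.asarray(fields, float) @ axis   # one scalar per visit
        slope, _intercept = np.polyfit(times, proj, 1)
        return slope

    def detect_progression(times, fields, axis, stable_limit):
        """Flag progression when the eye's rate along the axis exceeds the
        boundary limit derived from stable glaucoma eyes on that axis."""
        return progression_slope(times, fields, axis) > stable_limit
    ```

    An eye whose projections drift along the abnormal axis faster than any stable eye is flagged as progressing; otherwise stability is assumed, mirroring the decision rule in the abstract.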

  11. Blaming the victims of your own mistakes: How visual search accuracy influences evaluation of stimuli.

    PubMed

    Chetverikov, Andrey; Jóhannesson, Ómar I; Kristjánsson, Árni

    2015-01-01

    Even without explicit positive or negative reinforcement, experiences may influence preferences. According to the affective feedback in hypothesis testing account, preferences are determined by the accuracy of hypotheses: correct hypotheses evoke positive affect, while incorrect ones evoke negative affect that facilitates changes of hypotheses. Applying this to visual search, we suggest that accurate search should lead to more positive ratings of targets than distractors, while after errors targets should be rated more negatively. We test this in two experiments using time-limited search for a conjunction of gender and tint of faces. Accurate search led to more positive ratings for targets as compared to distractors or to targets following errors. Errors led to more negative ratings for targets than for distractors. Critically, eye tracking revealed that the longer the fixation dwell times in target regions, the higher the target ratings for correct responses and the lower the ratings for errors. In other words, the longer observers looked at targets, the more positive their ratings after correct responses and the less positive after errors. The findings support the affective feedback account and provide the first demonstration of negative effects on liking ratings following errors in visual search.

  12. Adding a Visualization Feature to Web Search Engines: It’s Time

    SciTech Connect

    Wong, Pak C.

    2008-11-11

    Since the first world wide web (WWW) search engine quietly entered our lives in 1994, the “information need” behind web searching has rapidly grown into a multi-billion dollar business that dominates the internet landscape, drives e-commerce traffic, propels the global economy, and affects the lives of the whole human race. Today’s search engines are faster, smarter, and more powerful than those released just a few years ago. With the vast investment pouring into research and development by leading web technology providers and the intense emotion behind corporate slogans such as “win the web” or “take back the web,” I can’t help but ask why we are still using the very same “text-only” interface that was used 13 years ago to browse our search engine results pages (SERPs). Why has the SERP interface technology lagged so far behind in the web evolution when the corresponding search technology has advanced so rapidly? In this article I explore some current SERP interface issues, suggest a simple but practical visual-based interface design approach, and argue why a visual approach can be a strong candidate for tomorrow’s SERP interface.

  13. Query-Adaptive Hash Code Ranking for Large-Scale Multi-View Visual Search.

    PubMed

    Liu, Xianglong; Huang, Lei; Deng, Cheng; Lang, Bo; Tao, Dacheng

    2016-10-01

    Hash-based nearest neighbor search has become attractive in many applications. However, the quantization in hashing usually degrades discriminative power when Hamming distance ranking is used. Moreover, for large-scale visual search, existing hashing methods cannot directly support efficient search over data with multiple sources, even though the literature has shown that adaptively incorporating complementary information from diverse sources or views can significantly boost search performance. To address these problems, this paper proposes a novel and generic approach to building multiple hash tables with multiple views and generating fine-grained ranking results at the bitwise and tablewise levels. For each hash table, a query-adaptive bitwise weighting is introduced to alleviate the quantization loss by simultaneously exploiting the quality of hash functions and their complementarity for nearest neighbor search. At the tablewise level, multiple hash tables are built for different data views as a joint index, over which a query-specific rank fusion is proposed to rerank all results from the bitwise ranking by diffusion in a graph. Comprehensive experiments on image search over three well-known benchmarks show that the proposed method achieves up to 17.11% and 20.28% performance gains on single- and multiple-table search over state-of-the-art methods. PMID:27448359
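    The bitwise half of this idea, ranking candidates by a query-adaptive weighted Hamming distance rather than a flat bit count, can be sketched as below. This is illustrative only; the paper's actual weighting scheme and tablewise rank fusion are more elaborate:

    ```python
    import numpy as np

    def weighted_hamming_rank(query_code, db_codes, bit_weights):
        """Rank database hash codes against a query by weighted Hamming
        distance: each differing bit contributes its query-specific weight
        instead of a flat 1, yielding a finer-grained ordering than plain
        Hamming distance (which collapses many codes into equal ranks)."""
        diff = np.asarray(db_codes) != np.asarray(query_code)  # (n, bits)
        dists = diff @ np.asarray(bit_weights, float)          # weighted mismatch
        return np.argsort(dists), dists
    ```

    With flat weights this reduces to ordinary Hamming ranking; query-adaptive weights let a reliable bit count for more than a noisy one, so two codes at the same raw Hamming distance can be told apart.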

  15. A modified mirror projection visual evoked potential stimulator for presenting patterns in different orientations.

    PubMed

    Taylor, P K; Wynn-Williams, G M

    1986-07-01

    Modifications to a standard mirror projection visual evoked potential stimulator are described to enable projection of patterns in varying orientations. The galvanometer-mirror assembly is mounted on an arm which can be rotated through 90 degrees. This enables patterns in any orientation to be deflected perpendicular to their axes. PMID:2424725

  16. Increased Vulnerability to Pattern-Related Visual Stress in Myalgic Encephalomyelitis.

    PubMed

    Wilson, Rachel L; Paterson, Kevin B; Hutchinson, Claire V

    2015-12-01

    The objective of this study was to determine vulnerability to pattern-related visual stress in Myalgic Encephalomyelitis/Chronic Fatigue Syndrome (ME/CFS). A total of 20 ME/CFS patients and 20 matched (age, gender) controls were recruited to the study. Pattern-related visual stress was determined using the Pattern Glare Test. Participants viewed three patterns, the spatial frequencies (SF) of which were 0.3 (low-SF), 2.3 (mid-SF), and 9.4 (high-SF) cycles per degree (c/deg). They reported the number of distortions they experienced when viewing each pattern. ME/CFS patients exhibited significantly higher pattern glare scores than controls for the mid-SF pattern. Mid-high SF differences were also significantly higher in patients than controls. These findings provide evidence of altered visual perception in ME/CFS. Pattern-related visual stress may represent an identifiable clinical feature of ME/CFS that will prove useful in its diagnosis. However, further research is required to establish if these symptoms reflect ME/CFS-related changes in the functioning of sensory neural pathways.

  17. Distinct Visual Evoked Potential Morphological Patterns for Apparent Motion Processing in School-Aged Children

    PubMed Central

    Campbell, Julia; Sharma, Anu

    2016-01-01

    Measures of visual cortical development in children demonstrate high variability and inconsistency throughout the literature. This is partly due to the specificity of the visual system in processing certain features. It may then be advantageous to activate multiple cortical pathways in order to observe maturation of coinciding networks. Visual stimuli eliciting the percept of apparent motion and shape change are designed to simultaneously activate both dorsal and ventral visual streams. However, research has shown that such stimuli also elicit variable visual evoked potential (VEP) morphology in children. The aim of this study was to describe developmental changes in VEPs, including morphological patterns and underlying visual cortical generators, elicited by apparent motion and shape change in school-aged children. Forty-one typically developing children underwent high-density EEG recordings in response to a continuously morphing, radially modulated, circle-star grating. VEPs were then compared across the age groups of 5–7, 8–10, and 11–15 years according to latency and amplitude. Current density reconstructions (CDR) were performed on VEP data in order to observe activated cortical regions. It was found that two distinct VEP morphological patterns occurred in each age group. However, there were no major developmental differences between the age groups according to each pattern. CDR further demonstrated consistent visual generators across age and pattern. These results describe two novel VEP morphological patterns in typically developing children, but with similar underlying cortical sources. The importance of these morphological patterns is discussed in terms of future studies and the investigation of a relationship to visual cognitive performance. PMID:27445738

  18. Distinct Visual Evoked Potential Morphological Patterns for Apparent Motion Processing in School-Aged Children.

    PubMed

    Campbell, Julia; Sharma, Anu

    2016-01-01

    Measures of visual cortical development in children demonstrate high variability and inconsistency throughout the literature. This is partly due to the specificity of the visual system in processing certain features. It may then be advantageous to activate multiple cortical pathways in order to observe maturation of coinciding networks. Visual stimuli eliciting the percept of apparent motion and shape change are designed to simultaneously activate both dorsal and ventral visual streams. However, research has shown that such stimuli also elicit variable visual evoked potential (VEP) morphology in children. The aim of this study was to describe developmental changes in VEPs, including morphological patterns and underlying visual cortical generators, elicited by apparent motion and shape change in school-aged children. Forty-one typically developing children underwent high-density EEG recordings in response to a continuously morphing, radially modulated, circle-star grating. VEPs were then compared across the age groups of 5-7, 8-10, and 11-15 years according to latency and amplitude. Current density reconstructions (CDR) were performed on VEP data in order to observe activated cortical regions. It was found that two distinct VEP morphological patterns occurred in each age group. However, there were no major developmental differences between the age groups according to each pattern. CDR further demonstrated consistent visual generators across age and pattern. These results describe two novel VEP morphological patterns in typically developing children, but with similar underlying cortical sources. The importance of these morphological patterns is discussed in terms of future studies and the investigation of a relationship to visual cognitive performance. PMID:27445738

  19. Visual control of movement patterns and the grammar of action.

    PubMed

    Smyth, M M

    1989-05-01

    In this experiment, adult subjects copied three types of material (letters, reversed letters and geometric shapes) with and without sight of the hand and the writing trace. Without vision, the number of movement segments decreased and the sequence and direction of movements were altered. This means that subjects neither used a fixed stored representation to produce items nor obeyed the rules of Goodnow and Levine's (1973) grammar of action. When spatial location is made more difficult by the removal of vision, movement production is simplified to reduce the number of relocations required. The use of consistent directions of movement depends on the ability to use visual control of spatial location.

  20. Neural structures involved in visual search guidance by reward-enhanced contextual cueing of the target location.

    PubMed

    Pollmann, Stefan; Eštočinová, Jana; Sommer, Susanne; Chelazzi, Leonardo; Zinke, Wolf

    2016-01-01

    Spatial contextual cueing reflects an incidental form of learning that occurs when spatial distractor configurations are repeated in visual search displays. Recently, it was reported that the efficiency of contextual cueing can be modulated by reward. We replicated this behavioral finding and investigated its neural basis with fMRI. Reward value was associated with repeated displays in a learning session. The effect of reward value on context-guided visual search was assessed in a subsequent fMRI session without reward. Structures known to support explicit reward valuation, such as ventral frontomedial cortex and posterior cingulate cortex, were modulated by incidental reward learning. Contextual cueing, leading to more efficient search, went along with decreased activation in the visual search network. Retrosplenial cortex played a special role in that it showed both a main effect of reward and a reward×configuration interaction and may thereby be a central structure for the reward modulation of context-guided visual search.

  1. Production and perception rules underlying visual patterns: effects of symmetry and hierarchy.

    PubMed

    Westphal-Fitch, Gesche; Huber, Ludwig; Gómez, Juan Carlos; Fitch, W Tecumseh

    2012-07-19

    Formal language theory has been extended to two-dimensional patterns, but little is known about two-dimensional pattern perception. We first examined spontaneous two-dimensional visual pattern production by humans, gathered using a novel touch screen approach. Both spontaneous creative production and subsequent aesthetic ratings show that humans prefer ordered, symmetrical patterns over random patterns. We then further explored pattern-parsing abilities in different human groups, and compared them with pigeons. We generated visual plane patterns based on rules varying in complexity. All human groups tested, including children and individuals diagnosed with autism spectrum disorder (ASD), were able to detect violations of all production rules tested. Our ASD participants detected pattern violations with the same speed and accuracy as matched controls. Children's ability to detect violations of a relatively complex rotational rule correlated with age, whereas their ability to detect violations of a simple translational rule did not. By contrast, even with extensive training, pigeons were unable to detect orientation-based structural violations, suggesting that, unlike humans, they did not learn the underlying structural rules. Visual two-dimensional patterns offer a promising new formally-grounded way to investigate pattern production and perception in general, widely applicable across species and age groups.

  2. Production and perception rules underlying visual patterns: effects of symmetry and hierarchy

    PubMed Central

    Westphal-Fitch, Gesche; Huber, Ludwig; Gómez, Juan Carlos; Fitch, W. Tecumseh

    2012-01-01

    Formal language theory has been extended to two-dimensional patterns, but little is known about two-dimensional pattern perception. We first examined spontaneous two-dimensional visual pattern production by humans, gathered using a novel touch screen approach. Both spontaneous creative production and subsequent aesthetic ratings show that humans prefer ordered, symmetrical patterns over random patterns. We then further explored pattern-parsing abilities in different human groups, and compared them with pigeons. We generated visual plane patterns based on rules varying in complexity. All human groups tested, including children and individuals diagnosed with autism spectrum disorder (ASD), were able to detect violations of all production rules tested. Our ASD participants detected pattern violations with the same speed and accuracy as matched controls. Children's ability to detect violations of a relatively complex rotational rule correlated with age, whereas their ability to detect violations of a simple translational rule did not. By contrast, even with extensive training, pigeons were unable to detect orientation-based structural violations, suggesting that, unlike humans, they did not learn the underlying structural rules. Visual two-dimensional patterns offer a promising new formally-grounded way to investigate pattern production and perception in general, widely applicable across species and age groups. PMID:22688636

  3. Visualization of cortical lamination patterns with magnetic resonance imaging.

    PubMed

    Barazany, Daniel; Assaf, Yaniv

    2012-09-01

    The ability to image the cortex's laminar arrangement in vivo is one of the holy grails of neuroscience. Recent studies have visualized the cortical layers ex vivo and in vivo (in a small region of interest) using high-resolution T1/T2 magnetic resonance imaging (MRI). In this study, we used inversion-recovery (IR) MRI to increase the sensitivity of MRI to cortical architecture and to achieve whole-brain characterization of the layers, in vivo, in 3D, in humans and rats. Using the IR measurements, we computed 3D signal intensity plots along the cortex, termed corticograms, to characterize cortical substructures. We found that cluster analysis of the multi-IR images along the cortex divides it into at least 6 laminar compartments. To validate our observations, we compared the IR-MRI analysis with histology and found a correspondence, although these 2 measures do not represent similar quantities. The ability of the method to segment the cortex into layers was demonstrated on the striate cortex (visualizing the stripe of Gennari) and on the frontal cortex. We conclude that the presented methodology can serve as a means to study and characterize individual cortical architecture and organization.

  4. Levels of feature analysis in processing visual patterns.

    PubMed

    Ward, L M; Wexler, D A

    1976-01-01

    In this paper, a revised Pandemonium-like model of visual-feature processing is formulated and a preliminary test of its feasibility is reported. The model differentiates visual-feature processing into a series of hierarchical stages organized by increasing complexity, with the output of each stage going both to the next higher stage, and directly to a more central processor. In the experiment, subjects sorted decks of cards into piles according to the presence or absence of a target stimulus which differed from nontargets in a variety of different features; detection of a feature was sufficient for detection of a target. The data generally supported the revised Pandemonium model, in that targets which differed from nontargets in features thought to be low in the hierarchy were processed faster than targets whose difference was in a high level feature. An extension of the revised model did somewhat less well in predicting the results of sorting for targets in which detection of any one of several features was sufficient for target detection.

  5. [Discordant pattern, visual identification of myocardial viability with PET].

    PubMed

    Alexánderson, E; Ricalde, A; Zerón, J; Talayero, J A; Cruz, P; Adame, G; Mendoza, G; Meave, A

    2006-01-01

    PET (positron emission tomography), a non-invasive imaging method for studying cardiac perfusion and metabolism, has become the gold standard for detecting myocardial viability. Using 18F-FDG as a tracer makes it possible to identify exogenous glucose uptake by myocardial segments. Comparing viability results with perfusion results, the latter obtained with tracers such as 13N-ammonia, yields three patterns for myocardial viability evaluation: the transmural concordant pattern, the non-transmural concordant pattern, and the discordant pattern; the last exemplifies hibernating myocardium and proves the presence of myocardial viability. Its detection is fundamental in the study of ischemic patients, since it permits an exact diagnosis, a prognosis, and the best treatment option. It also allows prediction of functional recovery of the affected region, as well as of the ejection fraction, after revascularization if this is deemed necessary. All these viability elements are decisive in reducing adverse events and improving patients' prognosis. PMID:17315610

  6. The NLP Swish Pattern: An Innovative Visualizing Technique.

    ERIC Educational Resources Information Center

    Masters, Betsy J.; And Others

    1991-01-01

    Describes swish pattern, one of many innovative therapeutic interventions that developers of neurolinguistic programing (NLP) have contributed to counseling profession. Presents brief overview of NLP followed by an explanation of the basic theory and expected outcomes of the swish. Presents description of the intervention process and case studies…

  7. Time Curves: Folding Time to Visualize Patterns of Temporal Evolution in Data.

    PubMed

    Bach, Benjamin; Shi, Conglei; Heulot, Nicolas; Madhyastha, Tara; Grabowski, Tom; Dragicevic, Pierre

    2016-01-01

    We introduce time curves as a general approach for visualizing patterns of evolution in temporal data. Examples of such patterns include slow and regular progressions, large sudden changes, and reversals to previous states. These patterns can be of interest in a range of domains, such as collaborative document editing, dynamic network analysis, and video analysis. Time curves employ the metaphor of folding a timeline visualization into itself so as to bring similar time points close to each other. This metaphor can be applied to any dataset where a similarity metric between temporal snapshots can be defined, thus it is largely datatype-agnostic. We illustrate how time curves can visually reveal informative patterns in a range of different datasets.
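    One minimal way to realize the folding metaphor, assuming only a pairwise distance matrix between temporal snapshots, is classical multidimensional scaling: embed the snapshots in 2-D so similar time points land near each other, then connect the embedded points in chronological order. This is a sketch of the general approach, not the authors' implementation:

    ```python
    import numpy as np

    def time_curve(dist):
        """Embed temporal snapshots in 2-D with classical MDS from a
        pairwise distance matrix; connecting the rows of the result in
        chronological order yields the time curve."""
        d2 = np.asarray(dist, float) ** 2
        n = d2.shape[0]
        j = np.eye(n) - np.ones((n, n)) / n      # centering matrix
        b = -0.5 * j @ d2 @ j                    # double-centered Gram matrix
        vals, vecs = np.linalg.eigh(b)           # ascending eigenvalues
        top = np.argsort(vals)[::-1][:2]         # two largest components
        return vecs[:, top] * np.sqrt(np.maximum(vals[top], 0))
    ```

    Because the method only consumes a distance (dissimilarity) matrix, it is datatype-agnostic in the same sense as the paper: any domain with a similarity metric between snapshots (document revisions, network states, video frames) can be folded into a curve.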

  8. On the selection and evaluation of visual display symbology Factors influencing search and identification times

    NASA Technical Reports Server (NTRS)

    Remington, Roger; Williams, Douglas

    1986-01-01

    Three single-target visual search tasks were used to evaluate a set of cathode-ray tube (CRT) symbols for a helicopter situation display. The search tasks were representative of the information extraction required in practice, and reaction time was used to measure the efficiency with which symbols could be located and identified. Familiar numeric symbols were responded to more quickly than graphic symbols. The addition of modifier symbols, such as a nearby flashing dot or surrounding square, had a greater disruptive effect on the graphic symbols than did the numeric characters. The results suggest that a symbol set is, in some respects, like a list that must be learned. Factors that affect the time to identify items in a memory task, such as familiarity and visual discriminability, also affect the time to identify symbols. This analogy has broad implications for the design of symbol sets. An attempt was made to model information access with this class of display.

  9. Human Visual Search Does Not Maximize the Post-Saccadic Probability of Identifying Targets

    PubMed Central

    Morvan, Camille; Maloney, Laurence T.

    2012-01-01

    Researchers have conjectured that eye movements during visual search are selected to minimize the number of saccades. The optimal Bayesian eye movement strategy minimizing saccades does not simply direct the eye to whichever location is judged most likely to contain the target but makes use of the entire retina as an information gathering device during each fixation. Here we show that human observers do not minimize the expected number of saccades in planning saccades in a simple visual search task composed of three tokens. In this task, the optimal eye movement strategy varied, depending on the spacing between tokens (in the first experiment) or the size of tokens (in the second experiment), and changed abruptly once the separation or size surpassed a critical value. None of our observers changed strategy as a function of separation or size. Human performance fell far short of ideal, both qualitatively and quantitatively. PMID:22319428

  10. The evaluation of display symbology - A chronometric study of visual search. [on cathode ray tubes]

    NASA Technical Reports Server (NTRS)

    Remington, R.; Williams, D.

    1984-01-01

    Three single-target visual search tasks were used to evaluate a set of CRT symbols for a helicopter traffic display. The search tasks were representative of the kinds of information extraction required in practice, and reaction time was used to measure the efficiency with which symbols could be located and identified. The results show that familiar numeric symbols were responded to more quickly than graphic symbols. The addition of modifier symbols such as a nearby flashing dot or surrounding square had a greater disruptive effect on the graphic symbols than the alphanumeric characters. The results suggest that a symbol set is like a list that must be learned. Factors that affect the time to respond to items in a list, such as familiarity and visual discriminability, and the division of list items into categories, also affect the time to identify symbols.

  11. Effects of Individual Health Topic Familiarity on Activity Patterns During Health Information Searches

    PubMed Central

    Moriyama, Koichi; Fukui, Ken-ichi; Numao, Masayuki

    2015-01-01

    Background Non-medical professionals (consumers) are increasingly using the Internet to support their health information needs. However, the cognitive effort required to perform health information searches is affected by the consumer’s familiarity with health topics. Consumers may have different levels of familiarity with individual health topics. This variation in familiarity may cause misunderstandings because the information presented by search engines may not be understood correctly by the consumers. Objective As a first step toward the improvement of the health information search process, we aimed to examine the effects of health topic familiarity on health information search behaviors by identifying the common search activity patterns exhibited by groups of consumers with different levels of familiarity. Methods Each participant completed a health terminology familiarity questionnaire and health information search tasks. The responses to the familiarity questionnaire were used to grade the familiarity of participants with predefined health topics. The search task data were transcribed into a sequence of search activities using a coding scheme. A computational model was constructed from the sequence data using a Markov chain model to identify the common search patterns in each familiarity group. Results Forty participants were classified into L1 (not familiar), L2 (somewhat familiar), and L3 (familiar) groups based on their questionnaire responses. They had different levels of familiarity with four health topics. The video data obtained from all of the participants were transcribed into 4595 search activities (mean 28.7, SD 23.27 per session). The most frequent search activities and transitions in all the familiarity groups were related to evaluations of the relevancy of selected web pages in the retrieval results. However, the next most frequent transitions differed in each group and a chi-squared test confirmed this finding (P<.001). Next, according to the
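The Markov-chain step described in the Methods (coded activity sequences to per-group transition probabilities) can be sketched as follows. This is a minimal illustration: the activity codes and sessions below are hypothetical, not the study's actual coding scheme.

```python
from collections import defaultdict

def transition_probabilities(sequences):
    """Estimate first-order Markov transition probabilities from coded
    search-activity sequences (maximum-likelihood row normalization)."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    return {a: {b: n / sum(row.values()) for b, n in row.items()}
            for a, row in counts.items()}

# Hypothetical activity codes: Q = formulate query, E = evaluate results
# page, S = select a web page from the results.
sessions = [["Q", "E", "S", "E", "S"],
            ["Q", "E", "Q", "E", "S"]]
P = transition_probabilities(sessions)
# e.g. P["E"] gives the probabilities of what follows an evaluation step
```

Comparing such matrices across familiarity groups is what reveals which transitions are characteristic of each group.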

  12. Improvement in Visual Search with Practice: Mapping Learning-Related Changes in Neurocognitive Stages of Processing

    PubMed Central

    Clark, Kait; Appelbaum, L. Gregory; van den Berg, Berry; Mitroff, Stephen R.

    2015-01-01

    Practice can improve performance on visual search tasks; the neural mechanisms underlying such improvements, however, are not clear. Response time typically shortens with practice, but which components of the stimulus–response processing chain facilitate this behavioral change? Improved search performance could result from enhancements in various cognitive processing stages, including (1) sensory processing, (2) attentional allocation, (3) target discrimination, (4) motor-response preparation, and/or (5) response execution. We measured event-related potentials (ERPs) as human participants completed a five-day visual-search protocol in which they reported the orientation of a color popout target within an array of ellipses. We assessed changes in behavioral performance and in ERP components associated with various stages of processing. After practice, response time decreased in all participants (while accuracy remained consistent), and electrophysiological measures revealed modulation of several ERP components. First, amplitudes of the early sensory-evoked N1 component at 150 ms increased bilaterally, indicating enhanced visual sensory processing of the array. Second, the negative-polarity posterior–contralateral component (N2pc, 170–250 ms) was earlier and larger, demonstrating enhanced attentional orienting. Third, the amplitude of the sustained posterior contralateral negativity component (SPCN, 300–400 ms) decreased, indicating facilitated target discrimination. Finally, faster motor-response preparation and execution were observed after practice, as indicated by latency changes in both the stimulus-locked and response-locked lateralized readiness potentials (LRPs). These electrophysiological results delineate the functional plasticity in key mechanisms underlying visual search with high temporal resolution and illustrate how practice influences various cognitive and neural processing stages leading to enhanced behavioral performance. PMID:25834059

  13. From Foreground to Background: How Task-Neutral Context Influences Contextual Cueing of Visual Search

    PubMed Central

    Zang, Xuelian; Geyer, Thomas; Assumpção, Leonardo; Müller, Hermann J.; Shi, Zhuanghua

    2016-01-01

    Selective attention determines the effectiveness of implicit contextual learning (e.g., Jiang and Leung, 2005). Visual foreground-background segmentation, on the other hand, is a key process in the guidance of attention (Wolfe, 2003). In the present study, we examined the impact of foreground-background segmentation on contextual cueing of visual search in three experiments. A visual search display, consisting of distractor ‘L’s and a target ‘T’, was overlaid on a task-neutral cuboid on the same depth plane (Experiment 1), on stereoscopically separated depth planes (Experiment 2), or spread over the entire display on the same depth plane (Experiment 3). Half of the search displays contained repeated target-distractor arrangements, whereas the other half was always newly generated. The task-neutral cuboid was constant during an initial training session, but was either rotated by 90° or entirely removed in the subsequent test sessions. We found that the gains resulting from repeated presentation of display arrangements during training (i.e., contextual-cueing effects) were diminished when the cuboid was changed or removed in Experiment 1, but remained intact in Experiments 2 and 3 when the cuboid was placed in a different depth plane, or when the items were randomly spread over the whole display but not on the edges of the cuboid. These findings suggest that foreground-background segmentation occurs prior to contextual learning, and only objects/arrangements that are grouped as foreground are learned over the course of repeated visual search. PMID:27375530

  14. Improvement in visual search with practice: mapping learning-related changes in neurocognitive stages of processing.

    PubMed

    Clark, Kait; Appelbaum, L Gregory; van den Berg, Berry; Mitroff, Stephen R; Woldorff, Marty G

    2015-04-01

    Practice can improve performance on visual search tasks; the neural mechanisms underlying such improvements, however, are not clear. Response time typically shortens with practice, but which components of the stimulus-response processing chain facilitate this behavioral change? Improved search performance could result from enhancements in various cognitive processing stages, including (1) sensory processing, (2) attentional allocation, (3) target discrimination, (4) motor-response preparation, and/or (5) response execution. We measured event-related potentials (ERPs) as human participants completed a five-day visual-search protocol in which they reported the orientation of a color popout target within an array of ellipses. We assessed changes in behavioral performance and in ERP components associated with various stages of processing. After practice, response time decreased in all participants (while accuracy remained consistent), and electrophysiological measures revealed modulation of several ERP components. First, amplitudes of the early sensory-evoked N1 component at 150 ms increased bilaterally, indicating enhanced visual sensory processing of the array. Second, the negative-polarity posterior-contralateral component (N2pc, 170-250 ms) was earlier and larger, demonstrating enhanced attentional orienting. Third, the amplitude of the sustained posterior contralateral negativity component (SPCN, 300-400 ms) decreased, indicating facilitated target discrimination. Finally, faster motor-response preparation and execution were observed after practice, as indicated by latency changes in both the stimulus-locked and response-locked lateralized readiness potentials (LRPs). These electrophysiological results delineate the functional plasticity in key mechanisms underlying visual search with high temporal resolution and illustrate how practice influences various cognitive and neural processing stages leading to enhanced behavioral performance.

  15. The Speed of Serial Attention Shifts in Visual Search: Evidence from the N2pc Component.

    PubMed

    Grubert, Anna; Eimer, Martin

    2016-02-01

    Finding target objects among distractors in a visual search display is often assumed to be based on sequential movements of attention between different objects. However, the speed of such serial attention shifts is still under dispute. We employed a search task that encouraged the successive allocation of attention to two target objects in the same search display and measured N2pc components to determine how fast attention moved between these objects. Each display contained one digit in a known color (fixed-color target) and another digit whose color changed unpredictably across trials (variable-color target) together with two gray distractor digits. Participants' task was to find the fixed-color digit and compare its numerical value with that of the variable-color digit. N2pc components to fixed-color targets preceded N2pc components to variable-color digits, demonstrating that these two targets were indeed selected in a fixed serial order. The N2pc to variable-color digits emerged approximately 60 msec after the N2pc to fixed-color digits, which shows that attention can be reallocated very rapidly between different target objects in the visual field. When search display durations were increased, thereby relaxing the temporal demands on serial selection, the two N2pc components to fixed-color and variable-color targets were elicited within 90 msec of each other. Results demonstrate that sequential shifts of attention between different target locations can operate very rapidly at speeds that are in line with the assumptions of serial selection models of visual search.

  16. Influence of being videotaped on the prevalence effect during visual search

    PubMed Central

    Miyazaki, Yuki

    2015-01-01

    Video monitoring modifies the task performance of those who are being monitored. The current study aims to prevent rare target-detection failures during visual search through the use of video monitoring. Targets are sometimes missed when their prevalence during visual search is extremely low (e.g., in airport baggage screenings). Participants performed a visual search in which they were required to discern the presence of a tool in the midst of other objects. The participants were monitored via video cameras as they performed the task in one session (the videotaped condition), and they performed the same task in another session without being monitored (the non-videotaped condition). The results showed that fewer miss errors occurred in the videotaped condition, regardless of target prevalence. It appears that the decrease in misses in the video monitoring condition resulted from a shift in criterion location. Video monitoring is considered useful in inducing accurate scanning. It is possible that the potential for evaluation involved in being observed motivates the participants to perform well and is related to the shift in criterion. PMID:25999895

  17. Spatial ranking strategy and enhanced peripheral vision discrimination optimize performance and efficiency of visual sequential search.

    PubMed

    Veneri, Giacomo; Pretegiani, Elena; Fargnoli, Francesco; Rosini, Francesca; Vinciguerra, Claudia; Federighi, Pamela; Federico, Antonio; Rufa, Alessandra

    2014-09-01

    Visual sequential search might use a peripheral spatial ranking of the scene to put the next target of the sequence in the correct order. This strategy, indeed, might enhance the discriminative capacity of the human peripheral vision and spare neural resources associated with foveation. However, it is not known how exactly the peripheral vision sustains sequential search and whether the sparing of neural resources has a cost in terms of performance. To elucidate these issues, we compared strategy and performance during an alpha-numeric sequential task where peripheral vision was modulated in three different conditions: normal, blurred, or obscured. If spatial ranking is applied to increase the peripheral discrimination, its use as a strategy in visual sequencing should differ according to the degree of discriminative information that can be obtained from the periphery. Moreover, if this strategy spares neural resources without impairing the performance, its use should be associated with better performance. We found that spatial ranking was applied when peripheral vision was fully available, reducing the number and time of explorative fixations. When the periphery was obscured, explorative fixations were numerous and sparse; when the periphery was blurred, explorative fixations were longer and often located close to the items. Performance was significantly improved by this strategy. Our results demonstrated that spatial ranking is an efficient strategy adopted by the brain in visual sequencing to highlight peripheral detection and discrimination; it reduces the neural cost by avoiding unnecessary foveations, and promotes sequential search by facilitating the onset of a new saccade.

  18. The Dynamics of Visual Experience, an EEG Study of Subjective Pattern Formation

    PubMed Central

    Elliott, Mark A.; Twomey, Deirdre; Glennon, Mark

    2012-01-01

    Background Since the origin of psychological science a number of studies have reported visual pattern formation in the absence of either physiological stimulation or direct visual-spatial references. Subjective patterns range from simple phosphenes to complex patterns but are highly specific and reported reliably across studies. Methodology/Principal Findings Using independent-component analysis (ICA) we report a reduction in amplitude variance consistent with subjective-pattern formation in ventral posterior areas of the electroencephalogram (EEG). The EEG exhibits significantly increased power at delta/theta and gamma-frequencies (point and circle patterns) or a series of high-frequency harmonics of a delta oscillation (spiral patterns). Conclusions/Significance Subjective-pattern formation may be described in a way entirely consistent with identical pattern formation in fluids or granular flows. In this manner, we propose subjective-pattern structure to be represented within a spatio-temporal lattice of harmonic oscillations which bind topographically organized visual-neuronal assemblies by virtue of low frequency modulation. PMID:22292053
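The frequency-band findings above (increased delta/theta and gamma power) rest on standard spectral power estimation. A minimal numpy sketch of band power on a synthetic trace, assuming a 250 Hz sampling rate and a plain FFT periodogram rather than the study's ICA pipeline:

```python
import numpy as np

def band_power(signal, fs, lo, hi):
    """Mean power of `signal` within the [lo, hi] Hz band, computed from
    an FFT periodogram (illustrative stand-in for the ICA-based analysis)."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    band = (freqs >= lo) & (freqs <= hi)
    return psd[band].mean()

fs = 250  # Hz; an assumed EEG sampling rate
t = np.arange(0, 4, 1 / fs)
# Synthetic trace: a strong 5 Hz (theta) component plus a weak 40 Hz (gamma) one.
eeg = 2.0 * np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)
theta_power = band_power(eeg, fs, 4, 8)
gamma_power = band_power(eeg, fs, 30, 50)
```

A condition-related change in such band-power estimates is the kind of effect the study reports for subjective point, circle, and spiral patterns.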

  19. Evaluating the human ongoing visual search performance by eye tracking application and sequencing tests.

    PubMed

    Veneri, Giacomo; Pretegiani, Elena; Rosini, Francesca; Federighi, Pamela; Federico, Antonio; Rufa, Alessandra

    2012-09-01

    Human visual search is an everyday activity that enables humans to explore the real world. Given the visual input, during a visual search, it is necessary to select some aspects of the input to shift the gaze to the next target. The aim of the study is to develop a mathematical method able to evaluate the visual selection process during the execution of a cognitively demanding task such as the trail making test part B (TMT). The TMT is a neuro-psychological instrument where numbers and letters should be connected to each other in numeric and alphabetic order. We adapted the TMT to an eye-tracking version, and we used a vector model, the "eight pointed star" (8PS), to discover how selection (fixations) guides next exploration (saccades) and how human top-down factors interact with bottom-up saliency. The results revealed a tendency to move away from the last fixation that correlated with the number of distractors and with execution performance.

  20. Modeling the Effect of Selection History on Pop-Out Visual Search

    PubMed Central

    Tseng, Yuan-Chi; Glaser, Joshua I.; Caddigan, Eamon; Lleras, Alejandro

    2014-01-01

    While attentional effects in visual selection tasks have traditionally been assigned “top-down” or “bottom-up” origins, more recently it has been proposed that there are three major factors affecting visual selection: (1) physical salience, (2) current goals and (3) selection history. Here, we look further into selection history by investigating Priming of Pop-out (POP) and the Distractor Preview Effect (DPE), two inter-trial effects that demonstrate the influence of recent history on visual search performance. Using the Ratcliff diffusion model, we model observed saccadic selections from an oddball search experiment that included a mix of both POP and DPE conditions. We find that the Ratcliff diffusion model can effectively model the manner in which selection history affects current attentional control in visual inter-trial effects. The model evidence shows that bias regarding the current trial's most likely target color is the most critical parameter underlying the effect of selection history. Our results are consistent with the view that the 3-item color-oddball task used for POP and DPE experiments is best understood as an attentional decision making task. PMID:24595032
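The Ratcliff diffusion model used here treats each selection as noisy evidence accumulation between two decision bounds, with selection history entering through parameters such as a starting-point bias toward the likely target color. A minimal simulation sketch, with illustrative (not fitted) parameter values:

```python
import math
import random

def simulate_ddm(drift, start=0.5, bound=1.0, dt=0.001, sigma=1.0, rng=random):
    """One trial of a simple drift-diffusion process: evidence starts at
    `start` (a starting-point bias) and accumulates noisily until it hits
    the upper bound (`bound`) or the lower bound at 0.
    Returns (choice, decision_time_in_seconds)."""
    x, t = start, 0.0
    while 0.0 < x < bound:
        x += drift * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        t += dt
    return ("upper" if x >= bound else "lower", t)

# A positive drift plus a starting-point bias toward the upper bound
# (e.g., toward the recently repeated target color) makes "upper"
# selections dominate, mimicking an inter-trial priming effect.
rng = random.Random(0)
choices = [simulate_ddm(drift=1.5, start=0.6, rng=rng)[0] for _ in range(200)]
upper_rate = choices.count("upper") / len(choices)
```

Fitting such a model to observed choices and latencies is what lets the study attribute the history effect specifically to the bias parameter.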

  1. Onset of background dynamic noise attenuates preview benefit in inefficient visual search.

    PubMed

    Osugi, Takayuki; Murakami, Ikuya

    2015-07-01

    When certain distractors (old items) appear before others (new items) during an inefficient visual search task, observers exclude the old items from the search (preview benefit), possibly because their locations are deprioritized relative to the locations of the new items. We examined whether participants were able to ignore task-irrelevant changes in a scene (i.e., the onset of repetitive changes, continual repetitive changes, and the cessation of repetitive changes in the background), while performing a preview search task. The results indicated that, when the noise continually changed position throughout each trial, or when dynamic noise was changed to static noise simultaneous with the appearance of the search display, the preview benefit remained. In contrast, when the static background noise was changed to dynamic background noise, simultaneous with the appearance of the search display, this task-irrelevant background event abolished the preview benefit on search efficiency. Therefore, we conclude that the onset of task-irrelevant repetitive changes in the background disrupts the process of inhibitory marking of old items.

  2. Evidence for negative feature guidance in visual search is explained by spatial recoding.

    PubMed

    Beck, Valerie M; Hollingworth, Andrew

    2015-10-01

    Theories of attention and visual search explain how attention is guided toward objects with known target features. But can attention be directed away from objects with a feature known to be associated only with distractors? Most studies have found that the demand to maintain the to-be-avoided feature in visual working memory biases attention toward matching objects rather than away from them. In contrast, Arita, Carlisle, and Woodman (2012) claimed that attention can be configured to selectively avoid objects that match a cued distractor color, and they reported evidence that this type of negative cue generates search benefits. However, the colors of the search array items in Arita et al. (2012) were segregated by hemifield (e.g., blue items on the left, red on the right), which allowed for a strategy of translating the feature-cue information into a simple spatial template (e.g., avoid right, or attend left). In the present study, we replicated the negative cue benefit using the Arita et al. (2012) method (albeit within a subset of participants who reliably used the color cues to guide attention). Then, we eliminated the benefit by using search arrays that could not be grouped by hemifield. Our results suggest that feature-guided avoidance is implemented only indirectly, in this case by translating feature-cue information into a spatial template. PMID:26191616

  3. How much agreement is there in the visual search strategy of experts reading mammograms?

    NASA Astrophysics Data System (ADS)

    Mello-Thoms, Claudia

    2008-03-01

    Previously we have shown that the eyes of expert breast imagers are attracted to the location of a malignant mass in a mammogram in less than 2 seconds after image onset. Moreover, the longer they take to visually fixate the location of the mass, the less likely it is that they will report it. We conjectured that this behavior was due to the formation of the initial hypothesis about the image (i.e., 'normal' - no lesions to report, or 'abnormal' - possible lesions to report). This initial hypothesis is formed as a result of a difference template between the experts' expectations of the image and the actual image. Hence, when the image is displayed, the expert detects the areas that do not correspond to their 'a priori expectation', and these areas get assigned weights according to the magnitude of the perturbation. The radiologist then uses eye movements to guide the high resolution fovea to each of these locations, in order to resolve each perturbation. To accomplish this task successfully the radiologist uses not only the local features in the area but also lateral comparisons with selected background locations, and this comprises the radiologist's visual search strategy. Eye-position tracking studies seem to suggest that no two radiologists search the breast parenchyma alike, which makes one wonder whether successful search models can be developed. In this study we show that there is more to the experts' search strategy than meets the eye.

  4. Visual search and emotion: how children with autism spectrum disorders scan emotional scenes.

    PubMed

    Maccari, Lisa; Pasini, Augusto; Caroli, Emanuela; Rosa, Caterina; Marotta, Andrea; Martella, Diana; Fuentes, Luis J; Casagrande, Maria

    2014-11-01

    This study assessed visual search abilities, tested through the flicker task, in children diagnosed with autism spectrum disorders (ASDs). Twenty-two children diagnosed with ASD and 22 matched typically developing (TD) children were told to detect changes in objects of central interest or objects of marginal interest (MI) embedded in either emotion-laden (positive or negative) or neutral real-world pictures. The results showed that emotion-laden pictures equally interfered with performance of both ASD and TD children, slowing down reaction times compared with neutral pictures. Children with ASD were faster than TD children, particularly in detecting changes in MI objects, the most difficult condition. However, their performance was less accurate than performance of TD children just when the pictures were negative. These findings suggest that children with ASD have better visual search abilities than TD children only when the search is particularly difficult and requires strong serial search strategies. The emotional-social impairment that is usually considered as a typical feature of ASD seems to be limited to processing of negative emotional information. PMID:24898908

  5. Training shortens search times in children with visual impairment accompanied by nystagmus

    PubMed Central

    Huurneman, Bianca; Boonstra, F. Nienke

    2014-01-01

    Perceptual learning (PL) can improve near visual acuity (NVA) in 4–9 year old children with visual impairment (VI). However, the mechanisms underlying improved NVA are unknown. The present study compares feature search and oculomotor measures in 4–9 year old children with VI accompanied by nystagmus (VI+nys [n = 33]) and children with normal vision (NV [n = 29]). Children in the VI+nys group were divided into three training groups: an experimental PL group, a control PL group, and a magnifier group. They were seen before (baseline) and after 6 weeks of training. Children with NV were only seen at baseline. The feature search task entailed finding a target E among distractor E's (pointing right) with element spacing varied in four steps: 0.04°, 0.5°, 1°, and 2°. At baseline, children with VI+nys showed longer search times, shorter fixation durations, and larger saccade amplitudes than children with NV. After training, all training groups showed shorter search times. Only the experimental PL group showed prolonged fixation duration after training at 0.5° and 2° spacing (p = .033 and p = .021, respectively). Prolonged fixation duration was associated with reduced crowding and improved crowded NVA. One of the mechanisms underlying improved crowded NVA after PL in children with VI+nys seems to be prolonged fixation duration. PMID:25309473

  6. Multisensory teamwork: using a tactile or an auditory display to exchange gaze information improves performance in joint visual search.

    PubMed

    Wahn, Basil; Schwandt, Jessika; Krüger, Matti; Crafa, Daina; Nunnendorf, Vanessa; König, Peter

    2016-06-01

    In joint tasks, adjusting to the actions of others is critical for success. For joint visual search tasks, research has shown that when search partners visually receive information about each other's gaze, they use this information to adjust to each other's actions, resulting in faster search performance. The present study used a visual, a tactile and an auditory display, respectively, to provide search partners with information about each other's gaze. Results showed that search partners performed faster when the gaze information was received via a tactile or auditory display in comparison to receiving it via a visual display or receiving no gaze information. Findings demonstrate the effectiveness of tactile and auditory displays for receiving task-relevant information in joint tasks and are applicable to circumstances in which little or no visual information is available or the visual modality is already taxed with a demanding task such as air-traffic control. Practitioner Summary: The present study demonstrates that tactile and auditory displays are effective for receiving information about actions of others in joint tasks. Findings are either applicable to circumstances in which little or no visual information is available or when the visual modality is already taxed with a demanding task.

  7. Crowding by a single bar: probing pattern recognition mechanisms in the visual periphery.

    PubMed

    Põder, Endel

    2014-11-06

    Whereas visual crowding does not greatly affect the detection of the presence of simple visual features, it heavily inhibits combining them into recognizable objects. Still, crowding effects have rarely been directly related to general pattern recognition mechanisms. In this study, pattern recognition mechanisms in visual periphery were probed using a single crowding feature. Observers had to identify the orientation of a rotated T presented briefly in a peripheral location. Adjacent to the target, a single bar was presented. The bar was either horizontal or vertical and located in a random direction from the target. It appears that such a crowding bar has very strong and regular effects on the identification of the target orientation. The observer's responses are determined by approximate relative positions of basic visual features; exact image-based similarity to the target is not important. A version of the "standard model" of object recognition with second-order features explains the main regularities of the data.

  8. Pattern-reversal electroretinograms for the diagnosis and management of disorders of the anterior visual pathway.

    PubMed

    Hokazono, Kenzo; Oyamada, Maria Kiyoko; Monteiro, Mário Luiz Ribeiro

    2011-01-01

    The pattern electroretinogram is an electrophysiological test that assesses the function of the inner retinal layers, particularly the ganglion cell layer of the retina, using a reversing checkerboard or grating pattern that produces no change in average luminance over time. The normal pattern electroretinogram is composed of a prominent positive component (P50) and a large later negative component (N95). Since structural damage that compromises the retinal ganglion cell layer can lead to pattern electroretinogram changes, particularly in the N95 amplitude, the test can be useful in the diagnosis and management of a number of anterior visual pathway diseases. In this article, we review the methods for recording the pattern electroretinogram and its usefulness in the diagnosis and management of diseases including inflammatory, hereditary, ischemic and compressive lesions of the anterior visual pathway. PMID:21915454

  9. Sequential patterns mining and gene sequence visualization to discover novelty from microarray data.

    PubMed

    Sallaberry, A; Pecheur, N; Bringay, S; Roche, M; Teisseire, M

    2011-10-01

    Data mining allows users to discover novelty in huge amounts of data. Frequent pattern methods have proved to be efficient, but the extracted patterns are often too numerous and thus difficult for end users to analyze. In this paper, we focus on sequential pattern mining and propose a new visualization system to help end users analyze the extracted knowledge and to highlight novelty according to databases of referenced biological documents. Our system is based on three visualization techniques: clouds, solar systems, and treemaps. We show that these techniques are very helpful for identifying associations and hierarchical relationships between patterns among related documents. Sequential patterns extracted from gene data using our system were successfully evaluated by two biology laboratories working on Alzheimer's disease and cancer.
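The sequential pattern mining that feeds such a visualization system can be illustrated with a toy support counter for ordered item pairs; real miners such as PrefixSpan handle patterns of arbitrary length, and the gene identifiers below are hypothetical:

```python
from collections import Counter

def frequent_ordered_pairs(sequences, min_support):
    """Count the sequences in which item a occurs before item b, and keep
    the ordered pairs whose support meets `min_support`. A toy stand-in
    for full sequential-pattern miners such as PrefixSpan."""
    support = Counter()
    for seq in sequences:
        pairs = set()  # count each pair at most once per sequence
        for i, a in enumerate(seq):
            for b in seq[i + 1:]:
                pairs.add((a, b))
        support.update(pairs)
    return {p: n for p, n in support.items() if n >= min_support}

# Hypothetical gene-identifier sequences.
db = [["g1", "g2", "g3"], ["g1", "g3"], ["g2", "g1", "g3"]]
patterns = frequent_ordered_pairs(db, min_support=2)
```

The "too numerous to analyze" problem the paper addresses arises because even this toy counter returns many patterns as the database and `min_support` threshold scale; the proposed clouds, solar systems, and treemaps are views over such a result set.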

  10. Searching for Truth: Internet Search Patterns as a Method of Investigating Online Responses to a Russian Illicit Drug Policy Debate

    PubMed Central

    Gillespie, James A; Quinn, Casey

    2012-01-01

    Background: This is a methodological study investigating the online responses to a national debate over an important health and social problem in Russia. Russia is the largest Internet market in Europe, exceeding Germany in the absolute number of users. However, Russia is unusual in that the main search provider is not Google, but Yandex. Objective: This study had two main objectives: first, to validate Yandex search patterns against those provided by Google, and second, to test this method's adequacy for investigating online interest in a 2010 national debate over Russian illicit drug policy. We hoped to learn what search patterns and specific search terms could reveal about the relative importance and geographic distribution of interest in this debate. Methods: A national drug debate, centering on the anti-drug campaigner Egor Bychkov, was one of the main Russian domestic news events of 2010. Public interest in this episode was accompanied by increased Internet search activity. First, we measured the search patterns for 13 search terms related to the Bychkov episode and concurrent domestic events by extracting data from Google Insights for Search (GIFS) and Yandex WordStat (YaW). We conducted Spearman rank correlations of the GIFS and YaW search data series. Second, we coded all 420 primary posts from Bychkov's personal blog between March 2010 and March 2012 to identify the main themes. Third, we compared GIFS and Yandex policies concerning the public release of search volume data. Finally, we established the relationship between salient drug issues and the Bychkov episode. Results: We found a consistent pattern of strong to moderate positive correlations between Google and Yandex for the terms "Egor Bychkov" (r_s = 0.88, P < .001), "Bychkov" (r_s = 0.78, P < .001) and "Khimki" (r_s = 0.92, P < .001). Peak search volumes for the Bychkov episode were comparable to other prominent domestic political events during 2010. Monthly search counts were 146,689 for "Bychkov" and
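
    The Spearman rank correlation used to compare the two providers' series can be computed directly from ranks. A minimal sketch, with made-up weekly volumes standing in for the GIFS and YaW data:

    ```python
    # Spearman rank correlation of two search-volume series.
    # The volumes below are hypothetical, not the study's data.

    def ranks(values):
        """Average 1-based ranks, handling ties."""
        order = sorted(range(len(values)), key=lambda i: values[i])
        r = [0.0] * len(values)
        i = 0
        while i < len(order):
            j = i
            while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
                j += 1
            avg = (i + j) / 2 + 1  # average of the tied 1-based positions
            for k in range(i, j + 1):
                r[order[k]] = avg
            i = j + 1
        return r

    def spearman(x, y):
        """Pearson correlation of the rank vectors."""
        rx, ry = ranks(x), ranks(y)
        n = len(x)
        mx, my = sum(rx) / n, sum(ry) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
        sx = sum((a - mx) ** 2 for a in rx) ** 0.5
        sy = sum((b - my) ** 2 for b in ry) ** 0.5
        return cov / (sx * sy)

    google = [120, 340, 2900, 800, 150, 90]    # hypothetical weekly volumes
    yandex = [200, 500, 4100, 1300, 160, 260]  # hypothetical weekly volumes

    print(round(spearman(google, yandex), 3))
    ```

    Because only ranks enter the statistic, the two providers' differing absolute volumes and normalizations do not affect the comparison, which is why a rank correlation suits this validation task.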

  11. Display symmetry affects positional specificity in same-different judgment of pairs of novel visual patterns.

    PubMed

    Dill, M; Fahle, M

    1999-11-01

    Deciding whether a novel visual pattern is the same as or different from a previously seen reference is easier if both stimuli are presented to the same rather than to different locations in the field of view (Foster & Kahn (1985). Biological Cybernetics, 51, 305-312; Dill & Fahle (1998). Perception and Psychophysics, 60, 65-81). We investigated whether pattern symmetry interacts with the effect of translation. Patterns were small dot-clouds which could be mirror-symmetric or asymmetric. Translations were displacements of the visual pattern symmetrically across the fovea, either left-right or above-below. We found that same-different discriminations were worse (less accurate and slower) for translated patterns, to an extent which in general was not influenced by pattern symmetry, or pattern orientation, or direction of displacement. However, if the displaced pattern was a mirror image of the original one (along the trajectory of the displacement), then performance was largely invariant to translation. Both positional specificity and its reduction in symmetric displays may be explained by location-specific pre-processing of the visual input.

  12. Visualization of roaming client/server connection patterns during a wirelessly enabled disaster response drill.

    PubMed

    Calvitti, Alan; Lenert, Leslie A; Brown, Steven W

    2006-01-01

    Assessing how well a multiple-client server system is functioning is a difficult task. In this poster we present visualization tools for such assessments. Arranged on a timeline, UDP client connection events are point-like, while TCP client events form intervals. Both reveal informative patterns and correlations. For the TCP intervals, comparing two visualization schemes on the same timeline yields additional insights.

  13. Job Search Patterns of College Graduates: The Role of Social Capital

    ERIC Educational Resources Information Center

    Coonfield, Emily S.

    2012-01-01

    This dissertation addresses job search patterns of college graduates and the implications of social capital by race and class. The purpose of this study is to explore (1) how the job search transpires for recent college graduates, (2) how potential social networks in a higher educational context, like KU, may make a difference for students with…

  14. Use of a twin dataset to identify AMD-related visual patterns controlled by genetic factors

    NASA Astrophysics Data System (ADS)

    Quellec, Gwénolé; Abràmoff, Michael D.; Russell, Stephen R.

    2010-03-01

    The mapping of genotype to the phenotype of age-related macular degeneration (AMD) is expected to improve the diagnosis and treatment of the disease in the near future. In this study, we focused on the first step to discover this mapping: we identified visual patterns related to AMD which seem to be controlled by genetic factors, without explicitly relating them to the genes. For this purpose, we used a dataset of eye fundus photographs from 74 twin pairs, either monozygotic twins, who have the same genotype, or dizygotic twins, whose genes responsible for AMD are less likely to be identical. If we are able to differentiate monozygotic twins from dizygotic twins, based on a given visual pattern, then this pattern is likely to be controlled by genetic factors. The main visible consequence of AMD is the appearance of drusen between the retinal pigment epithelium and Bruch's membrane. We developed two automated drusen detectors based on the wavelet transform: a shape-based detector for hard drusen, and a texture- and color-based detector for soft drusen. Forty visual features were evaluated at the location of the automatically detected drusen. These features characterize the texture, the shape, the color, the spatial distribution, or the amount of drusen. A distance measure between twin pairs was defined for each visual feature; a smaller distance should be measured between monozygotic twins for visual features controlled by genetic factors. The predictions based on several visual features (75.7% accuracy) are comparable to or better than the predictions of human experts.

  15. Using multidimensional scaling to quantify similarity in visual search and beyond.

    PubMed

    Hout, Michael C; Godwin, Hayward J; Fitzsimmons, Gemma; Robbins, Arryn; Menneer, Tamaryn; Goldinger, Stephen D

    2016-01-01

    Visual search is one of the most widely studied topics in vision science, both as an independent topic of interest, and as a tool for studying attention and visual cognition. A wide literature exists that seeks to understand how people find things under varying conditions of difficulty and complexity, and in situations ranging from the mundane (e.g., looking for one's keys) to those with significant societal importance (e.g., baggage or medical screening). A primary determinant of the ease and probability of success during search is the set of similarity relationships that exist in the search environment, such as the similarity between the background and the target, or the likeness of the non-targets to one another. A sense of similarity is often intuitive, but it is seldom quantified directly. This presents a problem in that similarity relationships are imprecisely specified, limiting the capacity of the researcher to adequately examine their influence. In this article, we present a novel approach to overcoming this problem that combines multidimensional scaling (MDS) analyses with behavioral and eye-tracking measurements. We propose a method whereby MDS can be repurposed to successfully quantify the similarity of experimental stimuli, thereby opening up theoretical questions in visual search and attention that cannot currently be addressed. These quantifications, in conjunction with behavioral and oculomotor measures, allow for critical observations about how similarity affects performance, information selection, and information processing. We provide a demonstration and tutorial of the approach, identify documented examples of its use, discuss how complementary computer vision methods could also be adopted, and close with a discussion of potential avenues for future application of this technique. PMID:26494381
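
    As a rough illustration of how MDS turns pairwise dissimilarity judgments into a quantified similarity structure, the sketch below fits a 2-D embedding by gradient descent on raw stress. It is a toy stand-in, not the authors' procedure, and the dissimilarity matrix is invented.

    ```python
    # Metric MDS by gradient descent on raw stress: place each stimulus in
    # 2-D so inter-point distances approximate the rated dissimilarities.
    import math
    import random

    def mds(d, dims=2, iters=2000, lr=0.01, seed=0):
        rng = random.Random(seed)
        n = len(d)
        x = [[rng.uniform(-1, 1) for _ in range(dims)] for _ in range(n)]
        for _ in range(iters):
            for i in range(n):
                for j in range(n):
                    if i == j:
                        continue
                    diff = [a - b for a, b in zip(x[i], x[j])]
                    dist = math.sqrt(sum(c * c for c in diff)) or 1e-9
                    # Move point i toward/away from j until the embedded
                    # distance matches the rated dissimilarity d[i][j].
                    g = (dist - d[i][j]) / dist
                    for k in range(dims):
                        x[i][k] -= lr * g * diff[k]
        return x

    # Toy dissimilarities: stimuli 0 and 1 are similar, 2 is far from both.
    d = [[0.0, 1.0, 3.0],
         [1.0, 0.0, 3.0],
         [3.0, 3.0, 0.0]]
    pts = mds(d)

    def dist(p, q):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

    print(round(dist(pts[0], pts[1]), 2), round(dist(pts[0], pts[2]), 2))
    ```

    The recovered coordinates (up to rotation and reflection) give each stimulus a position whose distances to the others can then serve as the quantified similarity predictors in a search experiment.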

  16. The effects of anxiety and strategic planning on visual search behaviour.

    PubMed

    Moran, Aidan; Byrne, Alison; McGlade, Nicola

    2002-03-01

    The past decade has witnessed increased interest in the visual search behaviour of athletes. Little is known, however, about the relationship between anxiety and eye movements in sport performers or about the extent to which athletes' planned and actual visual search strategies correspond. To address these issues, we conducted two studies. In Study 1, eight expert female gymnasts were presented with three digital slides of a model performing a skill that is known to be anxiety-provoking in this sport--namely, the 'back flip' on the beam. By varying the height of the beam and the presence or absence of safety mats, the slides differed in the amount of anxiety that they elicited vicariously in the viewer. In the study, the gymnasts were asked to imagine themselves in the position of the depicted model and to describe the anxiety that they felt. As they viewed the slides, their eye movements were recorded. As predicted, anxiety was associated with an increase in the number of fixations to peripheral areas. In addition, the more 'threatening' slides elicited significantly more fixations than the less feared images. In Study 2, the plans of 15 equestrian performers (5 expert, 5 intermediate and 5 novice) were elicited as they engaged in a virtual 'walk' around a computerized show-jumping course. Contrary to expectations, the congruence between intended and actual search behaviour was not significantly greater for expert riders than for the less skilled groups. Also, the fact that the top riders allocated more fixations to the fences than the less skilled performers challenged the prediction that expertise would be associated with economy of visual search. Finally, as expected, the expert riders were significantly less dependent on the overall 'course plan' than the intermediate and novice equestrian performers when inspecting the fences. PMID:11999478

  17. Category-based guidance of spatial attention during visual search for feature conjunctions.

    PubMed

    Nako, Rebecca; Grubert, Anna; Eimer, Martin

    2016-10-01

    The question whether alphanumerical category is involved in the control of attentional target selection during visual search remains a contentious issue. We tested whether category-based attentional mechanisms would guide the allocation of attention under conditions where targets were defined by a combination of alphanumerical category and a basic visual feature, and search displays could contain both targets and partially matching distractor objects. The N2pc component was used as an electrophysiological marker of attentional object selection in tasks where target objects were defined by a conjunction of color and category (Experiment 1) or shape and category (Experiment 2). Some search displays contained the target or a nontarget object that matched either the target color/shape or its category among 3 nonmatching distractors. In other displays, the target and a partially matching nontarget object appeared together. N2pc components were elicited not only by targets and by color- or shape-matching nontargets, but also by category-matching nontarget objects, even on trials where a target was present in the same display. On these trials, the summed N2pc components to the 2 types of partially matching nontargets were initially equal in size to the target N2pc, suggesting that attention was allocated simultaneously and independently to all objects with target-matching features during the early phase of attentional processing. Results demonstrate that alphanumerical category is a genuine guiding feature that can operate in parallel with color or shape information to control the deployment of attention during visual search. PMID:27213833

  19. Gender Differences in Patterns of Searching the Web

    ERIC Educational Resources Information Center

    Roy, Marguerite; Chi, Michelene T. H.

    2003-01-01

    There has been a national call for increased use of computers and technology in schools. Currently, however, little is known about how students use and learn from these technologies. This study explores how eighth-grade students use the Web to search for, browse, and find information in response to a specific prompt (how mosquitoes find their…

  20. Optimization of boiling water reactor control rod patterns using linear search

    SciTech Connect

    Kiguchi, T.; Doi, K.; Fikuzaki, T.; Frogner, B.; Lin, C.; Long, A.B.

    1984-10-01

    A computer program for finding the optimal control rod pattern has been developed. The program is able to find a control rod pattern where the resulting power distribution is optimal in the sense that it is the closest to the desired power distribution while satisfying all operational constraints. The search procedure consists of iterative application of two steps: sensitivity analyses of local power and thermal margins, using a three-dimensional reactor simulator, to construct a simplified prediction model; and a linear search for the optimal control rod pattern with the simplified model. The optimal control rod pattern is found along the direction where the performance index gradient is the steepest. This program has been verified to find the optimal control rod pattern through simulations using operational data from the Oyster Creek Reactor.
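
    The iterative "steepest direction plus linear search" loop can be illustrated with a toy quadratic performance index. The box constraints, candidate step sizes, and target distribution below are invented stand-ins for the operational limits and three-dimensional simulator of the actual program.

    ```python
    # Steepest-descent with a linear search, minimizing the squared distance
    # between a predicted and a desired power distribution under box
    # constraints. All numbers are illustrative.

    def performance_index(x, desired):
        """Squared distance between current and desired distributions."""
        return sum((a - b) ** 2 for a, b in zip(x, desired))

    def optimize(x, desired, lo=0.0, hi=2.0, iters=50):
        for _ in range(iters):
            # Gradient of the performance index (steepest-ascent direction).
            grad = [2 * (a - b) for a, b in zip(x, desired)]
            # Linear search: try a few step sizes along -grad, keep the best
            # candidate that stays inside the constraint box [lo, hi].
            best = x
            for step in (1.0, 0.5, 0.25, 0.1):
                cand = [min(hi, max(lo, a - step * g)) for a, g in zip(x, grad)]
                if performance_index(cand, desired) < performance_index(best, desired):
                    best = cand
            x = best
        return x

    desired = [1.0, 1.2, 0.9, 1.1]   # desired power distribution (made up)
    x0 = [0.5, 0.5, 0.5, 0.5]        # initial rod-pattern-driven distribution
    print([round(v, 3) for v in optimize(x0, desired)])
    ```

    In the real program the candidate evaluations would come from the simplified prediction model rather than an analytic objective, but the structure of the loop is the same.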

  1. Identification of the ideal clutter metric to predict time dependence of human visual search

    NASA Astrophysics Data System (ADS)

    Cartier, Joan F.; Hsu, David H.

    1995-05-01

    The Army Night Vision and Electronic Sensors Directorate (NVESD) has recently performed a human perception experiment in which eye-tracker measurements were made on trained military observers searching for targets in infrared images. These data offered an important opportunity to evaluate a new technique for search modeling. Following the approach taken by Jeff Nicoll, this model treats search as a random walk in which the observers are in one of two states until they quit: they are either examining a point of interest, or wandering around looking for one. When wandering they skip rapidly from point to point. When examining they move more slowly, reflecting the fact that target discrimination requires additional thought processes. In this paper we simulate the random walk, using a clutter metric to assign relative attractiveness to the points of interest within the image that compete for the observer's attention. The NVESD data indicate that a number of standard clutter metrics are good estimators of the apportionment of the observer's time between wandering and examining. Conversely, the apportionment of observer time spent wandering and examining could be used to reverse-engineer the ideal clutter metric that most accurately describes the behavior of the group of observers. It may be possible to use this technique to design the optimal clutter metric for predicting performance of visual search.
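
    A two-state search of this kind is straightforward to simulate. The sketch below is an interpretation, not the model from the paper: a hypothetical clutter-style attractiveness score governs both where the observer jumps next and how likely a point is to be examined rather than skipped, and the dwell times and detection probability are invented.

    ```python
    # Two-state (wander/examine) random-walk search over points of interest
    # weighted by a hypothetical clutter-metric attractiveness score.
    import random

    def simulate_search(attractiveness, target, wander_dwell=0.05,
                        examine_dwell=0.3, p_detect=0.9, max_steps=500, seed=1):
        rng = random.Random(seed)
        total = sum(attractiveness)
        weights = [a / total for a in attractiveness]
        t_wander = t_examine = 0.0
        for _ in range(max_steps):
            # Wander: skip rapidly to the next point of interest, drawn in
            # proportion to its attractiveness.
            poi = rng.choices(range(len(weights)), weights=weights)[0]
            t_wander += wander_dwell
            # Examine: more attractive points are inspected more often.
            if rng.random() < weights[poi]:
                t_examine += examine_dwell
                if poi == target and rng.random() < p_detect:
                    return t_wander, t_examine, True
        return t_wander, t_examine, False

    tw, te, found = simulate_search([0.1, 0.3, 0.2, 0.4], target=3)
    print(found, round(tw, 2), round(te, 2))
    ```

    Comparing the simulated split of time between wandering and examining against eye-tracker data is what would let one score candidate clutter metrics, in the spirit of the reverse-engineering idea above.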

  2. The impact of clinical indications on visual search behaviour in skeletal radiographs

    NASA Astrophysics Data System (ADS)

    Rutledge, A.; McEntee, M. F.; Rainford, L.; O'Grady, M.; McCarthy, K.; Butler, M. L.

    2011-03-01

    The hazards associated with ionizing radiation are well documented in the literature, and the need to justify X-ray examinations has therefore come to the forefront of the radiation safety debate in recent years [1]. International legislation states that the referrer is responsible for providing sufficient clinical information to enable the justification of the medical exposure. Clinical indications are a set of systematically developed statements to assist in accurate diagnosis and appropriate patient management [2]. In this study, the impact of clinical indications on fracture detection in musculoskeletal radiographs is analyzed. A group of radiographers (n=6) interpreted musculoskeletal radiology cases (n=33) with and without clinical indications. Radiographic images were selected to represent common trauma presentations of the extremities and pelvis. Detection of fractures was measured using ROC methodology. An eye-tracking device was employed to record the radiographers' search behaviour by analysing distinct fixation points and search patterns, providing greater insight into the influence of clinical indications on observers' interpretation of radiographs. The influence of clinical information on fracture detection and search patterns was assessed. The findings of this study demonstrate that the inclusion of clinical indications results in impressionable search behaviour. Differences in eye-tracking parameters were also noted. This study also attempts to uncover fundamental observer search strategies and behaviour with and without clinical indications, thus providing a greater understanding of and insight into the image interpretation process. The results suggest that the availability of adequate clinical data should be emphasized when interpreting trauma radiographs.

  3. Visualizing Nanoscopic Topography and Patterns in Freely Standing Thin Films

    NASA Astrophysics Data System (ADS)

    Sharma, Vivek; Zhang, Yiran; Yilixiati, Subinuer

    Thin liquid films containing micelles, nanoparticles, polyelectrolyte-surfactant complexes and smectic liquid crystals undergo thinning in a discontinuous, step-wise fashion. The discontinuous jumps in thickness are often characterized by quantifying changes in the intensity of reflected monochromatic light, modulated by thin film interference from a region of interest. Stratifying thin films exhibit a mosaic pattern in reflected white light microscopy, attributed to the coexistence of domains with various thicknesses, separated by steps. Using Interferometry Digital Imaging Optical Microscopy (IDIOM) protocols developed in the course of this study, we spatially resolve for the first time the landscape of stratifying freely standing thin films. We distinguish nanoscopic rims, mesas and craters, and follow their emergence and growth. In particular, for thin films containing micelles of sodium dodecyl sulfate (SDS), these topological features involve discontinuous thickness transitions with concentration-dependent steps of 5-25 nm. These non-flat features result from the oscillatory, periodic, supramolecular structural forces that emerge in confined fluids through a complex coupling of hydrodynamic and thermodynamic effects at the nanoscale.

  4. Patterned-string tasks: relation between fine motor skills and visual-spatial abilities in parrots.

    PubMed

    Krasheninnikova, Anastasia

    2013-01-01

    String-pulling and patterned-string tasks are often used to analyse perceptual and cognitive abilities in animals. In addition, the paradigm can be used to test the interrelation between visual-spatial and motor performance. Two Australian parrot species, the galah (Eolophus roseicapilla) and the cockatiel (Nymphicus hollandicus), forage on the ground, but only the galah uses its feet to manipulate food. I used a set of string pulling and patterned-string tasks to test whether usage of the feet during foraging is a prerequisite for solving the vertical string pulling problem. Indeed, the two species used techniques that clearly differed in the extent of beak-foot coordination but did not differ in terms of their success in solving the string pulling task. However, when the visual-spatial skills of the subjects were tested, the galahs outperformed the cockatiels. This supports the hypothesis that the fine motor skills needed for advanced beak-foot coordination may be interrelated with certain visual-spatial abilities needed for solving patterned-string tasks. This pattern was also found within each of the two species on the individual level: higher motor abilities positively correlated with performance in patterned-string tasks. This is the first evidence of an interrelation between visual-spatial and motor abilities in non-mammalian animals.

  5. A Visualization System for Space-Time and Multivariate Patterns (VIS-STAMP)

    PubMed Central

    Guo, Diansheng; Chen, Jin; MacEachren, Alan M.; Liao, Ke

    2011-01-01

    The research reported here integrates computational, visual, and cartographic methods to develop a geovisual analytic approach for exploring and understanding spatio-temporal and multivariate patterns. The developed methodology and tools can help analysts investigate complex patterns across multivariate, spatial, and temporal dimensions via clustering, sorting, and visualization. Specifically, the approach involves a self-organizing map, a parallel coordinate plot, several forms of reorderable matrices (including several ordering methods), a geographic small multiple display, and a 2-dimensional cartographic color design method. The coupling among these methods leverages their independent strengths and facilitates a visual exploration of patterns that are difficult to discover otherwise. The visualization system we developed supports overview of complex patterns and, through a variety of interactions, enables users to focus on specific patterns and examine detailed views. We demonstrate the system with an application to the IEEE InfoVis 2005 Contest data set, which contains time-varying, geographically referenced, and multivariate data for technology companies in the US. PMID:17073369

  6. Color names, color categories, and color-cued visual search: sometimes, color perception is not categorical.

    PubMed

    Brown, Angela M; Lindsey, Delwin T; Guckes, Kevin M

    2011-01-01

    The relation between colors and their names is a classic case study for investigating the Sapir-Whorf hypothesis that categorical perception is imposed on perception by language. Here, we investigate the Sapir-Whorf prediction that visual search for a green target presented among blue distractors (or vice versa) should be faster than search for a green target presented among distractors of a different color of green (or for a blue target among different blue distractors). A. L. Gilbert, T. Regier, P. Kay, and R. B. Ivry (2006) reported that this Sapir-Whorf effect is restricted to the right visual field (RVF), because the major brain language centers are in the left cerebral hemisphere. We found no categorical effect at the Green-Blue color boundary and no categorical effect restricted to the RVF. Scaling of perceived color differences by Maximum Likelihood Difference Scaling (MLDS) also showed no categorical effect, including no effect specific to the RVF. Two models fit the data: a color difference model based on MLDS and a standard opponent-colors model of color discrimination based on the spectral sensitivities of the cones. Neither of these models nor any of our data suggested categorical perception of colors at the Green-Blue boundary, in either visual field.

  7. Fixation and saliency during search of natural scenes: the case of visual agnosia.

    PubMed

    Foulsham, Tom; Barton, Jason J S; Kingstone, Alan; Dewhurst, Richard; Underwood, Geoffrey

    2009-07-01

    Models of eye movement control in natural scenes often distinguish between stimulus-driven processes (which guide the eyes to visually salient regions) and those based on task and object knowledge (which depend on expectations or identification of objects and scene gist). In the present investigation, the eye movements of a patient with visual agnosia were recorded while she searched for objects within photographs of natural scenes and compared to those made by students and age-matched controls. Agnosia is assumed to disrupt the top-down knowledge available in this task, and so may increase the reliance on bottom-up cues. The patient's deficit in object recognition was seen in poor search performance and inefficient scanning. The low-level saliency of target objects had an effect on responses in visual agnosia, and the most salient region in the scene was more likely to be fixated by the patient than by controls. An analysis of model-predicted saliency at fixation locations indicated a closer match between fixations and low-level saliency in agnosia than in controls. These findings are discussed in relation to saliency-map models and the balance between high and low-level factors in eye guidance.

  8. Building ensemble representations: How the shape of preceding distractor distributions affects visual search.

    PubMed

    Chetverikov, Andrey; Campana, Gianluca; Kristjánsson, Árni

    2016-08-01

    Perception allows us to extract information about regularities in the environment. Observers can quickly determine summary statistics of a group of objects and detect outliers. The existing body of research has, however, not revealed how such ensemble representations develop over time. Moreover, the correspondence between the physical distribution of features in the external world and their potential internal representation as a probability density function (PDF) by the visual system is still unknown. Here, for the first time, we demonstrate that such internal PDFs are built during visual search and show how they can be assessed with repetition and role-reversal effects. Using singleton search for an oddly oriented target line among differently oriented distractors (a priming of pop-out paradigm), we test how different properties of previously observed distractor distributions (mean, variability, and shape) influence search times. Our results indicate that observers learn properties of distractor distributions over and above mean and variance; in fact, response times also depend on the shape of the preceding distractor distribution. Response times decrease as a function of target distance from the mean of preceding Gaussian distractor distributions, and the decrease is steeper when preceding distributions have small standard deviations. When preceding distributions are uniform, however, this decrease in response times can be described by a two-piece function corresponding to the uniform distribution PDF. Moreover, following skewed distributions, the response time function is skewed in accordance with the skew of the distribution. Indeed, internal PDFs seem to be specifically tuned to the observed feature distribution. PMID:27232163

  10. The effects of visual realism on search tasks in mixed reality simulation.

    PubMed

    Lee, Cha; Rincon, Gustavo A; Meyer, Greg; Höllerer, Tobias; Bowman, Doug A

    2013-04-01

    In this paper, we investigate the validity of Mixed Reality (MR) Simulation by conducting an experiment studying the effects of the visual realism of the simulated environment on various search tasks in Augmented Reality (AR). MR Simulation is a practical approach to conducting controlled and repeatable user experiments in MR, including AR. This approach uses a high-fidelity Virtual Reality (VR) display system to simulate a wide range of equal or lower fidelity displays from the MR continuum, for the express purpose of conducting user experiments. For the experiment, we created three virtual models of a real-world location, each with a different perceived level of visual realism. We designed and executed an AR experiment using the real-world location and repeated the experiment within VR using the three virtual models we created. The experiment looked into how fast users could search for both physical and virtual information that was present in the scene. Our experiment demonstrates the usefulness of MR Simulation and provides early evidence for the validity of MR Simulation with respect to AR search tasks performed in immersive VR.

  11. Visual illusions in predator-prey interactions: birds find moving patterned prey harder to catch.

    PubMed

    Hämäläinen, Liisa; Valkonen, Janne; Mappes, Johanna; Rojas, Bibiana

    2015-09-01

    Several antipredator strategies are related to prey colouration. Some colour patterns can create visual illusions during movement (such as motion dazzle), making it difficult for a predator to capture moving prey successfully. Experimental evidence about motion dazzle, however, is still very scarce and comes only from studies using human predators capturing moving prey items in computer games. We tested a motion dazzle effect using for the first time natural predators (wild great tits, Parus major). We used artificial prey items bearing three different colour patterns: uniform brown (control), black with elongated yellow pattern and black with interrupted yellow pattern. The last two resembled colour patterns of the aposematic, polymorphic dart-poison frog Dendrobates tinctorius. We specifically tested whether an elongated colour pattern could create visual illusions when combined with straight movement. Our results, however, do not support this hypothesis. We found no differences in the number of successful attacks towards prey items with different patterns (elongated/interrupted) moving linearly. Nevertheless, both prey types were significantly more difficult to catch compared to the uniform brown prey, indicating that both colour patterns could provide some benefit for a moving individual. Surprisingly, no effect of background (complex vs. plain) was found. This is the first experiment with moving prey showing that some colour patterns can affect avian predators' ability to capture moving prey, but the mechanisms lowering the capture rate are still poorly understood.

  12. Inhibitory guidance in visual search: the case of movement-form conjunctions.

    PubMed

    Dent, Kevin; Allen, Harriet A; Braithwaite, Jason J; Humphreys, Glyn W

    2012-02-01

    We used a probe-dot procedure to examine the roles of excitatory attentional guidance and distractor suppression in search for movement-form conjunctions. Participants in Experiment 1 completed a conjunction (moving X amongst moving Os and static Xs) and two single-feature (moving X amongst moving Os, and static X amongst static Os) conditions. "Active" participants searched for the target, whereas "passive" participants viewed the displays without responding. Subsequently, both groups located (left or right) a probe dot appearing in either an occupied or an unoccupied location. In the conjunction condition, the active group located probes presented on static distractors more slowly than probes presented on moving distractors, reversing the direction of the difference found within the passive group. This disadvantage for probes on static items was much stronger in conjunction than in single-feature search. The same pattern of results was replicated in Experiment 2, which used a go/no-go procedure. Experiment 3 extended the go/no-go procedure to the case of search for a static target and revealed increased probe localisation times as a consequence of active search, primarily for probes on moving distractor items. The results demonstrated attentional guidance by inhibition of distractors in conjunction search. PMID:22095256

  13. Ideal and visual-search observers: accounting for anatomical noise in search tasks with planar nuclear imaging

    NASA Astrophysics Data System (ADS)

    Sen, Anando; Gifford, Howard C.

    2015-03-01

    Model observers have frequently been used for hardware optimization of imaging systems. For model observers to reliably mimic human performance, it is important to account for the sources of variation in the images. Detection-localization tasks are complicated by anatomical noise present in the images. Several scanning observers have been proposed for such tasks. The most popular of these, the channelized Hotelling observer (CHO), incorporates anatomical variations through covariance matrices. We propose the visual-search (VS) observer as an alternative to the CHO to account for anatomical noise. The VS observer is a two-step process which first identifies suspicious tumor candidates and then performs a detailed analysis on them. The identification of suspicious candidates (search) implicitly accounts for anatomical noise. In this study we present a comparison of these two observers with human observers. The application considered is collimator optimization for planar nuclear imaging. Both observers show similar trends in performance, with the VS observer slightly closer to human performance.
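    As a rough sketch of the CHO side of this comparison (toy channel outputs and a two-channel closed-form inverse chosen purely for illustration; the study's actual channels and covariance estimation are not reproduced here), the Hotelling template is w = S⁻¹(m̄_s − m̄_n) computed over channel outputs, and the decision variable is the template applied to a test image's channel outputs:

```python
def cho_template(signal_samples, noise_samples):
    """Hotelling template w = S^-1 (m_s - m_n) for two channels (toy version)."""
    n_ch = 2
    mean = lambda samples, k: sum(s[k] for s in samples) / len(samples)
    m_s = [mean(signal_samples, k) for k in range(n_ch)]
    m_n = [mean(noise_samples, k) for k in range(n_ch)]
    both = signal_samples + noise_samples
    m = [mean(both, k) for k in range(n_ch)]
    # Pooled 2x2 channel covariance, inverted in closed form.
    S = [[sum((s[i] - m[i]) * (s[j] - m[j]) for s in both) / (len(both) - 1)
          for j in range(n_ch)] for i in range(n_ch)]
    det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
    inv = [[S[1][1] / det, -S[0][1] / det], [-S[1][0] / det, S[0][0] / det]]
    d = [m_s[k] - m_n[k] for k in range(n_ch)]
    return [inv[k][0] * d[0] + inv[k][1] * d[1] for k in range(n_ch)]

def rating(w, v):
    """Decision variable: the template applied to one image's channel outputs."""
    return w[0] * v[0] + w[1] * v[1]

# Invented channel outputs: channel 0 carries the signal, channel 1 is noise-like.
signal = [[2.0, 1.0], [2.2, 0.9], [1.8, 1.1]]
noise = [[0.0, 1.0], [0.2, 0.9], [-0.2, 1.1]]
w = cho_template(signal, noise)
```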

  14. Visual Learning Induces Changes in Resting-State fMRI Multivariate Pattern of Information.

    PubMed

    Guidotti, Roberto; Del Gratta, Cosimo; Baldassarre, Antonello; Romani, Gian Luca; Corbetta, Maurizio

    2015-07-01

    When measured with functional magnetic resonance imaging (fMRI) in the resting state (R-fMRI), spontaneous activity is correlated between brain regions that are anatomically and functionally related. Learning and/or task performance can induce modulation of the resting synchronization between brain regions. Moreover, at the neuronal level spontaneous brain activity can replay patterns evoked by a previously presented stimulus. Here we test whether visual learning/task performance can induce a change in the patterns of coded information in R-fMRI signals consistent with a role of spontaneous activity in representing task-relevant information. Human subjects underwent R-fMRI before and after perceptual learning on a novel visual shape orientation discrimination task. Task-evoked fMRI patterns to trained versus novel stimuli were recorded after learning was completed, and before the second R-fMRI session. Using multivariate pattern analysis on task-evoked signals, we found patterns in several cortical regions, as follows: visual cortex, V3/V3A/V7; within the default mode network, precuneus, and inferior parietal lobule; and, within the dorsal attention network, intraparietal sulcus, which discriminated between trained and novel visual stimuli. The accuracy of classification was strongly correlated with behavioral performance. Next, we measured multivariate patterns in R-fMRI signals before and after learning. The frequency and similarity of resting states representing the task/visual stimuli states increased post-learning in the same cortical regions recruited by the task. These findings support a representational role of spontaneous brain activity. PMID:26156982

  15. Searching for patterns in remote sensing image databases using neural networks

    NASA Technical Reports Server (NTRS)

    Paola, Justin D.; Schowengerdt, Robert A.

    1995-01-01

    We have investigated a method, based on a successful neural network multispectral image classification system, of searching for single patterns in remote sensing databases. While defining the pattern to search for and the feature to be used for that search (spectral, spatial, temporal, etc.) is challenging, a more difficult task is selecting competing patterns to train against the desired pattern. Schemes for competing pattern selection, including random selection and human interpreted selection, are discussed in the context of an example detection of dense urban areas in Landsat Thematic Mapper imagery. When applying the search to multiple images, a simple normalization method can alleviate the problem of inconsistent image calibration. Another potential problem, that of highly compressed data, was found to have a minimal effect on the ability to detect the desired pattern. The neural network algorithm has been implemented using the PVM (Parallel Virtual Machine) library and nearly-optimal speedups have been obtained that help alleviate the long process of searching through imagery.

  16. Visual search strategies of baseball batters: eye movements during the preparatory phase of batting.

    PubMed

    Kato, Takaaki; Fukuda, Tadahiko

    2002-04-01

    The aim of this study was to analyze the visual search strategies of baseball batters during the viewing period of the pitcher's motion. The 18 subjects were 9 experts and 9 novices. While subjects viewed a videotape which, from a right-handed batter's perspective, showed a pitcher throwing a series of 10 types of pitches, their eye movements were measured and analyzed. Novices moved their eyes faster than experts, and their viewing points were distributed over a wider area than those of the experts. Experts viewed the pitching arm for longer than novices did during the last two pitching phases. These results indicate that experts set their visual pivot on the pitcher's elbow and used peripheral vision to evaluate the pitcher's motion and the ball trajectory.

  17. The interplay of attention and consciousness in visual search, attentional blink and working memory consolidation.

    PubMed

    Raffone, Antonino; Srinivasan, Narayanan; van Leeuwen, Cees

    2014-05-01

    Despite the acknowledged relationship between consciousness and attention, theories of the two have mostly been developed separately. Moreover, these theories have independently attempted to explain phenomena in which both are likely to interact, such as the attentional blink (AB) and working memory (WM) consolidation. Here, we make an effort to bridge the gap between, on the one hand, a theory of consciousness based on the notion of global workspace (GW) and, on the other, a synthesis of theories of visual attention. We offer a theory of attention and consciousness (TAC) that provides a unified neurocognitive account of several phenomena associated with visual search, AB and WM consolidation. TAC assumes multiple processing stages between early visual representation and conscious access, and extends the dynamics of the global neuronal workspace model to a visual attentional workspace (VAW). The VAW is controlled by executive routers, higher-order representations of executive operations in the GW, without the need for explicit saliency or priority maps. TAC leads to newly proposed mechanisms for illusory conjunctions, AB, inattentional blindness and WM capacity, and suggests neural correlates of phenomenal consciousness. Finally, the theory reconciles the all-or-none and graded perspectives on conscious representation. PMID:24639586

  19. On Assisting a Visual-Facial Affect Recognition System with Keyboard-Stroke Pattern Information

    NASA Astrophysics Data System (ADS)

    Stathopoulou, I.-O.; Alepis, E.; Tsihrintzis, G. A.; Virvou, M.

    Towards realizing a multimodal affect recognition system, we are considering the advantages of assisting a visual-facial expression recognition system with keyboard-stroke pattern information. Our work is based on the assumption that the visual-facial and keyboard modalities are complementary to each other and that their combination can significantly improve the accuracy of affective user models. Specifically, we present and discuss the development and evaluation process of two corresponding affect recognition subsystems, with emphasis on the recognition of 6 basic emotional states, namely happiness, sadness, surprise, anger, and disgust, as well as the emotion-less state, which we refer to as neutral. We find that emotion recognition by the visual-facial modality can be aided greatly by keyboard-stroke pattern information and that the combination of the two modalities can lead to better results towards building a multimodal affect recognition system.

  20. Nurses' Behaviors and Visual Scanning Patterns May Reduce Patient Identification Errors

    ERIC Educational Resources Information Center

    Marquard, Jenna L.; Henneman, Philip L.; He, Ze; Jo, Junghee; Fisher, Donald L.; Henneman, Elizabeth A.

    2011-01-01

    Patient identification (ID) errors occurring during the medication administration process can be fatal. The aim of this study is to determine whether differences in nurses' behaviors and visual scanning patterns during the medication administration process influence their capacities to identify patient ID errors. Nurse participants (n = 20)…

  1. Patterns of Visual Attention to Faces and Objects in Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    McPartland, James C.; Webb, Sara Jane; Keehn, Brandon; Dawson, Geraldine

    2011-01-01

    This study used eye-tracking to examine visual attention to faces and objects in adolescents with autism spectrum disorder (ASD) and typical peers. Point of gaze was recorded during passive viewing of images of human faces, inverted human faces, monkey faces, three-dimensional curvilinear objects, and two-dimensional geometric patterns.…

  2. Flexibility and Coordination among Acts of Visualization and Analysis in a Pattern Generalization Activity

    ERIC Educational Resources Information Center

    Nilsson, Per; Juter, Kristina

    2011-01-01

    This study aims at exploring processes of flexibility and coordination among acts of visualization and analysis in students' attempt to reach a general formula for a three-dimensional pattern generalizing task. The investigation draws on a case-study analysis of two 15-year-old girls working together on a task in which they are asked to calculate…

  3. Low target prevalence is a stubborn source of errors in visual search tasks

    PubMed Central

    Wolfe, Jeremy M.; Horowitz, Todd S.; Van Wert, Michael J.; Kenner, Naomi M.; Place, Skyler S.; Kibbi, Nour

    2009-01-01

    In visual search tasks, observers look for targets in displays containing distractors. The likelihood that targets will be missed varies with target prevalence, the frequency with which targets are presented across trials. Miss error rates are much higher at low target prevalence (1–2%) than at high prevalence (50%). Unfortunately, low prevalence is characteristic of important search tasks like airport security and medical screening where miss errors are dangerous. A series of experiments shows that this prevalence effect is very robust. In signal detection terms, the prevalence effect can be explained as a criterion shift and not a change in sensitivity. Several efforts to induce observers to adopt a better criterion fail. However, a regime of brief retraining periods with high prevalence and full feedback allows observers to hold a good criterion during periods of low prevalence with no feedback. PMID:17999575
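    The criterion-shift account can be made concrete with standard signal detection arithmetic (the hit and false-alarm rates below are invented for illustration, not the paper's data): sensitivity d′ = z(H) − z(F) stays roughly constant across prevalence, while the criterion c = −(z(H) + z(F))/2 becomes more conservative at low prevalence, which shows up behaviorally as a higher miss rate.

```python
from statistics import NormalDist

def sdt_measures(hit_rate, fa_rate):
    """Return sensitivity (d') and criterion (c) from hit and false-alarm rates."""
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# Illustrative numbers: similar sensitivity in both regimes, but a more
# conservative criterion at low prevalence produces many more misses.
d_high, c_high = sdt_measures(hit_rate=0.95, fa_rate=0.10)
d_low, c_low = sdt_measures(hit_rate=0.70, fa_rate=0.01)
```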

  4. Paying Attention: Being a Naturalist and Searching for Patterns.

    ERIC Educational Resources Information Center

    Weisberg, Saul

    1996-01-01

    Discusses the importance of recognizing patterns in nature to help understand the interactions of living and non-living things. Cautions the student not to lose sight of the details when studying the big picture. Encourages development of the ability to identify local species. Suggest two activities to strengthen observation skills and to help in…

  5. Intelligent technique to search for patterns within images in massive databases.

    PubMed

    Vega, J; Murari, A; Pereira, A; Portas, A; Castro, P

    2008-10-01

    An image retrieval system for JET has been developed. The image database contains the images of the JET high speed visible camera. The system input is a pattern selected inside an image and the output is the group of frames (defined by their discharge numbers and time slices) that show patterns similar to the selected one. This approach is based on morphological pattern recognition and it should be emphasized that the pattern is found independently of its location in the frame. The technique encodes images into characters and, therefore, it transforms the pattern search into a character-matching problem.
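    The encode-then-match idea can be sketched as follows (a deliberately simplified stand-in: the JET system's morphological encoding and similarity matching are more involved, and the function names, tile size, and four-letter alphabet here are invented). Tiles of the image are quantized into characters, so that looking for a pattern reduces to string matching:

```python
def encode_image(pixels, block=2, alphabet="abcd"):
    """Quantize the mean intensity of each block x block tile to one character."""
    chars = []
    for r in range(0, len(pixels), block):
        for c in range(0, len(pixels[0]), block):
            tile = [pixels[r + i][c + j] for i in range(block) for j in range(block)]
            mean = sum(tile) / len(tile)
            level = min(int(mean * len(alphabet) / 256), len(alphabet) - 1)
            chars.append(alphabet[level])
    return "".join(chars)

image = [[0, 0, 255, 255],
         [0, 0, 255, 255],
         [128, 128, 64, 64],
         [128, 128, 64, 64]]
encoded = encode_image(image)                     # one character per tile
pattern = encode_image([[255, 255], [255, 255]])  # encoded query pattern
found = pattern in encoded                        # character matching, not pixels
```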

  6. Incidental learning speeds visual search by lowering response thresholds, not by improving efficiency: Evidence from eye movements

    PubMed Central

    Hout, Michael C.; Goldinger, Stephen D.

    2011-01-01

    When observers search for a target object, they incidentally learn the identities and locations of “background” objects in the same display. This learning can facilitate search performance, eliciting faster reaction times for repeated displays (Hout & Goldinger, 2010). Despite these findings, visual search has been successfully modeled using architectures that maintain no history of attentional deployments; they are amnesic (e.g., Guided Search Theory; Wolfe, 2007). In the current study, we asked two questions: 1) under what conditions does such incidental learning occur? And 2) what does viewing behavior reveal about the efficiency of attentional deployments over time? In two experiments, we tracked eye movements during repeated visual search, and we tested incidental memory for repeated non-target objects. Across conditions, the consistency of search sets and spatial layouts were manipulated to assess their respective contributions to learning. Using viewing behavior, we contrasted three potential accounts for faster searching with experience. The results indicate that learning does not result in faster object identification or greater search efficiency. Instead, familiar search arrays appear to allow faster resolution of search decisions, whether targets are present or absent. PMID:21574743

  7. Looking back at the stare-in-the-crowd effect: staring eyes do not capture attention in visual search.

    PubMed

    Cooper, Robbie M; Law, Anna S; Langton, Stephen R H

    2013-05-17

    The stare-in-the crowd effect refers to the finding that a visual search for a target of staring eyes among averted-eyes distracters is more efficient than the search for an averted-eyes target among staring distracters. This finding could indicate that staring eyes are prioritized in the processing of the search array so that attention is more likely to be directed to their location than to any other. However, visual search is a complex process, which not only depends upon the properties of the target, but also the similarity between the target of the search and the distractor items and between the distractor items themselves. Across five experiments, we show that the search asymmetry diagnostic of the stare-in-the-crowd effect is more likely to be the result of a failure to control for the similarity among distracting items between the two critical search conditions rather than any special attention-grabbing property of staring gazes. Our results suggest that, contrary to results reported in the literature, staring gazes are not prioritized by attention in visual search.

  8. Multimodal neuroimaging evidence linking memory and attention systems during visual search cued by context.

    PubMed

    Kasper, Ryan W; Grafton, Scott T; Eckstein, Miguel P; Giesbrecht, Barry

    2015-03-01

    Visual search can be facilitated by the learning of spatial configurations that predict the location of a target among distractors. Neuropsychological and functional magnetic resonance imaging (fMRI) evidence implicates the medial temporal lobe (MTL) memory system in this contextual cueing effect, and electroencephalography (EEG) studies have identified the involvement of visual cortical regions related to attention. This work investigated two questions: (1) how memory and attention systems are related in contextual cueing; and (2) how these systems are involved in both short- and long-term contextual learning. In one session, EEG and fMRI data were acquired simultaneously in a contextual cueing task. In a second session conducted 1 week later, EEG data were recorded in isolation. The fMRI results revealed MTL contextual modulations that were correlated with short- and long-term behavioral context enhancements and attention-related effects measured with EEG. An fMRI-seeded EEG source analysis revealed that the MTL contributed the most variance to the variability in the attention enhancements measured with EEG. These results support the notion that memory and attention systems interact to facilitate search when spatial context is implicitly learned. PMID:25586959

  9. Autism spectrum disorder, but not amygdala lesions, impairs social attention in visual search.

    PubMed

    Wang, Shuo; Xu, Juan; Jiang, Ming; Zhao, Qi; Hurlemann, Rene; Adolphs, Ralph

    2014-10-01

    People with autism spectrum disorders (ASD) have pervasive impairments in social interactions, a diagnostic component that may have its roots in atypical social motivation and attention. One of the brain structures implicated in the social abnormalities seen in ASD is the amygdala. To further characterize the impairment of people with ASD in social attention, and to explore the possible role of the amygdala, we employed a series of visual search tasks with both social (faces and people with different postures, emotions, ages, and genders) and non-social stimuli (e.g., electronics, food, and utensils). We first conducted trial-wise analyses of fixation properties and elucidated visual search mechanisms. We found that an attentional mechanism of initial orientation could explain the detection advantage of non-social targets. We then zoomed into fixation-wise analyses. We defined target-relevant effects as the difference in the percentage of fixations that fell on target-congruent vs. target-incongruent items in the array. In Experiment 1, we tested 8 high-functioning adults with ASD, 3 adults with focal bilateral amygdala lesions, and 19 controls. Controls rapidly oriented to target-congruent items and showed a strong and sustained preference for fixating them. Strikingly, people with ASD oriented significantly less and more slowly to target-congruent items, an attentional deficit especially with social targets. By contrast, patients with amygdala lesions performed indistinguishably from controls. In Experiment 2, we recruited a different sample of 13 people with ASD and 8 healthy controls, and tested them on the same search arrays but with all array items equalized for low-level saliency. The results replicated those of Experiment 1. In Experiment 3, we recruited 13 people with ASD, 8 healthy controls, 3 amygdala lesion patients and another group of 11 controls and tested them on a simpler array. Here our group effect for ASD strongly diminished and all four subject…

  10. QuasiMotiFinder: protein annotation by searching for evolutionarily conserved motif-like patterns.

    PubMed

    Gutman, Roee; Berezin, Carine; Wollman, Roy; Rosenberg, Yossi; Ben-Tal, Nir

    2005-07-01

    Sequence signature databases such as PROSITE, which include amino acid segments that are indicative of a protein's function, are useful for protein annotation. Lamentably, the annotation is not always accurate. A signature may be falsely detected in a protein that does not carry out the associated function (false positive prediction, FP) or may be overlooked in a protein that does carry out the function (false negative prediction, FN). A new approach has emerged in which a signature is replaced with a sequence profile, calculated based on multiple sequence alignment (MSA) of homologous proteins that share the same function. This approach, which is superior to the simple pattern search, essentially searches with the sequence of the query protein against an MSA library. We suggest here an alternative approach, implemented in the QuasiMotiFinder web server (http://quasimotifinder.tau.ac.il/), which is based on a search with an MSA of homologous query proteins against the original PROSITE signatures. The explicit use of the average evolutionary conservation of the signature in the query proteins significantly reduces the rate of FP prediction compared with the simple pattern search. QuasiMotiFinder also has a reduced rate of FN prediction compared with simple pattern searches, since the traditional search for precise signatures has been replaced by a permissive search for signature-like patterns that are physicochemically similar to known signatures. Overall, QuasiMotiFinder and the profile search are comparable to each other in terms of performance. They are also complementary to each other in that signatures that are falsely detected in (or overlooked by) one may be correctly detected by the other.
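    For readers unfamiliar with the "precise signature" search that QuasiMotiFinder relaxes, a minimal PROSITE-style pattern matcher can be sketched like this (a simplification covering only a few pattern elements; the example pattern and sequence are invented, not real PROSITE entries):

```python
import re

def prosite_to_regex(pattern):
    """e.g. 'C-x(2)-[LIVM]-{P}' -> 'C.{2}[LIVM][^P]'."""
    out = []
    for elem in pattern.split("-"):
        elem = elem.replace("x", ".")                 # x = any residue
        elem = re.sub(r"\{(\w+)\}", r"[^\1]", elem)   # {P} = any residue but P
        elem = re.sub(r"\((\d+)\)", r"{\1}", elem)    # repeat counts
        out.append(elem)
    return "".join(out)

regex = prosite_to_regex("C-x(2)-[LIVM]-{P}")
hit = re.search(regex, "AACGHLKAA")  # C, any two residues, L, then K (not P)
```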

  11. iPixel: a visual content-based and semantic search engine for retrieving digitized mammograms by using collective intelligence.

    PubMed

    Alor-Hernández, Giner; Pérez-Gallardo, Yuliana; Posada-Gómez, Rubén; Cortes-Robles, Guillermo; Rodríguez-González, Alejandro; Aguilar-Laserre, Alberto A

    2012-09-01

    Nowadays, traditional search engines such as Google, Yahoo and Bing facilitate the retrieval of information in the format of images, but the results are not always useful for the users. This is mainly due to two problems: (1) the semantic keywords are not taken into consideration and (2) it is not always possible to establish a query using the image features. This issue has been covered in different domains in order to develop content-based image retrieval (CBIR) systems. The expert community has focussed their attention on the healthcare domain, where a lot of visual information for medical analysis is available. This paper provides a solution called iPixel Visual Search Engine, which involves semantics and content issues in order to search for digitized mammograms. iPixel offers the possibility of retrieving mammogram features using collective intelligence and implementing a CBIR algorithm. Our proposal compares not only features with similar semantic meaning, but also visual features. In this sense, the comparisons are made in different ways: by the number of regions per image, by maximum and minimum size of regions per image and by average intensity level of each region. iPixel Visual Search Engine supports the medical community in differential diagnoses related to the diseases of the breast. The iPixel Visual Search Engine has been validated by experts in the healthcare domain, such as radiologists, in addition to experts in digital image analysis.

  12. A Convergence Analysis of Unconstrained and Bound Constrained Evolutionary Pattern Search

    SciTech Connect

    Hart, W.E.

    1999-04-22

    The authors present and analyze a class of evolutionary algorithms for unconstrained and bound constrained optimization on R^n: evolutionary pattern search algorithms (EPSAs). EPSAs adaptively modify the step size of the mutation operator in response to the success of previous optimization steps. The design of EPSAs is inspired by recent analyses of pattern search methods. They show that EPSAs can be cast as stochastic pattern search methods, and they use this observation to prove that EPSAs have a probabilistic weak stationary point convergence theory. This work provides the first convergence analysis for a class of evolutionary algorithms that guarantees convergence almost surely to a stationary point of a nonconvex objective function.
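    The adaptive step-size idea can be caricatured in a few lines (a generic success/failure rule with invented expansion and contraction factors, not Hart's actual EPSA or its convergence-preserving update): the mutation step expands after an improving move and contracts after a failure.

```python
import random

def epsa_sketch(f, x0, step=1.0, iters=200, seed=1):
    """Toy adaptive-step random search: expand step on success, shrink on failure."""
    random.seed(seed)
    best_x, best_f = list(x0), f(x0)
    for _ in range(iters):
        cand = [xi + random.uniform(-step, step) for xi in best_x]
        fc = f(cand)
        if fc < best_f:                       # success: accept and expand step
            best_x, best_f, step = cand, fc, step * 2.0
        else:                                 # failure: contract step
            step *= 0.5
    return best_x, best_f

sphere = lambda x: sum(xi * xi for xi in x)   # simple test objective
x, fx = epsa_sketch(sphere, [3.0, -4.0])      # improves on f([3, -4]) = 25
```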

  13. Visual Search Strategies of Soccer Players Executing a Power vs. Placement Penalty Kick

    PubMed Central

    Timmis, Matthew A.; Turner, Kieran; van Paridon, Kjell N.

    2014-01-01

    Introduction When taking a soccer penalty kick, there are two distinct kicking techniques that can be adopted: a ‘power’ penalty or a ‘placement’ penalty. The current study investigated how the type of penalty kick being taken affected the kicker’s visual search strategy and where the ball hit the goal (end ball location). Method Wearing a portable eye tracker, 12 university footballers executed 2 power and placement penalty kicks, indoors, both with and without the presence of a goalkeeper. Video cameras were used to determine initial ball velocity and end ball location. Results When taking the power penalty, the football was kicked significantly harder and more centrally in the goal compared to the placement penalty. During the power penalty, players fixated longer on the football and more often at the goalkeeper (and, by implication, the middle of the goal), whereas in the placement penalty they fixated longer at the goal, specifically its edges. Findings remained consistent irrespective of goalkeeper presence. Discussion/conclusion Findings indicate differences in visual search strategy and end ball location as a function of the type of penalty kick. When taking the placement penalty, players fixated on and kicked the football to the edges of the goal in an attempt to direct the ball to an area that the goalkeeper would have difficulty reaching and saving. Fixating significantly longer on the football when taking the power compared to the placement penalty indicates a greater importance of obtaining visual information from the football. This can be attributed to ensuring accurate foot-to-ball contact and the subsequent generation of ball velocity. Aligning gaze and kicking the football centrally in the goal when executing the power compared to the placement penalty may have been a strategy to reduce the risk of kicking wide of the goal altogether. PMID:25517405

  14. Modeling peripheral visual acuity enables discovery of gaze strategies at multiple time scales during natural scene search

    PubMed Central

    Ramkumar, Pavan; Fernandes, Hugo; Kording, Konrad; Segraves, Mark

    2015-01-01

    Like humans, monkeys make saccades nearly three times a second. To understand the factors guiding this frequent decision, computational models of vision attempt to predict fixation locations using bottom-up visual features and top-down goals. How do the relative influences of these factors evolve over multiple time scales? Here we analyzed visual features at fixations using a retinal transform that provides realistic visual acuity by suitably degrading visual information in the periphery. In a task in which monkeys searched for a Gabor target in natural scenes, we characterized the relative importance of bottom-up and task-relevant influences by decoding fixated from nonfixated image patches based on visual features. At fast time scales, we found that search strategies can vary over the course of a single trial, with locations of higher saliency, target-similarity, edge–energy, and orientedness looked at later on in the trial. At slow time scales, we found that search strategies can be refined over several weeks of practice, and the influence of target orientation was significant only in the latter of two search tasks. Critically, these results were not observed without applying the retinal transform. Our results suggest that saccade-guidance strategies become apparent only when models take into account degraded visual representation in the periphery. PMID:25814545
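    The core idea of the retinal transform — degrading acuity with eccentricity — can be sketched with a naive box blur whose window grows with distance from fixation (purely illustrative: the paper's transform models acuity falloff far more carefully, and the falloff constant and function names here are invented):

```python
import math

def retinal_transform(image, fix_r, fix_c, falloff=0.15):
    """Blur each pixel with a window that widens with distance from fixation."""
    rows, cols = len(image), len(image[0])
    out = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            ecc = math.hypot(r - fix_r, c - fix_c)   # eccentricity in pixels
            radius = int(ecc * falloff)              # wider window further out
            window = [image[i][j]
                      for i in range(max(0, r - radius), min(rows, r + radius + 1))
                      for j in range(max(0, c - radius), min(cols, c + radius + 1))]
            out[r][c] = sum(window) / len(window)    # box blur of that window
    return out

# A checkerboard has full contrast everywhere before the transform.
board = [[255 * ((r + c) % 2) for c in range(9)] for r in range(9)]
degraded = retinal_transform(board, fix_r=0, fix_c=0)
```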

  15. Modeling peripheral visual acuity enables discovery of gaze strategies at multiple time scales during natural scene search.

    PubMed

    Ramkumar, Pavan; Fernandes, Hugo; Kording, Konrad; Segraves, Mark

    2015-03-26

    Like humans, monkeys make saccades nearly three times a second. To understand the factors guiding this frequent decision, computational models of vision attempt to predict fixation locations using bottom-up visual features and top-down goals. How do the relative influences of these factors evolve over multiple time scales? Here we analyzed visual features at fixations using a retinal transform that provides realistic visual acuity by suitably degrading visual information in the periphery. In a task in which monkeys searched for a Gabor target in natural scenes, we characterized the relative importance of bottom-up and task-relevant influences by decoding fixated from nonfixated image patches based on visual features. At fast time scales, we found that search strategies can vary over the course of a single trial, with locations of higher saliency, target-similarity, edge–energy, and orientedness looked at later on in the trial. At slow time scales, we found that search strategies can be refined over several weeks of practice, and the influence of target orientation was significant only in the latter of two search tasks. Critically, these results were not observed without applying the retinal transform. Our results suggest that saccade-guidance strategies become apparent only when models take into account degraded visual representation in the periphery.

  16. Multichannel spatio-temporal topographic processing for visual search and navigation

    NASA Astrophysics Data System (ADS)

    Szatmari, Istvan; Balya, David; Timar, Gergely; Rekeczky, Csaba; Roska, Tamas

    2003-04-01

    In this paper a biologically motivated image-flow processing mechanism is presented for visual exploration systems. The intention of this multi-channel topographic approach was to produce decision maps for salient feature localization and identification. As a recent biological study has confirmed, mammalian visual systems process the world through a set of separate parallel channels, and these representations are embodied in a stack of 'strata' in the retina. Beyond reflecting these biological motivations, our main goal was to create an efficient algorithmic framework for real-life visual search and navigation experiments. In the course of this design, the retinotopic processing scheme is embedded in an analogic Cellular Neural Network (CNN) algorithm in which the image flow is analyzed by temporal, spatial and spatio-temporal filters. The outputs of these sub-channels are then combined in a programmable configuration to form the new channel responses. In the core of the algorithm, crisp or fuzzy logic strategies define the global channel interaction and result in a unique binary image flow. This processing mechanism of the algorithmic framework and the hardware architecture of the system are presented, along with experimental ACE4k CNN chip results for several video flows recorded from flying vehicles.
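
The channel-combination scheme described above can be approximated in software. The sketch below is not the ACE4k CNN implementation: the specific filters (inter-frame difference for the temporal channel, gradient magnitude for the spatial channel), the thresholds, and the crisp AND/OR fusion rule are illustrative assumptions.

```python
import numpy as np

def spatial_channel(frame, thr=0.2):
    """Edge-energy channel: gradient magnitude of the current frame."""
    gy, gx = np.gradient(frame)
    return np.hypot(gx, gy) > thr

def temporal_channel(prev, curr, thr=0.1):
    """Change-detection channel: absolute inter-frame difference."""
    return np.abs(curr - prev) > thr

def combine_channels(frames, mode="and"):
    """Fuse the channel outputs with a crisp logic rule into one
    binary map per frame transition."""
    masks = []
    for prev, curr in zip(frames, frames[1:]):
        s, t = spatial_channel(curr), temporal_channel(prev, curr)
        masks.append(s & t if mode == "and" else s | t)
    return masks
```

With the "and" rule, only locations that are both edge-rich and changing survive into the binary flow, which is the kind of salient-feature decision map the abstract describes; a fuzzy version would replace the Boolean operators with min/max over graded channel responses.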

  17. Micro and regular saccades across the lifespan during a visual search of "Where's Waldo" puzzles.

    PubMed

    Port, Nicholas L; Trimberger, Jane; Hitzeman, Steve; Redick, Bryan; Beckerman, Stephen

    2016-01-01

    Despite the fact that different aspects of visual-motor control mature at different rates and aging is associated with declines in both sensory and motor function, little is known about the relationship between microsaccades and either development or aging. Using a sample of 343 individuals ranging in age from 4 to 66 and a task that has been shown to elicit a high frequency of microsaccades (solving Where's Waldo puzzles), we explored microsaccade frequency and kinematics (main sequence curves) as a function of age. Taking advantage of the large size of our dataset (183,893 saccades), we also address (a) the saccade-amplitude limit at which video eye trackers are able to accurately measure microsaccades and (b) the degree and consistency of saccade kinematics at varying amplitudes and directions. Using a modification of the Engbert-Mergenthaler saccade detector, we found that even the smallest-amplitude movements (0.25-0.5°) demonstrate basic saccade kinematics. With regard to development and aging, both microsaccade and regular-saccade frequency exhibited a very small increase across the lifespan. Visual search ability, as with many other aspects of visual performance, exhibited a U-shaped function over the lifespan. Finally, both large horizontal and moderate vertical directional biases were detected for all saccade sizes. PMID:26049037
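
A velocity-threshold detector in the spirit of the Engbert-Mergenthaler approach mentioned above can be sketched as follows. The five-point velocity estimate, the median-based robust SD, and the elliptic threshold follow the general published recipe, but the parameter values (`lam`, `min_samples`) and the exact robust estimator are simplified assumptions, not the authors' modified detector.

```python
import numpy as np

def detect_microsaccades(x, y, fs=500.0, lam=6.0, min_samples=3):
    """Velocity-threshold (micro)saccade detection: 5-point velocities,
    median-based SDs, elliptic threshold, grouping of hot runs."""
    # 5-point central-difference velocity estimate (position units / s)
    kernel = np.array([1.0, 1.0, 0.0, -1.0, -1.0])
    vx = np.convolve(x, kernel, "same") * fs / 6.0
    vy = np.convolve(y, kernel, "same") * fs / 6.0
    # robust, median-based velocity SD per component
    sx = np.sqrt(max(np.median(vx**2) - np.median(vx) ** 2, 1e-18))
    sy = np.sqrt(max(np.median(vy**2) - np.median(vy) ** 2, 1e-18))
    # elliptic threshold: samples outside the ellipse are candidates
    hot = (vx / (lam * sx)) ** 2 + (vy / (lam * sy)) ** 2 > 1.0
    # group consecutive supra-threshold samples into (start, end) events
    events, start = [], None
    for i, h in enumerate(hot):
        if h and start is None:
            start = i
        elif not h and start is not None:
            if i - start >= min_samples:
                events.append((start, i - 1))
            start = None
    if start is not None and len(hot) - start >= min_samples:
        events.append((start, len(hot) - 1))
    return events
```

Amplitude and peak velocity per event would then give the main-sequence curves the abstract analyzes; the 0.25-0.5° finding corresponds to the smallest-amplitude bin of such events.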

  18. Landmark Based Shape Analysis for Cerebellar Ataxia Classification and Cerebellar Atrophy Pattern Visualization

    PubMed Central

    Yang, Zhen; Abulnaga, S. Mazdak; Carass, Aaron; Kansal, Kalyani; Jedynak, Bruno M.; Onyike, Chiadi; Ying, Sarah H.; Prince, Jerry L.

    2016-01-01

    Cerebellar dysfunction can lead to a wide range of movement disorders. Studying the cerebellar atrophy pattern associated with different cerebellar disease types can potentially help in diagnosis, prognosis, and treatment planning. In this paper, we present a landmark-based shape analysis pipeline to classify healthy controls and different ataxia types and to visualize the characteristic cerebellar atrophy patterns associated with each type. A highly informative feature representation of the cerebellar structure is constructed by extracting dense homologous landmarks on the boundary surfaces of cerebellar sub-structures. A diagnosis-group classifier based on this representation is built using partial least squares dimension reduction and regularized linear discriminant analysis. The characteristic atrophy pattern for an ataxia type is visualized by sampling along the discriminant direction between healthy controls and that ataxia type. Experimental results show that the proposed method can successfully classify healthy controls and different ataxia types. The visualized cerebellar atrophy patterns were consistent with the regional volume decreases observed in previous studies, but the proposed method provides an intuitive and detailed understanding of changes in the overall size and shape of the cerebellum, as well as of individual lobules. PMID:27303111

  19. Landmark based shape analysis for cerebellar ataxia classification and cerebellar atrophy pattern visualization

    NASA Astrophysics Data System (ADS)

    Yang, Zhen; Abulnaga, S. Mazdak; Carass, Aaron; Kansal, Kalyani; Jedynak, Bruno M.; Onyike, Chiadi; Ying, Sarah H.; Prince, Jerry L.

    2016-03-01

    Cerebellar dysfunction can lead to a wide range of movement disorders. Studying the cerebellar atrophy pattern associated with different cerebellar disease types can potentially help in diagnosis, prognosis, and treatment planning. In this paper, we present a landmark-based shape analysis pipeline to classify healthy controls and different ataxia types and to visualize the characteristic cerebellar atrophy patterns associated with each type. A highly informative feature representation of the cerebellar structure is constructed by extracting dense homologous landmarks on the boundary surfaces of cerebellar sub-structures. A diagnosis-group classifier based on this representation is built using partial least squares dimension reduction and regularized linear discriminant analysis. The characteristic atrophy pattern for an ataxia type is visualized by sampling along the discriminant direction between healthy controls and that ataxia type. Experimental results show that the proposed method can successfully classify healthy controls and different ataxia types. The visualized cerebellar atrophy patterns were consistent with the regional volume decreases observed in previous studies, but the proposed method provides an intuitive and detailed understanding of changes in the overall size and shape of the cerebellum, as well as of individual lobules.

  20. Visual pattern discrimination by population retinal ganglion cells' activities during natural movie stimulation.

    PubMed

    Zhang, Ying-Ying; Wang, Ru-Bin; Pan, Xiao-Chuan; Gong, Hai-Qing; Liang, Pei-Ji

    2014-02-01

    In the visual system, neurons often fire in synchrony, and it is believed that the synchronous activities of groups of neurons are more efficient than single-cell responses in transmitting neural signals to downstream neurons. However, whether dynamic natural stimuli are encoded by dynamic spatiotemporal firing patterns of synchronous groups of neurons still needs to be investigated. In this paper, we recorded the activities of populations of ganglion cells in the bullfrog retina in response to time-varying natural images (a natural scene movie) using multi-electrode arrays. In response to different brief section pairs of the movie, synchronous groups of retinal ganglion cells (RGCs) fired with similar but distinct spike events. We attempted to discriminate the movie sections based on the temporal firing patterns of single cells and the spatiotemporal firing patterns of the synchronous groups of RGCs, characterized by a measurement of subsequence distribution discrepancy. The discrimination performance was assessed by a classification method based on Support Vector Machines. Our results show that different sections of the natural movie elicited reliable dynamic spatiotemporal activity patterns in the synchronous RGCs, which are more efficient in discriminating different movie sections than the temporal patterns of the single cells' spike events. These results suggest that, during natural vision, downstream neurons may decode visual information from the dynamic spatiotemporal patterns of the synchronous groups of RGCs' activities. PMID:24465283
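
The discrimination analysis above can be approximated with a minimal sketch: binned population spike counts classified by a Support Vector Machine. The paper's subsequence-distribution-discrepancy features are replaced here by plain count vectors, and all rates, bin counts, and trial numbers are synthetic assumptions.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_neurons, n_bins, n_trials = 20, 10, 60

# hypothetical mean firing-rate profiles (spikes per bin) for two
# different movie sections, one rate per neuron and time bin
rate_a = rng.uniform(0.5, 3.0, (n_neurons, n_bins))
rate_b = rng.uniform(0.5, 3.0, (n_neurons, n_bins))

def trial(section_rates):
    """One trial: Poisson spike counts flattened into a population
    spatiotemporal pattern vector."""
    return rng.poisson(section_rates).reshape(-1)

X = np.array([trial(rate_a if i % 2 == 0 else rate_b)
              for i in range(n_trials)])
y = np.array([i % 2 for i in range(n_trials)])

# train on the first 40 trials, test on the held-out 20
clf = SVC(kernel="linear").fit(X[:40], y[:40])
acc = clf.score(X[40:], y[40:])
```

The single-cell comparison in the abstract would correspond to repeating this with one neuron's row of counts at a time, which typically yields lower accuracy than the full population vector.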