Science.gov

Sample records for visual search patterns

  1. Statistical patterns of visual search for hidden objects

    PubMed Central

    Credidio, Heitor F.; Teixeira, Elisângela N.; Reis, Saulo D. S.; Moreira, André A.; Andrade Jr, José S.

    2012-01-01

    The movement of the eyes has been the subject of intensive research as a way to elucidate inner mechanisms of cognitive processes. A cognitive task that is rather frequent in our daily life is the visual search for hidden objects. Here we investigate through eye-tracking experiments the statistical properties associated with the search for target images embedded in a landscape of distractors. Specifically, our results show that the twofold process of eye movement, composed of sequences of fixations (small steps) intercalated by saccades (longer jumps), displays characteristic statistical signatures. While the saccadic jumps follow a log-normal distribution of distances, which is typical of multiplicative processes, the lengths of the smaller steps in the fixation trajectories are consistent with a power-law distribution. Moreover, the present analysis reveals a clear transition from a directional serial search to an isotropic random movement as the difficulty level of the searching task is increased. PMID:23226829
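The distributional contrast reported above (log-normal saccade lengths versus power-law fixation steps) can be illustrated with a short simulation; the parameters below are arbitrary choices for the sketch, not values estimated from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical step lengths: saccades ~ log-normal, fixation steps ~ power law.
saccades = rng.lognormal(mean=1.0, sigma=0.5, size=10_000)
alpha, x_min = 2.5, 0.1  # power law p(x) ~ x**(-alpha) for x >= x_min
# Inverse-transform sampling; (1 - U) lies in (0, 1], avoiding division by 0.
fixation_steps = x_min * (1.0 - rng.random(10_000)) ** (-1.0 / (alpha - 1))

def skewness(x):
    """Sample skewness: a quick diagnostic separating the two families."""
    z = (x - x.mean()) / x.std()
    return float((z ** 3).mean())

# The log of a log-normal sample is normal (skewness near 0), while the
# log of a power-law sample is exponential (skewness near 2).
print(round(skewness(np.log(saccades)), 2))
print(round(skewness(np.log(fixation_steps)), 2))
```

Taking logs before computing skewness is a standard trick for telling the two distributions apart, since both look heavy-tailed on a linear scale.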

  2. Priming cases disturb visual search patterns in screening mammography

    NASA Astrophysics Data System (ADS)

    Lewis, Sarah J.; Reed, Warren M.; Tan, Alvin N. K.; Brennan, Patrick C.; Lee, Warwick; Mello-Thoms, Claudia

    2015-03-01

    Rationale and Objectives: To investigate the effect of inserting obvious cancers into a screening set of mammograms on the visual search of radiologists. Previous research presents conflicting evidence as to the impact of priming in scenarios where prevalence is naturally low, such as in screening mammography. Materials and Methods: An observer performance and eye position analysis study was performed. Four expert breast radiologists were asked to interpret two sets of 40 screening mammograms. The Control Set contained 36 normal and 4 malignant cases (located at positions #9, 14, 25, and 37). The Primed Set contained the same 34 normal and 4 malignant cases (in the same locations) plus 2 "primer" malignant cases replacing 2 normal cases (located at positions #20 and 34). Primer cases were defined as lower-difficulty cases containing salient malignant features, inserted before cases of greater difficulty. Results: A Wilcoxon Signed Rank Test indicated no significant differences in sensitivity or specificity between the two sets (P > 0.05). The fixation count in the malignant cases (#25, 37) in the Primed Set after viewing the primer cases (#20, 34) decreased significantly (Z = -2.330, P = 0.020). False-negative errors were mostly due to sampling in the Primed Set (75%), in contrast to the Control Set (25%). Conclusion: The overall performance of radiologists is not affected by the inclusion of obvious cancer cases. However, changes in visual search behavior, as measured by eye-position recording, suggest visual disturbance by the inclusion of priming cases in screening mammography.

  3. Understanding visual search patterns of dermatologists assessing pigmented skin lesions before and after online training.

    PubMed

    Krupinski, Elizabeth A; Chao, Joseph; Hofmann-Wellenhof, Rainer; Morrison, Lynne; Curiel-Lewandrowski, Clara

    2014-12-01

    The goal of this investigation was to explore the feasibility of characterizing the visual search behavior of dermatologists evaluating images corresponding to single pigmented skin lesions (PSLs) (close-ups and dermoscopy) as an avenue to improve training programs for dermoscopy. Two board-certified dermatologists and two dermatology residents participated in a phased study. In phase 1, they viewed a series of 20 PSL cases ranging from benign nevi to melanoma. The close-up and dermoscopy images of each PSL were evaluated sequentially and rated individually as benign or malignant while eye position was recorded. Subsequently, the participating subjects completed an online dermoscopy training module that included a pre- and post-test assessing their dermoscopy skills (phase 2). Three months later, the subjects repeated their assessment of the 20 PSLs presented during phase 1 of the study. Significant differences in viewing time and eye-position parameters were observed as a function of level of expertise. Overall, the dermatologists searched more efficiently than the residents, generating fewer fixations with shorter dwells. Fixations and dwells associated with decisions that changed from benign to malignant, or vice versa, between photographic and dermoscopic viewing were longer than those for any other decision, indicating increased visual processing for those decisions. These differences in visual search may have implications for developing tools to teach dermatologists and residents how to better utilize dermoscopy in clinical practice. PMID:24939005

  4. Situating visual search.

    PubMed

    Nakayama, Ken; Martini, Paolo

    2011-07-01

    Visual search has attracted great interest because its ease under certain circumstances seemed to provide a way to understand how properties of early visual cortical areas could explain complex perception without resorting to higher-order psychological or neurophysiological mechanisms. Furthermore, there was the hope that properties of visual search itself might even reveal new cortical features or dimensions. The shortcomings of this perspective suggest that we abandon fixed canonical elementary particles of vision, as well as the corresponding simple-to-complex cognitive architecture for vision. Instead, recent research has suggested a different organization of the visual brain, with putative high-level processing occurring very rapidly and often unconsciously. Given this outlook, we reconsider visual search under the broad category of recognition tasks, each having different trade-offs between detail and scope in its demands on computational resources. We conclude by noting recent trends showing how visual search is relevant to a wider range of issues in cognitive science, in particular to memory, decision making, and reward. PMID:20837042

  5. Introspection during visual search.

    PubMed

    Reyes, Gabriel; Sackur, Jérôme

    2014-10-01

    Recent advances in the field of metacognition have shown that human participants are introspectively aware of many different cognitive states, such as confidence in a decision. Here we set out to expand the range of experimental introspection by asking whether participants could access, through pure mental monitoring, the nature of the cognitive processes that underlie two visual search tasks: an effortless "pop-out" search, and a difficult, effortful, conjunction search. To this aim, in addition to traditional first order performance measures, we instructed participants to give, on a trial-by-trial basis, an estimate of the number of items scanned before a decision was reached. By controlling response times and eye movements, we assessed the contribution of self-observation of behavior in these subjective estimates. Results showed that introspection is a flexible mechanism and that pure mental monitoring of cognitive processes is possible in elementary tasks. PMID:25286130

  6. Supporting Web Search with Visualization

    NASA Astrophysics Data System (ADS)

    Hoeber, Orland; Yang, Xue Dong

    One of the fundamental goals of Web-based support systems is to promote and support human activities on the Web. The focus of this Chapter is on the specific activities associated with Web search, with special emphasis given to the use of visualization to enhance the cognitive abilities of Web searchers. An overview of information retrieval basics, along with a focus on Web search and the behaviour of Web searchers is provided. Information visualization is introduced as a means for supporting users as they perform their primary Web search tasks. Given the challenge of visualizing the primarily textual information present in Web search, a taxonomy of the information that is available to support these tasks is given. The specific challenges of representing search information are discussed, and a survey of the current state-of-the-art in visual Web search is introduced. This Chapter concludes with our vision for the future of Web search.

  7. Visual Representation Determines Search Difficulty: Explaining Visual Search Asymmetries

    PubMed Central

    Bruce, Neil D. B.; Tsotsos, John K.

    2011-01-01

    In visual search experiments there exist a variety of experimental paradigms in which a symmetric set of experimental conditions yields asymmetric corresponding task performance. There are a variety of examples of this that currently lack a satisfactory explanation. In this paper, we demonstrate that distinct classes of asymmetries may be explained by virtue of a few simple conditions that are consistent with current thinking surrounding computational modeling of visual search and coding in the primate brain. This includes a detailed look at the role that stimulus familiarity plays in the determination of search performance. Overall, we demonstrate that all of these asymmetries have a common origin, namely, they are a consequence of the encoding that appears in the visual cortex. The analysis associated with these cases yields insight into the problem of visual search in general and predictions of novel search asymmetries. PMID:21808617

  8. Visual search enhances subsequent mnemonic search.

    PubMed

    Westfall, Holly A; Malmberg, Kenneth J

    2013-02-01

    We examined how performing a visual search task while studying a list of to-be-remembered words affects humans' subsequent memory for those words. Previous research had suggested that episodic context encoding is facilitated when the study phase of a memory experiment requires, or otherwise encourages, a visual search for the to-be-remembered stimuli, and theta-band oscillations are more robust when animals are searching their environment. Moreover, hippocampal theta oscillations are positively correlated with learning in animals. We assumed that a visual search task performed during the encoding of words for a subsequent memory test would induce an exploratory state mimicking the one induced in animals performing exploratory activities in their environment, and that the encoding of episodic traces would be improved as a result. The results of several experiments indicated that performing the search task improved free recall, but the results did not extend to yes-no or forced-choice recognition memory testing. We propose that visual search tasks enhance the encoding of episodic context information but do not enhance the encoding of the to-be-remembered words themselves. PMID:22961740

  9. Parallel Processing in Visual Search Asymmetry

    ERIC Educational Resources Information Center

    Dosher, Barbara Anne; Han, Songmei; Lu, Zhong-Lin

    2004-01-01

    The difficulty of visual search may depend on the assignment of the same visual elements as targets and distractors: search asymmetry. Easy C-in-O searches and difficult O-in-C searches are often associated with parallel and serial search, respectively. Here, the time course of visual search was measured for both tasks with speed-accuracy methods. The…

  10. Amblyopic deficits in visual search.

    PubMed

    Goltz, Herbert; Tsirlin, Inna; Wong, Agnes

    2015-09-01

    Amblyopia is a neurodevelopmental disorder defined as a reduction in visual acuity that cannot be corrected by optical means. It has been associated primarily with other low-level visual deficits such as reduced contrast sensitivity at high spatial frequencies and increased visual crowding. Research in the last decade demonstrated that amblyopia is also linked to higher-level deficits in global shape and motion perception and in contour integration. Deficits in visual attention have also been shown in counting briefly presented items, tracking objects, and identifying items presented in rapid succession. Here, we demonstrate that amblyopia causes more general attentional deficits as manifested in visual search. We compared the performance of subjects with amblyopia (n=9) to that of controls (n=12) on a feature search and a conjunction search with Gabor patches. Eye movements were recorded and controlled by continuous fixation. To account for the low-level deficits inherent in amblyopia, we first measured each subject's contrast and crowding thresholds and then presented the display elements at suprathreshold levels such that visibility was equalized across the control and experimental groups. The effectiveness of these precautions at eliminating low-level deficits as a confounding factor was confirmed by the results of the feature search task: performance on this "pop-out" search, considered to be pre-attentive, was not significantly different between the two groups. In contrast, reaction times on conjunction search, a task requiring the engagement of visual attention, were significantly greater (by as much as 400 msec) in amblyopic eyes than in control or fellow eyes. Taken together, these data suggest that amblyopia is linked to greater and more generalized attentional deficits than previously known. Visual search is a necessary, basic component of everyday functioning, and these deficits may result in significant repercussions for people with amblyopia. Meeting abstract presented at VSS 2015. PMID:26326343

  11. Evolutionary pattern search algorithms

    SciTech Connect

    Hart, W.E.

    1995-09-19

    This paper defines a class of evolutionary algorithms called evolutionary pattern search algorithms (EPSAs) and analyzes their convergence properties. This class of algorithms is closely related to evolutionary programming, evolution strategies, and real-coded genetic algorithms. EPSAs are self-adapting systems that modify the step size of the mutation operator in response to the success of previous optimization steps. The rule used to adapt the step size can be used to provide a stationary-point convergence theory for EPSAs on any continuous function. This convergence theory is based on an extension of the convergence theory for generalized pattern search methods. An experimental analysis of the performance of EPSAs demonstrates that these algorithms can perform a level of global search comparable to that of canonical EAs. We also describe a stopping rule for EPSAs, which reliably terminated near stationary points in our experiments. This is the first stopping rule for any class of EAs that can terminate at a given distance from stationary points.
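The step-size adaptation idea behind EPSAs can be sketched with a minimal (1+1)-style search that widens the mutation step after a success and contracts it after a failure. This is a generic illustration of the principle (closer to the classic 1/5-success rule) rather than Hart's exact algorithm, and the expansion/contraction factors are arbitrary:

```python
import random

def sphere(x):
    """Test function with a single stationary point at the origin."""
    return sum(v * v for v in x)

def adaptive_search(f, x, step=1.0, iters=2000, seed=1):
    """(1+1)-style search: expand the mutation step after a success,
    contract it after a failure, so the step size self-adapts to the
    local scale of the problem."""
    rng = random.Random(seed)
    fx = f(x)
    for _ in range(iters):
        cand = [v + rng.gauss(0.0, step) for v in x]
        fc = f(cand)
        if fc < fx:
            x, fx = cand, fc
            step *= 1.1    # success: expand exploration
        else:
            step *= 0.98   # failure: contract toward the incumbent
        if step < 1e-9:    # stopping rule: the step size tracks the
            break          # distance to a stationary point
    return x, fx, step

best, fbest, final_step = adaptive_search(sphere, [3.0, -2.0])
print(f"f(best) = {fbest:.2e}, final step = {final_step:.2e}")
```

The stopping condition mirrors the abstract's point: because the adapted step size shrinks in proportion to the remaining distance to a stationary point, thresholding it gives a principled termination rule.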

  12. Visual Search of Mooney Faces.

    PubMed

    Goold, Jessica E; Meng, Ming

    2016-01-01

    Faces spontaneously capture attention. However, which special attributes of a face underlie this effect is unclear. To address this question, we investigate how gist information, specific visual properties and differing amounts of experience with faces affect the time required to detect a face. Three visual search experiments were conducted investigating the rapidness of human observers to detect Mooney face images. Mooney images are two-toned, ambiguous images. They were used in order to have stimuli that maintain gist information but limit low-level image properties. Results from the experiments show: (1) Although upright Mooney faces were searched inefficiently, they were detected more rapidly than inverted Mooney face targets, demonstrating the important role of gist information in guiding attention toward a face. (2) Several specific Mooney face identities were searched efficiently while others were not, suggesting the involvement of specific visual properties in face detection. (3) By providing participants with unambiguous gray-scale versions of the Mooney face targets prior to the visual search task, the targets were detected significantly more efficiently, suggesting that prior experience with Mooney faces improves the ability to extract gist information for rapid face detection. However, a week of training with Mooney face categorization did not lead to even more efficient visual search of Mooney face targets. In summary, these results reveal that specific local image properties cannot account for how faces capture attention. On the other hand, gist information alone cannot account for how faces capture attention either. Prior experience facilitates the effect of gist on visual search of faces, making faces a special object category for guiding attention. PMID:26903941

  14. Visual similarity effects in categorical search.

    PubMed

    Alexander, Robert G; Zelinsky, Gregory J

    2011-01-01

    We asked how visual similarity relationships affect search guidance to categorically defined targets (no visual preview). Experiment 1 used a web-based task to collect visual similarity rankings between two target categories, teddy bears and butterflies, and random-category objects, from which we created search displays in Experiment 2 having either high-similarity distractors, low-similarity distractors, or "mixed" displays with high-, medium-, and low-similarity distractors. Analysis of target-absent trials revealed faster manual responses and fewer fixated distractors on low-similarity displays compared to high-similarity displays. On mixed displays, first fixations were more frequent on high-similarity distractors (bear = 49%; butterfly = 58%) than on low-similarity distractors (bear = 9%; butterfly = 12%). Experiment 3 used the same high/low/mixed conditions, but now these conditions were created using similarity estimates from a computer vision model that ranked objects in terms of color, texture, and shape similarity. The same patterns were found, suggesting that categorical search can indeed be guided by purely visual similarity. Experiment 4 compared cases where the model and human rankings differed and when they agreed. We found that similarity effects were best predicted by cases where the two sets of rankings agreed, suggesting that both human visual similarity rankings and the computer vision model captured features important for guiding search to categorical targets. PMID:21757505

  15. Characteristic sounds facilitate visual search.

    PubMed

    Iordanescu, Lucica; Guzman-Martinez, Emmanuel; Grabowecky, Marcia; Suzuki, Satoru

    2008-06-01

    In a natural environment, objects that we look for often make characteristic sounds. A hiding cat may meow, or the keys in the cluttered drawer may jingle when moved. Using a visual search paradigm, we demonstrated that characteristic sounds facilitated visual localization of objects, even when the sounds carried no location information. For example, finding a cat was faster when participants heard a meow sound. In contrast, sounds had no effect when participants searched for names rather than pictures of objects. For example, hearing "meow" did not facilitate localization of the word cat. These results suggest that characteristic sounds cross-modally enhance visual (rather than conceptual) processing of the corresponding objects. Our behavioral demonstration of object-based cross-modal enhancement complements the extensive literature on space-based cross-modal interactions. When looking for your keys next time, you might want to play jingling sounds. PMID:18567253

  16. Confirmation bias in visual search.

    PubMed

    Rajsic, Jason; Wilson, Daryl E; Pratt, Jay

    2015-10-01

    In a series of experiments, we investigated the ubiquity of confirmation bias in cognition by measuring whether visual selection is prioritized for information that would confirm a proposition about a visual display. We show that attention is preferentially deployed to stimuli matching a target template, even when alternate strategies would reduce the number of searches necessary. We argue that this effect is an involuntary consequence of goal-directed processing, and show that it can be reduced when ample time is provided to prepare for search. These results support the notion that capacity-limited cognitive processes contribute to the biased selection of information that characterizes confirmation bias. PMID:26098120

  17. Development of a Computerized Visual Search Test

    ERIC Educational Resources Information Center

    Reid, Denise; Babani, Harsha; Jon, Eugenia

    2009-01-01

    Visual attention and visual search are the features of visual perception, essential for attending and scanning one's environment while engaging in daily occupations. This study describes the development of a novel web-based test of visual search. The development information including the format of the test will be described. The test was designed…

  19. Statistical templates for visual search

    PubMed Central

    Ackermann, John F.; Landy, Michael S.

    2014-01-01

    How do we find a target embedded in a scene? Within the framework of signal detection theory, this task is carried out by comparing each region of the scene with a “template,” i.e., an internal representation of the search target. Here we ask what form this representation takes when the search target is a complex image with uncertain orientation. We examine three possible representations. The first is the matched filter. Such a representation cannot account for the ease with which humans can find a complex search target that is rotated relative to the template. A second representation attempts to deal with this by estimating the relative orientation of target and match and rotating the intensity-based template. No intensity-based template, however, can account for the ability to easily locate targets that are defined categorically and not in terms of a specific arrangement of pixels. Thus, we define a third template that represents the target in terms of image statistics rather than pixel intensities. Subjects performed a two-alternative, forced-choice search task in which they had to localize an image that matched a previously viewed target. Target images were texture patches. In one condition, match images were the same image as the target and distractors were a different image of the same textured material. In the second condition, the match image was of the same texture as the target (but different pixels) and the distractor was an image of a different texture. Match and distractor stimuli were randomly rotated relative to the target. We compared human performance to pixel-based, pixel-based with rotation, and statistic-based search models. The statistic-based search model was most successful at matching human performance. We conclude that humans use summary statistics to search for complex visual targets. PMID:24627458
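The contrast between pixel-based and statistic-based templates is easy to demonstrate: summary statistics such as intensity histograms are invariant to rotation, while raw pixel layouts are not. A toy sketch, with random patches standing in for the study's texture images:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical 8x8 texture "target", a 90-degree-rotated copy of it,
# and an unrelated patch playing the distractor.
target = rng.random((8, 8))
rotated = np.rot90(target)
distractor = rng.random((8, 8))

def pixel_distance(a, b):
    """Matched-filter-style comparison: RMS difference of raw pixels."""
    return float(np.sqrt(((a - b) ** 2).mean()))

def stat_distance(a, b, bins=8):
    """Statistic-based comparison: intensity histograms, which depend
    only on the multiset of pixel values, not their arrangement."""
    ha, _ = np.histogram(a, bins=bins, range=(0, 1), density=True)
    hb, _ = np.histogram(b, bins=bins, range=(0, 1), density=True)
    return float(np.abs(ha - hb).mean())

# A pixel-based template is thrown off by rotation...
print(pixel_distance(target, rotated) > 0)
# ...while a statistic-based template matches the rotated target exactly
# and still separates it from the distractor.
print(stat_distance(target, rotated) == 0.0)
print(stat_distance(target, rotated) < stat_distance(target, distractor))
```

A single histogram is of course a much cruder summary than the texture statistics tested in the study, but it captures why a statistic-based template tolerates rotation where an intensity-based one does not.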

  20. Designing a Visual Interface for Online Searching.

    ERIC Educational Resources Information Center

    Lin, Xia

    1999-01-01

    "MedLine Search Assistant" is a new interface for MEDLINE searching that improves both search precision and recall by helping the user convert a free text search to a controlled vocabulary-based search in a visual environment. Features of the interface are described, followed by details of the conceptual design and the physical design of the…

  1. The development of organized visual search

    PubMed Central

    Woods, Adam J.; Goksun, Tilbe; Chatterjee, Anjan; Zelonis, Sarah; Mehta, Anika; Smith, Sabrina E.

    2013-01-01

    Visual search plays an important role in guiding behavior. Children have more difficulty performing conjunction search tasks than adults. The present research evaluates whether developmental differences in children's ability to organize serial visual search (i.e., search organization skills) contribute to performance limitations in a typical conjunction search task. We evaluated 134 children between the ages of 2 and 17 on separate tasks measuring search for targets defined by a conjunction of features or by distinct features. Our results demonstrated that children organize their visual search better as they get older. As children's skills at organizing visual search improve they become more accurate at locating targets with a conjunction of features amongst distractors, but not for targets with distinct features. Developmental limitations in children's abilities to organize their visual search of the environment are an important component of poor conjunction search in young children. In addition, our findings provide preliminary evidence that, like other visuospatial tasks, exposure to reading may influence children's spatial orientation to the visual environment when performing a visual search. PMID:23584560

  2. Aurally and visually guided visual search in a virtual environment.

    PubMed

    Flanagan, P; McAnally, K I; Martin, R L; Meehan, J W; Oldfield, S R

    1998-09-01

    We investigated the time participants took to perform a visual search task for targets outside the visual field of view using a helmet-mounted display. We also measured the effectiveness of visual and auditory cues to target location. The auditory stimuli used to cue location were noise bursts previously recorded from the ear canals of the participants and were either presented briefly at the beginning of a trial or continually updated to compensate for head movements. The visual cue was a dynamic arrow that indicated the direction and angular distance from the instantaneous head position to the target. Both visual and auditory spatial cues reduced search time dramatically, compared with unaided search. The updating audio cue was more effective than the transient audio cue and was as effective as the visual cue in reducing search time. These data show that both spatial auditory and visual cues can markedly improve visual search performance. Potential applications for this research include highly visual environments, such as aviation, where there is risk of overloading the visual modality with information. PMID:9849104

  3. Collinearity Impairs Local Element Visual Search

    ERIC Educational Resources Information Center

    Jingling, Li; Tseng, Chia-Huei

    2013-01-01

    In visual searches, stimuli following the law of good continuity attract attention to the global structure and receive attentional priority. Also, targets that have unique features are of high feature contrast and capture attention in visual search. We report on a salient global structure combined with a high orientation contrast to the…

  5. Adult age differences in visual search.

    PubMed

    Mason, S E; Baskey, P; Perri, D

    1985-01-01

    The visual search technique was used to assess adult age differences in visual information extraction. The study included three adult age groups. In Experiment 1, participants searched for targets embedded in a list of unrelated words. Targets were defined structurally, phonemically, or semantically. Search for structural targets was faster than search for phonemic and semantic targets. This was true for all three age groups. In Experiment 2, targets were embedded in prose. The oldest age group required additional time to detect each target type, but the largest age difference was associated with semantic search. PMID:3830903

  6. Visual Search Across the Life Span

    ERIC Educational Resources Information Center

    Hommel, Bernhard; Li, Karen Z. H.; Li, Shu-Chen

    2004-01-01

    Gains and losses in visual search were studied across the life span in a representative sample of 298 individuals from 6 to 89 years of age. Participants searched for single-feature and conjunction targets of high or low eccentricity. Search was substantially slowed early and late in life, age gradients were more pronounced in conjunction than in…

  7. Searching social networks for subgraph patterns

    NASA Astrophysics Data System (ADS)

    Ogaard, Kirk; Kase, Sue; Roy, Heather; Nagi, Rakesh; Sambhoos, Kedar; Sudit, Moises

    2013-06-01

    Software tools for Social Network Analysis (SNA) are being developed which support various types of analysis of social networks extracted from social media websites (e.g., Twitter). Once extracted and stored in a database such social networks are amenable to analysis by SNA software. This data analysis often involves searching for occurrences of various subgraph patterns (i.e., graphical representations of entities and relationships). The authors have developed the Graph Matching Toolkit (GMT) which provides an intuitive Graphical User Interface (GUI) for a heuristic graph matching algorithm called the Truncated Search Tree (TruST) algorithm. GMT is a visual interface for graph matching algorithms processing large social networks. GMT enables an analyst to draw a subgraph pattern by using a mouse to select categories and labels for nodes and links from drop-down menus. GMT then executes the TruST algorithm to find the top five occurrences of the subgraph pattern within the social network stored in the database. GMT was tested using a simulated counter-insurgency dataset consisting of cellular phone communications within a populated area of operations in Iraq. The results indicated GMT (when executing the TruST graph matching algorithm) is a time-efficient approach to searching large social networks. GMT's visual interface to a graph matching algorithm enables intelligence analysts to quickly analyze and summarize the large amounts of data necessary to produce actionable intelligence.
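The kind of subgraph query GMT supports can be sketched with a brute-force matcher over a toy labeled graph. TruST itself prunes a truncated search tree, which this naive version does not attempt, and all node names, categories, and link labels below are invented for illustration:

```python
from itertools import permutations

# Toy labeled social network: node -> category, plus labeled directed links.
nodes = {"a": "person", "b": "person", "c": "phone", "d": "phone", "e": "person"}
edges = {("a", "c", "owns"), ("b", "d", "owns"), ("a", "b", "calls"),
         ("e", "d", "owns"), ("b", "e", "calls")}

# Subgraph pattern: person X calls person Y, and Y owns phone Z.
pattern_nodes = {"X": "person", "Y": "person", "Z": "phone"}
pattern_edges = [("X", "Y", "calls"), ("Y", "Z", "owns")]

def match_pattern():
    """Exhaustive subgraph matching: try every binding of pattern
    variables to network nodes and keep the consistent ones."""
    hits = []
    names = list(pattern_nodes)
    for combo in permutations(nodes, len(names)):
        binding = dict(zip(names, combo))
        cats_ok = all(nodes[binding[v]] == c for v, c in pattern_nodes.items())
        links_ok = all((binding[u], binding[v], lbl) in edges
                       for u, v, lbl in pattern_edges)
        if cats_ok and links_ok:
            hits.append(binding)
    return hits

for hit in match_pattern():
    print(sorted(hit.items()))
```

Exhaustive enumeration is exponential in pattern size, which is exactly why heuristics like TruST matter on networks of realistic scale.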

  8. Cascade category-aware visual search.

    PubMed

    Zhang, Shiliang; Tian, Qi; Huang, Qingming; Gao, Wen; Rui, Yong

    2014-06-01

    Incorporating image classification into an image retrieval system brings many attractive advantages. For instance, the search space can be narrowed down by rejecting images in irrelevant categories of the query. The retrieved images can be more consistent in semantics by indexing and returning images in the relevant categories together. However, due to their different goals on recognition accuracy and retrieval scalability, it is hard to efficiently incorporate most image classification works into large-scale image search. To study this problem, we propose cascade category-aware visual search, which utilizes a weak category clue to achieve better retrieval accuracy, efficiency, and memory consumption. To capture the category and visual clues of an image, we first learn category-visual words, which are discriminative and repeatable local features labeled with categories. By identifying category-visual words in database images, we are able to discard noisy local features and extract image visual and category clues, which are hence recorded in a hierarchical index structure. Our retrieval system narrows down the search space by: 1) filtering the noisy local features in the query; 2) rejecting irrelevant categories in the database; and 3) performing discriminative visual search in relevant categories. The proposed algorithm is tested on object search, landmark search, and large-scale similar image search on the large-scale LSVRC10 data set. Although the category clue introduced is weak, our algorithm still shows substantial advantages in retrieval accuracy, efficiency, and memory consumption over the state-of-the-art. PMID:24760907

  9. Words, shape, visual search and visual working memory in 3-year-old children

    PubMed Central

    Vales, Catarina; Smith, Linda B.

    2014-01-01

    Do words cue children’s visual attention, and if so, what are the relevant mechanisms? Across four experiments, 3-year-old children (N = 163) were tested in visual search tasks in which targets were cued with only a visual preview versus a visual preview and a spoken name. The experiments were designed to determine whether labels facilitated search times and to examine one route through which labels could have their effect: By influencing the visual working memory representation of the target. The targets and distractors were pictures of instances of basic-level known categories and the labels were the common name for the target category. We predicted that the label would enhance the visual working memory representation of the target object, guiding attention to objects that better matched the target representation. Experiments 1 and 2 used conjunctive search tasks, and Experiment 3 varied shape discriminability between targets and distractors. Experiment 4 compared the effects of labels to repeated presentations of the visual target, which should also influence the working memory representation of the target. The overall pattern fits contemporary theories of how the contents of visual working memory interact with visual search and attention, and shows that even in very young children heard words affect the processing of visual information. PMID:24720802

  10. Temporal stability of visual search-driven biometrics

    NASA Astrophysics Data System (ADS)

    Yoon, Hong-Jun; Carmichael, Tandy R.; Tourassi, Georgia

    2015-03-01

    Previously, we have shown the potential of using an individual's visual search pattern as a possible biometric. That study focused on viewing images displaying dot-patterns with different spatial relationships to determine which pattern can be more effective in establishing the identity of an individual. In this follow-up study we investigated the temporal stability of this biometric. We performed an experiment with 16 individuals asked to search for a predetermined feature of a random-dot pattern as we tracked their eye movements. Each participant completed four testing sessions consisting of two dot patterns repeated twice. One dot pattern displayed concentric circles shifted to the left or right side of the screen overlaid with visual noise, and participants were asked which side the circles were centered on. The second dot-pattern displayed a number of circles (between 0 and 4) scattered on the screen overlaid with visual noise, and participants were asked how many circles they could identify. Each session contained 5 untracked tutorial questions and 50 tracked test questions (200 total tracked questions per participant). To create each participant's "fingerprint", we constructed a Hidden Markov Model (HMM) from the gaze data representing the underlying visual search and cognitive process. The accuracy of the derived HMM models was evaluated using cross-validation for various time-dependent train-test conditions. Subject identification accuracy ranged from 17.6% to 41.8% for all conditions, which is significantly higher than random guessing (1/16 = 6.25%). The results suggest that visual search pattern is a promising, temporally stable personalized fingerprint of perceptual organization.
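    The study's fingerprints are full HMMs with hidden states; the sketch below is a deliberately simplified, fully observable first-order Markov version (hypothetical screen-region codes, invented sequences) that conveys the identification-by-likelihood idea:

```python
import math
from collections import defaultdict

def transition_model(seq, n_states, alpha=1.0):
    """Laplace-smoothed first-order transition probabilities
    estimated from one subject's gaze-region sequence."""
    counts = defaultdict(lambda: defaultdict(float))
    for a, b in zip(seq, seq[1:]):
        counts[a][b] += 1
    model = {}
    for a in range(n_states):
        total = sum(counts[a].values()) + alpha * n_states
        model[a] = {b: (counts[a][b] + alpha) / total for b in range(n_states)}
    return model

def log_likelihood(seq, model):
    """Log-probability of a gaze sequence under a subject's model."""
    return sum(math.log(model[a][b]) for a, b in zip(seq, seq[1:]))

# Hypothetical training sessions: gaze quantized into 3 screen regions.
train = {"subj_A": [0, 1, 0, 1, 0, 1, 0, 1, 0, 1],
         "subj_B": [0, 0, 1, 2, 2, 2, 1, 0, 0, 1]}
models = {s: transition_model(seq, 3) for s, seq in train.items()}

probe = [0, 1, 0, 1, 0, 1]  # a later session; whose scanpath is it?
best = max(models, key=lambda s: log_likelihood(probe, models[s]))
print(best)  # subj_A
```

    Testing models trained in one session against probes from a later session is what the cross-validated train-test conditions in the study probe for temporal stability.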

  11. Temporal Stability of Visual Search-Driven Biometrics

    SciTech Connect

    Yoon, Hong-Jun; Carmichael, Tandy; Tourassi, Georgia

    2015-01-01

    Previously, we have shown the potential of using an individual's visual search pattern as a possible biometric. That study focused on viewing images displaying dot-patterns with different spatial relationships to determine which pattern can be more effective in establishing the identity of an individual. In this follow-up study we investigated the temporal stability of this biometric. We performed an experiment with 16 individuals asked to search for a predetermined feature of a random-dot pattern as we tracked their eye movements. Each participant completed four testing sessions consisting of two dot patterns repeated twice. One dot pattern displayed concentric circles shifted to the left or right side of the screen overlaid with visual noise, and participants were asked which side the circles were centered on. The second dot-pattern displayed a number of circles (between 0 and 4) scattered on the screen overlaid with visual noise, and participants were asked how many circles they could identify. Each session contained 5 untracked tutorial questions and 50 tracked test questions (200 total tracked questions per participant). To create each participant's "fingerprint", we constructed a Hidden Markov Model (HMM) from the gaze data representing the underlying visual search and cognitive process. The accuracy of the derived HMM models was evaluated using cross-validation for various time-dependent train-test conditions. Subject identification accuracy ranged from 17.6% to 41.8% for all conditions, which is significantly higher than random guessing (1/16 = 6.25%). The results suggest that visual search pattern is a promising, fairly stable personalized fingerprint of perceptual organization.

  12. Visual search engine for product images

    NASA Astrophysics Data System (ADS)

    Lin, Xiaofan; Gokturk, Burak; Sumengen, Baris; Vu, Diem

    2008-01-01

    Nowadays there are many product comparison web sites, but most of them only use text information. This paper introduces a novel visual search engine for product images, which provides a brand-new way of visually locating products through Content-based Image Retrieval (CBIR) technology. We discuss the unique technical challenges, solutions, and experimental results in the design and implementation of this system.

  13. Automatization and training in visual search.

    PubMed

    Czerwinski, M; Lightfoot, N; Shiffrin, R M

    1992-01-01

    In several search tasks, the amount of practice on particular combinations of targets and distractors was equated in varied-mapping (VM) and consistent-mapping (CM) conditions. The results indicate the importance of distinguishing between memory and visual search tasks, and implicate a number of factors that play important roles in visual search and its learning. Visual search was studied in Experiment 1. VM and CM performance were almost equal, and slope reductions occurred during practice for both, suggesting the learning of efficient attentive search based on features, and no important role for automatic attention attraction. However, positive transfer effects occurred when previous CM targets were re-paired with previous CM distractors, even though these targets and distractors had not been trained together. Also, the introduction of a demanding simultaneous task produced advantages of CM over VM. These latter two results demonstrated the operation of automatic attention attraction. Visual search was further studied in Experiment 2, using novel characters for which feature overlap and similarity were controlled. The design and many of the findings paralleled Experiment 1. In addition, enormous search improvement was seen over 35 sessions of training, suggesting the operation of perceptual unitization for the novel characters. Experiment 3 showed a large, persistent advantage for CM over VM performance in memory search, even when practice on particular combinations of targets and distractors was equated in the two training conditions. A multifactor theory of automatization and attention is put forth to account for these findings and others in the literature. PMID:1621883

  14. Perceptual Encoding Efficiency in Visual Search

    ERIC Educational Resources Information Center

    Rauschenberger, Robert; Yantis, Steven

    2006-01-01

    The authors present 10 experiments that challenge some central assumptions of the dominant theories of visual search. Their results reveal that the complexity (or redundancy) of nontarget items is a crucial but overlooked determinant of search efficiency. The authors offer a new theoretical outline that emphasizes the importance of nontarget…

  15. Feature correlation guidance in category visual search.

    PubMed

    Wu, Rachel; Pruitt, Zoe; Runkle, Megan; Meyer, Kristen; Scerif, Gaia; Aslin, Richard

    2015-09-01

    Compared to objects with uncorrelated features (e.g., jelly beans come in many colors), objects with correlated features (e.g., bananas tend to be yellow) enable more robust object and category representations (e.g., Austerweil & Griffiths, 2011; Wu et al., 2011; Younger & Cohen, 1986). It is unknown whether these more robust representations impact attentional templates (i.e., working memory representations guiding visual search). Adults participated in four visual search tasks (2x2 design) where targets were defined as either one item (a specific alien) or a category (any alien) with correlated features (e.g., circle belly shape, circle back spikes) or uncorrelated features (e.g., circle belly shape, triangle back spikes). We measured behavioral responses and the N2pc component, an event-related potential (ERP) marker for target selection. Behavioral responses were better for correlated items than uncorrelated items for both exemplar and category search. While the N2pc amplitude was larger for exemplar search compared to category search, the amplitude only differed based on feature correlation for category search: The N2pc was present for category search with correlated features, and not present in search for uncorrelated features. Our ERP results demonstrate that correlated (and not uncorrelated) features for novel categories provide a robust category representation that can guide visual search. Meeting abstract presented at VSS 2015. PMID:26326614

  16. Graphical Representations of Electronic Search Patterns.

    ERIC Educational Resources Information Center

    Lin, Xia; And Others

    1991-01-01

    Discussion of search behavior in electronic environments focuses on the development of GRIP (Graphic Representor of Interaction Patterns), a graphing tool based on HyperCard that produces graphic representations of search patterns. Search state spaces are explained, and forms of data available from electronic searches are described. (34…

  17. Features in visual search combine linearly

    PubMed Central

    Pramod, R. T.; Arun, S. P.

    2014-01-01

    Single features such as line orientation and length are known to guide visual search, but relatively little is known about how multiple features combine in search. To address this question, we investigated how search for targets differing in multiple features (intensity, length, orientation) from the distracters is related to searches for targets differing in each of the individual features. We tested race models (based on reaction times) and co-activation models (based on reciprocal of reaction times) for their ability to predict multiple feature searches. Multiple feature searches were best accounted for by a co-activation model in which feature information combined linearly (r = 0.95). This result agrees with the classic finding that these features are separable i.e., subjective dissimilarity ratings sum linearly. We then replicated the classical finding that the length and width of a rectangle are integral features—in other words, they combine nonlinearly in visual search. However, to our surprise, upon including aspect ratio as an additional feature, length and width combined linearly and this model outperformed all other models. Thus, length and width of a rectangle became separable when considered together with aspect ratio. This finding predicts that searches involving shapes with identical aspect ratio should be more difficult than searches where shapes differ in aspect ratio. We confirmed this prediction on a variety of shapes. We conclude that features in visual search co-activate linearly and demonstrate for the first time that aspect ratio is a novel feature that guides visual search. PMID:24715328
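    A worked toy example of the winning co-activation scheme (the numbers are illustrative, not data from the study): treating the reciprocal of reaction time as a search rate, the rates for the individual feature differences add linearly, predicting the multi-feature search time:

```python
# Co-activation with linear feature summation: take the search rate as the
# reciprocal of reaction time (d = 1/RT) and let single-feature rates add.
rt_intensity = 2.0    # seconds: target differs from distractors only in intensity
rt_length = 1.0       # seconds: target differs only in length

d_combined = 1 / rt_intensity + 1 / rt_length   # 0.5 + 1.0 = 1.5
rt_combined = 1 / d_combined
print(round(rt_combined, 3))  # 0.667 s: faster than either single-feature search
```

    The linear fit reported in the abstract (r = 0.95) is over many such searches, with fitted weights on each feature term rather than the equal weights used here.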

  18. Urban camouflage assessment through visual search and computational saliency

    NASA Astrophysics Data System (ADS)

    Toet, Alexander; Hogervorst, Maarten A.

    2013-04-01

    We present a new method to derive a multiscale urban camouflage pattern from a given set of background image samples. We applied this method to design a camouflage pattern for a given (semi-arid) urban environment. We performed a human visual search experiment and a computational evaluation study to assess the effectiveness of this multiscale camouflage pattern relative to the performance of 10 other (multiscale, disruptive and monotonous) patterns that were also designed for deployment in the same operating theater. The results show that the pattern combines the overall lowest detection probability with an average mean search time. We also show that a frequency-tuned saliency metric predicts human observer performance to an appreciable extent. This computational metric can therefore be incorporated in the design process to optimize the effectiveness of camouflage patterns derived from a set of background samples.

  19. Driving forces in free visual search: An ethology.

    PubMed

    MacInnes, W Joseph; Hunt, Amelia R; Hilchey, Matthew D; Klein, Raymond M

    2014-02-01

    Visual search typically involves sequences of eye movements under the constraints of a specific scene and specific goals. Visual search has been used as an experimental paradigm to study the interplay of scene salience and top-down goals, as well as various aspects of vision, attention, and memory, usually by introducing a secondary task or by controlling and manipulating the search environment. An ethology is a study of an animal in its natural environment, and here we examine the fixation patterns of the human animal searching a series of challenging illustrated scenes that are well-known in popular culture. The search was free of secondary tasks, probes, and other distractions. Our goal was to describe saccadic behavior, including patterns of fixation duration, saccade amplitude, and angular direction. In particular, we employed both new and established techniques for identifying top-down strategies, any influences of bottom-up image salience, and the midlevel attentional effects of saccadic momentum and inhibition of return. The visual search dynamics that we observed and quantified demonstrate that saccades are not independently generated and incorporate distinct influences from strategy, salience, and attention. Sequential dependencies consistent with inhibition of return also emerged from our analyses. PMID:24385137

  20. Subsymmetries predict auditory and visual pattern complexity.

    PubMed

    Toussaint, Godfried T; Beltran, Juan F

    2013-01-01

    A mathematical measure of pattern complexity based on subsymmetries possessed by the pattern, previously shown to correlate highly with empirically derived measures of cognitive complexity in the visual domain, is found to also correlate significantly with empirically derived complexity measures of perception and production of auditory temporal and musical rhythmic patterns. Not only does the subsymmetry measure correlate highly with the difficulty of reproducing the rhythms by tapping after listening to them, but also the empirical measures exhibit similar behavior, for both the visual and auditory patterns, as a function of the relative number of subsymmetries present in the patterns. PMID:24494441
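    Assuming subsymmetries are counted as mirror-symmetric (palindromic) contiguous subpatterns, the usual reading of this measure, a minimal sketch of the count is:

```python
def subsymmetry_count(pattern):
    """Count mirror-symmetric (palindromic) contiguous subpatterns of
    length >= 2; more subsymmetries ~ lower perceived complexity."""
    n = len(pattern)
    return sum(
        1
        for i in range(n)
        for j in range(i + 2, n + 1)       # subpattern pattern[i:j]
        if pattern[i:j] == pattern[i:j][::-1]
    )

# A maximally regular pattern has many subsymmetries; an irregular one, few.
print(subsymmetry_count("xxxxxxxx"))   # 28: every subpattern is symmetric
print(subsymmetry_count("xoxxooxo"))   # 6
```

    For rhythms, the same count applies to the binary onset/rest sequence of the pattern.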

  1. Visual Templates in Pattern Generalization Activity

    ERIC Educational Resources Information Center

    Rivera, F. D.

    2010-01-01

    In this research article, I present evidence of the existence of visual templates in pattern generalization activity. Such templates initially emerged from a 3-week design-driven classroom teaching experiment on pattern generalization involving linear figural patterns and were assessed for existence in a clinical interview that was conducted four…

  3. Parallel and serial processes in visual search.

    PubMed

    Thornton, Thomas L; Gilden, David L

    2007-01-01

    A long-standing issue in the study of how people acquire visual information centers around the scheduling and deployment of attentional resources: Is the process serial, or is it parallel? A substantial empirical effort has been dedicated to resolving this issue (e.g., J. M. Wolfe, 1998a, 1998b). However, the results remain largely inconclusive because the methodologies that have historically been used cannot make the necessary distinctions (J. Palmer, 1995; J. T. Townsend, 1972, 1974, 1990). In this article, the authors develop a rigorous procedure for deciding the scheduling problem in visual search by making improvements in both search methodology and data interpretation. The search method, originally used by A. H. C. van der Heijden (1975), generalizes the traditional single-target methodology by permitting multiple targets. Reaction times and error rates from 29 representative search studies were analyzed using Monte Carlo simulation. Parallel and serial models of attention were defined by coupling the appropriate sequential sampling algorithms to realistic constraints on decision making. The authors found that although most searches are conducted by a parallel limited-capacity process, there is a distinguishable search class that is serial. PMID:17227182

  4. Pattern Search Methods for Linearly Constrained Minimization

    NASA Technical Reports Server (NTRS)

    Lewis, Robert Michael; Torczon, Virginia

    1998-01-01

    We extend pattern search methods to linearly constrained minimization. We develop a general class of feasible point pattern search algorithms and prove global convergence to a Karush-Kuhn-Tucker point. As in the case of unconstrained minimization, pattern search methods for linearly constrained problems accomplish this without explicit recourse to the gradient or the directional derivative. Key to the analysis of the algorithms is the way in which the local search patterns conform to the geometry of the boundary of the feasible region.
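    The polling-and-refinement loop at the core of pattern search can be sketched for the unconstrained case (the paper's contribution, conforming the search pattern to the boundary of the linearly constrained feasible region, is omitted here); note how the step-length parameter doubles as the stopping criterion:

```python
def compass_search(f, x, step=1.0, tol=1e-6, max_iter=10_000):
    """Derivative-free compass (pattern) search: poll +/- each coordinate
    direction; halve the step when no poll point improves."""
    n = len(x)
    fx = f(x)
    for _ in range(max_iter):
        if step < tol:          # step length serves as the stationarity measure
            break
        improved = False
        for i in range(n):
            for s in (step, -step):
                y = list(x)
                y[i] += s
                fy = f(y)
                if fy < fx:     # accept the first improving poll point
                    x, fx, improved = y, fy, True
                    break
            if improved:
                break
        if not improved:
            step *= 0.5         # refine the mesh and poll again
    return x, fx

xmin, fmin = compass_search(lambda v: (v[0] - 3) ** 2 + (v[1] + 1) ** 2, [0.0, 0.0])
print([round(c, 3) for c in xmin])  # close to [3, -1]
```

    No gradient or directional derivative is ever evaluated, which is exactly the property the convergence analysis must work around.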

  5. On the Local Convergence of Pattern Search

    NASA Technical Reports Server (NTRS)

    Dolan, Elizabeth D.; Lewis, Robert Michael; Torczon, Virginia; Bushnell, Dennis M. (Technical Monitor)

    2000-01-01

    We examine the local convergence properties of pattern search methods, complementing the previously established global convergence properties for this class of algorithms. We show that the step-length control parameter which appears in the definition of pattern search algorithms provides a reliable asymptotic measure of first-order stationarity. This gives an analytical justification for a traditional stopping criterion for pattern search methods. Using this measure of first-order stationarity, we analyze the behavior of pattern search in the neighborhood of an isolated local minimizer. We show that a recognizable subsequence converges r-linearly to the minimizer.

  6. Investigation of Neural Strategies of Visual Search

    NASA Technical Reports Server (NTRS)

    Krauzlis, Richard J.

    2003-01-01

    The goal of this project was to measure how neurons in the superior colliculus (SC) change their activity during a visual search task. Specifically, we proposed to measure how the activity of these neurons was altered by the discriminability of visual targets and to test how these changes might predict the changes in the subject's performance. The primary rationale for this study was that understanding how the information encoded by these neurons constrains overall search performance would foster the development of better models of human performance. Work performed during the period supported by this grant has achieved these aims. First, we have recorded from neurons in the superior colliculus (SC) during a visual search task in which the difficulty of the task and the performance of the subject were systematically varied. The results from these single-neuron physiology experiments show that, prior to eye movement onset, the difference in activity across the ensemble of neurons reaches a fixed threshold value, reflecting the operation of a winner-take-all mechanism. Second, we have developed a model of eye movement decisions based on the principle of winner-take-all. The model incorporates the idea that the overt saccade choice reflects only one of the multiple saccades prepared during visual discrimination, consistent with our physiological data. The value of the model is that, unlike previous models, it is able to account for both the latency and the percent correct of saccade choices.
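    A minimal sketch of the winner-take-all race idea (drift rates, noise, and threshold values are all invented): independent noisy accumulators, one per saccade target, race to a fixed threshold, and the first to cross determines both the choice and the latency, naturally linking percent correct and reaction time:

```python
import random

def race_to_threshold(drifts, threshold=30.0, noise=1.0, rng=None):
    """Race model: one noisy accumulator per saccade target; the first
    to reach threshold wins (winner-take-all). Returns (choice, latency)."""
    rng = rng or random.Random(0)
    levels = [0.0] * len(drifts)
    t = 0
    while True:
        t += 1
        for i, d in enumerate(drifts):
            levels[i] += d + rng.gauss(0.0, noise)   # evidence + noise per step
        peak = max(range(len(levels)), key=lambda i: levels[i])
        if levels[peak] >= threshold:
            return peak, t

# Easier discriminations -> larger drift toward the true target (index 0)
# -> faster and more accurate choices.
choice, latency = race_to_threshold([1.0, 0.2, 0.2], rng=random.Random(42))
print(choice, latency)
```

    Lowering the drift gap between target and distractor accumulators slows the race and increases error rate, the joint dependence the project's model is built to capture.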

  7. Persistence in eye movement during visual search

    NASA Astrophysics Data System (ADS)

    Amor, Tatiana A.; Reis, Saulo D. S.; Campos, Daniel; Herrmann, Hans J.; Andrade, José S.

    2016-02-01

    Like any cognitive task, visual search involves a number of underlying processes that cannot be directly observed and measured. In this way, the movement of the eyes certainly represents the most explicit and closest connection we can get to the inner mechanisms governing this cognitive activity. Here we show that the process of eye movement during visual search, consisting of sequences of fixations intercalated by saccades, exhibits distinctive persistent behaviors. Initially, by focusing on saccadic directions and intersaccadic angles, we disclose that the probability distributions of these measures show a clear preference of participants towards a reading-like mechanism (geometrical persistence), whose features and potential advantages for searching/foraging are discussed. We then perform a Multifractal Detrended Fluctuation Analysis (MF-DFA) over the time series of jump magnitudes in the eye trajectory and find that it exhibits a typical multifractal behavior arising from the sequential combination of saccades and fixations. By inspecting the time series composed of only fixational movements, our results reveal instead a monofractal behavior with a Hurst exponent, which indicates the presence of long-range power-law positive correlations (statistical persistence). We expect that our methodological approach can be adopted as a way to understand persistence and strategy-planning during visual search.
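    A compact monofractal DFA sketch shows how a Hurst exponent like the one in the abstract is estimated from a time series; applied to white noise it recovers H near 0.5, whereas a persistent fixational series would yield H > 0.5 (scales and data here are illustrative):

```python
import numpy as np

def dfa_hurst(x, scales=(8, 16, 32, 64, 128)):
    """Monofractal Detrended Fluctuation Analysis: the slope of
    log F(s) versus log s estimates the Hurst exponent
    (H ~ 0.5: uncorrelated; H > 0.5: persistent long-range correlations)."""
    y = np.cumsum(x - np.mean(x))                 # integrated profile
    flucts = []
    for s in scales:
        n_seg = len(y) // s
        t = np.arange(s)
        sq = []
        for k in range(n_seg):
            seg = y[k * s:(k + 1) * s]
            trend = np.polyval(np.polyfit(t, seg, 1), t)   # linear detrend
            sq.append(np.mean((seg - trend) ** 2))
        flucts.append(np.sqrt(np.mean(sq)))       # fluctuation F(s)
    slope, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
    return slope

rng = np.random.default_rng(0)
h = dfa_hurst(rng.standard_normal(4096))
print(round(h, 2))  # ~0.5 for uncorrelated white noise
```

    The multifractal variant (MF-DFA) used in the paper repeats this with a family of fluctuation orders q instead of the single root-mean-square average above.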

  8. Persistence in eye movement during visual search.

    PubMed

    Amor, Tatiana A; Reis, Saulo D S; Campos, Daniel; Herrmann, Hans J; Andrade, José S

    2016-01-01

    Like any cognitive task, visual search involves a number of underlying processes that cannot be directly observed and measured. In this way, the movement of the eyes certainly represents the most explicit and closest connection we can get to the inner mechanisms governing this cognitive activity. Here we show that the process of eye movement during visual search, consisting of sequences of fixations intercalated by saccades, exhibits distinctive persistent behaviors. Initially, by focusing on saccadic directions and intersaccadic angles, we disclose that the probability distributions of these measures show a clear preference of participants towards a reading-like mechanism (geometrical persistence), whose features and potential advantages for searching/foraging are discussed. We then perform a Multifractal Detrended Fluctuation Analysis (MF-DFA) over the time series of jump magnitudes in the eye trajectory and find that it exhibits a typical multifractal behavior arising from the sequential combination of saccades and fixations. By inspecting the time series composed of only fixational movements, our results reveal instead a monofractal behavior with a Hurst exponent, which indicates the presence of long-range power-law positive correlations (statistical persistence). We expect that our methodological approach can be adopted as a way to understand persistence and strategy-planning during visual search. PMID:26864680

  9. Persistence in eye movement during visual search

    PubMed Central

    Amor, Tatiana A.; Reis, Saulo D. S.; Campos, Daniel; Herrmann, Hans J.; Andrade, José S.

    2016-01-01

    Like any cognitive task, visual search involves a number of underlying processes that cannot be directly observed and measured. In this way, the movement of the eyes certainly represents the most explicit and closest connection we can get to the inner mechanisms governing this cognitive activity. Here we show that the process of eye movement during visual search, consisting of sequences of fixations intercalated by saccades, exhibits distinctive persistent behaviors. Initially, by focusing on saccadic directions and intersaccadic angles, we disclose that the probability distributions of these measures show a clear preference of participants towards a reading-like mechanism (geometrical persistence), whose features and potential advantages for searching/foraging are discussed. We then perform a Multifractal Detrended Fluctuation Analysis (MF-DFA) over the time series of jump magnitudes in the eye trajectory and find that it exhibits a typical multifractal behavior arising from the sequential combination of saccades and fixations. By inspecting the time series composed of only fixational movements, our results reveal instead a monofractal behavior with a Hurst exponent, which indicates the presence of long-range power-law positive correlations (statistical persistence). We expect that our methodological approach can be adopted as a way to understand persistence and strategy-planning during visual search. PMID:26864680

  10. Similarity relations in visual search predict rapid visual categorization

    PubMed Central

    Mohan, Krithika; Arun, S. P.

    2012-01-01

    How do we perform rapid visual categorization? It is widely thought that categorization involves evaluating the similarity of an object to other category items, but the underlying features and similarity relations remain unknown. Here, we hypothesized that categorization performance is based on perceived similarity relations between items within and outside the category. To this end, we measured the categorization performance of human subjects on three diverse visual categories (animals, vehicles, and tools) and across three hierarchical levels (superordinate, basic, and subordinate levels among animals). For the same subjects, we measured their perceived pair-wise similarities between objects using a visual search task. Regardless of category and hierarchical level, we found that the time taken to categorize an object could be predicted using its similarity to members within and outside its category. We were able to account for several classic categorization phenomena, such as (a) the longer times required to reject category membership; (b) the longer times to categorize atypical objects; and (c) differences in performance across tasks and across hierarchical levels. These categorization times were also accounted for by a model that extracts coarse structure from an image. The striking agreement observed between categorization and visual search suggests that these two disparate tasks depend on a shared coarse object representation. PMID:23092947

  11. Top-down visual search in Wimmelbild

    NASA Astrophysics Data System (ADS)

    Bergbauer, Julia; Tari, Sibel

    2013-03-01

    Wimmelbild, which means "teeming figure picture", is a popular genre of visual puzzles. Abundant masses of small figures are brought together in complex arrangements to make one scene in a Wimmelbild. It is a picture-hunt game. We discuss what types of computations/processes could possibly underlie the discovery of figures that are hidden due to a distractive influence of the context. One thing is for sure: the processes are unlikely to be purely bottom-up. One possibility is to re-arrange parts and see what happens. As this idea is linked to creativity, there are abundant examples of unconventional part re-organization in modern art. A second possibility is to define what to look for, that is, to formulate the search as a top-down process. We address top-down visual search in Wimmelbild with the help of diffuse distance and curvature coding fields.

  12. Parallel Mechanisms for Visual Search in Zebrafish

    PubMed Central

    Proulx, Michael J.; Parker, Matthew O.; Tahir, Yasser; Brennan, Caroline H.

    2014-01-01

    Parallel visual search mechanisms have been reported previously only in mammals and birds, and not animals lacking an expanded telencephalon such as bees. Here we report the first evidence for parallel visual search in fish using a choice task where the fish had to find a target amongst an increasing number of distractors. Following two-choice discrimination training, zebrafish were presented with the original stimulus within an increasing array of distractor stimuli. We found that zebrafish exhibit no significant change in accuracy and approach latency as the number of distractors increased, providing evidence of parallel processing. This evidence challenges theories of vertebrate neural architecture and the importance of an expanded telencephalon for the evolution of executive function. PMID:25353168

  13. Guided Text Search Using Adaptive Visual Analytics

    SciTech Connect

    Steed, Chad A; Symons, Christopher T; Senter, James K; DeNap, Frank A

    2012-10-01

    This research demonstrates the promise of augmenting interactive visualizations with semi-supervised machine learning techniques to improve the discovery of significant associations and insights in the search and analysis of textual information. More specifically, we have developed a system called Gryffin that hosts a unique collection of techniques that facilitate individualized investigative search pertaining to an ever-changing set of analytical questions over an indexed collection of open-source documents related to critical national infrastructure. The Gryffin client hosts dynamic displays of the search results via focus+context record listings, temporal timelines, term-frequency views, and multiple coordinated views. Furthermore, as the analyst interacts with the display, the interactions are recorded and used to label the search records. These labeled records are then used to drive semi-supervised machine learning algorithms that re-rank the unlabeled search records such that potentially relevant records are moved to the top of the record listing. Gryffin is described in the context of the daily tasks encountered at the US Department of Homeland Security's Fusion Center, with whom we are collaborating in its development. The resulting system is capable of addressing the analysts' information overload that can be directly attributed to the deluge of information that must be addressed in the search and investigative analysis of textual information.
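    The abstract does not specify Gryffin's learner at code level; as a stand-in sketch of interaction-driven re-ranking, the snippet below blends base retrieval scores with similarity to the centroid of records the analyst labeled relevant (document vectors, scores, and labels are all invented):

```python
import numpy as np

def rerank(doc_vectors, scores, labels):
    """Re-rank search records from analyst labels: blend the base retrieval
    score with cosine similarity to the centroid of records labeled relevant
    (a simple proxy for a semi-supervised learner)."""
    relevant = [i for i, lab in labels.items() if lab == 1]
    centroid = np.mean([doc_vectors[i] for i in relevant], axis=0)
    sims = doc_vectors @ centroid / (
        np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(centroid) + 1e-12)
    blended = 0.5 * np.asarray(scores, dtype=float) + 0.5 * sims
    for i, lab in labels.items():          # labeled records keep their label
        blended[i] = 1.0 if lab == 1 else 0.0
    return np.argsort(-blended)            # indices, best first

docs = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]])
order = rerank(docs, scores=[0.2, 0.1, 0.3], labels={0: 1})
print(order.tolist())  # [0, 1, 2]: doc 1, similar to labeled doc 0, moves above doc 2
```

    Each new label would re-trigger the re-ranking, mirroring the interactive loop described above.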

  14. Race Guides Attention in Visual Search

    PubMed Central

    Otten, Marte

    2016-01-01

    It is known that faces are rapidly and even unconsciously categorized into social groups (black vs. white, male vs. female). Here, I test whether preferences for specific social groups guide attention, using a visual search paradigm. In Experiment 1 participants searched displays of neutral faces for an angry or frightened target face. Black target faces were detected more efficiently than white targets, indicating that black faces attracted more attention. Experiment 2 showed that attention differences between black and white faces were correlated with individual differences in automatic race preference. In Experiment 3, using happy target faces, the attentional preference for black over white faces was eliminated. Taken together, these results suggest that automatic preferences for social groups guide attention to individuals from negatively valenced groups, when people are searching for a negative emotion such as anger or fear. PMID:26900957

  15. An active visual search interface for Medline.

    PubMed

    Xuan, Weijian; Dai, Manhong; Mirel, Barbara; Wilson, Justin; Athey, Brian; Watson, Stanley J; Meng, Fan

    2007-01-01

    Searching the Medline database is almost a daily necessity for many biomedical researchers. However, available Medline search solutions are mainly designed for the quick retrieval of a small set of the most relevant documents. Because of this search model, they are not suitable for the large-scale exploration of the literature and the underlying biomedical conceptual relationships, which are common tasks in the age of high-throughput experimental data analysis and cross-discipline research. We developed a new Medline exploration approach by incorporating interactive visualization together with powerful grouping, summary, sorting, and active external content retrieval functions. Our solution, PubViz, is based on the FLEX platform designed for interactive web applications, and its prototype is publicly available at: http://brainarray.mbni.med.umich.edu/Brainarray/DataMining/PubViz. PMID:17951838

  16. When is stereopsis useful in visual search?

    PubMed

    Josephs, Emilie; Cain, Matthew; Hidalgo-Sotelo, Barbara; Cook, Gregory; Chang, Nelson; Ehinger, Krista; Oliva, Aude; Wolfe, Jeremy

    2015-09-01

    Does stereoscopic information improve visual search? We know that attention can be guided efficiently by stereopsis, for example, to the near target among far distractors (Nakayama and Silverman, 1986), but, when searching through a real scene, does it help if that scene is presented stereoscopically? Certainly, scenes appear more vividly real in 3D. However, we present three experiments in which the addition of stereo did not alter scene search very much. In Experiment 1, 12 observers searched twice for 10 target objects in each of 18 photographic scenes (a 'repeated' search task). Note that observers were not cued to search for a target at a specific depth; stereopsis simply added to the vividness of the scene. Reaction time and accuracy did not differ significantly between stereoscopic and monoscopic conditions. In Experiment 2, using similar images, we found no differences between 2D and 3D conditions in time to first fixation on the target or in average saccade length. However, gaze durations were significantly shorter in 3D scenes. Since gaze durations are typically taken to measure processing time, this may suggest that it was easier to disambiguate surfaces and/or objects in 3D, although this advantage did not translate into shorter search times. In a final experiment, we reduced the stimulus array to a set of colored rendered objects distributed in depth against a plain background. In this task, the addition of stereo produced shorter reaction times, even though stereo information was not predictive of target location. Our real scenes may have contained such a rich array of cues to target location that the addition of stereo contributed little extra information. It may be in more difficult searches, including more challenging real-world tasks, that stereopsis will be an asset. Meeting abstract presented at VSS 2015. PMID:26327049

  17. LoyalTracker: Visualizing Loyalty Dynamics in Search Engines.

    PubMed

    Shi, Conglei; Wu, Yingcai; Liu, Shixia; Zhou, Hong; Qu, Huamin

    2014-12-01

    The huge amount of user log data collected by search engine providers creates new opportunities to understand user loyalty and defection behavior at an unprecedented scale. However, it also poses a great challenge to analyze the behavior and glean insights from such complex, large data. In this paper, we introduce LoyalTracker, a visual analytics system for tracking user loyalty and switching behavior towards multiple search engines from vast amounts of user log data. We propose a new interactive visualization technique (flow view) based on a flow metaphor, which conveys a proper visual summary of the loyalty dynamics of thousands of users over time. Two other visualization techniques, a density map and a word cloud, are integrated to enable analysts to gain further insights into the patterns identified by the flow view. Case studies and interviews with domain experts were conducted to demonstrate the usefulness of our technique in understanding user loyalty and switching behavior in search engines. PMID:26356887

  18. Adding a visualization feature to web search engines: it's time.

    PubMed

    Wong, Pak Chung

    2008-01-01

    It's widely recognized that all Web search engines today are almost identical in presentation layout and behavior. In fact, the same presentation approach has been applied to depicting search engine results pages (SERPs) since the first Web search engine launched in 1993. In this Visualization Viewpoints article, I propose to add a visualization feature to Web search engines and suggest that the new addition can improve search engines' performance and capabilities, which in turn lead to better Web search technology. PMID:19004680

  19. Fractal analysis of radiologists' visual scanning pattern in screening mammography

    NASA Astrophysics Data System (ADS)

    Alamudun, Folami T.; Yoon, Hong-Jun; Hudson, Kathy; Morin-Ducote, Garnetta; Tourassi, Georgia

    2015-03-01

    Several researchers have investigated radiologists' visual scanning patterns with respect to features such as total time examining a case, time to initially hit true lesions, number of hits, etc. The purpose of this study was to examine the complexity of radiologists' visual scanning patterns when viewing 4-view mammographic cases, as they typically do in clinical practice. Gaze data were collected from 10 readers (3 breast imaging experts and 7 radiology residents) while reviewing 100 screening mammograms (24 normal, 26 benign, 50 malignant). The radiologists' scanpaths across the 4 mammographic views were mapped to a single 2-D image plane. Then, fractal analysis was applied to the composite 4-view scanpaths. For each case, the complexity of each radiologist's scanpath was measured using the fractal dimension estimated with the box-counting method. The association between the fractal dimension of the radiologists' visual scanpaths, case pathology, case density, and radiologist experience was evaluated using fixed-effects ANOVA. The ANOVA showed that the complexity of the radiologists' visual search pattern in screening mammography depends on case-specific attributes (breast parenchyma density and case pathology) as well as on reader attributes, namely experience level. Visual scanning patterns are significantly different for benign and malignant cases than for normal cases. There is also substantial inter-observer variability which cannot be explained by experience level alone.
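The box-counting estimate mentioned above follows a standard recipe: overlay grids of increasing resolution on the scanpath, count the boxes that contain at least one gaze point, and take the slope of log(count) against log(resolution). A minimal sketch of that recipe (illustrative only, not the authors' code):

```python
import numpy as np

def box_counting_dimension(points, grid_sizes=(2, 4, 8, 16, 32, 64)):
    """Estimate the fractal (box-counting) dimension of a 2-D point set.

    points: (x, y) coordinates normalized to the unit square [0, 1].
    For each grid resolution n (an n x n grid of boxes), count how many
    boxes contain at least one point; the dimension is the slope of
    log N(n) versus log n.
    """
    pts = np.asarray(points, dtype=float)
    counts = []
    for n in grid_sizes:
        # Map each point to its box index; clip handles points exactly at 1.0
        boxes = np.clip(np.floor(pts * n).astype(int), 0, n - 1)
        counts.append(len({tuple(b) for b in boxes}))
    slope, _ = np.polyfit(np.log(grid_sizes), np.log(counts), 1)
    return slope
```

A scanpath confined to a straight line yields a dimension near 1, while one that densely covers the image plane approaches 2, which is why the measure can separate focused from wide-ranging search behavior.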

  20. Adaptation and visual search in mammographic images.

    PubMed

    Kompaniez-Dunigan, Elysse; Abbey, Craig K; Boone, John M; Webster, Michael A

    2015-05-01

    Radiologists face the visually challenging task of detecting suspicious features within the complex and noisy backgrounds characteristic of medical images. We used a search task to examine whether the salience of target features in x-ray mammograms could be enhanced by prior adaptation to the spatial structure of the images. The observers were not radiologists, and thus had no diagnostic training with the images. The stimuli were randomly selected sections from normal mammograms previously classified with BIRADS Density scores of "fatty" versus "dense," corresponding to differences in the relative quantities of fat versus fibroglandular tissue. These categories reflect conspicuous differences in visual texture, with dense tissue being more likely to obscure lesion detection. The targets were simulated masses corresponding to bright Gaussian spots, superimposed by adding the luminance to the background. A single target was randomly added to each image, with contrast varied over five levels so that they varied from difficult to easy to detect. Reaction times were measured for detecting the target location, before or after adapting to a gray field or to random sequences of a different set of dense or fatty images. Observers were faster at detecting the targets in either dense or fatty images after adapting to the specific background type (dense or fatty) that they were searching within. Thus, the adaptation led to a facilitation of search performance that was selective for the background texture. Our results are consistent with the hypothesis that adaptation allows observers to more effectively suppress the specific structure of the background, thereby heightening visual salience and search efficiency. PMID:25720760

  1. Implied action affordance facilitates visual search.

    PubMed

    Gomez, Michael; Snow, Jacqueline

    2015-09-01

    Although numerous studies have explored the influence of object affordances on perception, it is usually the case that only one or two items are depicted and the affordance-related stimuli differ markedly from low-affordance exemplars. Here we examined whether visual search for stimuli that imply action is superior to search for non-affordance-related images, using more cluttered visual arrays in which the stimuli are closely matched for color, luminance, and contrast. The search displays contained greyscale objects that implied action, or 'reconfigured' versions of the same stimuli that did not imply action. Search performance was examined in Experiment 1 using door levers, and in Experiment 2 using forks. Reconfigured stimuli were created by digitally rotating one component of the functional end of the object (i.e., the door lever fulcrum / fork prongs) into a spatial configuration that interfered with implied functionality. The stimuli were presented briefly in a 2 x 2 grid. In half of the trials, the stimuli were identical (target absent trials); in the remaining trials one item (the target) was presented with the handle in a reversed left/right orientation from the remaining 'distractors' (target present trials). Right-handed observers were asked to make a speeded target present / absent decision. In Experiment 1, intact door lever targets were detected faster than their reconfigured counterparts. In Experiment 2, target detection was more accurate for intact fork targets than for their reconfigured counterparts. In both experiments, observers were faster to detect oddball targets in which the 'functional' or 'reconfigured' end was oriented towards the right than towards the left, and this effect was strongest for intact over reconfigured stimulus arrays. Taken together, our findings demonstrate that target search is facilitated for objects that imply action relative to those that do not, and that search is most efficient when an object's functional end is rightward-oriented. Meeting abstract presented at VSS 2015. PMID:26326755

  2. Pre-exposure of repeated search configurations facilitates subsequent contextual cuing of visual search.

    PubMed

    Beesley, Tom; Vadillo, Miguel A; Pearson, Daniel; Shanks, David R

    2015-03-01

    Contextual cuing is the enhancement of visual search when the configuration of distractors has been experienced previously. It has been suggested that contextual cuing relies on associative learning between the distractor locations and the target position. Four experiments examined the effect of pre-exposing configurations of consistent distractors on subsequent contextual cuing. The findings demonstrate a facilitation of subsequent cuing for pre-exposed configurations compared to novel configurations that have not been pre-exposed. This facilitation suggests that learning of repeated visual search patterns involves acquisition of not just distractor-target associations but also associations between distractors within the search context, an effect that is not captured by the Brady and Chun (2007) connectionist model of contextual cuing. We propose a new connectionist model of contextual cuing that learns associations between repeated distractor stimuli, enabling it to predict an effect of pre-exposure on contextual cuing. PMID:24999706

  3. Fraction Patterns--Visual and Numerical.

    ERIC Educational Resources Information Center

    Bennett, Albert B., Jr.

    1989-01-01

    A visual model of fractions, the tower of bars, is used to discover patterns. Examples include equalities, inequalities, sums of unit fractions, sums of differences, symmetry, and differences and products. Infinite sequences of numbers, infinite series, and concepts of limits can be introduced. (DC)

  4. Transition between different search patterns in human online search behavior

    NASA Astrophysics Data System (ADS)

    Wang, Xiangwen; Pleimling, Michel

    2015-03-01

    We investigate human online search behavior by analyzing data sets from different search engines. Based on a comparison of results from several click-through data sets collected in different years, we observe a transition of the search pattern from a Lévy-flight-like behavior to a Brownian-motion-type behavior as the search engine algorithms improve. This result is consistent with findings in animal foraging processes. A more detailed analysis shows that the human search patterns are more complex than simple Lévy flights or Brownian motions. Notable differences between the behaviors of different individuals can be observed in many quantities. This work is in part supported by the US National Science Foundation through Grant DMR-1205309.
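The Lévy-versus-Brownian distinction rests on the tail of the step-length distribution: a Lévy flight has a power-law tail p(x) ~ x^(-alpha) with a small exponent, whereas Brownian-like motion produces steps whose tail decays far faster. A hedged sketch of how that can be probed on synthetic data, using a standard maximum-likelihood exponent estimate (not the authors' actual analysis pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)

def powerlaw_mle_alpha(steps, xmin=1.0):
    """Maximum-likelihood exponent for a continuous power-law tail:
    alpha = 1 + n / sum(ln(x_i / xmin)), over steps x_i >= xmin."""
    x = np.asarray(steps, dtype=float)
    x = x[x >= xmin]
    return 1.0 + len(x) / np.log(x / xmin).sum()

# Levy-flight-like steps: p(x) ~ x**-2.5 for x >= 1, via inverse-CDF sampling
levy_steps = (1.0 - rng.random(200_000)) ** (-1.0 / 1.5)
# Brownian-motion-like steps: magnitudes of 2-D Gaussian displacements
brown_steps = np.hypot(rng.normal(size=200_000), rng.normal(size=200_000))

alpha_levy = powerlaw_mle_alpha(levy_steps)    # recovers the heavy tail, ~2.5
alpha_brown = powerlaw_mle_alpha(brown_steps)  # noticeably larger: fast decay
```

On real click-through trajectories, the fitted exponent (and whether a power law fits at all) is what separates the two regimes the abstract describes.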

  5. Visual search from lab to clinic and back

    NASA Astrophysics Data System (ADS)

    Wolfe, Jeremy M.

    2014-03-01

    Many of the tasks of medical image perception can be understood as demanding visual search tasks (especially if you happen to be a visual search researcher). Basic research on visual search can tell us quite a lot about how medical image search tasks proceed, because even experts have to use the human "search engine" with all its limitations. Humans can deploy attention to only one or a very few items at any one time. Human search is "guided" search: humans deploy their attention to likely target objects on the basis of the basic visual features of objects and of an understanding of the scene containing those objects. This guidance operates in medical images as well as in the mundane scenes of everyday life. The paper reviews some of the dialogue between medical image perception by experts and visual search as studied in the laboratory.

  6. Long-term visual search: Examining trial-by-trial learning over extended visual search experiences.

    PubMed

    Ericson, Justin; Biggs, Adam; Winkle, Jonathan; Gancayco, Christina; Mitroff, Stephen

    2015-09-01

    Airport security personnel search for a large number of prohibited items that vary in size, shape, color, category-membership, and more. This highly varied search set creates challenges for search accuracy, including how searchers are trained to identify a myriad of potential targets. This challenge has both practical and theoretical implications (i.e., determining how best to obtain high accuracy, and how large memory sets interact with visual search performance, respectively). Recent research on "hybrid visual and memory search" (e.g., Wolfe, 2012) has begun to address such issues, but many questions remain. The current study addressed a problem that is difficult for traditional laboratory-based research: how does trial-by-trial learning develop over time for a large number of target types? This issue, which we call "long-term visual search," is key for understanding how recurring information is retained in memory so that it can aid future searches. Through the use of "big data" from the mobile application Airport Scanner (Kedlin Co.), it is possible to address such previously intractable questions. Airport Scanner is a game where players serve as airport security officers looking for prohibited items in simulated bags. The game has over 7 million downloads and provides a powerful tool for psychological research (Mitroff et al., 2014 JEP:HPP). Trial-by-trial learning for multiple different targets was addressed by analyzing data from 50,000 participants. Distinct learning curves for each specific target revealed that accuracy rises asymptotically across trials without deteriorating back to its initially low starting level. Additionally, an investigation into the number of to-be-searched-for target items indicated that accuracy remained high even as the memorized set size increased. The results suggest that items stored in memory generate their own item-specific templates that are reinforced by repeated exposures. These findings offer insight into how novices develop into experts at target detection over the course of training. Meeting abstract presented at VSS 2015. PMID:26326796

  7. Visual abstraction of complex motion patterns

    NASA Astrophysics Data System (ADS)

    Janetzko, Halldór; Jäckle, Dominik; Deussen, Oliver; Keim, Daniel A.

    2013-12-01

    Today's tracking devices offer high spatial and temporal resolution and, owing to their shrinking size, an ever-increasing range of application scenarios. However, understanding motion over time becomes difficult as soon as the resulting trajectories grow complex. Simply plotting the data may obscure important patterns, since trajectories over long time periods often include many revisits of the same place, which creates a high degree of over-plotting. Furthermore, important details are often hidden by the combination of large-scale transitions with local, small-scale movement patterns. We present a visualization and abstraction technique for such complex motion data. By analyzing the motion patterns and displaying them with visual abstraction techniques, a synergy of aggregation and simplification is reached. The capabilities of the method are shown in real-world applications for tracked animals and discussed with experts from biology. Our proposed abstraction techniques reduce visual clutter and help analysts to understand the movement patterns that are hidden in raw spatiotemporal data.
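A classic ingredient of trajectory simplification of this kind is line generalization, for example the Ramer-Douglas-Peucker algorithm, which drops points that deviate less than a tolerance from the simplified path. The sketch below is illustrative only; the paper's abstraction technique is its own aggregation and simplification scheme.

```python
import numpy as np

def douglas_peucker(points, eps):
    """Ramer-Douglas-Peucker simplification: recursively keep only points
    that deviate more than eps from the chord joining the segment ends."""
    pts = np.asarray(points, dtype=float)
    if len(pts) < 3:
        return pts
    start, end = pts[0], pts[-1]
    seg = end - start
    rel = pts - start
    # Perpendicular distance of every point to the start-end chord
    cross_z = seg[0] * rel[:, 1] - seg[1] * rel[:, 0]
    seg_len = np.linalg.norm(seg)
    dists = np.abs(cross_z) / seg_len if seg_len > 0 else np.linalg.norm(rel, axis=1)
    i = int(np.argmax(dists))
    if dists[i] > eps:  # split at the most deviant point and recurse
        left = douglas_peucker(pts[: i + 1], eps)
        right = douglas_peucker(pts[i:], eps)
        return np.vstack([left[:-1], right])
    return np.vstack([start, end])  # everything in between is within tolerance

# A nearly straight trajectory with one genuine excursion at (3, 1)
track = [(0, 0), (1, 0.05), (2, 0), (3, 1), (4, 0)]
simplified = douglas_peucker(track, eps=0.1)  # keeps the excursion, drops noise
```

Tuning eps trades fidelity against clutter, which is exactly the aggregation/simplification balance the abstract argues for.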

  8. Recognition of Facially Expressed Emotions and Visual Search Strategies in Adults with Asperger Syndrome

    ERIC Educational Resources Information Center

    Falkmer, Marita; Bjallmark, Anna; Larsson, Matilda; Falkmer, Torbjorn

    2011-01-01

    Can the disadvantages persons with Asperger syndrome frequently experience with reading facially expressed emotions be attributed to a different visual perception, affecting their scanning patterns? Visual search strategies, particularly regarding the importance of information from the eye area, and the ability to recognise facially expressed…

  10. Visual search behaviour during laparoscopic cadaveric procedures

    NASA Astrophysics Data System (ADS)

    Dong, Leng; Chen, Yan; Gale, Alastair G.; Rees, Benjamin; Maxwell-Armstrong, Charles

    2014-03-01

    Laparoscopic surgery provides a very complex example of medical image interpretation. The task entails: visually examining a display that portrays the laparoscopic procedure from a varying viewpoint; eye-hand coordination; complex 3D interpretation of the 2D display imagery; efficient and safe usage of appropriate surgical tools, as well as other factors. Training in laparoscopic surgery typically entails practice using surgical simulators. Another approach is to use cadavers. Viewing previously recorded laparoscopic operations is also a viable additional approach, and to examine this a study was undertaken to determine what differences exist between where surgeons look during actual operations and where they look when simply viewing the same pre-recorded operations. It was hypothesised that there would be differences related to the different experimental conditions; however the relative nature of such differences was unknown. The visual search behaviour of two experienced surgeons was recorded as they performed three types of laparoscopic operations on a cadaver. The operations were also digitally recorded. Subsequently the surgeons viewed the recordings of their operations, again whilst their eye movements were monitored. Differences were found in various eye movement parameters between when the two surgeons performed the operations and when they simply watched the recordings of those operations. It is argued that this reflects the different perceptual motor skills pertinent to the different situations. The relevance of this for surgical training is explored.

  11. Visual search and eye movements in novel and familiar contexts

    NASA Astrophysics Data System (ADS)

    McDermott, Kyle; Mulligan, Jeffrey B.; Bebis, George; Webster, Michael A.

    2006-02-01

    Adapting to the visual characteristics of a specific environment may facilitate detecting novel stimuli within that environment. We monitored eye movements while subjects searched for a color target on familiar or unfamiliar color backgrounds, in order to test for these performance changes and to explore whether they reflect changes in salience from adaptation vs. changes in search strategies or perceptual learning. The target was an ellipse of variable color presented at a random location on a dense background of ellipses. In one condition, the colors of the background varied along either the LvsM or SvsLM cardinal axes. Observers adapted by viewing a rapid succession of backgrounds drawn from one color axis, and then searched for a target on a background from the same or different color axis. Searches were monitored with a Cambridge Research Systems Video Eyetracker. Targets were located more quickly on the background axis that observers were pre-exposed to, confirming that this exposure can improve search efficiency for stimuli that differ from the background. However, eye movement patterns (e.g. fixation durations and saccade magnitudes) did not clearly differ across the two backgrounds, suggesting that how the novel and familiar backgrounds were sampled remained similar. In a second condition, we compared search on a nonselective color background drawn from a circle of hues at fixed contrast. Prior exposure to this background did not facilitate search compared to an achromatic adapting field, suggesting that subjects were not simply learning the specific colors defining the background distributions. Instead, results for both conditions are consistent with a selective adaptation effect that enhances the salience of novel stimuli by partially discounting the background.

  12. Global Statistical Learning in a Visual Search Task

    ERIC Educational Resources Information Center

    Jones, John L.; Kaschak, Michael P.

    2012-01-01

    Locating a target in a visual search task is facilitated when the target location is repeated on successive trials. Global statistical properties also influence visual search, but have often been confounded with local regularities (i.e., target location repetition). In two experiments, target locations were not repeated for four successive trials,…

  13. Spatial Constraints on Learning in Visual Search: Modeling Contextual Cuing

    ERIC Educational Resources Information Center

    Brady, Timothy F.; Chun, Marvin M.

    2007-01-01

    Predictive visual context facilitates visual search, a benefit termed contextual cuing (M. M. Chun & Y. Jiang, 1998). In the original task, search arrays were repeated across blocks such that the spatial configuration (context) of all of the distractors in a display predicted an embedded target location. The authors modeled existing results using…

  15. Eye Movements Reveal How Task Difficulty Moulds Visual Search

    ERIC Educational Resources Information Center

    Young, Angela H.; Hulleman, Johan

    2013-01-01

    In two experiments we investigated the relationship between eye movements and performance in visual search tasks of varying difficulty. Experiment 1 provided evidence that a single process is used for search among static and moving items. Moreover, we estimated the functional visual field (FVF) from the gaze coordinates and found that its size…

  17. The Time Course of Similarity Effects in Visual Search

    ERIC Educational Resources Information Center

    Guest, Duncan; Lamberts, Koen

    2011-01-01

    It is well established that visual search becomes harder when the similarity between target and distractors is increased and the similarity between distractors is decreased. However, in models of visual search, similarity is typically treated as a static, time-invariant property of the relation between objects. Data from other perceptual tasks…

  18. Spatiotemporal Segregation in Visual Search: Evidence from Parietal Lesions

    ERIC Educational Resources Information Center

    Olivers, Christian N. L.; Humphreys, Glyn W.

    2004-01-01

    The mechanisms underlying segmentation and selection of visual stimuli over time were investigated in patients with posterior parietal damage. In a modified visual search task, a preview of old objects preceded search of a new set for a target while the old items remained. In Experiment 1, control participants ignored old and prioritized new…

  20. Words, Shape, Visual Search and Visual Working Memory in 3-Year-Old Children

    ERIC Educational Resources Information Center

    Vales, Catarina; Smith, Linda B.

    2015-01-01

    Do words cue children's visual attention, and if so, what are the relevant mechanisms? Across four experiments, 3-year-old children (N = 163) were tested in visual search tasks in which targets were cued with only a visual preview versus a visual preview and a spoken name. The experiments were designed to determine whether labels facilitated…

  2. Vocal Dynamic Visual Pattern for voice characterization

    NASA Astrophysics Data System (ADS)

    Dajer, M. E.; Andrade, F. A. S.; Montagnoli, A. N.; Pereira, J. C.; Tsuji, D. H.

    2011-12-01

    Voice assessment requires simple and painless exams. Modern technologies provide the necessary resources for voice signal processing. Techniques based on nonlinear dynamics seem to assess the complexity of voice more accurately than other methods. The vocal dynamic visual pattern (VDVP) is based on nonlinear methods and provides qualitative and quantitative information. Here we characterize healthy voices and voices with Reinke's edema by means of perturbation measures and VDVP analysis. VDVP and jitter differ between the two groups, whereas amplitude perturbation shows no difference. We suggest that VDVP analysis improves and complements the evaluation methods available to clinicians.

  3. System reconfiguration, not resource depletion, determines the efficiency of visual search.

    PubMed

    Di Lollo, Vincent; Smilek, Daniel; Kawahara, Jun-Ichiro; Ghorashi, S M Shahab

    2005-08-01

    We examined two theories of visual search: resource depletion, grounded in a static, built-in brain architecture, with attention seen as a limited depletable resource, and system reconfiguration, in which the visual system is dynamically reconfigured from moment to moment so as to optimize performance on the task at hand. In a dual-task paradigm, a search display was preceded by a visual discrimination task and was followed, after a stimulus onset asynchrony (SOA) governed by a staircase procedure, by a pattern mask. Search efficiency, as indexed by the slope of the function relating critical SOA to number of distractors, was impaired under dual-task conditions for tasks that were performed efficiently (shallow search slope) when done singly, but not for tasks performed inefficiently (steep slope) when done singly. These results are consistent with system reconfiguration, but not with resource depletion, models and point to a dynamic, rather than a static, architecture of the visual system. PMID:16396015

  4. Online Multiple Kernel Similarity Learning for Visual Search.

    PubMed

    Xia, Hao; Hoi, Steven C H; Jin, Rong; Zhao, Peilin

    2013-08-13

    Recent years have witnessed a number of studies on distance metric learning to improve visual similarity search in Content-Based Image Retrieval (CBIR). Despite their popularity and success, most existing methods on distance metric learning are limited in two aspects. First, they typically assume the target proximity function follows the family of Mahalanobis distances, which limits their capacity of measuring similarity of complex patterns in real applications. Second, they often cannot effectively handle the similarity measure of multi-modal data that may originate from multiple resources. To overcome these limitations, this paper investigates an online kernel ranking framework for learning kernel-based proximity functions, which goes beyond the conventional linear distance metric learning approaches. Based on the framework, we propose a novel Online Multiple Kernel Ranking (OMKR) method, which learns a flexible nonlinear proximity function with multiple kernels to improve visual similarity search in CBIR. We evaluate the proposed technique for CBIR on a variety of image data sets, in which encouraging results show that OMKR outperforms the state-of-the-art techniques significantly. PMID:23959603

  5. Online multiple kernel similarity learning for visual search.

    PubMed

    Xia, Hao; Hoi, Steven C H; Jin, Rong; Zhao, Peilin

    2014-03-01

    Recent years have witnessed a number of studies on distance metric learning to improve visual similarity search in content-based image retrieval (CBIR). Despite their successes, most existing distance metric learning methods are limited in two aspects. First, they usually assume the target proximity function follows the family of Mahalanobis distances, which limits their capacity to measure the similarity of complex patterns in real applications. Second, they often cannot effectively handle the similarity measure of multimodal data that may originate from multiple sources. To overcome these limitations, this paper investigates an online kernel similarity learning framework for learning kernel-based proximity functions which goes beyond the conventional linear distance metric learning approaches. Based on the framework, we propose a novel online multiple kernel similarity (OMKS) learning method which learns a flexible nonlinear proximity function with multiple kernels to improve visual similarity search in CBIR. We evaluate the proposed technique for CBIR on a variety of image data sets in which encouraging results show that OMKS outperforms the state-of-the-art techniques significantly. PMID:24457509
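    The multiple-kernel idea can be caricatured in a few lines. This is not the authors' OMKS algorithm, only a hedged sketch of its core ingredient: keep one weight per kernel, and multiplicatively discount any kernel that mis-ranks a (query, similar, dissimilar) triplet, so informative kernels come to dominate the combined proximity function:

```python
import numpy as np

def rbf(x, y, gamma=0.5):
    return np.exp(-gamma * np.sum((x - y) ** 2))

# Each "kernel" looks at a different feature subset (e.g. colour vs. texture);
# the subsets and dimensions here are invented for illustration.
subsets = [slice(0, 4), slice(4, 8)]

def mk_similarity_step(weights, q, pos, neg, beta=0.7):
    """One online update: kernels that rank the dissimilar image at least
    as close as the similar one for this query are discounted by beta."""
    for i, s in enumerate(subsets):
        if rbf(q[s], pos[s]) <= rbf(q[s], neg[s]):  # kernel i mis-ranks the triplet
            weights[i] *= beta
    return weights / weights.sum()                   # renormalise

rng = np.random.default_rng(0)
weights = np.ones(len(subsets)) / len(subsets)
for _ in range(300):
    q = rng.normal(size=8)
    pos = q.copy()
    pos[4:] = rng.normal(size=4)                 # dims 4-7 carry no signal
    pos[:4] += rng.normal(scale=0.1, size=4)     # similar only in dims 0-3
    neg = rng.normal(size=8)
    weights = mk_similarity_step(weights, q, pos, neg)
# The kernel on the informative feature subset ends up dominating.
```

The learned weighted sum of kernels then serves as the nonlinear proximity function used to rank retrieval results.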

  6. Visual search in a forced-choice paradigm

    NASA Technical Reports Server (NTRS)

    Holmgren, J. E.

    1974-01-01

    The processing of visual information was investigated in the context of two visual search tasks. The first was a forced-choice task in which one of two alternative letters appeared in a visual display of from one to five letters. The second task included trials on which neither of the two alternatives was present in the display. Search rates were estimated from the slopes of best linear fits to response latencies plotted as a function of the number of items in the visual display. These rates were found to be much slower than those estimated in yes-no search tasks. This result was interpreted as indicating that the processes underlying visual search in yes-no and forced-choice tasks are not the same.
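    The slope-based estimate of search rate described above is an ordinary least-squares fit of mean latency against display size; with illustrative numbers (not Holmgren's data) it looks like this:

```python
import numpy as np

display_sizes = np.array([1, 2, 3, 4, 5])
# Hypothetical mean response latencies (ms) for a forced-choice search task.
mean_rt = np.array([520, 585, 660, 710, 790])

# Slope = search rate (ms per item); intercept = base time independent of set size.
slope, intercept = np.polyfit(display_sizes, mean_rt, 1)
print(f"search rate: {slope:.1f} ms/item, base time: {intercept:.1f} ms")
```

Comparing such slopes across yes-no and forced-choice tasks is what licenses the paper's conclusion that the two search processes differ.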

  7. Memory is Necessary in Visual Search with Limited Guidance.

    PubMed

    Peltier, Chad; Becker, Mark

    2015-09-01

    There has been an ongoing debate in the visual search literature on whether or not visual search has memory. One manipulation used to test whether memory is used in visual search is to randomize the locations of stimuli in an image every 111 ms, which prevents observers from tracking the locations of previously inspected items. Horowitz and Wolfe (1998) used this method and found no significant differences in search slopes between static and random conditions, leading to the conclusion that visual search has no memory. Here we revisit this claim. We reason that memory in search should only be necessary for a search where there is little guidance. Thus search may appear memoryless when the search task allows for adequate guidance, but search may rely on memory in more difficult search tasks when guidance is ineffective. In Experiment 1 we replicated Horowitz and Wolfe's findings when observers searched for a T among Ls. But when we made the task a search for the same T among offset Ls, observers in the random presentation condition were very close to chance performance, making it difficult to interpret search slopes. However, error rates suggest that presentation type (static vs. random) interacted with stimulus type (easy vs. hard), suggesting a role for memory in the harder search. In Experiment 2 we sought to increase overall accuracy to avoid chance performance. We decreased the set sizes from 8, 12, and 16 to 4, 6, and 8, while increasing the stimulus presentation duration from 111 to 160 ms. Again we found that poorer accuracy in the difficult stimulus condition was moderated by presentation type. The data suggest that memory is not necessary in searches where guidance to the target is efficient, but memory is necessary for high performance in searches with limited guidance. Meeting abstract presented at VSS 2015. PMID:26327045

  8. Operator-centric design patterns for information visualization software

    NASA Astrophysics Data System (ADS)

    Xie, Zaixian; Guo, Zhenyu; Ward, Matthew O.; Rundensteiner, Elke A.

    2010-01-01

    Design patterns have proven to be a useful means to make the process of designing, developing, and reusing software systems more efficient. In the area of information visualization, researchers have proposed design patterns for different functional components of the visualization pipeline. Since many visualization techniques need to display derived data as well as raw data, the data transformation stage is very important in the pipeline, yet existing design patterns are, in general, not sufficient to implement these data transformation techniques. In this paper, we propose two design patterns, operator-centric transformation and data modifier, to facilitate the design of data transformations for information visualization systems. The key idea is to use operators to describe the data derivation and introduce data modifiers to represent the derived data. We also show that many interaction techniques can be regarded as operators as defined here, thus these two design patterns could support a wide range of visualization techniques. In addition, we describe a third design pattern, modifier-based visual mapping, that can generate visual abstraction via linking data modifiers to visual attributes. We also present a framework based on these three design patterns that supports coordinated multiple views. Several examples of multivariate visualizations are discussed to show that our design patterns and framework can improve the reusability and extensibility of information visualization systems. Finally, we explain how we have ported an existing visualization tool (XmdvTool) from its old data-centric structure to a new structure based on the above design patterns and framework.
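    A rough sketch of the two transformation patterns, with class and method names invented for illustration (they are not taken from XmdvTool): an operator consumes a dataset and attaches its derived values as a data modifier, so derived data stays distinct from the raw data it was computed from:

```python
class DataModifier:
    """Holds derived values alongside the raw data (the 'data modifier' pattern)."""
    def __init__(self, name, values):
        self.name, self.values = name, values

class Operator:
    """Base class for the 'operator-centric transformation' pattern: each
    derivation step consumes a dataset and attaches a modifier to it."""
    def apply(self, dataset):
        raise NotImplementedError

class ZScoreOperator(Operator):
    """Example derivation: standardize one raw column."""
    def __init__(self, column):
        self.column = column
    def apply(self, dataset):
        vals = dataset["raw"][self.column]
        mean = sum(vals) / len(vals)
        sd = (sum((v - mean) ** 2 for v in vals) / len(vals)) ** 0.5
        dataset["modifiers"].append(
            DataModifier(f"zscore({self.column})",
                         [(v - mean) / sd for v in vals]))
        return dataset

dataset = {"raw": {"height": [150.0, 160.0, 170.0, 180.0]}, "modifiers": []}
dataset = ZScoreOperator("height").apply(dataset)
```

Under this scheme an interaction (say, a brushing filter) is just another Operator, and a visual mapping can bind any DataModifier to a visual attribute, which is the extensibility the paper claims.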

  9. Visual search using realistic camouflage: countershading is highly effective at deterring search.

    PubMed

    Penacchio, Olivier; Lovell, George; Sanghera, Simon; Cuthill, Innes; Ruxton, Graeme; Harris, Julie

    2015-09-01

    One of the most widespread patterns of colouration in the animal kingdom is countershading, a gradation of colour in which body parts that face a higher light intensity are darker. Countershading may help counterbalance the shadowing created by directional light, and, hence, reduce 3D object recognition via shape-from-shading. There is evidence that other animals, as well as humans, derive information on shape from shading. Here, we assessed experimentally the effect of optimising countershading camouflage on detection speed and accuracy, to explore whether countershading needs to be fine-tuned to achieve crypsis. We used a computational 3D world that included ecologically realistic lighting patterns. We defined 3D scenes with elliptical 'distractor' leaves and an ellipsoid target object. The scenes were rendered with different types of illumination and the target objects were endowed with different levels of camouflage: none at all, a countershading pattern optimized for the light distribution of the scene and target orientation in space, or optimized for a different illuminant. Participants (N=12) were asked to detect the target 3D object in the scene as fast as possible. The results showed a very significant effect of countershading camouflage on detection rate and accuracy. The extent to which the countershading pattern departed from the optimal pattern for the actual lighting condition and orientation of the target object had a strong effect on detection performance. This experiment showed that appropriate countershading camouflage strongly interferes with visual search by decreasing detection rate and accuracy. A field predation experiment using birds, based on similar stimuli, showed similar effects. Taken together, this suggests that countershading obstructs efficient visual search across species and reduces visibility, thus enhancing survival in prey animals that adopt it. Meeting abstract presented at VSS 2015. PMID:26326656

  10. A neural network for visual pattern recognition

    SciTech Connect

    Fukushima, K.

    1988-03-01

    A modeling approach, which is a synthetic approach using neural network models, continues to gain importance. In the modeling approach, the authors study how to interconnect neurons to synthesize a brain model, which is a network with the same functions and abilities as the brain. The relationship between modeling neural networks and neurophysiology resembles that between theoretical physics and experimental physics. Modeling takes a synthetic approach, while neurophysiology or psychology takes an analytical approach. Modeling neural networks is useful in explaining the brain and also in engineering applications. It brings the results of neurophysiological and psychological research to engineering applications in the most direct way possible. This article discusses a neural network model thus obtained, a model with selective attention in visual pattern recognition.

  11. Asynchronous parallel pattern search for nonlinear optimization

    SciTech Connect

    P. D. Hough; T. G. Kolda; V. J. Torczon

    2000-01-01

    Parallel pattern search (PPS) can be quite useful for engineering optimization problems characterized by a small number of variables (say 10--50) and by expensive objective function evaluations, such as complex simulations that take from minutes to hours to run. However, PPS, which was originally designed for execution on homogeneous and tightly-coupled parallel machines, is not well suited to the more heterogeneous, loosely-coupled, and even fault-prone parallel systems available today. Specifically, PPS is hindered by synchronization penalties and cannot recover in the event of a failure. The authors introduce a new asynchronous and fault-tolerant parallel pattern search (APPS) method and demonstrate its effectiveness on both simple test problems and some engineering optimization problems.
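    The synchronous algorithm that the asynchronous method builds on, compass-style pattern search, can be sketched as follows (a generic textbook version, not the authors' implementation); the asynchronous variant differs by farming the poll points out to workers without waiting for all of them and by tolerating worker failures:

```python
def pattern_search(f, x0, step=1.0, tol=1e-6, max_iter=10_000):
    """Compass search: poll f at +/- step along each coordinate, move to any
    improving point, and halve the step once no poll point improves."""
    x, fx = list(x0), f(x0)
    for _ in range(max_iter):
        if step < tol:
            break
        improved = False
        for i in range(len(x)):
            for d in (+step, -step):
                trial = list(x)
                trial[i] += d
                ft = f(trial)          # the expensive simulation call
                if ft < fx:
                    x, fx, improved = trial, ft, True
        if not improved:
            step /= 2                  # refine the mesh
    return x, fx

# Demo on a cheap quadratic with minimum at (3, -1).
xmin, fmin = pattern_search(lambda v: (v[0] - 3) ** 2 + (v[1] + 1) ** 2, [0.0, 0.0])
```

Because each sweep waits for all 2n poll evaluations, one slow or dead worker stalls the whole iteration, which is exactly the synchronization penalty the abstract describes.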

  12. The Serial Process in Visual Search

    ERIC Educational Resources Information Center

    Gilden, David L.; Thornton, Thomas L.; Marusich, Laura R.

    2010-01-01

    The conditions for serial search are described. A multiple target search methodology (Thornton & Gilden, 2007) is used to home in on the simplest target/distractor contrast that effectively mandates a serial scheduling of attentional resources. It is found that serial search is required when (a) targets and distractors are mirror twins, and (b)…

  14. Visual search in scenes involves selective and nonselective pathways.

    PubMed

    Wolfe, Jeremy M; Võ, Melissa L-H; Evans, Karla K; Greene, Michelle R

    2011-02-01

    How does one find objects in scenes? For decades, visual search models have been built on experiments in which observers search for targets, presented among distractor items, isolated and randomly arranged on blank backgrounds. Are these models relevant to search in continuous scenes? This article argues that the mechanisms that govern artificial, laboratory search tasks do play a role in visual search in scenes. However, scene-based information is used to guide search in ways that had no place in earlier models. Search in scenes might be best explained by a dual-path model: a 'selective' path in which candidate objects must be individually selected for recognition and a 'nonselective' path in which information can be extracted from global and/or statistical information. PMID:21227734

  15. Visual pattern discovery in timed event data

    NASA Astrophysics Data System (ADS)

    Schaefer, Matthias; Wanner, Franz; Mansmann, Florian; Scheible, Christian; Stennett, Verity; Hasselrot, Anders T.; Keim, Daniel A.

    2011-01-01

    Business processes have tremendously changed the way large companies conduct their business: the integration of information systems into the workflows of their employees ensures a high service level and thus high customer satisfaction. One core aspect of business process engineering is events that steer workflows and trigger internal processes. Strict requirements on interval-scaled temporal patterns, which are common in time series, are relaxed by the ordinal character of such events. It is this additional degree of freedom that opens unexplored possibilities for visualizing event data. In this paper, we present a flexible and novel system to find significant events, event clusters and event patterns. Each event is represented as a small rectangle, which is colored according to categorical, ordinal or interval-scaled metadata. Depending on the analysis task, different layout functions are used to highlight either the ordinal character of the data or temporal correlations. The system has built-in features for ordering customers or event groups according to the similarity of their event sequences, temporal gap alignment and stacking of co-occurring events. Two characteristically different case studies dealing with business process events and news articles demonstrate the capabilities of our system to explore event data.

  16. Do People Take Stimulus Correlations into Account in Visual Search?

    PubMed Central

    Bhardwaj, Manisha; van den Berg, Ronald

    2016-01-01

    In laboratory visual search experiments, distractors are often statistically independent of each other. However, stimuli in more naturalistic settings are often correlated and rarely independent. Here, we examine whether human observers take stimulus correlations into account in orientation target detection. We find that they do, although probably not optimally. In particular, it seems that low distractor correlations are overestimated. Our results might contribute to bridging the gap between artificial and natural visual search tasks. PMID:26963498

  17. Coarse-to-fine eye movement strategy in visual search.

    PubMed

    Over, E A B; Hooge, I T C; Vlaskamp, B N S; Erkelens, C J

    2007-08-01

    Oculomotor behavior contributes importantly to visual search. Saccadic eye movements can direct the fovea to potentially interesting parts of the visual field. Ensuing stable fixations enable the visual system to analyze those parts. The visual system may use fixation duration and saccadic amplitude as optimizers for visual search performance. Here we investigate whether the time courses of fixation duration and saccade amplitude depend on the subject's knowledge of the search stimulus, in particular target conspicuity. We analyzed 65,000 saccades and fixations in a search experiment for (possibly camouflaged) military vehicles of unknown type and size. Mean saccade amplitude decreased and mean fixation duration increased gradually as a function of ordinal saccade and fixation number. In addition, we analyzed 162,000 saccades and fixations recorded during a search experiment in which the location of the target was the only unknown. Whether target conspicuity was constant or varied appeared to have minor influence on the time courses of fixation duration and saccade amplitude. We hypothesize an intrinsic coarse-to-fine strategy for visual search that is used even when such a strategy is not optimal. PMID:17617434
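    The time-course analysis used here, mean saccade amplitude and mean fixation duration as a function of ordinal position, amounts to averaging over trials at each ordinal index. A sketch on synthetic scanpaths (constructed to mimic a coarse-to-fine pattern; these are not the authors' data):

```python
import numpy as np

rng = np.random.default_rng(42)
n_trials, n_sacc = 100, 8
ordinal = np.arange(1, n_sacc + 1)

# Synthetic scanpaths: amplitude (deg) shrinks and fixation duration (ms)
# grows with ordinal position, as a coarse-to-fine strategy would produce.
amplitudes = 6.0 * ordinal ** -0.5 + rng.normal(0, 0.3, (n_trials, n_sacc))
durations = 180 + 15 * ordinal + rng.normal(0, 20, (n_trials, n_sacc))

mean_amp = amplitudes.mean(axis=0)   # mean amplitude per ordinal saccade number
mean_dur = durations.mean(axis=0)    # mean duration per ordinal fixation number
```

Plotting `mean_amp` and `mean_dur` against `ordinal` reproduces the qualitative signature reported in the abstract: amplitude falls and duration rises over the course of a trial.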

  18. Conjunctive Visual Search in Individuals with and without Mental Retardation

    ERIC Educational Resources Information Center

    Carlin, Michael; Chrysler, Christina; Sullivan, Kate

    2007-01-01

    A comprehensive understanding of the basic visual and cognitive abilities of individuals with mental retardation is critical for understanding the basis of mental retardation and for the design of remediation programs. We assessed visual search abilities in individuals with mild mental retardation and in MA- and CA-matched comparison groups. Our…

  19. Visual Search by Children with and without ADHD

    ERIC Educational Resources Information Center

    Mullane, Jennifer C.; Klein, Raymond M.

    2008-01-01

    Objective: To summarize the literature that has employed visual search tasks to assess automatic and effortful selective visual attention in children with and without ADHD. Method: Seven studies with a combined sample of 180 children with ADHD (M age = 10.9) and 193 normally developing children (M age = 10.8) are located. Results: Using a…

  1. Changing Perspective: Zooming in and out during Visual Search

    ERIC Educational Resources Information Center

    Solman, Grayden J. F.; Cheyne, J. Allan; Smilek, Daniel

    2013-01-01

    Laboratory studies of visual search are generally conducted in contexts with a static observer vantage point, constrained by a fixation cross or a headrest. In contrast, in many naturalistic search settings, observers freely adjust their vantage point by physically moving through space. In two experiments, we evaluate behavior during free vantage…

  2. Pip and Pop: Nonspatial Auditory Signals Improve Spatial Visual Search

    ERIC Educational Resources Information Center

    Van der Burg, Erik; Olivers, Christian N. L.; Bronkhorst, Adelbert W.; Theeuwes, Jan

    2008-01-01

    Searching for an object within a cluttered, continuously changing environment can be a very time-consuming process. The authors show that a simple auditory pip drastically decreases search times for a synchronized visual object that is normally very difficult to find. This effect occurs even though the pip contains no information on the location…

  4. Coloured Overlays, Visual Discomfort, Visual Search and Classroom Reading.

    ERIC Educational Resources Information Center

    Tyrrell, Ruth; And Others

    1995-01-01

    States that 46 children aged 12-16 were shown a page of meaningless text covered with plastic overlays, including 7 that were various colors and 1 that was clear. Explains that each child selected the overlay that made reading easiest. Notes that children who read with a colored overlay complained of visual discomfort when they read without the…

  5. Emotional expressions and visual search efficiency: specificity and effects of anxiety symptoms.

    PubMed

    Olatunji, Bunmi O; Ciesielski, Bethany G; Armstrong, Thomas; Zald, David H

    2011-10-01

    Although facial expressions are thought to vary in their functional impact on perceivers, experimental demonstrations of the differential effects of facial expressions on behavior are lacking. In the present study, we examined the effects of exposure to facial expressions on visual search efficiency. Participants (n = 31) searched for a target in a 12-location circular array after exposure to an angry, disgusted, fearful, happy, or neutral facial expression for 100 ms or 500 ms. Consistent with predictions, exposure to a fearful expression prior to visual search resulted in faster target identification compared to exposure to other facial expressions. The effects of other facial expressions on visual search did not differ from each other. The fear-facilitating effect on visual search efficiency was observed at 500-ms but not at 100-ms presentations, suggesting a specific temporal course of the facilitation. Subsequent analysis also revealed that individual differences in fear of negative evaluation, trait anxiety, and obsessive-compulsive symptoms possess a differential pattern of association with visual search efficiency. The experimental and clinical implications of these findings are discussed. PMID:21517160

  6. Visual search speed is influenced by differences in shape arbitrariness.

    PubMed

    Leshinskaya, Anna; Caramazza, Alfonso

    2015-09-01

    We hypothesized that the visual system is particularly tuned to those features which correlate with behaviorally relevant dimensions, and compared a few possibilities: how man-made an object is (naturalness), how often an object is eaten (edibility), how often an object is manipulated (manipulability), and the degree to which an object's shape is arbitrary (shape arbitrariness; Prasada, 2001). For example, the shape of a hammer is more constrained by the kind of thing that it is (i.e., is less arbitrary) than the shape of a rock. Does variability in perceptual similarity among sets of small, inanimate objects correlate with behavioral ratings on any of these dimensions? We chose four sets (categories) of stimuli: manipulable artifacts (e.g., pens), non-manipulable artifacts (e.g., lamps), natural objects (e.g., pinecones), and fruits/vegetables. These categories did not differ on ratings of typicality, familiarity, internal details, and visual complexity, or in area, aspect ratio, contour variance, extent, spatial frequency, contrast, or luminance. Participants searched for a target image among distractors from the same or different category; they pressed a space bar when they found the target, and reported its location by clicking X's that replaced the original images. For each category pairing, the difference in search speeds between same- and different-category trials was taken as an index of perceptual dissimilarity, and correlated with distances in each dimension for each subject (representational similarity analysis). Neither manipulability nor edibility explained the pattern of perceptual dissimilarity among categories (all r < .1, t < 1, p > .3). Although naturalness explained some variance (r = .16, t = 2.13, p = .048), shape arbitrariness explained more (r = .40, t = 4.05, p < .0001). Visual features correlated with shape arbitrariness include shape symmetry and regularity, according to ratings (r = .74, p < .001). These results suggest that, among inanimate objects, the visual system may be particularly sensitive to the perceptual features correlated with the arbitrariness of their shapes. Meeting abstract presented at VSS 2015. PMID:26326853

  7. Visual Search in a Multi-Element Asynchronous Dynamic (MAD) World

    ERIC Educational Resources Information Center

    Kunar, Melina A.; Watson, Derrick G.

    2011-01-01

    In visual search tasks participants search for a target among distractors in strictly controlled displays. We show that visual search principles observed in these tasks do not necessarily apply in more ecologically valid search conditions, using dynamic and complex displays. A multi-element asynchronous dynamic (MAD) visual search was developed in…

  8. The effect of face inversion on the detection of emotional faces in visual search.

    PubMed

    Savage, Ruth A; Lipp, Ottmar V

    2015-01-01

    Past literature has indicated that face inversion either attenuates emotion detection advantages in visual search, implying that detection of emotional expressions requires holistic face processing, or has no effect, implying that expression detection is feature based. Across six experiments that utilised different task designs, ranging from simple (single poser, single set size) to complex (multiple posers, multiple set sizes), and stimuli drawn from different databases, significant emotion detection advantages were found for both upright and inverted faces. Consistent with past research, the nature of the expression detection advantage, anger superiority (Experiments 1, 2 and 6) or happiness superiority (Experiments 3, 4 and 5), differed across stimulus sets. However both patterns were evident for upright and inverted faces. These results indicate that face inversion does not interfere with visual search for emotional expressions, and suggest that expression detection in visual search may rely on feature-based mechanisms. PMID:25229360

  9. The impact of expert visual guidance on trainee visual search strategy, visual attention and motor skills

    PubMed Central

    Leff, Daniel R.; James, David R. C.; Orihuela-Espina, Felipe; Kwok, Ka-Wai; Sun, Loi Wah; Mylonas, George; Athanasiou, Thanos; Darzi, Ara W.; Yang, Guang-Zhong

    2015-01-01

    Minimally invasive and robotic surgery changes the capacity for surgical mentors to guide their trainees with the control customary to open surgery. This neuroergonomic study aims to assess a “Collaborative Gaze Channel” (CGC), which detects trainer gaze behavior and displays the point of regard to the trainee. A randomized crossover study was conducted in which twenty subjects performed a simulated robotic surgical task necessitating collaboration with either verbal (control condition) or visual guidance with the CGC (study condition). Trainee occipito-parietal (O-P) cortical function was assessed with optical topography (OT) and gaze behavior was evaluated using video-oculography. Performance during gaze assistance was significantly superior [biopsy number (mean ± SD): control = 5.6 ± 1.8 vs. CGC = 6.6 ± 2.0; p < 0.05] and was associated with significantly lower O-P cortical activity [ΔHbO2 mMol × cm, median (IQR): control = 2.5 (12.0) vs. CGC = 0.63 (11.2), p < 0.001]. A random effect model (REM) confirmed the association between guidance mode and O-P excitation. Network cost and global efficiency were not significantly influenced by guidance mode. A gaze channel enhances performance, modulates visual search, and alleviates the burden on brain centers subserving visual attention, without inducing changes in the trainee’s O-P functional network observable with the current OT technique. The results imply that through visual guidance, attentional resources may be liberated, potentially improving the capability of trainees to attend to other safety-critical events during the procedure. PMID:26528160

  10. Visual search for facial expressions of emotions: a comparison of dynamic and static faces.

    PubMed

    Horstmann, Gernot; Ansorge, Ulrich

    2009-02-01

    A number of past studies have used the visual search paradigm to examine whether certain aspects of emotional faces are processed preattentively and can thus be used to guide attention. All these studies presented static depictions of facial prototypes. Emotional expressions conveyed by the movement patterns of the face have never been examined for their preattentive effect. The present study presented for the first time dynamic facial expressions in a visual search paradigm. Experiment 1 revealed efficient search for a dynamic angry face among dynamic friendly faces, but inefficient search in a control condition with static faces. Experiments 2 to 4 suggested that this pattern of results is due to a stronger movement signal in the angry than in the friendly face: No (strong) advantage of dynamic over static faces is revealed when the degree of movement is controlled. These results show that dynamic information can be efficiently utilized in visual search for facial expressions. However, these results do not generally support the hypothesis that emotion-specific movement patterns are always preattentively discriminated. PMID:19186914

  11. Conjunctive visual search in individuals with and without mental retardation.

    PubMed

    Carlin, Michael; Chrysler, Christina; Sullivan, Kate

    2007-01-01

    A comprehensive understanding of the basic visual and cognitive abilities of individuals with mental retardation is critical for understanding the basis of mental retardation and for the design of remediation programs. We assessed visual search abilities in individuals with mild mental retardation and in MA- and CA-matched comparison groups. Our goal was to determine the effect of decreasing target-distracter disparities on visual search efficiency. Results showed that search rates for the group with mental retardation and the MA-matched comparisons were more negatively affected by decreasing disparities than were those of the CA-matched group. The group with mental retardation and the MA-matched group performed similarly on all tasks. Implications for theory and application are discussed. PMID:17181391

  12. Visual search and attention to faces in early infancy

    PubMed Central

    Frank, Michael C.; Amso, Dima; Johnson, Scott P.

    2013-01-01

    Newborn babies look preferentially at faces and face-like displays; yet over the course of their first year, much changes about both the way infants process visual stimuli and how they allocate their attention to the social world. Despite this initial preference for faces in restricted contexts, the amount that infants look at faces increases considerably in the first year. Is this development related to changes in attentional orienting abilities? We explored this possibility by showing 3-, 6-, and 9-month-olds engaging animated and live-action videos of social stimuli and additionally measuring their visual search performance with both moving and static search displays. Replicating previous findings, looking at faces increased with age; in addition, the amount of looking at faces was strongly related to the youngest infants’ performance in visual search. These results suggest that infants’ attentional abilities may be an important factor facilitating their social attention early in development. PMID:24211654

  13. Synaesthetic colours do not camouflage form in visual search

    PubMed Central

    Gheri, C; Chopping, S; Morgan, M.J

    2008-01-01

    One of the major issues in synaesthesia research is to identify the level of processing involved in the formation of the subjective colours experienced by synaesthetes: are they perceptual phenomena or are they due to memory and association learning? To address this question, we tested whether the colours reported by a group of grapheme-colour synaesthetes (previously studied in a functional magnetic resonance imaging experiment) influenced them in a visual search task. As well as using a condition where synaesthetic colours should have aided visual search, we introduced a condition where the colours experienced by synaesthetes would be expected to make them worse than controls. We found no evidence for differences between synaesthetes and normal controls, either when colours should have helped them or when they should have hindered. We conclude that the colours reported by our population of synaesthetes are not equivalent to perceptual signals, but arise at a cognitive level where they are unable to affect visual search. PMID:18182374

  14. Learned face-voice pairings facilitate visual search.

    PubMed

    Zweig, L Jacob; Suzuki, Satoru; Grabowecky, Marcia

    2015-04-01

    Voices provide a rich source of information that is important for identifying individuals and for social interaction. During search for a face in a crowd, voices often accompany visual information, and they facilitate localization of the sought-after individual. However, it is unclear whether this facilitation occurs primarily because the voice cues the location of the face or because it also increases the salience of the associated face. Here we demonstrate that a voice that provides no location information nonetheless facilitates visual search for an associated face. We trained novel face-voice associations and verified learning using a two-alternative forced choice task in which participants had to correctly match a presented voice to the associated face. Following training, participants searched for a previously learned target face among other faces while hearing one of the following sounds (localized at the center of the display): a congruent learned voice, an incongruent but familiar voice, an unlearned and unfamiliar voice, or a time-reversed voice. Only the congruent learned voice speeded visual search for the associated face. This result suggests that voices facilitate the visual detection of associated faces, potentially by increasing their visual salience, and that the underlying crossmodal associations can be established through brief training. PMID:25023955

  15. How priming in visual search affects response time distributions: analyses with ex-Gaussian fits.

    PubMed

    Kristjánsson, Arni; Jóhannesson, Omar I

    2014-11-01

    Although response times (RTs) are the dependent measure of choice in the majority of studies of visual attention, changes in RTs can be hard to interpret. First, they are inherently ambiguous, since they may reflect a change in the central tendency or skew (or both) of a distribution. Second, RT measures may lack sensitivity, since meaningful changes in RT patterns may not be picked up if they reflect two or more processes having opposing influences on mean RTs. Here we describe RT distributions for repetition priming in visual search, fitting ex-Gaussian functions to RT distributions. We focus here on feature and conjunction search tasks, since priming effects in these tasks are often thought to reflect similar mechanisms. As expected, both tasks resulted in strong priming effects when target and distractor identities repeated, but a large difference between feature and conjunction search was also seen, in that the σ parameter (reflecting the standard deviation of the Gaussian component) was far more affected by search repetition in conjunction than in feature search. Although caution should clearly be used when particular parameter estimates are matched to specific functions or processes, our results suggest that analyses of RT distributions can inform theoretical accounts of priming in visual search tasks, in this case showing quite different repetition effects for the two differing search types, suggesting that priming in the two paradigms partly reflects different mechanisms. PMID:25073610
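    The ex-Gaussian model used above treats each RT as a Normal(μ, σ) draw plus an independent Exponential(τ) draw, so the three parameters can be recovered from the first three moments. A minimal method-of-moments sketch on simulated data (illustrative only; not the study's fitting code):

```python
# Method-of-moments sketch for ex-Gaussian RTs (illustrative, not the study's code).
# An ex-Gaussian RT = Normal(mu, sigma) + Exponential(tau), so:
#   mean = mu + tau,  variance = sigma^2 + tau^2,  third central moment = 2*tau^3.
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, tau = 0.45, 0.05, 0.15                      # seconds, illustrative values
rts = rng.normal(mu, sigma, 20000) + rng.exponential(tau, 20000)

m1 = rts.mean()
m2 = rts.var()
m3 = np.mean((rts - m1) ** 3)

tau_hat = (m3 / 2) ** (1 / 3)                          # invert 2*tau^3 = m3
sigma_hat = np.sqrt(max(m2 - tau_hat ** 2, 0.0))       # guard against sampling noise
mu_hat = m1 - tau_hat
print(f"mu={mu_hat:.3f} sigma={sigma_hat:.3f} tau={tau_hat:.3f}")
```

    In practice ex-Gaussian parameters are usually estimated by maximum likelihood (e.g., `scipy.stats.exponnorm.fit`); the moment equations above are simply the shortest self-contained illustration of how μ, σ, and τ partition an RT distribution.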

  16. Visual exploratory search of relationship graphs on smartphones.

    PubMed

    Ouyang, Jianquan; Zheng, Hao; Kong, Fanbin; Liu, Tianming

    2013-01-01

    This paper presents a novel framework for Visual Exploratory Search of Relationship Graphs on Smartphones (VESRGS) that is composed of three major components: inference and representation of semantic relationship graphs on the Web via meta-search, visual exploratory search of relationship graphs through both querying and browsing strategies, and human-computer interactions via the multi-touch interface and mobile Internet on smartphones. In comparison with traditional lookup search methodologies, the proposed VESRGS system is characterized with the following perceived advantages. 1) It infers rich semantic relationships between the querying keywords and other related concepts from large-scale meta-search results from Google, Yahoo! and Bing search engines, and represents semantic relationships via graphs; 2) the exploratory search approach empowers users to naturally and effectively explore, adventure and discover knowledge in a rich information world of interlinked relationship graphs in a personalized fashion; 3) it effectively takes advantage of smartphones' user-friendly interfaces, ubiquitous Internet connection, and portability. Our extensive experimental results have demonstrated that the VESRGS framework can significantly improve the users' capability of seeking the most relevant relationship information to their own specific needs. We envision that the VESRGS framework can be a starting point for future exploration of novel, effective search strategies in the mobile Internet era. PMID:24223936

  17. Visual Exploratory Search of Relationship Graphs on Smartphones

    PubMed Central

    Ouyang, Jianquan; Zheng, Hao; Kong, Fanbin; Liu, Tianming

    2013-01-01

    This paper presents a novel framework for Visual Exploratory Search of Relationship Graphs on Smartphones (VESRGS) that is composed of three major components: inference and representation of semantic relationship graphs on the Web via meta-search, visual exploratory search of relationship graphs through both querying and browsing strategies, and human-computer interactions via the multi-touch interface and mobile Internet on smartphones. In comparison with traditional lookup search methodologies, the proposed VESRGS system is characterized with the following perceived advantages. 1) It infers rich semantic relationships between the querying keywords and other related concepts from large-scale meta-search results from Google, Yahoo! and Bing search engines, and represents semantic relationships via graphs; 2) the exploratory search approach empowers users to naturally and effectively explore, adventure and discover knowledge in a rich information world of interlinked relationship graphs in a personalized fashion; 3) it effectively takes advantage of smartphones’ user-friendly interfaces, ubiquitous Internet connection, and portability. Our extensive experimental results have demonstrated that the VESRGS framework can significantly improve the users’ capability of seeking the most relevant relationship information to their own specific needs. We envision that the VESRGS framework can be a starting point for future exploration of novel, effective search strategies in the mobile Internet era. PMID:24223936

  18. Visual search is influenced by 3D spatial layout.

    PubMed

    Finlayson, Nonie J; Grove, Philip M

    2015-10-01

    Many activities necessitate the deployment of attention to specific distances and directions in our three-dimensional (3D) environment. However, most research on how attention is deployed is conducted with two-dimensional (2D) computer displays, leaving a large gap in our understanding about the deployment of attention in 3D space. We report how each of four parameters of 3D visual space influences visual search: 3D display volume, distance in depth, number of depth planes, and relative target position in depth. Using a search task, we find that visual search performance depends on 3D volume, relative target position in depth, and number of depth planes. Our results demonstrate an asymmetrical preference for targets in the front of a display unique to 3D search and show that arranging items into more depth planes reduces search efficiency. Consistent with research using 2D displays, we found slower response times to find targets in displays with larger 3D volumes compared with smaller 3D volumes. Finally, in contrast to the importance of target depth relative to other distractors, target depth relative to the fixation point did not affect response times or search efficiency. PMID:25971812

  19. Audio-Visual Stimulation Improves Visual Search Abilities in Hemianopia due to Childhood Acquired Brain Lesions.

    PubMed

    Tinelli, Francesca; Purpura, Giulia; Cioni, Giovanni

    2015-01-01

    Results obtained in both animal models and hemianopic patients indicate that sound, spatially and temporally coincident with a visual stimulus, can improve visual perception in the blind hemifield, probably due to activation of 'multisensory neurons', mainly located in the superior colliculus. In view of this evidence, a new rehabilitation approach, based on audiovisual stimulation of the visual field, has been proposed and applied in adults with visual field reduction due to unilateral brain lesions. So far, results have been very encouraging, with improvements in visual search abilities. Based on these findings, we investigated whether long-lasting amelioration can also be induced in children with a visual deficit due to acquired brain lesions. Our results suggest that, in the absence of spontaneous recovery, audiovisual training can activate visual responsiveness of the oculomotor system in children and adolescents with acquired lesions as well, and they confirm the putatively important role of the superior colliculus (SC) in this process. PMID:26152056

  20. Functional Connectivity Between Superior Parietal Lobule and Primary Visual Cortex "at Rest" Predicts Visual Search Efficiency.

    PubMed

    Bueichekú, Elisenda; Ventura-Campos, Noelia; Palomar-García, María-Ángeles; Miró-Padilla, Anna; Parcet, María-Antonia; Ávila, César

    2015-10-01

    Spatiotemporal activity that emerges spontaneously "at rest" has been proposed to reflect individual a priori biases in cognitive processing. This research focused on testing neurocognitive models of visual attention by studying the functional connectivity (FC) of the superior parietal lobule (SPL), given its central role in establishing priority maps during visual search tasks. Twenty-three human participants completed a functional magnetic resonance imaging session that featured a resting-state scan, followed by a visual search task based on the alphanumeric category effect. As expected, the behavioral results showed longer reaction times and more errors for the within-category (i.e., searching a target letter among letters) than the between-category search (i.e., searching a target letter among numbers). The within-category condition was related to greater activation of the superior and inferior parietal lobules, occipital cortex, inferior frontal cortex, dorsal anterior cingulate cortex, and the superior colliculus than the between-category search. The resting-state FC analysis of the SPL revealed a broad network that included connections with the inferotemporal cortex, dorsolateral prefrontal cortex, and dorsal frontal areas like the supplementary motor area and frontal eye field. Notably, the regression analysis revealed that the more efficient participants in the visual search showed stronger FC between the SPL and areas of primary visual cortex (V1) related to the search task. We shed some light on how the SPL establishes a priority map of the environment during visual attention tasks and how FC is a valuable tool for assessing individual differences while performing cognitive tasks. PMID:26230367
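    As a rough illustration of the FC measure involved (synthetic data and names; not the study's preprocessing or regression pipeline), seed-based functional connectivity is essentially the Pearson correlation between the seed region's resting-state time series and another region's series:

```python
# Seed-based FC sketch on synthetic time series (illustrative only; real
# resting-state FC involves preprocessing steps not shown here).
import numpy as np

rng = np.random.default_rng(3)
n_timepoints = 200
spl = rng.normal(size=n_timepoints)                    # hypothetical SPL seed series
v1 = 0.6 * spl + 0.8 * rng.normal(size=n_timepoints)   # V1 series coupled to the seed

fc = np.corrcoef(spl, v1)[0, 1]                        # Pearson r = the FC estimate
print(f"SPL-V1 FC r = {fc:.2f}")
```

    The study then related such per-participant FC values to search efficiency via regression; with synthetic data that step would be uninformative, so it is omitted here.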

  1. Perceptual basis of redundancy gains in visual pop-out search.

    PubMed

    Töllner, Thomas; Zehetleitner, Michael; Krummenacher, Joseph; Müller, Hermann J

    2011-01-01

    The redundant-signals effect (RSE) refers to a speed-up of RT when the response is triggered by two, rather than just one, response-relevant target elements. Although there is agreement that in the visual modality RSEs observed with dimensionally redundant signals originating from the same location are generated by coactive processing architectures, there has been a debate as to the exact stage(s)--preattentive versus postselective--of processing at which coactivation arises. To determine the origin(s) of redundancy gains in visual pop-out search, the present study combined mental chronometry with electrophysiological markers that reflect purely preattentive perceptual (posterior-contralateral negativity [PCN]), preattentive and postselective perceptual plus response selection-related (stimulus-locked lateralized readiness potential [LRP]), or purely response production-related processes (response-locked LRP). As expected, there was an RSE on target detection RTs, with evidence for coactivation. At the electrophysiological level, this pattern was mirrored by an RSE in PCN latencies, whereas stimulus-locked LRP latencies showed no RSE over and above the PCN effect. Also, there was no RSE on the response-locked LRPs. This pattern demonstrates a major contribution of preattentive perceptual processing stages to the RSE in visual pop-out search, consistent with parallel-coactive coding of target signals in multiple visual dimensions [Müller, H. J., Heller, D., & Ziegler, J. Visual search for singleton feature targets within and across feature dimensions]. PMID:20044891

  2. Measuring Search Efficiency in Complex Visual Search Tasks: Global and Local Clutter

    ERIC Educational Resources Information Center

    Beck, Melissa R.; Lohrenz, Maura C.; Trafton, J. Gregory

    2010-01-01

    Set size and crowding affect search efficiency by limiting attention for recognition and attention against competition; however, these factors can be difficult to quantify in complex search tasks. The current experiments use a quantitative measure of the amount and variability of visual information (i.e., clutter) in highly complex stimuli (i.e.,…

  3. PathSOM: a novel visual-spatial search strategy

    NASA Astrophysics Data System (ADS)

    Chen, Dingguo; Sethi, Ishwar K.

    2005-02-01

    In this paper, we propose an efficient similarity search system, PathSOM, that combines Self-Organizing Map (SOM) and Pathfinder Networks (PFNET). In the front end of the system, SOM is applied to cluster the original data vectors and construct a visual map of the data. The Pathfinder network then organizes the SOM map units in the form of a graph to yield a framework for an improved search to find the best-matching map unit. The ability of the PathSOM approach to support efficient searches is demonstrated on well-known data sets.
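    For orientation, the SOM front end's core operation is the best-matching-unit (BMU) lookup that PathSOM's Pathfinder graph is meant to accelerate. A minimal exhaustive-search sketch (random stand-in codebook; not the paper's implementation):

```python
# Exhaustive BMU lookup over a SOM codebook (illustrative stand-in weights;
# PathSOM replaces this full scan with a graph-guided search).
import numpy as np

rng = np.random.default_rng(1)
grid_h, grid_w, dim = 10, 10, 16
codebook = rng.normal(size=(grid_h * grid_w, dim))   # stand-in for trained SOM weights

def best_matching_unit(codebook, x):
    """Index of the map unit whose weight vector is nearest to x (Euclidean)."""
    return int(np.argmin(np.linalg.norm(codebook - x, axis=1)))

query = codebook[37] + 0.01 * rng.normal(size=dim)   # a vector close to unit 37
bmu = best_matching_unit(codebook, query)
print(bmu)
```

    The exhaustive scan costs one distance computation per map unit; walking a Pathfinder graph from a starting unit toward decreasing distance visits far fewer units, which is the efficiency claim of the paper.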

  4. The role of memory for visual search in scenes.

    PubMed

    Le-Hoa Võ, Melissa; Wolfe, Jeremy M

    2015-03-01

    Many daily activities involve looking for something. The ease with which these searches are performed often allows one to forget that searching represents complex interactions between visual attention and memory. Although a clear understanding exists of how search efficiency will be influenced by visual features of targets and their surrounding distractors or by the number of items in the display, the role of memory in search is less well understood. Contextual cueing studies have shown that implicit memory for repeated item configurations can facilitate search in artificial displays. When searching more naturalistic environments, other forms of memory come into play. For instance, semantic memory provides useful information about which objects are typically found where within a scene, and episodic scene memory provides information about where a particular object was seen the last time a particular scene was viewed. In this paper, we will review work on these topics, with special emphasis on the role of memory in guiding search in organized, real-world scenes. PMID:25684693

  5. The role of memory for visual search in scenes

    PubMed Central

    Võ, Melissa Le-Hoa; Wolfe, Jeremy M.

    2014-01-01

    Many daily activities involve looking for something. The ease with which these searches are performed often allows one to forget that searching represents complex interactions between visual attention and memory. While a clear understanding exists of how search efficiency will be influenced by visual features of targets and their surrounding distractors or by the number of items in the display, the role of memory in search is less well understood. Contextual cueing studies have shown that implicit memory for repeated item configurations can facilitate search in artificial displays. When searching more naturalistic environments, other forms of memory come into play. For instance, semantic memory provides useful information about which objects are typically found where within a scene, and episodic scene memory provides information about where a particular object was seen the last time a particular scene was viewed. In this paper, we will review work on these topics, with special emphasis on the role of memory in guiding search in organized, real-world scenes. PMID:25684693

  6. Visual search for arbitrary objects in real scenes

    PubMed Central

    Alvarez, George A.; Rosenholtz, Ruth; Kuzmova, Yoana I.; Sherman, Ashley M.

    2011-01-01

    How efficient is visual search in real scenes? In searches for targets among arrays of randomly placed distractors, efficiency is often indexed by the slope of the reaction time (RT) × Set Size function. However, it may be impossible to define set size for real scenes. As an approximation, we hand-labeled 100 indoor scenes and used the number of labeled regions as a surrogate for set size. In Experiment 1, observers searched for named objects (a chair, bowl, etc.). With set size defined as the number of labeled regions, search was very efficient (~5 ms/item). When we controlled for a possible guessing strategy in Experiment 2, slopes increased somewhat (~15 ms/item), but they were much shallower than search for a random object among other distinctive objects outside of a scene setting (Exp. 3: ~40 ms/item). In Experiments 4–6, observers searched repeatedly through the same scene for different objects. Increased familiarity with scenes had modest effects on RTs, while repetition of target items had large effects (>500 ms). We propose that visual search in scenes is efficient because scene-specific forms of attentional guidance can eliminate most regions from the “functional set size” of items that could possibly be the target. PMID:21671156
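    The efficiency index used above, the slope of the RT × set size function, is just the linear-regression slope of mean RT against set size. A small sketch with made-up numbers (not the study's data):

```python
# Search-efficiency slope from hypothetical mean RTs (ms) at four set sizes.
import numpy as np

set_sizes = np.array([4, 8, 16, 32])                 # e.g., labeled regions per scene
mean_rts = np.array([620.0, 640.0, 680.0, 760.0])    # illustrative values, ms

slope, intercept = np.polyfit(set_sizes, mean_rts, 1)
print(f"{slope:.1f} ms/item")                        # shallow slope = efficient search
```

    By this index, ~5 ms/item (Experiment 1) is very efficient search, while ~40 ms/item (Experiment 3) is the inefficient end of the range reported above.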

  7. Visual Empirical Region of Influence (VERI) Pattern Recognition Algorithms

    Energy Science and Technology Software Center (ESTSC)

    2002-05-01

    We developed new pattern recognition (PR) algorithms based on a human visual perception model. We named these algorithms Visual Empirical Region of Influence (VERI) algorithms. To compare the new algorithms' effectiveness against other PR algorithms, we benchmarked their clustering capabilities with a standard set of two-dimensional data that is well known in the PR community. The VERI algorithm succeeded in clustering all the data correctly. No existing algorithm had previously clustered all the patterns in the data set successfully. The commands to execute VERI algorithms are quite difficult to master when executed from a DOS command line, and the algorithm requires several parameters to operate correctly. From our own experience we realized that if we wanted to provide a new data analysis tool to the PR community, we would have to make the tool powerful, yet easy and intuitive to use. That was our motivation for developing graphical user interfaces (GUIs) to the VERI algorithms. We developed GUIs to control the VERI algorithm in a single-pass mode and in an optimization mode. We also developed a visualization technique that allows users to graphically animate and visually inspect multi-dimensional data after it has been classified by the VERI algorithms. The visualization package is integrated into the single-pass interface. Both the single-pass interface and the optimization interface are part of the PR software package we have developed and make available to other users. The single-pass mode only finds PR results for the sets of features in the data set that are manually requested by the user.
The optimization mode uses a brute-force method of searching through the combinations of features in a data set for the features that produce the best pattern recognition results. With a small number of features in a data set, an exact solution can be determined. However, the number of possible combinations increases exponentially with the number of features, and an alternate means of finding a solution must be found. We developed and implemented a technique for finding solutions in data sets with both small and large numbers of features. The VERI interface tools were written using the Tcl/Tk GUI programming language, version 8.1. Although Tcl/Tk packages are designed to run on multiple computer platforms, we have concentrated our efforts on a user interface for the ubiquitous DOS environment. The VERI algorithms are compiled, executable programs. The interfaces run the VERI algorithms in Leave-One-Out mode using the Euclidean metric.
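    The optimization mode described above (brute-force search over feature combinations, scored with leave-one-out classification under the Euclidean metric) can be sketched as follows; the toy data and the 1-NN scoring rule are illustrative choices, not taken from the VERI package:

```python
# Brute-force feature-subset search scored by leave-one-out 1-NN accuracy
# (Euclidean metric). Toy data: feature 0 is informative, features 1-2 are noise.
import itertools
import numpy as np

def loo_nn_accuracy(X, y):
    """Leave-one-out nearest-neighbour accuracy under the Euclidean metric."""
    hits = 0
    for i in range(len(X)):
        d = np.linalg.norm(X - X[i], axis=1)
        d[i] = np.inf                      # hold out sample i
        hits += int(y[int(np.argmin(d))] == y[i])
    return hits / len(X)

rng = np.random.default_rng(2)
y = np.repeat([0, 1], 20)
X = rng.normal(size=(40, 3))
X[:, 0] += 4.0 * y                         # class separation on feature 0 only

subsets = (s for k in range(1, 4) for s in itertools.combinations(range(3), k))
best = max(subsets, key=lambda s: loo_nn_accuracy(X[:, list(s)], y))
print(best)
```

    With only three features the exhaustive search is cheap; as the abstract notes, the number of subsets grows exponentially with feature count, which is why an approximate strategy is needed for larger data sets.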

  8. Visual Empirical Region of Influence (VERI) Pattern Recognition Algorithms

    SciTech Connect

    2002-05-01

    We developed new pattern recognition (PR) algorithms based on a human visual perception model. We named these algorithms Visual Empirical Region of Influence (VERI) algorithms. To compare the new algorithms' effectiveness against other PR algorithms, we benchmarked their clustering capabilities with a standard set of two-dimensional data that is well known in the PR community. The VERI algorithm succeeded in clustering all the data correctly. No existing algorithm had previously clustered all the patterns in the data set successfully. The commands to execute VERI algorithms are quite difficult to master when executed from a DOS command line, and the algorithm requires several parameters to operate correctly. From our own experience we realized that if we wanted to provide a new data analysis tool to the PR community, we would have to make the tool powerful, yet easy and intuitive to use. That was our motivation for developing graphical user interfaces (GUIs) to the VERI algorithms. We developed GUIs to control the VERI algorithm in a single-pass mode and in an optimization mode. We also developed a visualization technique that allows users to graphically animate and visually inspect multi-dimensional data after it has been classified by the VERI algorithms. The visualization package is integrated into the single-pass interface. Both the single-pass interface and the optimization interface are part of the PR software package we have developed and make available to other users. The single-pass mode only finds PR results for the sets of features in the data set that are manually requested by the user.
The optimization mode uses a brute-force method of searching through the combinations of features in a data set for the features that produce the best pattern recognition results. With a small number of features in a data set, an exact solution can be determined. However, the number of possible combinations increases exponentially with the number of features, and an alternate means of finding a solution must be found. We developed and implemented a technique for finding solutions in data sets with both small and large numbers of features. The VERI interface tools were written using the Tcl/Tk GUI programming language, version 8.1. Although Tcl/Tk packages are designed to run on multiple computer platforms, we have concentrated our efforts on a user interface for the ubiquitous DOS environment. The VERI algorithms are compiled, executable programs. The interfaces run the VERI algorithms in Leave-One-Out mode using the Euclidean metric.

  9. Enhancing Visual Search Abilities of People with Intellectual Disabilities

    ERIC Educational Resources Information Center

    Li-Tsang, Cecilia W. P.; Wong, Jackson K. K.

    2009-01-01

    This study aimed to evaluate the effects of cueing in visual search paradigm for people with and without intellectual disabilities (ID). A total of 36 subjects (18 persons with ID and 18 persons with normal intelligence) were recruited using convenient sampling method. A series of experiments were conducted to compare guided cue strategies using…

  10. Comparing target detection errors in visual search and manually-assisted search.

    PubMed

    Solman, Grayden J F; Hickey, Kersondra; Smilek, Daniel

    2014-05-01

    Subjects searched for low- or high-prevalence targets among static nonoverlapping items or items piled in heaps that could be moved using a computer mouse. We replicated the classical prevalence effect both in visual search and when unpacking items from heaps, with more target misses under low prevalence. Moreover, we replicated our previous finding that while unpacking, people often move the target item without noticing (the unpacking error) and determined that these errors also increase under low prevalence. On the basis of a comparison of item movements during the manually-assisted search and eye movements during static visual search, we suggest that low prevalence leads to broadly reduced diligence during search but that the locus of this reduced diligence depends on the nature of the task. In particular, while misses during visual search often arise from a failure to inspect all of the items, misses during manually-assisted search more often result from a failure to adequately inspect individual items. Indeed, during manually-assisted search, over 90 % of target misses occurred despite subjects having moved the target item during search. PMID:24554230

  11. Hemispatial neglect on visual search tasks in Alzheimer's disease.

    PubMed

    Mendez, M F; Cherrier, M M; Cymerman, J S

    1997-07-01

    Abnormal visual attention may underlie certain visuospatial difficulties in patients with Alzheimer's disease (AD). These patients have hypometabolism and neuropathology in parietal cortex. Given the role of parietal function in visuospatial attention, patients with AD may have relative hemispatial neglect masked by other cognitive disturbances. Fifteen patients with mild-to-moderate AD and 15 healthy elderly controls matched for age, sex, and education were compared on four measures of neglect: the visual search of a complex picture, a letter cancellation task, the Schenkenberg line bisection test, and a computerized line bisection task. Compared with controls, the group with AD was significantly impaired overall in attending to left hemispace on both picture search (F[1,56] = 11.27, p < 0.05) and cancellation tasks (F[1,112] = 12.68, p < 0.01); however, a subgroup of patients with AD had disproportionate difficulty in attending to right hemispace. The performance of the groups did not differ on either of the line bisection tasks regardless of the hand used. In AD, hemispatial neglect on visual search tasks may relate to difficulty in disengaging attention or in visual exploration, as well as to the severity of the disease. Future investigations may implicate neglect in visually related deficits in AD, for example, the prominent difficulty with left turns when driving a car. PMID:9297714

  12. Searching while loaded: Visual working memory does not interfere with hybrid search efficiency but hybrid search uses working memory capacity.

    PubMed

    Drew, Trafton; Boettcher, Sage E P; Wolfe, Jeremy M

    2016-02-01

    In "hybrid search" tasks, such as finding items on a grocery list, one must search the scene for targets while also searching the list in memory. How is the representation of a visual item compared with the representations of items in the memory set? Predominant theories would propose a role for visual working memory (VWM) either as the site of the comparison or as a conduit between visual and memory systems. In seven experiments, we loaded VWM in different ways and found little or no effect on hybrid search performance. However, the presence of a hybrid search task did reduce the measured capacity of VWM by a constant amount regardless of the size of the memory or visual sets. These data are broadly consistent with an account in which VWM must dedicate a fixed amount of its capacity to passing visual representations to long-term memory for comparison to the items in the memory set. The data cast doubt on models in which the search template resides in VWM or where memory set item representations are moved from LTM through VWM to earlier areas for comparison to visual items. PMID:26055755

  13. Human Visual Search Performance for Camouflaged Targets.

    PubMed

    Matthews, Olivia; Liggins, Eric; Volonakis, Tim; Scott-Samuel, Nick; Baddeley, Roland; Cuthill, Innes

    2015-09-01

    There is a paucity of published systematic research investigating object detection within the military context. Here, we establish baseline human detection performance for five standard military-issued camouflage patterns. Stimuli were drawn from a database of 1242 calibrated images of a mixed deciduous woodland environment in Bristol, UK. Images within this database were taken during daylight hours in summer and contained a PASGT helmet, systematically positioned within each scene. Twenty subjects discriminated between the two image types in a temporal 2AFC task (500 ms presentation for each interval), with performance measured as the percentage of instances in which participants correctly detected the target. Cueing (cued/not-cued to target location), colour (colour/greyscale) and distance from the observer (3.5/5/7.5 m) were manipulated, as was helmet camouflage pattern. A generalized linear mixed model revealed significant interactions between all variables on participant performance, with greater accuracy when stimuli were in colour and the target location was cued. There was also a clear ranking of patterns in terms of effectiveness of camouflage. We also compare the results with a computational model based on low-level vision, and with eye-tracking data, with encouraging results. Our methodology provides a controlled means of assessing any camouflage in any environment, and the potential to implement a machine vision solution to assessment. In this instance, we show differences in the effectiveness of existing solutions to the problem of camouflage, concealment and deception (CCD) on the battlefield. Funded by QinetiQ as part of the Materials and Structures Low Observable Materials Research Programme. Meeting abstract presented at VSS 2015. PMID:26326852

  14. Attention during visual search: The benefit of bilingualism

    PubMed Central

    Friesen, Deanna C; Latman, Vered; Calvo, Alejandra; Bialystok, Ellen

    2015-01-01

    Aims and Objectives/Purpose/Research Questions: Following reports showing bilingual advantages in executive control (EC) performance, the current study investigated the role of selective attention as a foundational skill that might underlie these advantages. Design/Methodology/Approach: Bilingual and monolingual young adults performed a visual search task by determining whether a target shape was present amid distractor shapes. Task difficulty was manipulated by search type (feature or conjunction) and by the number and discriminability of the distractors. In feature searches, the target (e.g., green triangle) differed on a single dimension (e.g., color) from the distractors (e.g., yellow triangles); in conjunction searches, two types of distractors (e.g., pink circles and turquoise squares) each differed from the target (e.g., turquoise circle) on a single but different dimension (e.g., color or shape). Data and Analysis: Reaction time and accuracy data from 109 young adults (53 monolinguals and 56 bilinguals) were analyzed using a repeated-measures analysis of variance. Group membership, search type, and number and discriminability of distractors were the independent variables. Findings/Conclusions: Participants identified the target more quickly in the feature searches, when the target was highly discriminable from the distractors, and when there were fewer distractors. Importantly, although monolinguals and bilinguals performed equivalently on the feature searches, bilinguals were significantly faster than monolinguals in identifying the target in the more difficult conjunction search, providing evidence for better control of visual attention in bilinguals. Originality: Unlike previous studies on bilingual visual attention, the current study found a bilingual attention advantage in a paradigm that did not include a Stroop-like manipulation to set up false expectations.
Significance/Implications: Thus, our findings indicate that the need to resolve explicit conflict or overcome false expectations is unnecessary for observing a bilingual advantage in selective attention. Observing this advantage in a fundamental skill suggests that it may underlie higher-order bilingual advantages in EC. PMID:26640399

  15. Visual Detection of Multi-Letter Patterns.

    ERIC Educational Resources Information Center

    Staller, Joshua D.; Lappin, Joseph S.

    1981-01-01

    In three experiments, this study addressed two basic questions about the detection of multiletter patterns: (1) How is the detection of a multiletter pattern related to the detection of its individual components? (2) How is the detection of a sequence of letters influenced by the observer's familiarity with that sequence? (Author/BW)

  16. Eye-Search: A web-based therapy that improves visual search in hemianopia.

    PubMed

    Ong, Yean-Hoon; Jacquin-Courtois, Sophie; Gorgoraptis, Nikos; Bays, Paul M; Husain, Masud; Leff, Alexander P

    2015-01-01

    Persisting hemianopia frequently complicates lesions of the posterior cerebral hemispheres, leaving patients impaired on a range of key activities of daily living. Practice-based therapies designed to induce compensatory eye movements can improve hemianopic patients' visual function, but are not readily available. We used a web-based therapy (Eye-Search) that retrains visual search saccades into patients' blind hemifield. A group of 78 suitable hemianopic patients took part. After therapy (800 trials over 11 days), search times into their impaired hemifield improved by an average of 24%. Patients also reported improvements in a subset of visually guided everyday activities, suggesting that Eye-Search therapy affects real-world outcomes. PMID:25642437

  17. Early activation of object names in visual search.

    PubMed

    Meyer, Antje S; Belke, Eva; Telling, Anna L; Humphreys, Glyn W

    2007-08-01

In a visual search experiment, participants had to decide whether or not a target object was present in a four-object search array. One of these objects could be a semantically related competitor (e.g., shirt for the target trousers) or a conceptually unrelated object with the same name as the target, for example, bat (baseball) for the target bat (animal). In the control condition, the related competitor was replaced by an unrelated object. The participants' response latencies and eye movements demonstrated that the two types of related competitors had similar effects: Competitors attracted the participants' visual attention and thereby delayed positive and negative decisions. The results imply that semantic and name information associated with the objects becomes rapidly available and affects the allocation of visual attention. PMID:17972738

  18. Entrainment of Human Alpha Oscillations Selectively Enhances Visual Conjunction Search

    PubMed Central

    Müller, Notger G.; Vellage, Anne-Katrin; Heinze, Hans-Jochen; Zaehle, Tino

    2015-01-01

The functional role of the alpha rhythm, which dominates the human electroencephalogram (EEG), is unclear. It has been related to visual processing, attentional selection, and object coherence, respectively. Here we tested the interaction of alpha oscillations of the human brain with visual search tasks that differed in their attentional demands (pre-attentive vs. attentive) and also in the necessity to establish object coherence (conjunction vs. single feature). Between pre- and post-assessment, elderly subjects received 20 min/d of repetitive transcranial alternating current stimulation (tACS) over the occipital cortex, adjusted to their individual alpha frequency, over five consecutive days. Compared to sham stimulation, the entrained alpha oscillations led to a selective, set-size-independent improvement in performance on the conjunction search task, but not on the easy or the hard feature search task. These findings suggest that cortical alpha oscillations play a specific role in establishing object coherence through suppression of distracting objects. PMID:26606255

  19. The Role of Visual Working Memory in the Control of Gaze during Visual Search

    PubMed Central

    Hollingworth, Andrew; Luck, Steven J.

    2009-01-01

    The interactions among visual working memory (VWM), attention, and gaze control were investigated in a visual search task that was performed while a color was held in VWM for a concurrent discrimination task. In the search task, participants were required to foveate a cued item within a circular array of colored objects. During the saccade to the target, the array was sometimes rotated so that the eyes landed midway between the target object and an adjacent distractor object, necessitating a second saccade to foveate the target. When the color of the adjacent distractor matched a color being maintained in VWM, execution of this secondary saccade was impaired, indicating that the current contents of VWM bias saccade targeting mechanisms that ordinarily direct gaze toward target objects during visual search. PMID:19429970

  20. Visual cluster analysis and pattern recognition methods

    DOEpatents

    Osbourn, Gordon Cecil; Martinez, Rubel Francisco

    2001-01-01

A method of clustering using a novel template to define a region of influence. Using neighboring approximation methods, computation times can be significantly reduced. The template and method are applicable to, and improve, pattern recognition techniques.

  1. The long and the short of priming in visual search.

    PubMed

    Kruijne, Wouter; Meeter, Martijn

    2015-07-01

Memory affects visual search, as is particularly evident from findings that when target features are repeated from one trial to the next, selection is faster. Two views have emerged on the nature of the memory representations and mechanisms that cause these intertrial priming effects: independent feature weighting versus episodic retrieval of previous trials. Previous research has attempted to disentangle these views by focusing on short-term effects. Here, we illustrate that episodic retrieval models make the unique prediction of long-term priming: biasing one target type will result in priming of that target type for a much longer time, well after the bias has disappeared. We demonstrate that such long-term priming is indeed found for the visual feature of color, but only in conjunction search and not in singleton search. Two follow-up experiments showed that it was the kind of search (conjunction versus singleton), not its difficulty, that determined whether long-term priming occurred. Long-term priming persisted unaltered for at least 200 trials and could not be explained as the result of explicit strategy. We propose that episodic memory may affect search more consistently than previously thought, and that the mechanisms for intertrial priming may be qualitatively different for singleton and conjunction search. PMID:25832185

  2. How do Interruptions Impact Nurses’ Visual Scanning Patterns When Using Barcode Medication Administration Systems?

    PubMed Central

    He, Ze; Marquard, Jenna L.; Henneman, Philip L.

    2014-01-01

    While barcode medication administration (BCMA) systems have the potential to reduce medication errors, they may introduce errors, side effects, and hazards into the medication administration process. Studies of BCMA systems should therefore consider the interrelated nature of health information technology (IT) use and sociotechnical systems. We aimed to understand how the introduction of interruptions into the BCMA process impacts nurses’ visual scanning patterns, a proxy for one component of cognitive processing. We used an eye tracker to record nurses’ visual scanning patterns while administering a medication using BCMA. Nurses either performed the BCMA process in a controlled setting with no interruptions (n=25) or in a real clinical setting with interruptions (n=21). By comparing the visual scanning patterns between the two groups, we found that nurses in the interruptive environment identified less task-related information in a given period of time, and engaged in more information searching than information processing. PMID:25954449

  3. Searching for pulsars using image pattern recognition

    SciTech Connect

    Zhu, W. W.; Berndsen, A.; Madsen, E. C.; Tan, M.; Stairs, I. H.; Brazier, A.; Lazarus, P.; Lynch, R.; Scholz, P.; Stovall, K.; Cohen, S.; Dartez, L. P.; Lunsford, G.; Martinez, J. G.; Mata, A.; Ransom, S. M.; Banaszak, S.; Biwer, C. M.; Flanigan, J.; Rohr, M. E-mail: berndsen@phas.ubc.ca; and others

    2014-02-01

    In the modern era of big data, many fields of astronomy are generating huge volumes of data, the analysis of which can sometimes be the limiting factor in research. Fortunately, computer scientists have developed powerful data-mining techniques that can be applied to various fields. In this paper, we present a novel artificial intelligence (AI) program that identifies pulsars from recent surveys by using image pattern recognition with deep neural nets—the PICS (Pulsar Image-based Classification System) AI. The AI mimics human experts and distinguishes pulsars from noise and interference by looking for patterns from candidate plots. Different from other pulsar selection programs that search for expected patterns, the PICS AI is taught the salient features of different pulsars from a set of human-labeled candidates through machine learning. The training candidates are collected from the Pulsar Arecibo L-band Feed Array (PALFA) survey. The information from each pulsar candidate is synthesized in four diagnostic plots, which consist of image data with up to thousands of pixels. The AI takes these data from each candidate as its input and uses thousands of such candidates to train its ∼9000 neurons. The deep neural networks in this AI system grant it superior ability to recognize various types of pulsars as well as their harmonic signals. The trained AI's performance has been validated with a large set of candidates from a different pulsar survey, the Green Bank North Celestial Cap survey. In this completely independent test, the PICS ranked 264 out of 277 pulsar-related candidates, including all 56 previously known pulsars and 208 of their harmonics, in the top 961 (1%) of 90,008 test candidates, missing only 13 harmonics. The first non-pulsar candidate appears at rank 187, following 45 pulsars and 141 harmonics. In other words, 100% of the pulsars were ranked in the top 1% of all candidates, while 80% were ranked higher than any noise or interference. 
The performance of this system can be improved over time as more training data are accumulated. This AI system has been integrated into the PALFA survey pipeline and has discovered six new pulsars to date.
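The ranking workflow this abstract describes (train a classifier on human-labeled candidates, score unseen candidates, then inspect the top of the ranking) can be sketched in miniature. This is only an illustration with synthetic random features and a plain logistic model; PICS itself uses deep neural networks trained on diagnostic-plot images, and nothing below reproduces its actual architecture:

```python
import numpy as np

# Toy stand-in for the candidate-ranking idea: train a classifier on
# labeled feature vectors, then rank unseen candidates by score.
# The features are random synthetic stand-ins for diagnostic-plot data.
rng = np.random.default_rng(42)
n, d = 200, 8
X = rng.normal(size=(n, d))
true_w = rng.normal(size=d)
y = (X @ true_w > 0).astype(float)   # synthetic pulsar / non-pulsar labels

# Fit a logistic model by plain gradient descent.
w = np.zeros(d)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    w -= 0.1 * (X.T @ (p - y)) / n

scores = 1.0 / (1.0 + np.exp(-(X @ w)))
ranking = np.argsort(-scores)        # highest-scoring candidates first
```

The evaluation reported in the abstract (what fraction of true pulsars land in the top 1% of the ranking) would then be a simple lookup of `y[ranking]`.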

  4. Searching for Pulsars Using Image Pattern Recognition

    NASA Astrophysics Data System (ADS)

    Zhu, W. W.; Berndsen, A.; Madsen, E. C.; Tan, M.; Stairs, I. H.; Brazier, A.; Lazarus, P.; Lynch, R.; Scholz, P.; Stovall, K.; Ransom, S. M.; Banaszak, S.; Biwer, C. M.; Cohen, S.; Dartez, L. P.; Flanigan, J.; Lunsford, G.; Martinez, J. G.; Mata, A.; Rohr, M.; Walker, A.; Allen, B.; Bhat, N. D. R.; Bogdanov, S.; Camilo, F.; Chatterjee, S.; Cordes, J. M.; Crawford, F.; Deneva, J. S.; Desvignes, G.; Ferdman, R. D.; Freire, P. C. C.; Hessels, J. W. T.; Jenet, F. A.; Kaplan, D. L.; Kaspi, V. M.; Knispel, B.; Lee, K. J.; van Leeuwen, J.; Lyne, A. G.; McLaughlin, M. A.; Siemens, X.; Spitler, L. G.; Venkataraman, A.

    2014-02-01

    In the modern era of big data, many fields of astronomy are generating huge volumes of data, the analysis of which can sometimes be the limiting factor in research. Fortunately, computer scientists have developed powerful data-mining techniques that can be applied to various fields. In this paper, we present a novel artificial intelligence (AI) program that identifies pulsars from recent surveys by using image pattern recognition with deep neural nets—the PICS (Pulsar Image-based Classification System) AI. The AI mimics human experts and distinguishes pulsars from noise and interference by looking for patterns from candidate plots. Different from other pulsar selection programs that search for expected patterns, the PICS AI is taught the salient features of different pulsars from a set of human-labeled candidates through machine learning. The training candidates are collected from the Pulsar Arecibo L-band Feed Array (PALFA) survey. The information from each pulsar candidate is synthesized in four diagnostic plots, which consist of image data with up to thousands of pixels. The AI takes these data from each candidate as its input and uses thousands of such candidates to train its ~9000 neurons. The deep neural networks in this AI system grant it superior ability to recognize various types of pulsars as well as their harmonic signals. The trained AI's performance has been validated with a large set of candidates from a different pulsar survey, the Green Bank North Celestial Cap survey. In this completely independent test, the PICS ranked 264 out of 277 pulsar-related candidates, including all 56 previously known pulsars and 208 of their harmonics, in the top 961 (1%) of 90,008 test candidates, missing only 13 harmonics. The first non-pulsar candidate appears at rank 187, following 45 pulsars and 141 harmonics. In other words, 100% of the pulsars were ranked in the top 1% of all candidates, while 80% were ranked higher than any noise or interference. 
The performance of this system can be improved over time as more training data are accumulated. This AI system has been integrated into the PALFA survey pipeline and has discovered six new pulsars to date.

  5. Pattern and Component Motion Responses in Mouse Visual Cortical Areas.

    PubMed

    Juavinett, Ashley L; Callaway, Edward M

    2015-06-29

Spanning about 9 mm² of the posterior cortical surface, the mouse's small but organized visual cortex has recently gained attention for its surprising sophistication and experimental tractability. Though it lacks the highly ordered orientation columns of primates, mouse visual cortex is organized retinotopically and contains at least ten extrastriate areas that likely integrate more complex visual features via dorsal and ventral streams of processing. Extending our understanding of visual perception to the mouse model is justified by the evolving ability to interrogate specific neural circuits using genetic and molecular techniques. In order to probe the functional properties of the putative mouse dorsal stream, we used moving plaids, which reveal differences between cells that identify local motion (component cells) and those that integrate global motion of the plaid (pattern cells; Figure 1A). In primates, pattern cell responses are sparse in V1 but much more common in higher-order regions; 25%-30% of cells in MT and 40%-60% in MST are pattern direction selective. We present evidence that mice have small numbers of pattern cells in areas LM and RL, while V1, AL, and AM are largely component-like. Although the proportion of pattern cells is smaller in mouse visual cortex than in primate MT, this study provides evidence that the organization of the mouse visual system shares important similarities with that of primates and opens the possibility of using mice to probe motion computation mechanisms. PMID:26073133

  6. Visual Object Pattern Separation Varies in Older Adults

    ERIC Educational Resources Information Center

    Holden, Heather M.; Toner, Chelsea; Pirogovsky, Eva; Kirwan, C. Brock; Gilbert, Paul E.

    2013-01-01

    Young and nondemented older adults completed a visual object continuous recognition memory task in which some stimuli (lures) were similar but not identical to previously presented objects. The lures were hypothesized to result in increased interference and increased pattern separation demand. To examine variability in object pattern separation…

  8. The development of visual search in infants and very young children.

    PubMed

    Gerhardstein, Peter; Rovee-Collier, Carolyn

    2002-02-01

    In two experiments, 90 1- to 3-year-olds were trained in a new nonverbal task to touch a video screen that displayed a unique target resembling a popular television character. The target appeared among varying numbers of distractors that resembled another familiar television character and was either a uniquely colored shape (the feature search task) or a unique color-shape combination (the conjunction search task). Each correct response triggered a sound and produced four animated objects on the screen. Irrespective of age and experimental design (between-subjects or within-subjects), children's reaction time (RT) patterns resembled those obtained from adults in corresponding search tasks: The RT slope for feature search was flat and independent of distractor number, whereas the RT slope for conjunction search increased linearly with distractor number. These results extend visual search effects found with adults to infants and very young children and suggest that the basic perceptual processes underlying visual search are qualitatively invariant over ontogeny. PMID:11786009
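The flat-versus-linear slope contrast described above (RT independent of distractor number for feature search, increasing linearly for conjunction search) is conventionally quantified as a least-squares slope in ms per distractor. A minimal sketch; the RT values below are invented for illustration, not taken from the study:

```python
import numpy as np

# Hypothetical mean reaction times (ms) at distractor set sizes 4, 8, 16.
# Feature search is roughly flat; conjunction search rises with set size.
set_sizes = np.array([4.0, 8.0, 16.0])
feature_rt = np.array([520.0, 524.0, 522.0])
conjunction_rt = np.array([560.0, 680.0, 910.0])

def rt_slope(sizes, rts):
    """Least-squares slope of RT against set size, in ms per distractor."""
    slope, _intercept = np.polyfit(sizes, rts, 1)
    return float(slope)

feature_slope = rt_slope(set_sizes, feature_rt)          # near zero
conjunction_slope = rt_slope(set_sizes, conjunction_rt)  # tens of ms/item
```

A near-zero slope is the usual signature of parallel "pop-out" search, while a positive slope indicates that each added distractor costs measurable search time.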

  9. The Efficiency of a Visual Skills Training Program on Visual Search Performance

    PubMed Central

    Krzepota, Justyna; Zwierko, Teresa; Puchalska-Niedbał, Lidia; Markiewicz, Mikołaj; Florkiewicz, Beata; Lubiński, Wojciech

    2015-01-01

In this study, we conducted an experiment analyzing the possibility of developing visual skills through specifically targeted training of visual search. The aim of our study was to investigate whether, for how long, and to what extent a training program for visual functions could improve visual search. The study involved 24 healthy students from Szczecin University, divided into an experimental group (12) and a control group (12). In addition to the regular sports and recreational activities of the curriculum, the subjects of the experimental group also completed an 8-week training program for visual functions, 3 times a week for 45 min. The Signal Test of the Vienna Test System was performed four times: before entering the study, after the first 4 weeks of the experiment, immediately after its completion, and 4 weeks after the study terminated. The results showed that the 8-week perceptual training program significantly affected visual detection time. For changes in visual detection time, the first factor, Group, was significant as a main effect (F(1,22)=6.49, p<0.05), as was the second factor, Training (F(3,66)=5.06, p<0.01). The interaction between the two factors (Group vs. Training) was F(3,66)=6.82 (p<0.001). Similarly, for the number of correct reactions, there was a main effect of Group (F(1,22)=23.40, p<0.001), a main effect of Training (F(3,66)=11.60, p<0.001), and a significant interaction between the factors (F(3,66)=10.33, p<0.001). Our study suggests that 8 weeks of training of visual functions can improve visual search performance. PMID:26240666

  11. Sequential pattern data mining and visualization

    DOEpatents

    Wong, Pak Chung; Jurrus, Elizabeth R.; Cowley, Wendy E.; Foote, Harlan P.; Thomas, James J.

    2009-05-26

One or more processors (22) are operated to extract a number of different event identifiers therefrom. These processors (22) are further operable to determine a number of display locations, each representative of one of the different identifiers and a corresponding time. The display locations are grouped into sets, each corresponding to a different one of several event sequences (330a, 330b, 330c, 330d, 330e). An output is generated corresponding to a visualization (320) of the event sequences (330a, 330b, 330c, 330d, 330e).

  12. Sequential pattern data mining and visualization

    DOEpatents

    Wong, Pak Chung; Jurrus, Elizabeth R.; Cowley, Wendy E.; Foote, Harlan P.; Thomas, James J.

    2011-12-06

One or more processors (22) are operated to extract a number of different event identifiers therefrom. These processors (22) are further operable to determine a number of display locations, each representative of one of the different identifiers and a corresponding time. The display locations are grouped into sets, each corresponding to a different one of several event sequences (330a, 330b, 330c, 330d, 330e). An output is generated corresponding to a visualization (320) of the event sequences (330a, 330b, 330c, 330d, 330e).

  13. The rise and fall of hybrid visual and memory search.

    PubMed

    Horowitz, Todd

    2015-09-01

In hybrid search, observers search through arrays of visually presented items for any of a set of targets held in memory; think of looking on a store shelf for items on your grocery list, searching luggage x-rays for potential banned items, or chest x-rays for signs of cancer. As the size of the memory sets used in hybrid search increased, descriptions of the RT by set size function moved from linear (Shiffrin & Schneider, 1977) to logarithmic (Wolfe, 2012). In order to study hybrid search at larger set sizes, with greater resolution, and as a function of expertise, I utilized the Airport Scanner (Kedlin Co., www.airportscannergame.com) dataset (see Mitroff and Biggs 2014). Airport Scanner is a game for mobile devices, which simulates searching through baggage x-rays for threats under time constraints. Players move through five ranks as they acquire game expertise, from "Trainee" to "Elite". Critically, new items are added to the list of potential threats as the game progresses. Eliminating trainees to reduce potential learning effects, I analyzed only trials (bags) with a single target. This left 3,491,664 trials from 18,595 players. Memory set size (potential threats) ranged from 3 to 155 items. Visual set size had no influence on performance. For set sizes from 6-12, the logarithmic relationship held. However, across the full range of set sizes, at all expertise levels, reaction time was best described as a quadratic, rather than logarithmic, function of set size. For the low expertise groups, the curve opened upward, while at high expertise, it opened downward. These data suggest that encoding and retrieval strategies in hybrid search change in a complex fashion as the size of the memory set increases, and as observers gain more expertise with the task. Models developed for small set sizes may not generalize to realistically large set sizes. Meeting abstract presented at VSS 2015. PMID:26326800
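The model comparison in this abstract (logarithmic versus quadratic RT as a function of memory set size) amounts to fitting two linear-in-parameters models and comparing residuals. A minimal sketch, using made-up RTs generated from a quadratic rule purely to show the mechanics:

```python
import numpy as np

# Hypothetical mean RTs (ms) over a wide range of memory set sizes,
# generated from a quadratic rule for illustration only.
set_sizes = np.array([3.0, 6.0, 12.0, 25.0, 50.0, 100.0, 155.0])
rts = 400.0 + 2.0 * set_sizes + 0.02 * set_sizes ** 2

def fit_residual(design, y):
    """Sum of squared residuals of an ordinary least-squares fit."""
    coef, *_ = np.linalg.lstsq(design, y, rcond=None)
    return float(np.sum((design @ coef - y) ** 2))

ones = np.ones_like(set_sizes)
log_model = np.column_stack([ones, np.log(set_sizes)])
quad_model = np.column_stack([ones, set_sizes, set_sizes ** 2])

log_err = fit_residual(log_model, rts)    # poor fit over the full range
quad_err = fit_residual(quad_model, rts)  # near-perfect by construction
```

With real data the comparison would also need to penalize the quadratic model's extra parameter (e.g., via AIC or cross-validation) rather than compare raw residuals.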

  14. Information-Limited Parallel Processing in Difficult Heterogeneous Covert Visual Search

    ERIC Educational Resources Information Center

    Dosher, Barbara Anne; Han, Songmei; Lu, Zhong-Lin

    2010-01-01

    Difficult visual search is often attributed to time-limited serial attention operations, although neural computations in the early visual system are parallel. Using probabilistic search models (Dosher, Han, & Lu, 2004) and a full time-course analysis of the dynamics of covert visual search, we distinguish unlimited capacity parallel versus serial…

  15. Functional neuroanatomy of visual search with differential attentional demands: an fMRI study.

    PubMed

    Kim, Kwang Ki; Eliassen, James C; Lee, Sang Kun; Kang, Eunjoo

    2012-09-26

Visual search is characterized as efficient (RT independent of distractor number) or inefficient (RT increasing with distractor number). Our goal was to determine if any brain regions are differentially activated by the attentional demands of these two search modes. We used fMRI to examine activation patterns during search for a target among a radial array of several distractors that were manipulated to produce efficient or inefficient search. Distractors for inefficient search were either uniform or varied to manipulate difficulty due to perceptual priming. No brain regions were uniquely activated by efficient or inefficient search, although inefficient search generally produced greater activations. The main differences were found in clusters in the superior occipital and superior parietal regions, for which activations were substantially greater for inefficient search. Similar results were found for frontal regions, such as the inferior prefrontal, superior frontal, anterior insula, and supplementary eye field, as well as the right ventral lateral thalamus. For inefficient search, increasing task difficulty resulted in low accuracy, but no difference in RT or activations. A working memory task utilizing the same display and response mode, but not involving search, activated the same frontal-parietal network as inefficient search (more so for the more difficult inefficient condition). Thus, our results identify brain regions that are more heavily recruited under conditions of inefficient search, independent of task difficulty per se, probably due in part to attentional modulation involving demands of eye movements, working memory, and top-down controls, but do not reveal independent networks related to efficient and inefficient search. PMID:22889940

  16. Reading and Visual Search: A Developmental Study in Normal Children

    PubMed Central

    Seassau, Magali; Bucci, Maria-Pia

    2013-01-01

Studies dealing with developmental aspects of binocular eye movement behaviour during reading are scarce. In this study we have explored binocular strategies during reading and during visual search tasks in a large population of normal young readers. Binocular eye movements were recorded using an infrared video-oculography system in sixty-nine children (aged 6 to 15) and in a group of 10 adults (aged 24 to 39). The main findings are (i) in both tasks the number of progressive saccades (to the right) and regressive saccades (to the left) decreases with age; (ii) the amplitude of progressive saccades increases with age in the reading task only; (iii) in both tasks, the duration of fixations as well as the total duration of the task decreases with age; (iv) in both tasks, the amplitude of disconjugacy recorded during and after the saccades decreases with age; (v) children are significantly more accurate in reading than in visual search after 10 years of age. The data reported here confirm and expand previous studies on children's reading. The new finding is that younger children show poorer coordination than adults, both while reading and while performing a visual search task. Both reading skills and binocular saccade coordination improve with age, and children reach a level similar to adults after the age of 10. This finding is most likely related to the fact that learning mechanisms responsible for saccade yoking develop during childhood until adolescence. PMID:23894627

  17. The influence of attention, learning, and motivation on visual search.

    PubMed

    Dodd, Michael D; Flowers, John H

    2012-01-01

The 59th Annual Nebraska Symposium on Motivation (The Influence of Attention, Learning, and Motivation on Visual Search) took place April 7-8, 2011, on the University of Nebraska-Lincoln campus. The symposium brought together leading scholars who conduct research related to visual search at a variety of levels for a series of talks, poster presentations, panel discussions, and numerous additional opportunities for intellectual exchange. The Symposium was also streamed online for the first time in the history of the event, allowing individuals from around the world to view the presentations and submit questions. The present volume is intended to both commemorate the event itself and to allow our speakers additional opportunity to address issues and current research that have since arisen. Each of the speakers (and, in some cases, their graduate students and postdocs) has provided a chapter which both summarizes and expands on their original presentations. In this chapter, we sought to a) provide additional context as to how the Symposium came to be, b) discuss why we thought that this was an ideal time to organize a visual search symposium, and c) briefly address recent trends and potential future directions in the field. We hope you find the volume both enjoyable and informative, and we thank the authors who have contributed a series of engaging chapters. PMID:23437627

  18. Measuring search efficiency in complex visual search tasks: global and local clutter.

    PubMed

    Beck, Melissa R; Lohrenz, Maura C; Trafton, J Gregory

    2010-09-01

    Set size and crowding affect search efficiency by limiting attention for recognition and attention against competition; however, these factors can be difficult to quantify in complex search tasks. The current experiments use a quantitative measure of the amount and variability of visual information (i.e., clutter) in highly complex stimuli (i.e., digital aeronautical charts) to examine limits of attention in visual search. Undergraduates at a large southern university searched for a target among 4, 8, or 16 distractors in charts with high, medium, or low global clutter. The target was in a high or low local-clutter region of the chart. In Experiment 1, reaction time increased as global clutter increased, particularly when the target was in a high local-clutter region. However, there was no effect of distractor set size, supporting the notion that global clutter is a better measure of attention against competition in complex visual search tasks. As a control, Experiment 2 demonstrated that increasing the number of distractors leads to a typical set size effect when there is no additional clutter (i.e., no chart). In Experiment 3, the effects of global and local clutter were minimized when the target was highly salient. When the target was nonsalient, more fixations were observed in high global clutter charts, indicating that the number of elements competing with the target for attention was also high. The results suggest design techniques that could improve pilots' search performance in aeronautical charts. PMID:20853984
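This abstract depends on a quantitative clutter measure for charts. As a hedged illustration only (not the measure used in the study), one crude clutter score is edge density: the fraction of pixels whose gradient magnitude exceeds a threshold, so that visually busy regions score higher than uniform ones:

```python
import numpy as np

def edge_density(image, threshold=0.1):
    """Fraction of pixels whose gradient magnitude exceeds a threshold;
    a crude proxy for local clutter: busier regions score higher."""
    gy, gx = np.gradient(image.astype(float))
    magnitude = np.hypot(gx, gy)
    return float(np.mean(magnitude > threshold))

rng = np.random.default_rng(0)
plain = np.zeros((64, 64))   # uniform region of a chart: no clutter
busy = rng.random((64, 64))  # noisy region: high clutter
```

Applied over a whole chart this yields a global score; applied to a window around the target it yields a local score, mirroring the global/local distinction the experiments manipulate.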

  19. Image pattern recognition supporting interactive analysis and graphical visualization

    NASA Technical Reports Server (NTRS)

    Coggins, James M.

    1992-01-01

    Image Pattern Recognition attempts to infer properties of the world from image data. Such capabilities are crucial for making measurements from satellite or telescope images related to Earth and space science problems. Such measurements can be the required product itself, or the measurements can be used as input to a computer graphics system for visualization purposes. At present, the field of image pattern recognition lacks a unified scientific structure for developing and evaluating image pattern recognition applications. The overall goal of this project is to begin developing such a structure. This report summarizes results of a 3-year research effort in image pattern recognition addressing the following three principal aims: (1) to create a software foundation for the research and identify image pattern recognition problems in Earth and space science; (2) to develop image measurement operations based on Artificial Visual Systems; and (3) to develop multiscale image descriptions for use in interactive image analysis.

  20. Visual Acceleration Perception for Simple and Complex Motion Patterns

    PubMed Central

    Mueller, Alexandra S.; Timney, Brian

    2016-01-01

    Humans are able to judge whether a target is accelerating in many viewing contexts, but it is an open question how the motion pattern per se affects visual acceleration perception. We measured acceleration and deceleration detection using patterns of random dots with horizontal (simpler) or radial motion (more visually complex). The results suggest that we detect acceleration better when viewing radial optic flow than horizontal translation. However, the direction within each type of pattern has no effect on performance and observers detect acceleration and deceleration similarly within each condition. We conclude that sensitivity to the presence of acceleration is generally higher for more complex patterns, regardless of the direction within each type of pattern or the sign of acceleration. PMID:26901879

  1. Effects of correct and transformed visual feedback on rhythmic visuo-motor tracking: tracking performance and visual search behavior.

    PubMed

    Roerdink, M; Peper, C E; Beek, P J

    2005-06-01

    The effects of correct and transformed visual feedback on rhythmic unimanual visuo-motor tracking were examined, focusing on tracking performance (accuracy and stability) and visual search behavior. Twelve participants (reduced to 9 in the analyses) manually tracked an oscillating visual target signal in phase (by moving the hand in the same direction as the target signal) and in antiphase (by moving the hand in the opposite direction), while the frequency of the target signal was gradually increased to probe pattern stability. Besides a control condition without feedback, correct feedback (representing the actual hand movement) or mirrored feedback (representing the hand movement transformed by 180 degrees) was provided during tracking, resulting in either in-phase or antiphase visual motion of the target and feedback signal, depending on the tracking mode performed. The quality (accuracy and stability) of in-phase tracking was hardly affected by the two forms of feedback, whereas antiphase tracking clearly benefited from mirrored feedback but not from correct feedback. This finding extends previous results indicating that the performance of visuo-motor coordination tasks is aided by visual feedback manipulations resulting in coherently grouped (i.e., in-phase) visual motion structures. Further insights into visuo-motor tracking with and without feedback were garnered from the visual search patterns accompanying task performance. Smooth pursuit eye movements only occurred at lower oscillation frequencies and prevailed during in-phase tracking and when target and feedback signal moved in phase. At higher frequencies, point-of-gaze was fixated at a location that depended on the feedback provided and the resulting visual motion structures. During in-phase tracking the mirrored feedback was ignored, which explains why performance was not affected in this condition. Point-of-gaze fixations at one of the end-points were accompanied by reduced motor variability at this location, reflecting a form of visuo-motor anchoring that may support the pick-up of discrete information as well as the control of hand movements to a desired location. PMID:16087264

  2. Automatic guidance of attention during real-world visual search.

    PubMed

    Seidl-Rathkopf, Katharina N; Turk-Browne, Nicholas B; Kastner, Sabine

    2015-08-01

    Looking for objects in cluttered natural environments is a frequent task in everyday life. This process can be difficult, because the features, locations, and times of appearance of relevant objects often are not known in advance. Thus, a mechanism by which attention is automatically biased toward information that is potentially relevant may be helpful. We tested for such a mechanism across five experiments by engaging participants in real-world visual search and then assessing attentional capture for information that was related to the search set but was otherwise irrelevant. Isolated objects captured attention while preparing to search for objects from the same category embedded in a scene, as revealed by lower detection performance (Experiment 1A). This capture effect was driven by a central processing bottleneck rather than the withdrawal of spatial attention (Experiment 1B), occurred automatically even in a secondary task (Experiment 2A), and reflected enhancement of matching information rather than suppression of nonmatching information (Experiment 2B). Finally, attentional capture extended to objects that were semantically associated with the target category (Experiment 3). We conclude that attention is efficiently drawn towards a wide range of information that may be relevant for an upcoming real-world visual search. This mechanism may be adaptive, allowing us to find information useful for our behavioral goals in the face of uncertainty. PMID:25898897

  3. Visual search strategies in experienced and inexperienced soccer players.

    PubMed

    Williams, A M; Davids, K; Burwitz, L; Williams, J G

    1994-06-01

    This study investigated skill-based differences in anticipation and visual search strategy within open-play situations in soccer. Experienced (n = 15) and inexperienced (n = 15) subjects were required to anticipate pass destination from filmed soccer sequences viewed on a large 3-m x 3-m video projection screen. MANCOVA showed that experienced soccer players demonstrated superior anticipatory performance. Univariate analyses revealed between-group differences in speed of response but not in response accuracy. Also, inexperienced players fixated more frequently on the ball and the player passing the ball, whereas experienced players fixated on peripheral aspects of the display, such as the positions and movements of other players. The experienced group fixated on significantly more locations than their inexperienced counterparts. Further differences were noted in search rate, with experienced players exhibiting more fixations of shorter duration. The experienced group's higher search rate contradicted previous research. However, this resulted from the use of 11-on-11 film sequences, which had not previously been used in visual search research. The increased frequency of eye fixations was regarded as being more advantageous for anticipating pass destination during open play in soccer. Finally, a number of practical implications were highlighted. PMID:8047704

  4. Intertrial Temporal Contextual Cuing: Association across Successive Visual Search Trials Guides Spatial Attention

    ERIC Educational Resources Information Center

    Ono, Fuminori; Jiang, Yuhong; Kawahara, Jun-ichiro

    2005-01-01

    Contextual cuing refers to the facilitation of performance in visual search due to the repetition of the same displays. Whereas previous studies have focused on contextual cuing within single-search trials, this study tested whether 1 trial facilitates visual search of the next trial. Participants searched for a T among Ls. In the training phase,…

  5. MotionFlow: Visual Abstraction and Aggregation of Sequential Patterns in Human Motion Tracking Data.

    PubMed

    Jang, Sujin; Elmqvist, Niklas; Ramani, Karthik

    2016-01-01

    Pattern analysis of human motions, which is useful in many research areas, requires understanding and comparison of different styles of motion patterns. However, working with human motion tracking data to support such analysis poses great challenges. In this paper, we propose MotionFlow, a visual analytics system that provides an effective overview of various motion patterns based on an interactive flow visualization. This visualization formulates a motion sequence as transitions between static poses, and aggregates these sequences into a tree diagram to construct a set of motion patterns. The system also allows the users to directly reflect the context of data and their perception of pose similarities in generating representative pose states. We provide local and global controls over the partition-based clustering process. To support the users in organizing unstructured motion data into pattern groups, we designed a set of interactions that enables searching for similar motion sequences from the data, detailed exploration of data subsets, and creating and modifying the group of motion patterns. To evaluate the usability of MotionFlow, we conducted a user study with six researchers with expertise in gesture-based interaction design. They used MotionFlow to explore and organize unstructured motion tracking data. Results show that the researchers were able to easily learn how to use MotionFlow, and the system effectively supported their pattern analysis activities, including leveraging their perception and domain knowledge. PMID:26529685

  6. Neural Representations of Contextual Guidance in Visual Search of Real-World Scenes

    PubMed Central

    Preston, Tim J.; Guo, Fei; Das, Koel; Giesbrecht, Barry; Eckstein, Miguel P.

    2014-01-01

    Exploiting scene context and object–object co-occurrence is critical in guiding eye movements and facilitating visual search, yet the mediating neural mechanisms are unknown. We used functional magnetic resonance imaging while observers searched for target objects in scenes and used multivariate pattern analyses (MVPA) to show that the lateral occipital complex (LOC) can predict the coarse spatial location of observers’ expectations about the likely location of 213 different targets absent from the scenes. In addition, we found weaker but significant representations of context location in an area related to the orienting of attention (intraparietal sulcus, IPS) as well as a region related to scene processing (retrosplenial cortex, RSC). Importantly, the degree of agreement among 100 independent raters about the likely location to contain a target object in a scene correlated with LOC’s ability to predict the contextual location while weaker but significant effects were found in IPS, RSC, the human motion area, and early visual areas (V1, V3v). When contextual information was made irrelevant to observers’ behavioral task, the MVPA analysis of LOC and the other areas’ activity ceased to predict the location of context. Thus, our findings suggest that the likely locations of targets in scenes are represented in various visual areas with LOC playing a key role in contextual guidance during visual search of objects in real scenes. PMID:23637176

  7. Neural substrates for visual pattern recognition learning in Igo.

    PubMed

    Itoh, Kosuke; Kitamura, Hideaki; Fujii, Yukihiko; Nakada, Tsutomu

    2008-08-28

    Different contexts require different visual pattern recognitions even for identical retinal inputs, and acquiring expertise in various visual-cognitive skills requires long-term training to become capable of recognizing relevant visual patterns in otherwise ambiguous stimuli. This 3-Tesla fMRI experiment exploited shikatsu-mondai (life-or-death problems) in the Oriental board game of Igo (Go) to identify the neural substrates supporting this gradual and adaptive learning. In shikatsu-mondai, the player adds stones to the board with the objective of making, or preventing the opponent from making nigan (two eyes), or the topology of figure of eight, with these stones. Without learning the game, passive viewing of shikatsu-mondai activated the occipito-temporal cortices, reflecting visual processing without the recognition of nigan. Several days after two-hour training, passive viewing of the same stimuli additionally activated the premotor area, intraparietal sulcus, and a visual area near the junction of the (left) intraparietal and transverse occipital sulci, demonstrating plastic changes in neuronal responsivity to the stimuli that contained indications of nigan. Behavioral tests confirmed that the participants had successfully learned to recognize nigan and solve the problems. In the newly activated regions, the level of neural activity while viewing the problems correlated positively with the level of achievement in learning. These results conformed to the hypothesis that recognition of a newly learned visual pattern is supported by the activities of fronto-parietal and visual cortical neurons that interact via newly formed functional connections among these regions. These connections would provide the medium by which the fronto-parietal system modulates visual cortical activity to attain behaviorally relevant perceptions. PMID:18621033

  8. Recognizing patterns of visual field loss using unsupervised machine learning

    NASA Astrophysics Data System (ADS)

    Yousefi, Siamak; Goldbaum, Michael H.; Zangwill, Linda M.; Medeiros, Felipe A.; Bowd, Christopher

    2014-03-01

    Glaucoma is a potentially blinding optic neuropathy that results in a decrease in visual sensitivity. Visual field abnormalities (decreased visual sensitivity on psychophysical tests) are the primary means of glaucoma diagnosis. One form of visual field testing is Frequency Doubling Technology (FDT), which tests sensitivity at 52 points within the visual field. Like other psychophysical tests used in clinical practice, FDT results yield specific patterns of defect indicative of the disease. We used a Gaussian mixture model with expectation maximization (GEM, where EM estimates the model parameters) to automatically separate FDT data into clusters of normal and abnormal eyes. Principal component analysis (PCA) was used to decompose each cluster into different axes (patterns). FDT measurements were obtained from 1,190 eyes with normal FDT results and 786 eyes with abnormal (i.e., glaucomatous) FDT results, recruited from a university-based, longitudinal, multi-center, clinical study on glaucoma. The GEM input was the 52-point FDT threshold sensitivities for all eyes. The optimal GEM model separated the FDT fields into 3 clusters. Cluster 1 contained 94% normal fields (94% specificity) and clusters 2 and 3 combined contained 77% abnormal fields (77% sensitivity). For clusters 1, 2 and 3 the optimal number of PCA-identified axes were 2, 2 and 5, respectively. GEM with PCA successfully separated FDT fields from healthy and glaucoma eyes and identified familiar glaucomatous patterns of loss.
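
    The EM fitting step at the heart of the GEM approach can be sketched in miniature. The code below is a hedged, one-dimensional illustration (two components, synthetic sensitivity values), not the authors' 52-dimensional model: it alternates an E-step (responsibilities) and an M-step (parameter updates) until the two Gaussians settle onto the low-sensitivity and high-sensitivity groups.

```python
import math

def em_gmm_1d(data, iters=50):
    """Fit a 2-component 1-D Gaussian mixture by expectation maximization."""
    mu = [min(data), max(data)]   # crude initialization at the data extremes
    var = [1.0, 1.0]
    pi = [0.5, 0.5]               # mixture weights
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        resp = []
        for x in data:
            p = [pi[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2 * var[k])) for k in range(2)]
            s = sum(p)
            resp.append([pk / s for pk in p])
        # M-step: weighted means, variances, and weights
        for k in range(2):
            nk = sum(r[k] for r in resp)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = max(sum(r[k] * (x - mu[k]) ** 2
                             for r, x in zip(resp, data)) / nk, 1e-6)
            pi[k] = nk / len(data)
    return mu, var, pi

# Synthetic "sensitivities": a low-mean (abnormal) and a high-mean (normal) group.
data = [2.0, 2.5, 3.0, 2.2, 2.8] + [8.0, 8.5, 9.0, 8.2, 8.8]
mu, var, pi = em_gmm_1d(data)
assert abs(min(mu) - 2.5) < 0.5 and abs(max(mu) - 8.5) < 0.5
```

    In the study proper, each eye is a 52-dimensional point and PCA is then run within each recovered cluster to extract the characteristic defect patterns.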

  9. Visual search strategies and decision making in baseball batting.

    PubMed

    Takeuchi, Takayuki; Inomata, Kimihiro

    2009-06-01

    The goal was to examine the differences in visual search strategies between expert and nonexpert baseball batters during the preparatory phase of a pitcher's pitching and accuracy and timing of swing judgments during the ball's trajectory. 14 members of a college team (Expert group), and graduate and college students (Nonexpert group), were asked to observe 10 pitches thrown by a pitcher and respond by pushing a button attached to a bat when they thought the bat should be swung to meet the ball (swing judgment). Their eye movements, accuracy, and the timing of the swing judgment were measured. The Expert group shifted their point of observation from the proximal part of the body such as the head, chest, or trunk of the pitcher to the pitching arm and the release point before the pitcher released a ball, while the gaze point of the Nonexpert group visually focused on the head and the face. The accuracy in swing judgments of the Expert group was significantly higher, and the timing of their swing judgments was significantly earlier. Expert baseball batters used visual search strategies to gaze at specific cues (the pitching arm of the pitcher) and were more accurate and relatively quicker at decision making than Nonexpert batters. PMID:19725330

  10. Impaired top-down control of visual search in schizophrenia.

    PubMed

    Gold, James M; Fuller, Rebecca L; Robinson, Benjamin M; Braun, Elsie L; Luck, Steven J

    2007-08-01

    This study examined top-down and bottom-up control of attention in a group of 24 patients with schizophrenia and 16 healthy volunteers. Participants completed a visual search task in which they reported whether a target oval contained a gap. The target was accompanied by 5, 11, or 17 distractors. On some trials, the target was identified by a highly salient feature that was shared by only 2 distractors, causing this feature to "pop out" from the display. This feature provided strong bottom-up information that could be used to direct attention to the target. On other trials, half of the distractors contained this feature, making these distractors no more salient than the other distractors and requiring greater use of top-down control to restrict processing to items containing this feature. Patient visual search efficiency closely approximated control performance in the first trial type. In contrast, patients demonstrated significant slowing of search in the second trial type, which required top-down control. These results suggest that schizophrenia does not impair the ability to implement the selection of a target when attention can be guided by bottom-up information, but it does impair the ability to use top-down control mechanisms to guide attention. These results extend prior studies that have focused on aspects of executive control in complex tasks and suggest that a similar underlying deficit may also impact the performance of perceptual systems. PMID:17544632

  11. Visual Object Pattern Separation Deficits in Nondemented Older Adults

    ERIC Educational Resources Information Center

    Toner, Chelsea K.; Pirogovsky, Eva; Kirwan, C. Brock; Gilbert, Paul E.

    2009-01-01

    Young and nondemented older adults were tested on a continuous recognition memory task requiring visual pattern separation. During the task, some objects were repeated across trials and some objects, referred to as lures, were presented that were similar to previously presented objects. The lures resulted in increased interference and an increased…

  12. Fractal Analysis of Radiologists Visual Scanning Pattern in Screening Mammography

    SciTech Connect

    Alamudun, Folami T; Yoon, Hong-Jun; Hudson, Kathy; Morin-Ducote, Garnetta; Tourassi, Georgia

    2015-01-01

    Several investigators have examined radiologists' visual scanning patterns with respect to features such as total time examining a case, time to initially hit true lesions, number of hits, etc. The purpose of this study was to examine the complexity of radiologists' visual scanning patterns when viewing 4-view mammographic cases, as they typically do in clinical practice. Gaze data were collected from 10 readers (3 breast imaging experts and 7 radiology residents) while reviewing 100 screening mammograms (24 normal, 26 benign, 50 malignant). The radiologists' scanpaths across the 4 mammographic views were mapped to a single 2-D image plane. Then, fractal analysis was applied to the derived scanpaths using the box counting method. For each case, the complexity of each radiologist's scanpath was estimated using fractal dimension. The association between gaze complexity, case pathology, case density, and radiologist experience was evaluated using a 3-factor fixed-effects ANOVA. The ANOVA showed that case pathology, breast density, and experience level are all independent predictors of visual scanning pattern complexity. Visual scanning patterns are significantly different for benign and malignant cases than for normal cases, as well as when breast parenchyma density changes.
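
    The box counting method named above has a compact implementation: cover the scanpath with grids of shrinking box size, count occupied boxes at each size, and take the fractal dimension as the slope of log(count) against log(1/size). The sketch below is a minimal version of that procedure (box sizes and the least-squares fit are generic choices, not the study's exact settings).

```python
import math

def box_count_dimension(points, sizes=(1, 2, 4, 8)):
    """Estimate the fractal dimension of a 2-D point set by box counting:
    fit the slope of log(count) vs. log(1/size) by least squares."""
    xs, ys = [], []
    for s in sizes:
        boxes = {(int(x // s), int(y // s)) for x, y in points}  # occupied boxes
        xs.append(math.log(1.0 / s))
        ys.append(math.log(len(boxes)))
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys)) /
            sum((x - mx) ** 2 for x in xs))

# A straight horizontal "scanpath" should have dimension close to 1;
# a gaze pattern that fills the image plane would approach 2.
line = [(float(i), 0.0) for i in range(64)]
d = box_count_dimension(line)
assert 0.9 < d < 1.1
```

    A more meandering, space-filling scanpath occupies more boxes at fine scales, so its estimated dimension rises toward 2, which is the sense in which the study treats dimension as gaze complexity.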

  13. Visual tracking method based on cuckoo search algorithm

    NASA Astrophysics Data System (ADS)

    Gao, Ming-Liang; Yin, Li-Ju; Zou, Guo-Feng; Li, Hai-Tao; Liu, Wei

    2015-07-01

    Cuckoo search (CS) is a new meta-heuristic optimization algorithm that is based on the obligate brood parasitic behavior of some cuckoo species in combination with the Lévy flight behavior of some birds and fruit flies. It has been found to be efficient in solving global optimization problems. An application of CS is presented to solve the visual tracking problem. The relationship between optimization and visual tracking is comparatively studied, and the parameters' sensitivity and adjustment of CS in the tracking system are experimentally studied. To demonstrate the tracking ability of a CS-based tracker, a comparative study of the tracking accuracy and speed of the CS-based tracker against six "state-of-the-art" trackers, namely particle filter, meanshift, PSO, ensemble tracker, fragments tracker, and compressive tracker, is presented. Comparative results show that the CS-based tracker outperforms the other trackers.
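
    A minimal sketch of the CS loop on a toy objective may help fix ideas; all parameter values here (nest count, step scale, abandonment fraction pa, Lévy exponent) are illustrative assumptions, not the paper's tracker settings. New candidate solutions are generated by Lévy flights (heavy-tailed steps drawn via Mantegna's algorithm), accepted greedily, and the worst fraction of nests is abandoned each generation.

```python
import math, random

def cuckoo_search(fitness, dim=2, n_nests=15, iters=200, pa=0.25, seed=42):
    """Cuckoo search sketch: Lévy-flight moves around existing nests, greedy
    replacement, and abandonment of the worst fraction pa of nests."""
    random.seed(seed)
    beta = 1.5  # Lévy exponent; sigma per Mantegna's algorithm
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)

    def levy():
        return random.gauss(0, sigma) / abs(random.gauss(0, 1)) ** (1 / beta)

    nests = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_nests)]
    best = min(nests, key=fitness)[:]
    start = fitness(best)                    # fitness of the best initial nest
    for _ in range(iters):
        for i in range(n_nests):
            cand = [x + 0.05 * levy() for x in nests[i]]   # Lévy flight
            j = random.randrange(n_nests)
            if fitness(cand) < fitness(nests[j]):          # greedy replacement
                nests[j] = cand
        nests.sort(key=fitness)
        for k in range(int(n_nests * (1 - pa)), n_nests):  # abandon worst nests
            nests[k] = [random.uniform(-5, 5) for _ in range(dim)]
        if fitness(nests[0]) < fitness(best):
            best = nests[0][:]
    return best, start

sphere = lambda p: sum(x * x for x in p)   # toy objective to minimize
best, start = cuckoo_search(sphere)
assert sphere(best) <= start  # the best nest is never discarded
```

    In the tracking application, the "fitness" of a candidate would instead score how well an image patch at that position matches the target appearance model.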

  14. Audio-visual object search is changed by bilingual experience.

    PubMed

    Chabal, Sarah; Schroeder, Scott R; Marian, Viorica

    2015-11-01

    The current study examined the impact of language experience on the ability to efficiently search for objects in the face of distractions. Monolingual and bilingual participants completed an ecologically-valid, object-finding task that contained conflicting, consistent, or neutral auditory cues. Bilinguals were faster than monolinguals at locating the target item, and eye movements revealed that this speed advantage was driven by bilinguals' ability to overcome interference from visual distractors and focus their attention on the relevant object. Bilinguals fixated the target object more often than did their monolingual peers, who, in contrast, attended more to a distracting image. Moreover, bilinguals', but not monolinguals', object-finding ability was positively associated with their executive control ability. We conclude that bilinguals' executive control advantages extend to real-world visual processing and object finding within a multi-modal environment. PMID:26272368

  15. Visualizing Information in the Biological Sciences: Using WebTheme to Visualize Internet Search Results

    SciTech Connect

    Buxton, Karen A.; Lembo, Mary Frances

    2003-08-11

    Information visualization is an effective method for displaying large data sets in a pictorial or graphical format. The visualization aids researchers and analysts in understanding data by evaluating the content and grouping documents together around themes and concepts. With the ever-growing amount of information available on the Internet, additional methods are needed to analyze and interpret data. WebTheme allows users to harvest thousands of web pages and automatically organize and visualize their contents. WebTheme is an interactive web-based product that provides a new way to investigate and understand large volumes of HTML text-based information. It has the ability to harvest data from the World Wide Web using search terms and selected search engines or by following URLs chosen by the user. WebTheme enables users to rapidly identify themes and concepts found among thousands of pages of text harvested and provides a suite of tools to further explore and analyze special areas of interest within a data set. WebTheme was developed at Pacific Northwest National Laboratory (PNNL) for NASA as a method for generating meaningful, thematic, and interactive visualizations. Through a collaboration with the Laboratory's Information Science and Engineering (IS&E) group, information specialists are providing demonstrations of WebTheme and assisting researchers in analyzing their results. This paper will provide a brief overview of the WebTheme product, and the ways in which the Hanford Technical Library's information specialists are assisting researchers in using this product.

  16. Simulating cooperative behavior in human collective search pattern.

    PubMed

    Li, Keping; Gao, Ziyou

    2012-08-01

    Great natural disasters occur frequently around the world, and in their aftermath large-scale cooperative searches for missing persons become urgent. Because experiments cannot readily reproduce disaster rescue processes, our understanding of how to regulate collective cooperative searches is still elusive. Here we use an improved Lévy walk model to simulate the rescuers' movements, in which direction choice is considered. In our study, we systematically analyze the diffusive mechanism of rescuers' movements and find that the search pattern shows a high degree of spatial order displaying some inherent features. Our results also indicate that cooperative search promotes a determinate dispersal of rescuers' movements. PMID:22350073
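
    The basic ingredients of a Lévy walk with a direction-choice rule can be sketched as follows. This is a hedged toy model, not the authors' improved model: step lengths are drawn from a power law by inverse-transform sampling, and direction choice is approximated by a simple persistence bias toward the previous heading (the exponent, minimum step, and persistence parameter are all illustrative).

```python
import math, random

def levy_walk(steps=500, mu=2.0, l_min=1.0, persistence=0.8, seed=1):
    """2-D Lévy walk: power-law step lengths P(l) ~ l^(-mu) with headings
    blended between the previous direction and a fresh random one."""
    random.seed(seed)
    x = y = 0.0
    theta = random.uniform(0, 2 * math.pi)
    path = [(x, y)]
    for _ in range(steps):
        u = 1.0 - random.random()                   # u in (0, 1]
        l = l_min * u ** (-1.0 / (mu - 1.0))        # invert the power-law CDF
        theta = (persistence * theta
                 + (1 - persistence) * random.uniform(0, 2 * math.pi))
        x, y = x + l * math.cos(theta), y + l * math.sin(theta)
        path.append((x, y))
    return path

path = levy_walk()
lengths = [math.dist(a, b) for a, b in zip(path, path[1:])]
assert len(path) == 501
assert min(lengths) >= 0.99                 # steps never shorter than l_min
assert max(lengths) > 5 * min(lengths)      # heavy tail: occasional long jumps
```

    The mixture of many short steps with occasional long relocations is the signature that distinguishes Lévy-type search from ordinary Brownian diffusion.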

  17. Evolving the stimulus to fit the brain: a genetic algorithm reveals the brain's feature priorities in visual search.

    PubMed

    Van der Burg, Erik; Cass, John; Theeuwes, Jan; Alais, David

    2015-01-01

    How does the brain find objects in cluttered visual environments? For decades researchers have employed the classic visual search paradigm to answer this question using factorial designs. Although such approaches have yielded important information, they represent only a tiny fraction of the possible parametric space. Here we take a novel approach, using a genetic algorithm (GA) to discover the way the brain solves visual search in complex environments, free from experimenter bias. Participants searched a series of complex displays, and those supporting fastest search were selected to reproduce (survival of the fittest). Their display properties (genes) were crossed and combined to create a new generation of "evolved" displays. Displays evolved quickly over generations towards a stable, efficiently searched array. Color properties evolved first, followed by orientation. The evolved displays also contained spatial patterns suggesting a coarse-to-fine search strategy. We argue that this behavioral performance-driven GA reveals the way the brain selects information during visual search in complex environments. We anticipate that our approach can be adapted to a variety of sensory and cognitive questions that have proven too intractable for factorial designs. PMID:25761347
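
    The select-cross-mutate loop described above can be sketched with a toy GA. In the sketch, a "display" is a bit-string genome and measured search speed is stood in by a trivial bit-count fitness; population size, mutation rate, and the genome encoding are all illustrative assumptions rather than the study's design.

```python
import random

def evolve_displays(fitness, genome_len=12, pop=20, gens=60, p_mut=0.05, seed=3):
    """Toy GA: the fittest 'displays' survive and reproduce via one-point
    crossover plus per-bit mutation."""
    random.seed(seed)
    popn = [[random.randint(0, 1) for _ in range(genome_len)] for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=fitness, reverse=True)
        parents = popn[:pop // 2]                  # survival of the fittest
        children = []
        while len(children) < pop - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, genome_len)  # one-point crossover
            child = a[:cut] + b[cut:]
            child = [g ^ 1 if random.random() < p_mut else g for g in child]
            children.append(child)
        popn = parents + children
    return max(popn, key=fitness)

# Stand-in fitness: pretend search is fastest when all display "genes" are on.
fit = lambda g: sum(g)
best = evolve_displays(fit)
assert fit(best) >= 10  # near-optimal genome after evolution
```

    In the experiment, fitness came from human reaction times, so the population converged on whatever display properties the visual system searches most efficiently.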

  18. Toward unsupervised outbreak detection through visual perception of new patterns

    PubMed Central

    Lévy, Pierre P; Valleron, Alain-Jacques

    2009-01-01

    Background Statistical algorithms are routinely used to detect outbreaks of well-defined syndromes, such as influenza-like illness. These methods cannot be applied to the detection of emerging diseases for which no preexisting information is available. This paper presents a method aimed at facilitating the detection of outbreaks, when there is no a priori knowledge of the clinical presentation of cases. Methods The method uses a visual representation of the symptoms and diseases coded during a patient consultation according to the International Classification of Primary Care 2nd version (ICPC-2). The surveillance data are transformed into color-coded cells, ranging from white to red, reflecting the increasing frequency of observed signs. They are placed in a graphic reference frame mimicking body anatomy. Simple visual observation of color-change patterns over time, concerning a single code or a combination of codes, enables detection in the setting of interest. Results The method is demonstrated through retrospective analyses of two data sets: description of the patients referred to the hospital by their general practitioners (GPs) participating in the French Sentinel Network and description of patients directly consulting at a hospital emergency department (HED). Informative image color-change alert patterns emerged in both cases: the health consequences of the August 2003 heat wave were visualized with GPs' data (but passed unnoticed with conventional surveillance systems), and the flu epidemics, which are routinely detected by standard statistical techniques, were recognized visually with HED data. Conclusion Using human visual pattern-recognition capacities to detect the onset of unexpected health events implies a convenient image representation of epidemiological surveillance and well-trained "epidemiology watchers". Once these two conditions are met, one could imagine that the epidemiology watchers could signal epidemiological alerts, based on "image walls" presenting the local, regional and/or national surveillance patterns, with specialized field epidemiologists assigned to validate the signals detected. PMID:19515246

  19. Vibrio coralliilyticus Search Patterns across an Oxygen Gradient

    PubMed Central

    Winn, Karina M.; Bourne, David G.; Mitchell, James G.

    2013-01-01

    The coral pathogen Vibrio coralliilyticus shows a specific chemotactic search pattern preference for oxic and anoxic conditions, with the newly identified 3-step flick search pattern dominating the patterns used in oxic conditions. We analyzed motile V. coralliilyticus cells for behavioral changes with varying oxygen concentrations to mimic the natural coral environment exhibited during light and dark conditions. Results showed that 3-step flicks were 1.4× (P = 0.006) more likely to occur in oxic conditions than anoxic conditions, with mean values of 18 flicks (95% CI = 0.4, n = 53) identified in oxic regions compared to 13 (95% CI = 0.5, n = 38) in anoxic regions. In contrast, run-and-reverse search patterns were more frequent in anoxic regions, with a mean value of 15 (95% CI = 0.7, n = 46), compared to a mean value of 10 (95% CI = 0.8, n = 29) in oxic regions. Straight swimming search patterns remained similar across oxic and anoxic regions, with a mean value of 13 (95% CI = 0.7, n = oxic: 13, anoxic: 14). V. coralliilyticus remained motile in both oxic and anoxic conditions; however, the 3-step flick search pattern predominated in oxic conditions. This result provides an approach to further investigate the 3-step flick. PMID:23874480

  20. "Hot" Facilitation of "Cool" Processing: Emotional Distraction Can Enhance Priming of Visual Search

    ERIC Educational Resources Information Center

    Kristjansson, Arni; Oladottir, Berglind; Most, Steven B.

    2013-01-01

    Emotional stimuli often capture attention and disrupt effortful cognitive processing. However, cognitive processes vary in the degree to which they require effort. We investigated the impact of emotional pictures on visual search and on automatic priming of search. Observers performed visual search after task-irrelevant neutral or emotionally…

  1. Response Selection in Visual Search: The Influence of Response Compatibility of Nontargets

    ERIC Educational Resources Information Center

    Starreveld, Peter A.; Theeuwes, Jan; Mortier, Karen

    2004-01-01

    The authors used visual search tasks in which components of the classic flanker task (B. A. Eriksen & C. W. Eriksen, 1974) were introduced. In several experiments the authors obtained evidence of parallel search for a target among distractor elements. Therefore, 2-stage models of visual search predict no effect of the identity of those…

  2. "Hot" Facilitation of "Cool" Processing: Emotional Distraction Can Enhance Priming of Visual Search

    ERIC Educational Resources Information Center

    Kristjansson, Arni; Oladottir, Berglind; Most, Steven B.

    2013-01-01

    Emotional stimuli often capture attention and disrupt effortful cognitive processing. However, cognitive processes vary in the degree to which they require effort. We investigated the impact of emotional pictures on visual search and on automatic priming of search. Observers performed visual search after task-irrelevant neutral or emotionally…

  3. Searching for Signs, Symbols, and Icons: Effects of Time of Day, Visual Complexity, and Grouping

    ERIC Educational Resources Information Center

    McDougall, Sine; Tyrer, Victoria; Folkard, Simon

    2006-01-01

    Searching for icons, symbols, or signs is an integral part of tasks involving computer or radar displays, head-up displays in aircraft, or attending to road traffic signs. Icons therefore need to be designed to optimize search times, taking into account the factors likely to slow down visual search. Three factors likely to adversely affect visual…

  4. Performance Evaluation of Full Search Equivalent Pattern Matching Algorithms.

    PubMed

    Wanli Ouyang; Tombari, F; Mattoccia, S; Di Stefano, L; Wai-Kuen Cham

    2012-01-01

    Pattern matching is widely used in signal processing, computer vision, and image and video processing. Full-search-equivalent algorithms accelerate the pattern matching process while yielding exactly the same result as a full search. This paper proposes an analysis and comparison of state-of-the-art algorithms for full-search-equivalent pattern matching. Our intention is that the data sets and tests used in our evaluation will serve as a benchmark for testing future pattern matching algorithms, and that the analysis of state-of-the-art algorithms may inspire new fast algorithms. We also propose extensions of the evaluated algorithms and show that they outperform the original formulations. PMID:21576734
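The "full search" baseline that such equivalent algorithms must reproduce exactly is easy to state. A minimal sketch (illustrative only, not from the paper; the function name and toy data are mine) of exhaustive template matching by sum of squared differences:

```python
# Illustrative brute-force "full search": exhaustive template matching by
# sum of squared differences (SSD). A full-search-equivalent algorithm must
# return exactly this best position, only faster.

def full_search_ssd(image, template):
    """Return (row, col) of the best-matching placement of template in image."""
    ih, iw = len(image), len(image[0])
    th, tw = len(template), len(template[0])
    best_ssd, best_pos = float("inf"), None
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            ssd = sum(
                (image[r + i][c + j] - template[i][j]) ** 2
                for i in range(th)
                for j in range(tw)
            )
            if ssd < best_ssd:
                best_ssd, best_pos = ssd, (r, c)
    return best_pos

image = [
    [0, 0, 0, 0],
    [0, 9, 8, 0],
    [0, 7, 6, 0],
    [0, 0, 0, 0],
]
template = [[9, 8], [7, 6]]
print(full_search_ssd(image, template))  # -> (1, 1)
```

The double loop visits every placement, which is what makes the exact speed-ups studied in the paper worthwhile.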

  5. Local and global factors of similarity in visual search.

    PubMed

    von Grünau, M; Dubé, S; Galera, C

    1994-05-01

    Effects of the similarity between target and distractors in a visual search task were investigated in several experiments. Both familiar (numerals and letters) and unfamiliar (connected figures in a 5 x 5 matrix) stimuli were used. The observer had to report on the presence or absence of a target among a variable number of homogeneous distractors as fast and as accurately as possible. It was found that physical difference had the same clear effect on processing time for familiar and for unfamiliar stimuli: processing time decreased monotonically with increasing physical difference. Distractors unrelated to the target and those related to the target by a simple transformation (180 degrees rotation, horizontal or vertical reflection) were also compared, while the physical difference was kept constant. For familiar stimuli, transformational relatedness increased processing time in comparison with that for unrelated stimulus pairs. It was further shown in a scaling experiment that this effect could be accounted for by the amount of perceived similarity of the target-distractor pairs. For unfamiliar stimuli, transformational relatedness did have a smaller and less pronounced effect. Various comparable unrelated distractors resulted in a full range of processing times. Results from a similarity scaling experiment correlated well with the outcome of the experiments with unfamiliar stimuli. These results are interpreted in terms of an underlying continuum of perceived similarity as the basis of the speed of visual search, rather than a dichotomy of parallel versus serial processing. PMID:8008558

  6. Enhanced Visual Search in Infancy Predicts Emerging Autism Symptoms.

    PubMed

    Gliga, Teodora; Bedford, Rachael; Charman, Tony; Johnson, Mark H

    2015-06-29

    In addition to core symptoms, i.e., social interaction and communication difficulties and restricted and repetitive behaviors, autism is also characterized by aspects of superior perception. One well-replicated finding is that of superior performance in visual search tasks, in which participants have to indicate the presence of an odd-one-out element among a number of foils. Whether these aspects of superior perception contribute to the emergence of core autism symptoms remains debated. Perceptual and social interaction atypicalities could reflect co-expressed but biologically independent pathologies, as suggested by a "fractionable" phenotype model of autism. A developmental test of this hypothesis is now made possible by longitudinal cohorts of infants at high risk, such as those of younger siblings of children with autism spectrum disorder (ASD). Around 20% of younger siblings are diagnosed with autism themselves, and up to another 30% manifest elevated levels of autism symptoms. We used eye tracking to measure spontaneous orienting to letter targets (O, S, V, and +) presented among distractors (the letter X; Figure 1). At 9 and 15 months, emerging autism symptoms were assessed using the Autism Observation Scale for Infants (AOSI), and at 2 years of age, they were assessed using the Autism Diagnostic Observation Schedule (ADOS). Enhanced visual search performance at 9 months predicted a higher level of autism symptoms at 15 months and at 2 years. Infant perceptual atypicalities are thus intrinsically linked to the emerging autism phenotype. PMID:26073135

  7. Patterns in the sky: Natural visualization of aircraft flow fields

    NASA Technical Reports Server (NTRS)

    Campbell, James F.; Chambers, Joseph R.

    1994-01-01

    The objective of the current publication is to present the collection of flight photographs to illustrate the types of flow patterns that were visualized and to present qualitative correlations with computational and wind tunnel results. Initially in section 2, the condensation process is discussed, including a review of relative humidity, vapor pressure, and factors which determine the presence of visible condensate. Next, outputs from computer code calculations are postprocessed by using water-vapor relationships to determine if computed values of relative humidity in the local flow field correlate with the qualitative features of the in-flight condensation patterns. The photographs are then presented in section 3 by flow type and subsequently in section 4 by aircraft type to demonstrate the variety of condensed flow fields that was visualized for a wide range of aircraft and flight maneuvers.

  8. Pattern Visual Evoked Potentials Elicited by Organic Electroluminescence Screen

    PubMed Central

    Matsumoto, Celso Soiti; Shinoda, Kei; Matsumoto, Harue; Funada, Hideaki; Minoda, Haruka

    2014-01-01

    Purpose. To determine whether organic electroluminescence (OLED) screens can be used as visual stimulators to elicit pattern-reversal visual evoked potentials (p-VEPs). Method. Checkerboard patterns were generated on a conventional cathode-ray tube (S710, Compaq Computer Co., USA) screen and on an OLED (17 inches, 320 × 230 mm, PVM-1741, Sony, Tokyo, Japan) screen. The time course of the luminance changes of each monitor was measured with a photodiode. The p-VEPs elicited by these two screens were recorded from 15 eyes of 9 healthy volunteers (22.0 ± 0.8 years). Results. The OLED screen had a constant time delay from the onset of the trigger signal to the start of the luminescence change. The delay during the reversal phase from black to white for the pattern was 1.0 msec on the cathode-ray tube (CRT) screen and 0.5 msec on the OLED screen. No significant differences in the amplitudes of P100 and the implicit times of N75 and P100 were observed in the p-VEPs elicited by the CRT and the OLED screens. Conclusion. The OLED screen can be used as a visual stimulator to elicit p-VEPs; however the time delay and the specific properties in the luminance change must be taken into account. PMID:25197652

  9. Relationships among balance, visual search, and lacrosse-shot accuracy.

    PubMed

    Marsh, Darrin W; Richard, Leon A; Verre, Arlene B; Myers, Jay

    2010-06-01

    The purpose of this study was to examine variables that may contribute to shot accuracy in women's college lacrosse. A convenience sample of 15 healthy women's National Collegiate Athletic Association Division III College lacrosse players aged 18-23 years (mean ± SD, 20.27 ± 1.67) participated in the study. Four experimental variables were examined: balance, visual search, hand grip strength, and shoulder joint position sense. Balance was measured by the Biodex Stability System (BSS), and visual search was measured by the Trail-Making Test Part A (TMTA) and Trail-Making Test Part B (TMTB). Hand-grip strength was measured by a standard hand dynamometer, and shoulder joint position sense was measured using a modified inclinometer. All measures were taken in an indoor setting. These experimental variables were then compared with lacrosse-shot error that was measured indoors using a high-speed video camera recorder and a specialized L-shaped apparatus. A Stalker radar gun measured lacrosse-shot velocity. The mean lacrosse-shot error was 15.17 cm with a mean lacrosse-shot velocity of 17.14 m·s⁻¹ (38.35 mph). Lower scores on the BSS level 8 eyes open (BSS L8 E/O) test and TMTB were positively related to less lacrosse-shot error (r=0.760, p=0.011) and (r=0.519, p=0.048), respectively. Relations were not significant between lacrosse-shot error and grip strength (r=0.191, p=0.496), lacrosse-shot error and BSS level 8 eyes closed (BSS L8 E/C) (r=0.501, p=0.102), lacrosse-shot error and BSS level 4 eyes open (BSS L4 E/O) (r=0.313, p=0.378), lacrosse-shot error and BSS level 4 eyes closed (BSS L4 E/C) (r=-0.029, p=0.936), lacrosse-shot error and shoulder joint position sense (r=-0.509, p=0.055), and between lacrosse-shot error and TMTA (r=0.375, p=0.168). The results reveal that greater levels of shot accuracy may be related to greater levels of visual search and balance ability in women's college lacrosse athletes. PMID:20508452

  10. Attention modulates visual-tactile interaction in spatial pattern matching.

    PubMed

    Göschl, Florian; Engel, Andreas K; Friese, Uwe

    2014-01-01

    Factors influencing crossmodal interactions are manifold and operate in a stimulus-driven, bottom-up fashion, as well as via top-down control. Here, we evaluate the interplay of stimulus congruence and attention in a visual-tactile task. To this end, we used a matching paradigm requiring the identification of spatial patterns that were concurrently presented visually on a computer screen and haptically to the fingertips by means of a Braille stimulator. Stimulation in our paradigm was always bimodal with only the allocation of attention being manipulated between conditions. In separate blocks of the experiment, participants were instructed to (a) focus on a single modality to detect a specific target pattern, (b) pay attention to both modalities to detect a specific target pattern, or (c) to explicitly evaluate if the patterns in both modalities were congruent or not. For visual as well as tactile targets, congruent stimulus pairs led to quicker and more accurate detection compared to incongruent stimulation. This congruence facilitation effect was more prominent under divided attention. Incongruent stimulation led to behavioral decrements under divided attention as compared to selectively attending a single sensory channel. Additionally, when participants were asked to evaluate congruence explicitly, congruent stimulation was associated with better performance than incongruent stimulation. Our results extend previous findings from audiovisual studies, showing that stimulus congruence also resulted in behavioral improvements in visuotactile pattern matching. The interplay of stimulus processing and attentional control seems to be organized in a highly flexible fashion, with the integration of signals depending on both bottom-up and top-down factors, rather than occurring in an 'all-or-nothing' manner. PMID:25203102

  11. Memory under pressure: secondary-task effects on contextual cueing of visual search.

    PubMed

    Annac, Efsun; Manginelli, Angela A; Pollmann, Stefan; Shi, Zhuanghua; Müller, Hermann J; Geyer, Thomas

    2013-01-01

    Repeated display configurations improve visual search. Recently, the question has arisen whether this contextual cueing effect (Chun & Jiang, 1998) is itself mediated by attention, both in terms of selectivity and processing resources deployed. While it is accepted that selective attention modulates contextual cueing (Jiang & Leung, 2005), there is an ongoing debate whether the cueing effect is affected by a secondary working memory (WM) task, specifically at which stage WM influences the cueing effect: the acquisition of configural associations (e.g., Travis, Mattingley, & Dux, 2013) versus the expression of learned associations (e.g., Manginelli, Langer, Klose, & Pollmann, 2013). The present study re-investigated this issue. Observers performed a visual search in combination with a spatial WM task. The latter was applied on either early or late search trials--so as to examine whether WM load hampers the acquisition of or retrieval from contextual memory. Additionally, the WM and search tasks were performed either temporally in parallel or in succession--so as to permit the effects of spatial WM load to be dissociated from those of executive load. The secondary WM task was found to affect cueing in late, but not early, experimental trials--though only when the search and WM tasks were performed in parallel. This pattern suggests that contextual cueing involves a spatial WM resource, with spatial WM providing a workspace linking the current search array with configural long-term memory; as a result, occupying this workspace by a secondary WM task hampers the expression of learned configural associations. PMID:24190911

  12. Task Specificity and the Influence of Memory on Visual Search: Comment on Vo and Wolfe (2012)

    ERIC Educational Resources Information Center

    Hollingworth, Andrew

    2012-01-01

    Recent results from Vo and Wolfe (2012b) suggest that the application of memory to visual search may be task specific: Previous experience searching for an object facilitated later search for that object, but object information acquired during a different task did not appear to transfer to search. The latter inference depended on evidence that a…

  13. Innovative pattern reversal displays for visual electrophysiological studies.

    PubMed

    Toft-Nielsen, J; Bohorquez, J; Ozdamar, O

    2011-01-01

    Pattern Reversal (PR) stimulation is a frequently used tool in the evaluation of the visual pathway. The PR stimulus consists of a field of black and white segments (usually checks or bars) of constant luminance, which change phase (black to white and white to black) at a given reversal rate. The Pattern Electroretinogram (PERG) is a biological potential that is evoked from the retina upon viewing a PR display. Likewise, the Pattern Visual Evoked Potential (PVEP) is a biological potential recorded from the occipital cortex when viewing a PR display. Typically, PR stimuli are presented on a Cathode Ray Tube (CRT) or Liquid Crystal Display (LCD) monitor. This paper presents three modalities for generating pattern reversal stimuli. The three methods are as follows: a display consisting of an array of Light Emitting Diodes (LEDs), a display comprised of two miniature projectors, and a display utilizing a modified LCD display in conjunction with a variable polarizer. The proposed stimulators allow for the recording of PERG and PVEP waveforms at much higher rates than is possible with conventional stimulators. Additionally, all three of the alternative PR displays will be able to take advantage of advanced analysis techniques, such as the recently developed Continuous Loop Averaging Deconvolution (CLAD) algorithm. PMID:22254729
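The constant-luminance reversal described above can be illustrated with a toy model (my simplification, not the authors' stimulus code): a binary checkerboard whose phase flips between frames, so every element swaps black and white while the mean luminance of the field stays constant.

```python
# Toy pattern-reversal stimulus: a 0/1 luminance checkerboard; phase=1 is
# the reversed frame. Every element flips, but mean luminance is unchanged.

def checkerboard(rows, cols, phase=0):
    """Binary checkerboard; phase toggles which squares are white."""
    return [[(r + c + phase) % 2 for c in range(cols)] for r in range(rows)]

def mean_luminance(frame):
    return sum(map(sum, frame)) / (len(frame) * len(frame[0]))

frame_a = checkerboard(4, 4, phase=0)
frame_b = checkerboard(4, 4, phase=1)   # the reversal

print(mean_luminance(frame_a), mean_luminance(frame_b))  # 0.5 0.5
print(frame_a[0][0], frame_b[0][0])                      # 0 1
```

Because the mean is constant across reversals, the evoked response reflects pattern contrast rather than a flash of overall brightness.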

  14. Pupil diameter reflects uncertainty in attentional selection during visual search

    PubMed Central

    Geng, Joy J.; Blumenfeld, Zachary; Tyson, Terence L.; Minzenberg, Michael J.

    2015-01-01

    Pupil diameter has long been used as a metric of cognitive processing. However, recent advances suggest that the cognitive sources of change in pupil size may reflect LC-NE function and the calculation of unexpected uncertainty in decision processes (Aston-Jones and Cohen, 2005; Yu and Dayan, 2005). In the current experiments, we explored the role of uncertainty in attentional selection on task-evoked changes in pupil diameter during visual search. We found that task-evoked changes in pupil diameter were related to uncertainty during attentional selection as measured by reaction time (RT) and performance accuracy (Experiments 1-2). Control analyses demonstrated that the results are unlikely to be due to error monitoring or response uncertainty. Our results suggest that pupil diameter can be used as an implicit metric of uncertainty in ongoing attentional selection requiring effortful control processes. PMID:26300759

  15. Underestimating numerosity of items in visual search tasks.

    PubMed

    Cassenti, Daniel N; Kelley, Troy D; Ghirardelli, Thomas G

    2010-10-01

    Previous research on numerosity judgments addressed attended items, while the present research addresses underestimation for unattended items in visual search tasks. One potential cause of underestimation for unattended items is that estimates of quantity may depend on viewing a large portion of the display within foveal vision. Another theory follows from the occupancy model: estimating quantity of items in greater proximity to one another increases the likelihood of an underestimation error. Three experimental manipulations addressed aspects of underestimation for unattended items: the size of the distracters, the distance of the target from fixation, and whether items were clustered together. Results suggested that the underestimation effect for unattended items was best explained within a Gestalt grouping framework. PMID:21162441

  16. Enhanced Visual Search in Infancy Predicts Emerging Autism Symptoms

    PubMed Central

    Gliga, Teodora; Bedford, Rachael; Charman, Tony; Johnson, Mark H.; Baron-Cohen, Simon; Bolton, Patrick; Cheung, Celeste; Davies, Kim; Liew, Michelle; Fernandes, Janice; Gammer, Issy; Maris, Helen; Salomone, Erica; Pasco, Greg; Pickles, Andrew; Ribeiro, Helena; Tucker, Leslie

    2015-01-01

    In addition to core symptoms, i.e., social interaction and communication difficulties and restricted and repetitive behaviors, autism is also characterized by aspects of superior perception [1]. One well-replicated finding is that of superior performance in visual search tasks, in which participants have to indicate the presence of an odd-one-out element among a number of foils [2–5]. Whether these aspects of superior perception contribute to the emergence of core autism symptoms remains debated [4, 6]. Perceptual and social interaction atypicalities could reflect co-expressed but biologically independent pathologies, as suggested by a “fractionable” phenotype model of autism [7]. A developmental test of this hypothesis is now made possible by longitudinal cohorts of infants at high risk, such as those of younger siblings of children with autism spectrum disorder (ASD). Around 20% of younger siblings are diagnosed with autism themselves [8], and up to another 30% manifest elevated levels of autism symptoms [9]. We used eye tracking to measure spontaneous orienting to letter targets (O, S, V, and +) presented among distractors (the letter X; Figure 1). At 9 and 15 months, emerging autism symptoms were assessed using the Autism Observation Scale for Infants (AOSI; [10]), and at 2 years of age, they were assessed using the Autism Diagnostic Observation Schedule (ADOS; [11]). Enhanced visual search performance at 9 months predicted a higher level of autism symptoms at 15 months and at 2 years. Infant perceptual atypicalities are thus intrinsically linked to the emerging autism phenotype. PMID:26073135

  17. Searching for the right word: Hybrid visual and memory search for words.

    PubMed

    Boettcher, Sage E P; Wolfe, Jeremy M

    2015-05-01

    In "hybrid search" (Wolfe Psychological Science, 23(7), 698-703, 2012), observers search through visual space for any of multiple targets held in memory. With photorealistic objects as the stimuli, response times (RTs) increase linearly with the visual set size and logarithmically with the memory set size, even when over 100 items are committed to memory. It is well-established that pictures of objects are particularly easy to memorize (Brady, Konkle, Alvarez, & Oliva Proceedings of the National Academy of Sciences, 105, 14325-14329, 2008). Would hybrid-search performance be similar if the targets were words or phrases, in which word order can be important, so that the processes of memorization might be different? In Experiment 1, observers memorized 2, 4, 8, or 16 words in four different blocks. After passing a memory test, confirming their memorization of the list, the observers searched for these words in visual displays containing two to 16 words. Replicating Wolfe (Psychological Science, 23(7), 698-703, 2012), the RTs increased linearly with the visual set size and logarithmically with the length of the word list. The word lists of Experiment 1 were random. In Experiment 2, words were drawn from phrases that observers reported knowing by heart (e.g., "London Bridge is falling down"). Observers were asked to provide four phrases, ranging in length from two words to no less than 20 words (range 21-86). All words longer than two characters from the phrase constituted the target list. Distractor words were matched for length and frequency. Even with these strongly ordered lists, the results again replicated the curvilinear function of memory set size seen in hybrid search. One might expect to find serial position effects, perhaps reducing the RTs for the first (primacy) and/or the last (recency) members of a list (Atkinson & Shiffrin, 1968; Murdock Journal of Experimental Psychology, 64, 482-488, 1962). Surprisingly, we showed no reliable effects of word order. Thus, in "London Bridge is falling down," "London" and "down" were found no faster than "falling." PMID:25788035
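The reported pattern (RTs linear in the visual set size, logarithmic in the memory set size) can be written as RT ≈ a + b·V + c·log2(M). A sketch with invented coefficients, not fitted to the paper's data:

```python
import math

# Illustrative model of the hybrid-search result (coefficients are made up):
# RT grows linearly with the visual set size V and logarithmically with the
# memory set size M.

def predicted_rt(v_size, m_size, a=400.0, b=40.0, c=120.0):
    """Predicted response time in ms: a + b*V + c*log2(M)."""
    return a + b * v_size + c * math.log2(m_size)

# Doubling the memorized list (8 -> 16 words) adds only a constant c ms,
# while doubling the display (8 -> 16 words) adds b ms per extra item.
print(predicted_rt(8, 16) - predicted_rt(8, 8))   # 120.0
print(predicted_rt(16, 8) - predicted_rt(8, 8))   # 320.0
```

The contrast between the two deltas is the signature result: memory set size is cheap (logarithmic), visual set size is expensive (linear).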

  18. Advanced analysis of free visual exploration patterns in schizophrenia

    PubMed Central

    Sprenger, Andreas; Friedrich, Monique; Nagel, Matthias; Schmidt, Christiane S.; Moritz, Steffen; Lencer, Rebekka

    2013-01-01

    Background: Visual scanpath analyses provide important information about attention allocation and attention shifting during visual exploration of social situations. This study investigated whether patients with schizophrenia simply show restricted free visual exploration behavior reflected by reduced saccade frequency and increased fixation duration or whether patients use qualitatively different exploration strategies than healthy controls. Methods: Scanpaths of 32 patients with schizophrenia and 33 age-matched healthy controls were assessed while participants freely explored six photos of daily life situations (20 s/photo) evaluated for cognitive complexity and emotional strain. Using fixation and saccade parameters, we compared temporal changes in exploration behavior, cluster analyses, attentional landscapes, and analyses of scanpath similarities between both groups. Results: We found fewer fixation clusters, longer fixation durations within a cluster, fewer changes between clusters, and a greater increase of fixation duration over time in patients compared to controls. Scanpath patterns and attentional landscapes in patients also differed significantly from those of controls. Generally, cognitive complexity and emotional strain had significant effects on visual exploration behavior. This effect was similar in both groups, as were physical properties of fixation locations. Conclusions: Longer attention allocation to a given feature in a scene and fewer attention shifts in patients suggest a more focal processing mode compared to a more ambient exploration strategy in controls. These visual exploration alterations were present in patients independently of cognitive complexity, emotional strain, or physical properties of visual cues, implying that they represent a rather general deficit. Despite this impairment, patients were able to adapt their scanning behavior to changes in cognitive complexity and emotional strain similar to controls. PMID:24130547

  19. Case role filling as a side effect of visual search

    SciTech Connect

    Marburger, H.; Wahlster, W.

    1983-01-01

    This paper addresses the problem of generating communicatively adequate extended responses in the absence of specific knowledge concerning the intentions of the questioner. The authors formulate and justify a heuristic for the selection of optional deep case slots not contained in the question as candidates for the additional information contained in an extended response. It is shown that, in a visually present domain of discourse, case role filling for the construction of an extended response can be regarded as a side effect of the visual search necessary to answer a question containing a locomotion verb. The paper describes the various representation constructions used in the German language dialog system HAM-ANS for dealing with the semantics of locomotion verbs and illustrates their use in generating extended responses. In particular, it outlines the structure of the geometrical scene description, the representation of events in a logic-oriented semantic representation language, the case-frame lexicon and the representation of the referential semantics based on the flavor system. The emphasis is on a detailed presentation of the application of object-oriented programming methods for coping with the semantics of locomotion verbs. The process of generating an extended response is illustrated by an extensively annotated trace. 13 references.

  20. Expectations developed over multiple timescales facilitate visual search performance

    PubMed Central

    Gekas, Nikos; Seitz, Aaron R.; Seriès, Peggy

    2015-01-01

    Our perception of the world is strongly influenced by our expectations, and a question of key importance is how the visual system develops and updates its expectations through interaction with the environment. We used a visual search task to investigate how expectations of different timescales (from the last few trials to hours to long-term statistics of natural scenes) interact to alter perception. We presented human observers with low-contrast white dots at 12 possible locations equally spaced on a circle, and we asked them to simultaneously identify the presence and location of the dots while manipulating their expectations by presenting stimuli at some locations more frequently than others. Our findings suggest that there are strong acuity differences between absolute target locations (e.g., horizontal vs. vertical) and preexisting long-term biases influencing observers' detection and localization performance, respectively. On top of these, subjects quickly learned about the stimulus distribution, which improved their detection performance but caused increased false alarms at the most frequently presented stimulus locations. Recent exposure to a stimulus resulted in significantly improved detection performance and significantly more false alarms, but only at locations at which it was more probable that a stimulus would be presented. Our results can be modeled and understood within a Bayesian framework in terms of a near-optimal integration of sensory evidence with rapidly learned statistical priors, which are skewed toward the very recent history of trials, and may help in understanding the time scale of developing expectations at the neural level. PMID:26200891
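The Bayesian account sketched in this abstract can be illustrated with a toy calculation (assumptions mine, not the authors' model): a learned location prior multiplies noisy sensory evidence, so frequently stimulated locations win more often, producing both the extra hits and the extra false alarms at those locations.

```python
# Toy Bayesian sketch: a learned prior over 12 possible locations combines
# with weak sensory evidence. A frequently presented location gets a higher
# prior, so identical evidence there yields a larger posterior.

def posterior(prior, likelihood):
    """Normalized elementwise product of prior and likelihood."""
    unnorm = [p * l for p, l in zip(prior, likelihood)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

n = 12
counts = [1.0] * n
counts[3] = 4.0                          # location 3 presented 4x more often
total = sum(counts)
prior = [c / total for c in counts]      # rapidly learned statistical prior

likelihood = [0.1] * n                   # weak evidence everywhere...
likelihood[3] = 0.5                      # ...with equal bumps at
likelihood[7] = 0.5                      # locations 3 and 7

post = posterior(prior, likelihood)
print(post[3] > post[7])  # True: the prior tips detection toward location 3
```

The same mechanism that boosts detection at location 3 will also push noise-only trials over threshold there, which is the false-alarm pattern the study reports.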

  1. Recovery of Visual Search following Moderate to Severe Traumatic Brain Injury

    PubMed Central

    Schmitter-Edgecombe, Maureen; Robertson, Kayela

    2015-01-01

    Introduction Deficits in attentional abilities can significantly impact rehabilitation and recovery from traumatic brain injury (TBI). This study investigated the nature and recovery of pre-attentive (parallel) and attentive (serial) visual search abilities after TBI. Methods Participants were 40 individuals with moderate to severe TBI who were tested following emergence from post-traumatic amnesia and approximately 8-months post-injury, as well as 40 age- and education-matched controls. Pre-attentive (automatic) and attentive (controlled) visual search situations were created by manipulating the saliency of the target item amongst distractor items in visual displays. The relationship between pre-attentive and attentive visual search rates and follow-up community integration were also explored. Results The results revealed intact parallel (automatic) processing skills in the TBI group both post-acutely and at follow-up. In contrast, when attentional demands on visual search were increased by reducing the saliency of the target, the TBI group demonstrated poorer performances compared to the control group both post-acutely and 8-months post-injury. Neither pre-attentive nor attentive visual search slope values correlated with follow-up community integration. Conclusions These results suggest that utilizing intact pre-attentive visual search skills during rehabilitation may help to reduce high mental workload situations, thereby improving the rehabilitation process. For example, making commonly used objects more salient in the environment should increase reliance on more automatic visual search processes and reduce visual search time for individuals with TBI. PMID:25671675

  2. Effect of verbal instructions and image size on visual search strategies in basketball free throw shooting.

    PubMed

    Al-Abood, Saleh A; Bennett, Simon J; Hernandez, Francisco Moreno; Ashford, Derek; Davids, Keith

    2002-03-01

    We assessed the effects on basketball free throw performance of two types of verbal directions with an external attentional focus. Novices (n = 16) were pre-tested on free throw performance and assigned to two groups of similar ability (n = 8 in each). Both groups received verbal instructions with an external focus on either movement dynamics (movement form) or movement effects (e.g. ball trajectory relative to basket). The participants also observed a skilled model performing the task on either a small or large screen monitor, to ascertain the effects of visual presentation mode on task performance. After observation of six videotaped trials, all participants were given a post-test. Visual search patterns were monitored during observation and cross-referenced with performance on the pre- and post-test. Group effects were noted for verbal instructions and image size on visual search strategies and free throw performance. The 'movement effects' group saw a significant improvement in outcome scores between the pre-test and post-test. These results supported evidence that this group spent more viewing time on information outside the body than the 'movement dynamics' group. Image size affected both groups equally with more fixations of shorter duration when viewing the small screen. The results support the benefits of instructions when observing a model with an external focus on movement effects, not dynamics. PMID:11999481

  3. Efficient visual search of videos cast as text retrieval.

    PubMed

    Sivic, Josef; Zisserman, Andrew

    2009-04-01

    We describe an approach to object retrieval which searches for and localizes all the occurrences of an object in a video, given a query image of the object. The object is represented by a set of viewpoint invariant region descriptors so that recognition can proceed successfully despite changes in viewpoint, illumination and partial occlusion. The temporal continuity of the video within a shot is used to track the regions in order to reject those that are unstable. Efficient retrieval is achieved by employing methods from statistical text retrieval, including inverted file systems, and text and document frequency weightings. This requires a visual analogy of a word which is provided here by vector quantizing the region descriptors. The final ranking also depends on the spatial layout of the regions. The result is that retrieval is immediate, returning a ranked list of shots in the manner of Google. We report results for object retrieval on the full length feature films 'Groundhog Day', 'Casablanca' and 'Run Lola Run', including searches from within the movie and specified by external images downloaded from the Internet. We investigate retrieval performance with respect to different quantizations of region descriptors and compare the performance of several ranking measures. Performance is also compared to a baseline method implementing standard frame to frame matching. PMID:19229077
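The paper's central analogy (quantized region descriptors treated as "visual words", ranked with text-retrieval machinery) can be sketched on toy data. The integer word IDs below are simplified stand-ins for the output of a real descriptor quantizer, and the scoring is plain tf-idf with cosine ranking:

```python
import math
from collections import Counter, defaultdict

# Toy "video Google" sketch: shots are bags of visual-word IDs; a query is
# matched via tf-idf weighting, an inverted index, and cosine ranking.

def tfidf_vectors(docs):
    """tf-idf vector (dict) per document, plus the idf table."""
    n = len(docs)
    df = Counter(w for doc in docs for w in set(doc))
    idf = {w: math.log(n / df[w]) for w in df}
    vecs = [{w: (tf / len(doc)) * idf[w] for w, tf in Counter(doc).items()}
            for doc in docs]
    return vecs, idf

def cosine(a, b):
    dot = sum(v * b[w] for w, v in a.items() if w in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Each "shot" is a bag of visual-word IDs (hypothetical quantizer output).
shots = [[1, 2, 2, 3], [4, 5, 6, 6], [1, 2, 7, 8]]
vecs, idf = tfidf_vectors(shots)

# Inverted index: visual word -> shots containing it, for immediate lookup.
index = defaultdict(set)
for sid, shot in enumerate(shots):
    for w in shot:
        index[w].add(sid)

def search(query_words):
    """Rank candidate shots by cosine similarity to the query's tf-idf vector."""
    tf = Counter(query_words)
    qvec = {w: (tf[w] / len(query_words)) * idf.get(w, 0.0) for w in tf}
    candidates = set().union(*(index[w] for w in query_words if w in index))
    return sorted(candidates, key=lambda s: -cosine(qvec, vecs[s]))

print(search([1, 2, 2]))  # -> [0, 2]: shot 0 shares the most weighted words
```

The inverted index is what makes retrieval "immediate, returning a ranked list of shots in the manner of Google": only shots sharing at least one visual word with the query are ever scored.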

  4. The effect of spectrally selective filters on visual search performance.

    PubMed

    Chisum, G T; Sheehy, J B; Morway, P E; Askew, G K

    1987-05-01

    The effect of five spectrally selective filters on the performance of an acuity-dependent visual search task was evaluated. The filters were: A) a neutral density filter (control condition); B) a 5200 Å green interference filter; C) a 3215-250 red filter; D) a neodymium visor; and E) a holographic visor. The observers were presented with 5 blocks of 10 slides per filter. Each slide projected a 6 degrees X 6 degrees field of 900 letter O's--each 10' of arc--which contained a single Landolt C. The observers were required to find the C and indicate the position of the opening in the C. The opening in the C subtended 2.64' corresponding to an acuity of 0.38. Response time, error rate, accommodative accuracy, and the number and duration of fixations were recorded for each slide presentation. The results demonstrated that filter type had no effect on any of the response measures. During the first three trial blocks, the observers appeared to optimize their search strategies, after which they began to revert to their initial performance levels. However, this effect was not supported statistically. PMID:3593145

  5. CiteRivers: Visual Analytics of Citation Patterns.

    PubMed

    Heimerl, Florian; Han, Qi; Koch, Steffen; Ertl, Thomas

    2016-01-01

    The exploration and analysis of scientific literature collections is an important task for effective knowledge management. Past interest in such document sets has spurred the development of numerous visualization approaches for their interactive analysis. They either focus on the textual content of publications, or on document metadata including authors and citations. Previously presented approaches for citation analysis aim primarily at the visualization of the structure of citation networks and their exploration. We extend the state-of-the-art by presenting an approach for the interactive visual analysis of the contents of scientific documents, and combine it with a new and flexible technique to analyze their citations. This technique facilitates user-steered aggregation of citations which are linked to the content of the citing publications using a highly interactive visualization approach. Through enriching the approach with additional interactive views of other important aspects of the data, we support the exploration of the dataset over time and enable users to analyze citation patterns, spot trends, and track long-term developments. We demonstrate the strengths of our approach through a use case and discuss it based on expert user feedback. PMID:26529699

  6. Efficient visual-search model observers for PET

    PubMed Central

    2014-01-01

    Objective: Scanning model observers have been efficiently applied as a research tool to predict human-observer performance in F-18 positron emission tomography (PET). We investigated whether a visual-search (VS) observer could provide more reliable predictions with comparable efficiency. Methods: Simulated two-dimensional images of a digital phantom featuring tumours in the liver, lungs and background soft tissue were prepared in coronal, sagittal and transverse display formats. A localization receiver operating characteristic (LROC) study quantified tumour detectability as a function of organ and format for two human observers, a channelized non-prewhitening (CNPW) scanning observer and two versions of a basic VS observer. The VS observers compared watershed (WS) and gradient-based search processes that identified focal uptake points for subsequent analysis with the CNPW observer. The model observers treated “background-known-exactly” (BKE) and “background-assumed-homogeneous” assumptions, either searching the entire organ of interest (Task A) or a reduced area that helped limit false positives (Task B). Performance was indicated by area under the LROC curve. Concordance in the localizations between observers was also analysed. Results: With the BKE assumption, both VS observers demonstrated consistent Pearson correlation with humans (Task A: 0.92 and Task B: 0.93) compared with the scanning observer (Task A: 0.77 and Task B: 0.92). The WS VS observer read 624 study test images in 2.0 min. The scanning observer required 0.7 min. Conclusion: Computationally efficient VS can enhance the stability of statistical model observers with regard to uncertainties in PET tumour detection tasks. Advances in knowledge: VS models improve concordance with human observers. PMID:24837105

  7. Searching for signs, symbols, and icons: effects of time of day, visual complexity, and grouping.

    PubMed

    McDougall, Siné; Tyrer, Victoria; Folkard, Simon

    2006-06-01

    Searching for icons, symbols, or signs is an integral part of tasks involving computer or radar displays, head-up displays in aircraft, or attending to road traffic signs. Icons therefore need to be designed to optimize search times, taking into account the factors likely to slow down visual search. Three factors likely to adversely affect visual search were examined: the time of day at which search was carried out, the visual complexity of the icons, and the extent to which information features in the icon were grouped together. The speed with which participants searched icon arrays for a target was slower early in the afternoon, when icons were visually complex and when information features in icons were not grouped together to form a single object. Theories of attention that account for both feature-based and object-based search best explain these findings and are used to form the basis for ways of improving icon design. PMID:16802893

  8. Active sensing in the categorization of visual patterns.

    PubMed

    Yang, Scott Cheng-Hsin; Lengyel, Máté; Wolpert, Daniel M

    2016-01-01

    Interpreting visual scenes typically requires us to accumulate information from multiple locations in a scene. Using a novel gaze-contingent paradigm in a visual categorization task, we show that participants' scan paths follow an active sensing strategy that incorporates information already acquired about the scene and knowledge of the statistical structure of patterns. Intriguingly, categorization performance was markedly improved when locations were revealed to participants by an optimal Bayesian active sensor algorithm. By using a combination of a Bayesian ideal observer and the active sensor algorithm, we estimate that a major portion of this apparent suboptimality of fixation locations arises from prior biases, perceptual noise and inaccuracies in eye movements, and the central process of selecting fixation locations is around 70% efficient in our task. Our results suggest that participants select eye movements with the goal of maximizing information about abstract categories that require the integration of information from multiple locations. PMID:26880546

  9. Adaptive two-scale edge detection for visual pattern processing

    NASA Astrophysics Data System (ADS)

    Rahman, Zia-Ur; Jobson, Daniel J.; Woodell, Glenn A.

    2009-09-01

    Adaptive methods are defined and experimentally studied for a two-scale edge detection process that mimics human visual perception of edges and is inspired by the parvocellular (P) and magnocellular (M) physiological subsystems of natural vision. This two-channel processing consists of a high spatial acuity/coarse contrast channel (P) and a coarse acuity/fine contrast (M) channel. We perform edge detection after a very strong nonlinear image enhancement that uses smart Retinex image processing. Two conditions that arise from this enhancement demand adaptiveness in edge detection. These conditions are the presence of random noise further exacerbated by the enhancement process and the equally random occurrence of dense textural visual information. We examine how to best deal with both phenomena with an automatic adaptive computation that treats both high noise and dense textures as too much information and gracefully shifts from small-scale to medium-scale edge pattern priorities. This shift is accomplished by using different edge-enhancement schemes that correspond with the P- and M-channels of the human visual system. We also examine the case of adapting to a third image condition--namely, too little visual information--and automatically adjust edge-detection sensitivities when sparse feature information is encountered. When this methodology is applied to a sequence of images of the same scene but with varying exposures and lighting conditions, this edge-detection process produces pattern constancy that is very useful for several imaging applications that rely on image classification in variable imaging conditions.
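    The two-channel idea (a high-acuity fine pass, a coarse smoothed pass, and a switch driven by how dense the detected edge information is) can be sketched roughly as follows. The plain gradient and box blur are crude stand-ins for the actual P- and M-channel filters, and the density threshold is an arbitrary illustration, not a value from the paper.

```python
import numpy as np

def grad_mag(img):
    """Central-difference gradient magnitude (toy edge detector)."""
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]
    gy[1:-1, :] = img[2:, :] - img[:-2, :]
    return np.hypot(gx, gy)

def box_blur(img, k=3):
    """Simple k x k box filter with edge padding."""
    pad = k // 2
    p = np.pad(img, pad, mode='edge')
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def two_scale_edges(img, density_thresh=0.2):
    """If the fine channel reports 'too much information' (noise or
    dense texture), fall back to the coarse, smoothed channel."""
    fine = grad_mag(img)                   # P-like: high acuity
    coarse = grad_mag(box_blur(img, 5))    # M-like: coarse acuity
    density = (fine > fine.mean()).mean()  # crude information-density cue
    return coarse if density > density_thresh else fine
```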

  10. Visual pattern memory requires foraging function in the central complex of Drosophila

    PubMed Central

    Wang, Zhipeng; Pan, Yufeng; Li, Weizhe; Jiang, Huoqing; Chatzimanolis, Lazaros; Chang, Jianhong; Gong, Zhefeng; Liu, Li

    2008-01-01

    The role of the foraging (for) gene, which encodes a cyclic guanosine-3′,5′-monophosphate (cGMP)-dependent protein kinase (PKG), in food-search behavior in Drosophila has been intensively studied. However, its functions in other complex behaviors have not been well-characterized. Here, we show experimentally in Drosophila that the for gene is required in the operant visual learning paradigm. Visual pattern memory was normal in a natural variant rover (forR) but was impaired in another natural variant sitter (forS), which has a lower PKG level. Memory defects in forS flies could be rescued by either constitutive or adult-limited expression of for in the fan-shaped body. Interestingly, we showed that such rescue also occurred when for was expressed in the ellipsoid body. Additionally, expression of for in the fifth layer of the fan-shaped body restored sufficient memory for the pattern parameter “elevation” but not for “contour orientation,” whereas expression of for in the ellipsoid body restored sufficient memory for both parameters. Our study defines a Drosophila model for further understanding the role of cGMP-PKG signaling in associative learning/memory and the neural circuit underlying this for-dependent visual pattern memory. PMID:18310460

  11. Flow pattern visualization in a mimic anaerobic digester using CFD.

    PubMed

    Vesvikar, Mehul S; Al-Dahhan, Muthanna

    2005-03-20

    Three-dimensional steady-state computational fluid dynamics (CFD) simulations were performed in mimic anaerobic digesters to visualize their flow pattern and obtain hydrodynamic parameters. The mixing in the digester was provided by sparging gas at three different flow rates. The gas phase was simulated with air and the liquid phase with water. The CFD results were first evaluated using experimental data obtained by computer automated radioactive particle tracking (CARPT). The simulation results in terms of overall flow pattern, location of circulation cells and stagnant regions, trends of liquid velocity profiles, and volume of dead zones agree reasonably well with the experimental data. CFD simulations were also performed on different digester configurations. The effects of changing draft tube size, clearance, and shape of the tank bottoms were calculated to evaluate the effect of digester design on its flow pattern. Changing the draft tube clearance and height had no influence on the flow pattern or dead regions volume. However, increasing the draft tube diameter or incorporating a conical bottom design helped in reducing the volume of the dead zones as compared to a flat-bottom digester. The simulations showed that the gas flow rate sparged by a single point (0.5 cm diameter) sparger does not have an appreciable effect on the flow pattern of the digesters at the range of gas flow rates used. PMID:15685599

  12. Is There a Limit to the Superiority of Individuals with ASD in Visual Search?

    ERIC Educational Resources Information Center

    Hessels, Roy S.; Hooge, Ignace T. C.; Snijders, Tineke M.; Kemner, Chantal

    2014-01-01

    Superiority in visual search for individuals diagnosed with autism spectrum disorder (ASD) is a well-reported finding. We administered two visual search tasks to individuals with ASD and matched controls. One showed no difference between the groups, and one did show the expected superior performance for individuals with ASD. These results offer an…

  13. Toddlers with Autism Spectrum Disorder Are More Successful at Visual Search than Typically Developing Toddlers

    ERIC Educational Resources Information Center

    Kaldy, Zsuzsa; Kraper, Catherine; Carter, Alice S.; Blaser, Erik

    2011-01-01

    Plaisted, O'Riordan and colleagues (Plaisted, O'Riordan & Baron-Cohen, 1998; O'Riordan, 2004) showed that school-age children and adults with Autism Spectrum Disorder (ASD) are faster at finding targets in certain types of visual search tasks than typical controls. Currently though, there is very little known about the visual search skills of very…

  14. Preemption Effects in Visual Search: Evidence for Low-Level Grouping.

    ERIC Educational Resources Information Center

    Rensink, Ronald A.; Enns, James T.

    1995-01-01

    Eight experiments, with 10 observers per condition, show that visual search for Mueller-Lyer stimuli is based on complete configurations rather than component segments, with preemption by low-level groups. Results support the view that rapid visual search can only access higher level, more ecologically relevant structures. (SLD)

  15. Visualizing Neuronal Network Connectivity with Connectivity Pattern Tables

    PubMed Central

    Nordlie, Eilen; Plesser, Hans Ekkehard

    2009-01-01

    Complex ideas are best conveyed through well-designed illustrations. Up to now, computational neuroscientists have mostly relied on box-and-arrow diagrams of even complex neuronal networks, often using ad hoc notations with conflicting use of symbols from paper to paper. This significantly impedes the communication of ideas in neuronal network modeling. We present here Connectivity Pattern Tables (CPTs) as a clutter-free visualization of connectivity in large neuronal networks containing two-dimensional populations of neurons. CPTs can be generated automatically from the same script code used to create the actual network in the NEST simulator. Through aggregation, CPTs can be viewed at different levels, providing either full detail or summary information. We also provide the open source ConnPlotter tool as a means to create connectivity pattern tables. PMID:20140265

  16. The effect of search condition and advertising type on visual attention to Internet advertising.

    PubMed

    Kim, Gho; Lee, Jang-Han

    2011-05-01

    This research was conducted to examine the level of consumers' visual attention to Internet advertising. It was predicted that consumers' search type would influence visual attention to advertising. Specifically, it was predicted that more attention to advertising would be attracted in the exploratory search condition than in the goal-directed search condition. It was also predicted that there would be a difference in visual attention depending on the advertisement type (advertising type: text vs. pictorial advertising). An eye tracker was used for measurement. Results revealed that search condition and advertising type influenced advertising effectiveness. PMID:20973730

  17. Visual-auditory integration for visual search: a behavioral study in barn owls.

    PubMed

    Hazan, Yael; Kra, Yonatan; Yarin, Inna; Wagner, Hermann; Gutfreund, Yoram

    2015-01-01

    Barn owls are nocturnal predators that rely on both vision and hearing for survival. The optic tectum of barn owls, a midbrain structure involved in selective attention, has been used as a model for studying visual-auditory integration at the neuronal level. However, behavioral data on visual-auditory integration in barn owls are lacking. The goal of this study was to examine if the integration of visual and auditory signals contributes to the process of guiding attention toward salient stimuli. We attached miniature wireless video cameras on barn owls' heads (OwlCam) to track their target of gaze. We first provide evidence that the area centralis (a retinal area with a maximal density of photoreceptors) is used as a functional fovea in barn owls. Thus, by mapping the projection of the area centralis on the OwlCam's video frame, it is possible to extract the target of gaze. For the experiment, owls were positioned on a high perch and four food items were scattered in a large arena on the floor. In addition, a hidden loudspeaker was positioned in the arena. The positions of the food items and speaker were changed every session. Video sequences from the OwlCam were saved for offline analysis while the owls spontaneously scanned the room and the food items with abrupt gaze shifts (head saccades). From time to time during the experiment, a brief sound was emitted from the speaker. The fixation points immediately following the sounds were extracted and the distances between the gaze position and the nearest items and loudspeaker were measured. The head saccades were rarely toward the location of the sound source but to salient visual features in the room, such as the door knob or the food items. However, among the food items, the one closest to the loudspeaker had the highest probability of attracting a gaze shift. This result supports the notion that auditory signals are integrated with visual information for the selection of the next visual search target. PMID:25762905

  19. The role of object categories in hybrid visual and memory search

    PubMed Central

    Cunningham, Corbin A.; Wolfe, Jeremy M.

    2014-01-01

    In hybrid search, observers (Os) search for any of several possible targets in a visual display containing distracting items and, perhaps, a target. Wolfe (2012) found that response times (RT) in such tasks increased linearly with increases in the number of items in the display. However, RT increased linearly with the log of the number of items in the memory set. In earlier work, all items in the memory set were unique instances (e.g. this apple in this pose). Typical real world tasks involve more broadly defined sets of stimuli (e.g. any “apple” or, perhaps, “fruit”). The present experiments show how sets or categories of targets are handled in joint visual and memory search. In Experiment 1, searching for a digit among letters was not like searching for targets from a 10-item memory set, though searching for targets from an N-item memory set of arbitrary alphanumeric characters was like searching for targets from an N-item memory set of arbitrary objects. In Experiment 2, Os searched for any instance of N sets or categories held in memory. This hybrid search was harder than search for specific objects. However, memory search remained logarithmic. Experiment 3 illustrates the interaction of visual guidance and memory search when a subset of the visual stimuli is drawn from a target category. Furthermore, we outline a conceptual model, supported by our results, defining the core components that would be necessary to support such categorical hybrid searches. PMID:24661054
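    The two reported effects (RT linear in the number of display items, logarithmic in the memory set size) can be combined in a toy multiplicative model. The functional form and the coefficients below are illustrative assumptions, not values fitted by the authors.

```python
import math

def predicted_rt(n_display, n_memory, a=400.0, b=40.0):
    """Toy hybrid-search RT model (ms): linear in display size,
    logarithmic in memory set size. a (base RT) and b (per-comparison
    cost) are illustrative, not fitted values."""
    return a + b * n_display * math.log2(n_memory)
```

    Doubling the memory set then adds a constant amount per display item, while adding display items increases RT by a fixed slope, matching the qualitative pattern described above.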

  20. The Importance of Slow Consistent Movement when Searching for Hard-to-Find Targets in Real-World Visual Search.

    PubMed

    Riggs, Charlotte; Cornes, Katherine; Godwin, Hayward; Guest, Richard; Donnelly, Nick

    2015-09-01

    Various real-world tasks require careful and exhaustive visual search. For example, searching for forensic evidence or signs of hidden threats (what we call hard-to-find targets). Here, we examine how search accuracy for hard-to-find targets is influenced by search behaviour. Participants searched for coins set amongst a 5m x 15m (defined as x and y axes respectively) piece of grassland. The grassland contained natural distractors of leaves and flowers and was not manicured. Coins were visually detectable from standing height. There was no time limit to the task and participants were instructed to search until they were confident they had completed their search. On average, participants detected 45% (SD=23%) of the targets and took 7:23 (SD=4:44) minutes to complete their search. Participants' movement over space and time was recorded as a series of time-stamped x, y coordinates using a Total Station theodolite. To quantify their search behaviour, the x- and y-coordinates of participants' physical locations as they searched the grassland were converted into the frequency domain using a Fourier transform. Decreases in dominant frequencies, a measure of the time before turning during search, resulted in increased response accuracy as well as increased search times. Furthermore, decreases in the number of iterations, defined by the total search time divided by the dominant frequency, also resulted in increased accuracy and search times. Comparing distance between the two most dominant frequency peaks provided a measure of consistency of movement over time. This measure showed that more variable search was associated with slower search times but no improvement in accuracy. Throughout our analyses, these results were true for the y-axis but not the x-axis. At least with respect to the present task, accurate search for hard-to-find targets is dependent on conducting search at a slow consistent speed where changes in direction are minimised. 
Meeting abstract presented at VSS 2015. PMID:26327043
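    The frequency-domain measure can be illustrated with a minimal sketch: mean-centre one coordinate trace of the walked path and take the strongest nonzero FFT bin as the dominant frequency. The exact procedure is not fully specified in the abstract, so this is an assumed reading, not the authors' analysis code.

```python
import numpy as np

def dominant_frequency(trace, sample_rate):
    """Return the strongest nonzero frequency (Hz) in a 1-D position
    trace, e.g. the y-coordinates of a searcher over time."""
    y = np.asarray(trace, dtype=float) - np.mean(trace)
    spectrum = np.abs(np.fft.rfft(y))
    freqs = np.fft.rfftfreq(len(y), d=1.0 / sample_rate)
    return freqs[1:][np.argmax(spectrum[1:])]   # skip the DC bin
```

    A lower dominant frequency corresponds to longer stretches of movement before turning, the behaviour the study links to higher accuracy.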

  1. Pattern Visual Evoked Potentials in Dyslexic versus Normal Children

    PubMed Central

    Heravian, Javad; Sobhani-Rad, Davood; Lari, Samaneh; Khoshsima, Mohamadjavad; Azimi, Abbas; Ostadimoghaddam, Hadi; Yekta, Abbasali; Hoseini-Yazdi, Seyed Hosein

    2015-01-01

    Purpose: Presence of neurophysiological abnormalities in dyslexia has been a conflicting issue. This study was performed to evaluate the role of sensory visual deficits in the pathogenesis of dyslexia. Methods: Pattern visual evoked potentials (PVEP) were recorded in 72 children including 36 children with dyslexia and 36 children without dyslexia (controls) who were matched for age, sex and intelligence. Two check sizes of 15 and 60 min of arc were used with temporal frequencies of 1.5 Hz for transient and 6 Hz for steady-state methods. Results: Mean latency and amplitude values for 15 min arc and 60 min arc check sizes using steady state and transient methods showed no significant difference between the two study groups (P values: 0.139/0.481/0.356/0.062). Furthermore, no significant difference was observed between the two methods of PVEP in dyslexic and normal children using 60 min arc with high contrast (P values: 0.116, 0.402, 0.343 and 0.106). Conclusion: PVEP is a sufficiently sensitive method to detect visual deficits in children with dyslexia. However, no significant difference was found between dyslexic and normal children using high contrast stimuli. PMID:26730313

  2. Visual Working Memory Supports the Inhibition of Previously Processed Information: Evidence from Preview Search

    ERIC Educational Resources Information Center

    Al-Aidroos, Naseem; Emrich, Stephen M.; Ferber, Susanne; Pratt, Jay

    2012-01-01

    In four experiments we assessed whether visual working memory (VWM) maintains a record of previously processed visual information, allowing old information to be inhibited, and new information to be prioritized. Specifically, we evaluated whether VWM contributes to the inhibition (i.e., visual marking) of previewed distractors in a preview search…

  3. Visual Search in Typically Developing Toddlers and Toddlers with Fragile X or Williams Syndrome

    ERIC Educational Resources Information Center

    Scerif, Gaia; Cornish, Kim; Wilding, John; Driver, Jon; Karmiloff-Smith, Annette

    2004-01-01

    Visual selective attention is the ability to attend to relevant visual information and ignore irrelevant stimuli. Little is known about its typical and atypical development in early childhood. Experiment 1 investigates typically developing toddlers' visual search for multiple targets on a touch-screen. Time to hit a target, distance between…

  4. Using visual analytics model for pattern matching in surveillance data

    NASA Astrophysics Data System (ADS)

    Habibi, Mohammad S.

    2013-03-01

    In a persistent surveillance system a huge amount of data is collected continuously and significant details are labeled for future reference. In this paper a method to summarize video data by identifying events based on this tagged information is explained, leading to a concise description of behavior within a section of extended recordings. An efficient retrieval of various events thus becomes the foundation for determining a pattern in surveillance system observations, both in its extended and fragmented versions. The patterns consisting of spatiotemporal semantic contents are extracted and classified by application of video data mining on the generated ontology, and can be matched based on analysts' interest and rules set forth for decision making. The extraction and classification method proposed in this paper uses query by example for retrieving similar events containing relevant features, and is carried out by data aggregation. Since structured data forms the majority of surveillance information, this Visual Analytics model employs a KD-Tree approach to group patterns in variant space and time, thus making it convenient to identify and match any abnormal burst of pattern detected in a surveillance video. Several experimental videos were presented to viewers to analyze independently and were compared with the results obtained in this paper to demonstrate the efficiency and effectiveness of the proposed technique.
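    The KD-Tree grouping step can be illustrated with a minimal tree over (x, y, t) event coordinates: build once, then find the stored event nearest to a query pattern. This is a generic textbook KD-tree, not the system described in the paper.

```python
import math

def build_kdtree(points, depth=0):
    """Minimal KD-tree over fixed-length tuples, cycling the split axis."""
    if not points:
        return None
    axis = depth % len(points[0])
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return {
        'point': points[mid],
        'axis': axis,
        'left': build_kdtree(points[:mid], depth + 1),
        'right': build_kdtree(points[mid + 1:], depth + 1),
    }

def nearest(node, target, best=None):
    """Recursive nearest-neighbour search with branch pruning."""
    if node is None:
        return best
    if best is None or math.dist(node['point'], target) < math.dist(best, target):
        best = node['point']
    diff = target[node['axis']] - node['point'][node['axis']]
    near, far = (node['left'], node['right']) if diff < 0 else (node['right'], node['left'])
    best = nearest(near, target, best)
    if abs(diff) < math.dist(best, target):   # other side may still hold a closer point
        best = nearest(far, target, best)
    return best
```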

  5. Dynamic analysis and pattern visualization of forest fires.

    PubMed

    Lopes, António M; Tenreiro Machado, J A

    2014-01-01

    This paper analyses forest fires in the perspective of dynamical systems. Forest fires exhibit complex correlations in size, space and time, revealing features often present in complex systems, such as the absence of a characteristic length-scale, or the emergence of long range correlations and persistent memory. This study addresses a public domain forest fires catalogue containing information on events in Portugal during the period from 1980 up to 2012. The data is analysed on an annual basis, modelling the occurrences as sequences of Dirac impulses with amplitude proportional to the burnt area. First, we consider mutual information to correlate annual patterns. We use visualization trees, generated by hierarchical clustering algorithms, in order to compare and to extract relationships among the data. Second, we adopt the Multidimensional Scaling (MDS) visualization tool. MDS generates maps where each object corresponds to a point. Objects that are perceived to be similar to each other are placed on the map forming clusters. The results are analysed in order to extract relationships among the data and to identify forest fire patterns. PMID:25137393
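    The MDS step can be sketched with classical (Torgerson) scaling, which embeds objects from a dissimilarity matrix via double centering and an eigendecomposition. This is a generic illustration of the technique, not the authors' specific tooling.

```python
import numpy as np

def classical_mds(dist, n_components=2):
    """Classical (Torgerson) MDS: place n objects in n_components
    dimensions so that map distances approximate the input
    dissimilarity matrix `dist` (n x n, symmetric, zero diagonal)."""
    n = dist.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n       # centering matrix
    B = -0.5 * J @ (dist ** 2) @ J            # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)            # ascending eigenvalues
    idx = np.argsort(vals)[::-1][:n_components]
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0))
```

    Years with similar annual fire patterns would land close together on such a map, forming the clusters the abstract describes.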

  6. Tools for visualizing landscape pattern for large geographic areas

    SciTech Connect

    Timmins, S.P.; Hunsaker, C.T.

    1993-10-01

    Landscape pattern can be modelled on a grid with polygons constructed from cells that share edges. Although this model only allows connections in four directions, programming is convenient because both coordinates and attributes take discrete integer values. A typical raster land-cover data set is a multimegabyte matrix of byte values derived by classification of images or gridding of maps. Each matrix may have thousands of raster polygons (patches), many of them islands inside other larger patches. These data sets have complex topology that can overwhelm vector geographic information systems. The goal is to develop tools to quantify change in the landscape structure in terms of the shape and spatial distribution of patches. Three milestones toward this goal are (1) creating polygon topology on a grid, (2) visualizing patches, and (3) analyzing shape and pattern. An efficient algorithm has been developed to locate patches, measure area and perimeter, and establish patch topology. A powerful visualization system with an extensible programming language is used to write procedures to display images and perform analysis.
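    The first milestone (locate 4-connected raster patches and measure their area and perimeter) can be sketched with a breadth-first flood fill. This is a generic illustration of the approach described, not the authors' algorithm; the grid values stand in for land-cover classes.

```python
from collections import deque

def label_patches(grid):
    """Label 4-connected patches of equal cover class on a raster grid.
    Returns (labels, stats) where stats maps label -> (area, perimeter);
    perimeter counts cell edges bordering the map edge or another class."""
    rows, cols = len(grid), len(grid[0])
    labels = [[0] * cols for _ in range(rows)]
    stats, next_label = {}, 0
    for r0 in range(rows):
        for c0 in range(cols):
            if labels[r0][c0]:
                continue
            next_label += 1
            cls, area, perim = grid[r0][c0], 0, 0
            labels[r0][c0] = next_label
            q = deque([(r0, c0)])
            while q:
                r, c = q.popleft()
                area += 1
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nr, nc = r + dr, c + dc
                    if not (0 <= nr < rows and 0 <= nc < cols) or grid[nr][nc] != cls:
                        perim += 1
                    elif not labels[nr][nc]:
                        labels[nr][nc] = next_label
                        q.append((nr, nc))
            stats[next_label] = (area, perim)
    return labels, stats
```

    Islands inside larger patches fall out naturally: they simply receive their own labels, and patch topology can then be recovered by inspecting which labels border which.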

  7. Animating streamlines with repeated asymmetric patterns for steady flow visualization

    NASA Astrophysics Data System (ADS)

    Yeh, Chih-Kuo; Liu, Zhanping; Lee, Tong-Yee

    2012-01-01

    Animation provides intuitive cueing for revealing essential spatial-temporal features of data in scientific visualization. This paper explores the design of Repeated Asymmetric Patterns (RAPs) in animating evenly-spaced color-mapped streamlines for dense accurate visualization of complex steady flows. We present a smooth cyclic variable-speed RAP animation model that performs velocity (magnitude) integral luminance transition on streamlines. This model is extended with inter-streamline synchronization in luminance varying along the tangential direction to emulate orthogonal advancing waves from a geometry-based flow representation, and then with evenly-spaced hue differing in the orthogonal direction to construct tangential flow streaks. To weave these two mutually dual sets of patterns, we propose an energy-decreasing strategy that adopts an iterative yet efficient procedure for determining the luminance phase and hue of each streamline in HSL color space. We also employ adaptive luminance interleaving in the direction perpendicular to the flow to increase the contrast between streamlines.

  8. Transformation of an uncertain video search pipeline to a sketch-based visual analytics loop.

    PubMed

    Legg, Philip A; Chung, David H S; Parry, Matthew L; Bown, Rhodri; Jones, Mark W; Griffiths, Iwan W; Chen, Min

    2013-12-01

    Traditional sketch-based image or video search systems rely on machine learning concepts as their core technology. However, in many applications, machine learning alone is impractical since videos may not be semantically annotated sufficiently, there may be a lack of suitable training data, and the search requirements of the user may frequently change for different tasks. In this work, we develop a visual analytics system that overcomes the shortcomings of the traditional approach. We make use of a sketch-based interface to enable users to specify search requirements in a flexible manner without depending on semantic annotation. We employ active machine learning to train different analytical models for different types of search requirements. We use visualization to facilitate knowledge discovery at the different stages of visual analytics. This includes visualizing the parameter space of the trained model, visualizing the search space to support interactive browsing, visualizing candidate search results to support rapid interaction for active learning while minimizing watching videos, and visualizing aggregated information of the search results. We demonstrate the system for searching spatiotemporal attributes from sports video to identify key instances of the team and player performance. PMID:24051777

  9. Bicycle accidents and drivers' visual search at left and right turns.

    PubMed

    Summala, H; Pasanen, E; Räsänen, M; Sievänen, J

    1996-03-01

    The accident data base of the City of Helsinki shows that when drivers cross a cycle path as they enter a non-signalized intersection, the clearly dominant type of car-cycle crashes is that in which a cyclist comes from the right and the driver is turning right, in marked contrast to the cases with drivers turning left (Pasanen 1992; City of Helsinki, Traffic Planning Department, Report L4). This study first tested an explanation that drivers turning right simply focus their attention on the cars coming from the left (those coming from the right posing no threat to them) and fail to see the cyclist from the right early enough. Drivers' scanning behavior was studied at two T-intersections. Two well-hidden video cameras were used, one to measure the head movements of the approaching drivers and the other one to measure speed and distance from the cycle crossroad. The results supported the hypothesis: the drivers turning right scanned the right leg of the T-intersection less frequently and later than those turning left. Thus, it appears that drivers develop a visual scanning strategy which concentrates on detection of more frequent and major dangers but ignores and may even mask visual information on less frequent dangers. The second part of the study evaluated different countermeasures, including speed humps, in terms of drivers' visual search behavior. The results suggested that speed-reducing countermeasures changed drivers' visual search patterns in favor of the cyclists coming from the right, presumably at least in part due to the fact that drivers were simply provided with more time to focus on each direction. PMID:8703272

  10. High or Low Target Prevalence Increases the Dual-Target Cost in Visual Search

    ERIC Educational Resources Information Center

    Menneer, Tamaryn; Donnelly, Nick; Godwin, Hayward J.; Cave, Kyle R.

    2010-01-01

    Previous studies have demonstrated a dual-target cost in visual search. In the current study, the relationship between search for one and search for two targets was investigated to examine the effects of target prevalence and practice. Color-shape conjunction stimuli were used with response time, accuracy and signal detection measures. Performance…

  11. Searching for Signs, Symbols, and Icons: Effects of Time of Day, Visual Complexity, and Grouping

    ERIC Educational Resources Information Center

    McDougall, Sine; Tyrer, Victoria; Folkard, Simon

    2006-01-01

    Searching for icons, symbols, or signs is an integral part of tasks involving computer or radar displays, head-up displays in aircraft, or attending to road traffic signs. Icons therefore need to be designed to optimize search times, taking into account the factors likely to slow down visual search. Three factors likely to adversely affect visual…

  12. Visual Search Is Postponed during the Attentional Blink until the System Is Suitably Reconfigured

    ERIC Educational Resources Information Center

    Ghorashi, S. M. Shahab; Smilek, Daniel; Di Lollo, Vincent

    2007-01-01

    J. S. Joseph, M. M. Chun, and K. Nakayama (1997) found that pop-out visual search was impaired as a function of intertarget lag in an attentional blink (AB) paradigm in which the 1st target was a letter and the 2nd target was a search display. In 4 experiments, the present authors tested the implication that search efficiency should be similarly…

  14. Exploiting visual search theory to infer social interactions

    NASA Astrophysics Data System (ADS)

    Rota, Paolo; Dang-Nguyen, Duc-Tien; Conci, Nicola; Sebe, Nicu

    2013-03-01

    In this paper we propose a new method to infer human social interactions using typical techniques adopted in literature for visual search and information retrieval. The main piece of information we use to discriminate among different types of interactions is provided by proxemics cues acquired by a tracker, and used to distinguish between intentional and casual interactions. The proxemics information has been acquired through the analysis of two different metrics: on the one hand we observe the current distance between subjects, and on the other hand we measure the O-space synergy between subjects. The obtained values are taken at every time step over a temporal sliding window, and processed in the Discrete Fourier Transform (DFT) domain. The features are eventually merged into a single array, and clustered using the K-means algorithm. The clusters are reorganized using a second larger temporal window into a Bag Of Words framework, so as to build the feature vector that will feed the SVM classifier.
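The pipeline described above (sliding-window DFT features over the two proxemics signals, K-means clustering, then a bag-of-words histogram over a larger temporal window) can be sketched as follows. Window sizes, hop lengths, and function names are illustrative assumptions, and the final SVM stage is left as a comment:

```python
import numpy as np

def dft_window_features(distance, synergy, win=16, hop=4):
    """Describe each sliding window of the two proxemics signals by its
    DFT magnitude spectrum, merged into a single feature array."""
    feats = []
    for start in range(0, len(distance) - win + 1, hop):
        d = np.abs(np.fft.rfft(distance[start:start + win]))
        s = np.abs(np.fft.rfft(synergy[start:start + win]))
        feats.append(np.concatenate([d, s]))
    return np.array(feats)

def kmeans(X, k, iters=50, seed=0):
    """Tiny K-means: assign each window feature to its nearest centroid."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centroids[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return labels

def bag_of_words(labels, k, win2=8, hop2=4):
    """Histogram the cluster labels inside a second, larger temporal window;
    each histogram would be one feature vector fed to the SVM classifier."""
    return np.array([np.bincount(labels[s:s + win2], minlength=k)
                     for s in range(0, len(labels) - win2 + 1, hop2)])
```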

  15. Electrophysiological measurement of information flow during visual search.

    PubMed

    Cosman, Joshua D; Arita, Jason T; Ianni, Julianna D; Woodman, Geoffrey F

    2016-04-01

    The temporal relationship between different stages of cognitive processing has long been debated, primarily because it is often difficult to measure the time course of multiple cognitive processes simultaneously. We employed a manipulation that allowed us to isolate ERP components related to perceptual processing, working memory, and response preparation, and then examined the temporal relationship between these components while observers performed a visual search task. We found that, when response speed and accuracy were equally stressed, our index of perceptual processing ended before both the transfer of information into working memory and response preparation began. However, when we stressed speed over accuracy, response preparation began before the completion of perceptual processing or transfer of information into working memory on trials with the fastest reaction times. These findings show that individuals can control the flow of information transmission between stages, either waiting for perceptual processing to be completed before preparing a response or configuring these stages to overlap in time. PMID:26669285

  16. Visual search asymmetries within color-coded and intensity-coded displays.

    PubMed

    Yamani, Yusuke; McCarley, Jason S

    2010-06-01

    Color and intensity coding provide perceptual cues to segregate categories of objects within a visual display, allowing operators to search more efficiently for needed information. Even within a perceptually distinct subset of display elements, however, it may often be useful to prioritize items representing urgent or task-critical information. The design of symbology to produce search asymmetries (Treisman & Souther, 1985) offers a potential technique for doing this, but it is not obvious from existing models of search that an asymmetry observed in the absence of extraneous visual stimuli will persist within a complex color- or intensity-coded display. To address this issue, in the current study we measured the strength of a visual search asymmetry within displays containing color- or intensity-coded extraneous items. The asymmetry persisted strongly in the presence of extraneous items that were drawn in a different color (Experiment 1) or a lower contrast (Experiment 2) than the search-relevant items, with the targets favored by the search asymmetry producing highly efficient search. The asymmetry was attenuated but not eliminated when extraneous items were drawn in a higher contrast than search-relevant items (Experiment 3). Results imply that the coding of symbology to exploit visual search asymmetries can facilitate visual search for high-priority items even within color- or intensity-coded displays. PMID:20565197

  17. Using Pattern Search Methods for Surface Structure Determinationof Nanomaterials

    SciTech Connect

    Zhao, Zhengji; Meza, Juan; Van Hove, Michel

    2006-06-09

    Atomic scale surface structure plays an important role in describing many properties of materials, especially in the case of nanomaterials. One of the most effective techniques for surface structure determination is low-energy electron diffraction (LEED), which can be used in conjunction with optimization to fit simulated LEED intensities to experimental data. This optimization problem has a number of characteristics that make it challenging: it has many local minima, the optimization variables can be either continuous or categorical, the objective function can be discontinuous, there are no exact analytic derivatives (and no derivatives at all for categorical variables), and function evaluations are expensive. In this study, we show how to apply a particular class of optimization methods known as pattern search methods to address these challenges. These methods do not explicitly use derivatives, and are particularly appropriate when categorical variables are present, an important feature that has not been addressed in previous LEED studies. We have found that pattern search methods can produce excellent results, compared to previously used methods, both in terms of performance and locating optimal results.
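Pattern search methods poll the objective at a stencil of trial points around the current iterate and contract the step when no trial improves, using no derivative information. A minimal compass-search sketch for a smooth continuous objective (a toy stand-in for the LEED fitting problem, with illustrative names):

```python
def compass_search(f, x0, step=1.0, tol=1e-6, max_evals=10000):
    """Derivative-free compass search: poll +/- step along each coordinate;
    move to any improving trial point, otherwise halve the step."""
    x, fx = list(x0), f(x0)
    evals = 1
    while step > tol and evals < max_evals:
        improved = False
        for i in range(len(x)):
            for delta in (step, -step):
                trial = list(x)
                trial[i] += delta
                ft = f(trial)
                evals += 1
                if ft < fx:
                    x, fx, improved = trial, ft, True
                    break
            if improved:
                break
        if not improved:
            step /= 2  # no trial improved: refine the mesh
    return x, fx
```

Because only function comparisons are used, the same loop tolerates noisy or discontinuous objectives, which is why such methods suit simulation-driven fits like LEED.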

  18. Evolutionary pattern search algorithms for unconstrained and linearly constrained optimization

    SciTech Connect

    HART,WILLIAM E.

    2000-06-01

    The authors describe a convergence theory for evolutionary pattern search algorithms (EPSAs) on a broad class of unconstrained and linearly constrained problems. EPSAs adaptively modify the step size of the mutation operator in response to the success of previous optimization steps. The design of EPSAs is inspired by recent analyses of pattern search methods. The analysis significantly extends the previous convergence theory for EPSAs. The analysis applies to a broader class of EPSAs, and it applies to problems that are nonsmooth, have unbounded objective functions, and which are linearly constrained. Further, they describe a modest change to the algorithmic framework of EPSAs for which a non-probabilistic convergence theory applies. These analyses are also noteworthy because they are considerably simpler than previous analyses of EPSAs.

  19. The Role of Prediction In Perception: Evidence From Interrupted Visual Search

    PubMed Central

    Mereu, Stefania; Zacks, Jeffrey M.; Kurby, Christopher A.; Lleras, Alejandro

    2014-01-01

    Recent studies of rapid resumption—an observer’s ability to quickly resume a visual search after an interruption—suggest that predictions underlie visual perception. Previous studies showed that when the search display changes unpredictably after the interruption, rapid resumption disappears. This conclusion is at odds with our everyday experience, where the visual system seems to be quite efficient despite continuous changes of the visual scene; however, in the real world, changes can typically be anticipated based on previous knowledge. The present study aimed to evaluate whether changes to the visual display can be incorporated into the perceptual hypotheses, if observers are allowed to anticipate such changes. Results strongly suggest that an interrupted visual search can be rapidly resumed even when information in the display has changed after the interruption, so long as participants not only can anticipate them, but also are aware that such changes might occur. PMID:24820440

  20. The role of visual pattern persistence in bistable stroboscopic motion.

    PubMed

    Breitmeyer, B G; Ritter, A

    1986-01-01

    Two alternating frames, each consisting of three square elements, were used to study bistable stroboscopic motion percepts. Bistable percepts were obtained which depend on the interstimulus interval (ISI) between the alternating frames. At short ISIs only end-to-end element motion was observed, and at longer ISIs only group motion was perceived. It was found that the progressive ISI-dependent transitions from element to group motion depended on element size and frame duration. These dependencies are predictable from the systematic influence which these variables are known also to exert on visual pattern persistence, indicating that such persistence contributes to determining which percept dominates during bistable stroboscopic motion sequences. These findings bear relevantly on recent attempts to conceptually relate bistable motion percepts to short-range stroboscopic motion processes. PMID:3617522

  1. Active sensing in the categorization of visual patterns

    PubMed Central

    Yang, Scott Cheng-Hsin; Lengyel, Máté; Wolpert, Daniel M

    2016-01-01

    Interpreting visual scenes typically requires us to accumulate information from multiple locations in a scene. Using a novel gaze-contingent paradigm in a visual categorization task, we show that participants' scan paths follow an active sensing strategy that incorporates information already acquired about the scene and knowledge of the statistical structure of patterns. Intriguingly, categorization performance was markedly improved when locations were revealed to participants by an optimal Bayesian active sensor algorithm. By using a combination of a Bayesian ideal observer and the active sensor algorithm, we estimate that a major portion of this apparent suboptimality of fixation locations arises from prior biases, perceptual noise and inaccuracies in eye movements, and the central process of selecting fixation locations is around 70% efficient in our task. Our results suggest that participants select eye movements with the goal of maximizing information about abstract categories that require the integration of information from multiple locations. DOI: http://dx.doi.org/10.7554/eLife.12215.001 PMID:26880546

  2. Widespread correlation patterns of fMRI signal across visual cortex reflect eccentricity organization.

    PubMed

    Arcaro, Michael J; Honey, Christopher J; Mruczek, Ryan E B; Kastner, Sabine; Hasson, Uri

    2015-01-01

    The human visual system can be divided into more than two dozen distinct areas, each of which contains a topographic map of the visual field. A fundamental question in vision neuroscience is how the visual system integrates information from the environment across different areas. Using neuroimaging, we investigated the spatial pattern of correlated BOLD signal across eight visual areas on data collected during rest conditions and during naturalistic movie viewing. The correlation pattern between areas reflected the underlying receptive field organization with higher correlations between cortical sites containing overlapping representations of visual space. In addition, the correlation pattern reflected the underlying widespread eccentricity organization of visual cortex, in which the highest correlations were observed for cortical sites with iso-eccentricity representations including regions with non-overlapping representations of visual space. This eccentricity-based correlation pattern appears to be part of an intrinsic functional architecture that supports the integration of information across functionally specialized visual areas. PMID:25695154

  3. Visual Iconic Patterns of Instant Messaging: Steps Towards Understanding Visual Conversations

    NASA Astrophysics Data System (ADS)

    Bays, Hillary

    An Instant Messaging (IM) conversation is a dynamic communication register made up of text, images, animation and sound played out on a screen with potentially several parallel conversations and activities all within a physical environment. This article first examines how best to capture this unique gestalt using in situ recording techniques (video, screen capture, XML logs) which highlight the micro-phenomenal level of the exchange and the macro-social level of the interaction. Of particular interest are smileys, first as cultural artifacts in CMC in general, then as linguistic markers. A brief taxonomy of these markers is proposed in an attempt to clarify their frequency and patterns of use. Then, focus is placed on their importance as perceptual cues which facilitate communication, while also serving as emotive and emphatic functional markers. We try to demonstrate that the use of smileys and animation is not arbitrary but an organized interactional and structured practice. Finally, we discuss how the study of visual markers in IM could inform the study of other visual conversation codes, such as sign languages, which also have co-produced, physical behavior, suggesting the possibility of a visual phonology.

  4. Visual search for features and conjunctions following declines in the useful field of view

    PubMed Central

    Cosman, Joshua D.; Lees, Monica N.; Lee, John D.; Rizzo, Matthew; Vecera, Shaun P.

    2013-01-01

    Background/Study Context Typical measures for assessing the useful field of view (UFOV) involve many components of attention. The objective of the current experiment was to examine differences in visual search efficiency for older individuals with and without UFOV impairment. Methods The authors used a computerized screening instrument to assess the useful field of view and to characterize participants as having an impaired or normal UFOV. Participants also performed two visual search tasks, a feature search (e.g., search for a green target among red distractors) or a conjunction search (e.g., a green target with a gap on its left or right side among red distractors with gaps on the left or right and green distractors with gaps on the top or bottom). Results Visual search performance did not differ between UFOV impaired and unimpaired individuals when searching for a basic feature. However, search efficiency was lower for impaired individuals than unimpaired individuals when searching for a conjunction of features. Conclusion The results suggest that UFOV decline in normal aging is associated with conjunction search. This finding suggests that the underlying cause of UFOV decline may arise from an overall decline in attentional efficiency. Because the useful field of view is a reliable predictor of driving safety, the results suggest that decline in the everyday visual behavior of older adults might arise from attentional declines. PMID:22830667

  5. Development of Pattern Classification: Auditory-Visual Equivalence in the Use of Prototypes

    ERIC Educational Resources Information Center

    Williams, Tannis MacBeth; Aiken, Leona S.

    1977-01-01

    Development of the relation between skills of visual and auditory pattern classification was studied at the second grade, sixth grade, and adult age levels using visual and auditory representations of the same abstract information. Results showed evidence of common processing of pattern class structure for the modalities, patterns, prototypes, and…

  6. Threat modulation of visual search efficiency in PTSD: A comparison of distinct stimulus categories.

    PubMed

    Olatunji, Bunmi O; Armstrong, Thomas; Bilsky, Sarah A; Zhao, Mimi

    2015-10-30

    Although an attentional bias for threat has been implicated in posttraumatic stress disorder (PTSD), the cues that best facilitate this bias are unclear. Some studies utilize images and others utilize facial expressions that communicate threat. However, the comparability of these two types of stimuli in PTSD is unclear. The present study contrasted the effects of images and expressions with the same valence on visual search among veterans with PTSD and controls. Overall, PTSD patients had slower visual search speed than controls. Images caused greater disruption in visual search than expressions, and emotional content modulated this effect with larger differences between images and expressions arising for more negatively valenced stimuli. However, this effect was not observed with the maximum number of items in the search array. Differences in visual search speed by images and expressions significantly varied between PTSD patients and controls for only anger and at the moderate level of task difficulty. Specifically, visual search speed did not significantly differ between PTSD patients and controls when exposed to angry expressions. However, PTSD patients displayed significantly slower visual search than controls when exposed to anger images. The implications of these findings for better understanding emotion modulated attention in PTSD are discussed. PMID:26254798

  7. Plans, Patterns, and Move Categories Guiding a Highly Selective Search

    NASA Astrophysics Data System (ADS)

    Trippen, Gerhard

    In this paper we present our ideas for an Arimaa-playing program (also called a bot) that uses plans and pattern matching to guide a highly selective search. We restrict move generation to moves in certain move categories to reduce the number of moves considered by the bot significantly. Arimaa is a modern board game that can be played with a standard Chess set. However, the rules of the game are not at all like those of Chess. Furthermore, Arimaa was designed to be as simple and intuitive as possible for humans, yet challenging for computers. While all established Arimaa bots use alpha-beta search with a variety of pruning techniques and other heuristics ending in an extensive positional leaf node evaluation, our new bot, Rat, starts with a positional evaluation of the current position. Based on features found in the current position - supported by pattern matching using a directed position graph - our bot Rat decides which of a given set of plans to follow. The plan then dictates what types of moves can be chosen. This is another major difference from bots that generate "all" possible moves for a particular position. Rat is only allowed to generate moves that belong to certain categories. Leaf nodes are evaluated only by a straightforward material evaluation to help avoid moves that lose material. This highly selective search looks, on average, at only 5 moves out of 5,000 to over 40,000 possible moves in a middle game position.

  8. Computer vision enhances mobile eye-tracking to expose expert cognition in natural-scene visual-search tasks

    NASA Astrophysics Data System (ADS)

    Keane, Tommy P.; Cahill, Nathan D.; Tarduno, John A.; Jacobs, Robert A.; Pelz, Jeff B.

    2014-02-01

    Mobile eye-tracking provides a rare opportunity to record and elucidate cognition in action. In our research, we are searching for patterns in, and distinctions between, the visual-search performance of experts and novices in the geo-sciences. Traveling to regions resulting from various geological processes as part of an introductory field studies course in geology, we record the prima facie gaze patterns of experts and novices when they are asked to determine the modes of geological activity that have formed the scene-view presented to them. Recording eye video and scene video in natural settings generates complex imagery that requires advanced applications of computer vision research to generate registrations and mappings between the views of separate observers. By developing such mappings, we could then place many observers into a single mathematical space where we can spatio-temporally analyze inter- and intra-subject fixations, saccades, and head motions. While working towards perfecting these mappings, we developed an updated experiment setup that allowed us to statistically analyze intra-subject eye-movement events without the need for a common domain. Through such analyses we are finding statistical differences between novices and experts in these visual-search tasks. In the course of this research we have developed a unified, open-source, software framework for processing, visualization, and interaction of mobile eye-tracking and high-resolution panoramic imagery.

  9. Phonological Interference in Visual Search: Object Names are Automatically Activated in Non-Linguistic Tasks.

    PubMed

    Walenchok, Stephen; Hout, Michael; Goldinger, Stephen

    2015-09-01

    During visual search, it is well known that items sharing visual similarity with the target create interference (e.g., searching for a baseball among softballs vs. a baseball among bats). Although such a task is inherently visual, might linguistic similarity between target and background items' names also create interference? We conducted several experiments in which people searched for either one or three potential targets, among a background of distractors that either shared a phonological overlap with the target(s) (e.g., "beast" and "beanstalk") or had no overlap (e.g., "beast" and "glasses"). Experiment 1 involved standard oculomotor search, Experiment 2 presented a serial search task in which participants manually rejected distractors (or confirmed the target), and Experiment 3 again presented oculomotor search, while also tracking eye movements. We varied whether targets were initially specified by visual icons or verbally as names. We predicted that when searching for a single item, people could easily maintain a visual representation of the target in memory, resulting in minimal activation of linguistic information. When searching for multiple items, however, visual memory demands are high. In order to minimize these demands, people might use less taxing verbal codes as a memory aid during search (i.e., rehearsing target names). If so, these verbal codes may increase the potential for linguistic interference when target and distractor names share phonological overlap. All three experiments revealed effects of phonological interference, but only under high target load, and primarily when targets were specified verbally. In Experiment 4, we tested whether concurrent articulatory suppression during search might minimize verbal memory strategies and eliminate such effects of phonological interference. 
Phonological competition effects remained robust, however, indicating that distractor names are automatically activated under high cognitive demands and that verbal strategies are not the sole source of phonological interference in search. Meeting abstract presented at VSS 2015. PMID:26325753

  10. The effects of task difficulty on visual search strategy in virtual 3D displays

    PubMed Central

    Pomplun, Marc; Garaas, Tyler W.; Carrasco, Marisa

    2013-01-01

    Analyzing the factors that determine our choice of visual search strategy may shed light on visual behavior in everyday situations. Previous results suggest that increasing task difficulty leads to more systematic search paths. Here we analyze observers' eye movements in an “easy” conjunction search task and a “difficult” shape search task to study visual search strategies in stereoscopic search displays with virtual depth induced by binocular disparity. Standard eye-movement variables, such as fixation duration and initial saccade latency, as well as new measures proposed here, such as saccadic step size, relative saccadic selectivity, and x-y target distance, revealed systematic effects on search dynamics in the horizontal-vertical plane throughout the search process. We found that in the “easy” task, observers start with the processing of display items in the display center immediately after stimulus onset and subsequently move their gaze outwards, guided by extrafoveally perceived stimulus color. In contrast, the “difficult” task induced an initial gaze shift to the upper-left display corner, followed by a systematic left-right and top-down search process. The only consistent depth effect was a trend of initial saccades in the easy task with smallest displays to the items closest to the observer. The results demonstrate the utility of eye-movement analysis for understanding search strategies and provide a first step toward studying search strategies in actual 3D scenarios. PMID:23986539
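Two of the measures named above, saccadic step size and x-y target distance, are straightforward to derive from a sequence of fixation coordinates. A small sketch, under the assumption that fixations are recorded as (x, y) screen positions (names are illustrative):

```python
import numpy as np

def saccade_metrics(fixations, target):
    """Compute saccadic step sizes (distance between consecutive fixations)
    and per-fixation absolute x/y distances to the target location."""
    fx = np.asarray(fixations, dtype=float)
    steps = np.linalg.norm(np.diff(fx, axis=0), axis=1)  # one per saccade
    xy_dist = np.abs(fx - np.asarray(target, dtype=float))
    return steps, xy_dist
```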

  11. Locally-adaptive and memetic evolutionary pattern search algorithms.

    PubMed

    Hart, William E

    2003-01-01

    Recent convergence analyses of evolutionary pattern search algorithms (EPSAs) have shown that these methods have a weak stationary point convergence theory for a broad class of unconstrained and linearly constrained problems. This paper describes how the convergence theory for EPSAs can be adapted to allow each individual in a population to have its own mutation step length (similar to the design of evolutionary programming and evolution strategies algorithms). These are called locally-adaptive EPSAs (LA-EPSAs) since each individual's mutation step length is independently adapted in different local neighborhoods. The paper also describes a variety of standard formulations of evolutionary algorithms that can be used for LA-EPSAs. Further, it is shown how this convergence theory can be applied to memetic EPSAs, which use local search to refine points within each iteration. PMID:12804096
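The core idea, each individual carrying and independently adapting its own mutation step length, can be sketched with a simple success-based rule. This is an illustrative rule, not the paper's convergence-guaranteed scheme:

```python
import random

def la_eps_generation(population, f, inc=2.0, dec=0.5):
    """One generation of a locally-adaptive evolutionary step: each
    individual is a pair (point, step); its step length grows after a
    successful mutation and shrinks after a failed one."""
    new_pop = []
    for x, step in population:
        trial = [xi + random.uniform(-step, step) for xi in x]
        if f(trial) < f(x):
            new_pop.append((trial, step * inc))  # success: keep exploring
        else:
            new_pop.append((x, step * dec))      # failure: refine locally
    return new_pop
```

Because an individual only moves when its trial improves the objective, each individual's objective value is non-increasing across generations, which is the elitist property the convergence analyses rely on.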

  12. Generalized pattern search algorithms with adaptive precision function evaluations

    SciTech Connect

    Polak, Elijah; Wetter, Michael

    2003-05-14

    In the literature on generalized pattern search algorithms, convergence to a stationary point of a once continuously differentiable cost function is established under the assumption that the cost function can be evaluated exactly. However, there is a large class of engineering problems where the numerical evaluation of the cost function involves the solution of systems of differential algebraic equations. Since the termination criteria of the numerical solvers often depend on the design parameters, computer code for solving these systems usually defines a numerical approximation to the cost function that is discontinuous with respect to the design parameters. Standard generalized pattern search algorithms have been applied heuristically to such problems, but no convergence properties have been stated. In this paper we extend a class of generalized pattern search algorithms to a form that uses adaptive precision approximations to the cost function. These numerical approximations need not define a continuous function. Our algorithms can be used for solving linearly constrained problems with cost functions that are at least locally Lipschitz continuous. Assuming that the cost function is smooth, we prove that our algorithms converge to a stationary point. Under the weaker assumption that the cost function is only locally Lipschitz continuous, we show that our algorithms converge to points at which the Clarke generalized directional derivatives are nonnegative in predefined directions. An important feature of our adaptive precision scheme is the use of coarse approximations in the early iterations, with the approximation precision controlled by a test. Such an approach leads to substantial time savings in minimizing computationally expensive functions.

  13. Dynamic Modulation of Local Population Activity by Rhythm Phase in Human Occipital Cortex During a Visual Search Task

    PubMed Central

    Miller, Kai J.; Hermes, Dora; Honey, Christopher J.; Sharma, Mohit; Rao, Rajesh P. N.; den Nijs, Marcel; Fetz, Eberhard E.; Sejnowski, Terrence J.; Hebb, Adam O.; Ojemann, Jeffrey G.; Makeig, Scott; Leuthardt, Eric C.

    2010-01-01

    Brain rhythms are more than just passive phenomena in visual cortex. For the first time, we show that the physiology underlying brain rhythms actively suppresses and releases cortical areas on a second-to-second basis during visual processing. Furthermore, their influence is specific at the scale of individual gyri. We quantified the interaction between broadband spectral change and brain rhythms on a second-to-second basis in electrocorticographic (ECoG) measurement of brain surface potentials in five human subjects during a visual search task. Comparison of visual search epochs with a blank screen baseline revealed changes in the raw potential, the amplitude of rhythmic activity, and in the decoupled broadband spectral amplitude. We present new methods to characterize the intensity and preferred phase of coupling between broadband power and band-limited rhythms, and to estimate the magnitude of rhythm-to-broadband modulation on a trial-by-trial basis. These tools revealed numerous coupling motifs between the phase of low-frequency (δ, θ, α, β, and γ band) rhythms and the amplitude of broadband spectral change. In the α and β ranges, the coupling of phase to broadband change is dynamic during visual processing, decreasing in some occipital areas and increasing in others, in a gyrally specific pattern. Finally, we demonstrate that the rhythms interact with one another across frequency ranges, and across cortical sites. PMID:21119778
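
    The intensity and preferred phase of such coupling can be estimated with a mean-vector measure of this general kind (a Canolty-style sketch, not the authors' exact method; the synthetic 10 Hz example is illustrative):

```python
import numpy as np

def coupling(phase_sig, amp_sig):
    """Estimate phase-amplitude coupling: the intensity and preferred
    phase of one signal's amplitude envelope relative to another
    signal's phase, via an amplitude-weighted mean phase vector."""
    def analytic(x):                           # FFT-based Hilbert transform
        n = len(x)
        X = np.fft.fft(x)
        h = np.zeros(n)
        h[0] = 1
        h[1:(n + 1) // 2] = 2
        if n % 2 == 0:
            h[n // 2] = 1
        return np.fft.ifft(X * h)
    phase = np.angle(analytic(phase_sig))
    amp = np.abs(analytic(amp_sig))
    z = np.mean(amp * np.exp(1j * phase))      # amplitude-weighted phase vector
    return np.abs(z), np.angle(z)              # coupling intensity, preferred phase

# Synthetic example: broadband amplitude locked to the peak of a 10 Hz rhythm.
t = np.arange(0, 2, 1 / 1000.0)
theta = 2 * np.pi * 10 * t
rng = np.random.default_rng(0)
broadband = (1 + np.cos(theta)) * rng.normal(size=t.size)
strength, pref = coupling(np.cos(theta), broadband)
```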

  14. Clear distinction between preattentive and attentive process in schizophrenia by visual search performance.

    PubMed

    Tanaka, Goro; Mori, Shuji; Inadomi, Hiroyuki; Hamada, Yoshito; Ohta, Yasuyuki; Ozawa, Hiroki

    2007-01-15

    Visual information-processing deficits were investigated in patients with schizophrenia using visual search tasks. Subjects comprised 20 patients with schizophrenia and 20 normal subjects. Visual search tasks were modified from those used previously to reveal more distinct differences between feature and conjunction search tasks. The presentation area of items in the present study was more than double the area used in our previous study [Mori, S., Tanaka, G., Ayaka, Y., Michitsuji, S., Niwa, H., Uemura, M., Ohta, Y., 1996. Preattentive and focal attentional processes in schizophrenia: a visual search study. Schizophrenia Research 22, 69-76], and items were distributed over the area randomly in each trial to produce a certain range of locational jitter for each item across trials that prevented a matrix-like presentation of items at fixed positions [Mori, S., Tanaka, G., Ayaka, Y., Michitsuji, S., Niwa, H., Uemura, M., Ohta, Y., 1996. Preattentive and focal attentional processes in schizophrenia: a visual search study. Schizophrenia Research 22, 69-76]. The target was a red square, and distractors were red circles in the feature search task and red circles and green squares in the conjunction search task. Slopes and intercepts of a linear function relating reaction times to set size were computed. In the feature search task, slopes for both groups were almost zero. In the conjunction search task, significant differences in slopes were seen between the two groups irrespective of target presence or absence. Moreover, the slopes were approximately twice as steep during target absence as during target presence. These results indicate more definitively than the results of our previous study [Mori, S., Tanaka, G., Ayaka, Y., Michitsuji, S., Niwa, H., Uemura, M., Ohta, Y., 1996. Preattentive and focal attentional processes in schizophrenia: a visual search study. Schizophrenia Research 22, 69-76] that patients with schizophrenia have deficits in focal attentional processing, although their preattentive processing functions at a normal level. PMID:17123633
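
    The slope/intercept analysis of reaction time against set size reduces to a linear fit; a sketch with made-up reaction times, chosen to mimic a flat feature search and a steep conjunction search (all values are hypothetical):

```python
import numpy as np

# Hypothetical reaction times (ms) against display set size.
set_sizes = np.array([4, 8, 16, 32])
rt_feature = np.array([452, 449, 455, 451])       # flat: preattentive "pop-out"
rt_conjunction = np.array([510, 585, 732, 1020])  # rises with set size: serial search

slope_f, icept_f = np.polyfit(set_sizes, rt_feature, 1)
slope_c, icept_c = np.polyfit(set_sizes, rt_conjunction, 1)
```

A near-zero slope indicates parallel (preattentive) search; a slope that grows with set size, and roughly doubles on target-absent trials, indicates serial attentive search.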

  15. Visual search is guided to categorically-defined targets.

    PubMed

    Yang, Hyejin; Zelinsky, Gregory J

    2009-07-01

    To determine whether categorical search is guided, we had subjects search for teddy bear targets either with a target preview (specific condition) or without (categorical condition). Distractors were random realistic objects. Although subjects searched longer and made more eye movements in the categorical condition, targets were fixated far sooner than was expected by chance. By varying target repetition we also determined that this categorical guidance was not due to guidance from specific previously viewed targets. We conclude that search is guided to categorically-defined targets, and that this guidance uses a categorical model composed of features common to the target class. PMID:19500615

  16. Long-Term Memory Search across the Visual Brain

    PubMed Central

    Fedurco, Milan

    2012-01-01

    Signal transmission from the human retina to visual cortex and connectivity of visual brain areas are relatively well understood. How specific visual perceptions transform into corresponding long-term memories remains unknown. Here, I will review recent Blood Oxygenation Level-Dependent functional Magnetic Resonance Imaging (BOLD fMRI) in humans together with molecular biology studies (animal models) aiming to understand how the retinal image gets transformed into so-called visual (retinotopic) maps. The broken object paradigm has been chosen in order to illustrate the complexity of multisensory perception of simple objects subject to visual (rather than semantic) type of memory encoding. The author explores how amygdala projections to the visual cortex affect the memory formation and proposes the choice of experimental techniques needed to explain our massive visual memory capacity. Maintenance of the visual long-term memories is suggested to require recycling of GluR2-containing α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid receptors (AMPAR) and β2-adrenoreceptors at the postsynaptic membrane, which critically depends on the catalytic activity of the N-ethylmaleimide-sensitive factor (NSF) and protein kinase PKMζ. PMID:22900206

  17. The Role of Target-Distractor Relationships in Guiding Attention and the Eyes in Visual Search

    ERIC Educational Resources Information Center

    Becker, Stefanie I.

    2010-01-01

    Current models of visual search assume that visual attention can be guided by tuning attention toward specific feature values (e.g., particular size, color) or by inhibiting the features of the irrelevant nontargets. The present study demonstrates that attention and eye movements can also be guided by a relational specification of how the target…

  18. Detection of Emotional Faces: Salient Physical Features Guide Effective Visual Search

    ERIC Educational Resources Information Center

    Calvo, Manuel G.; Nummenmaa, Lauri

    2008-01-01

    In this study, the authors investigated how salient visual features capture attention and facilitate detection of emotional facial expressions. In a visual search task, a target emotional face (happy, disgusted, fearful, angry, sad, or surprised) was presented in an array of neutral faces. Faster detection of happy and, to a lesser extent,…

  19. The Effects of Presentation Method and Information Density on Visual Search Ability and Working Memory Load

    ERIC Educational Resources Information Center

    Chang, Ting-Wen; Kinshuk; Chen, Nian-Shing; Yu, Pao-Ta

    2012-01-01

    This study investigates the effects of successive and simultaneous information presentation methods on learner's visual search ability and working memory load for different information densities. Since the processing of information in the brain depends on the capacity of visual short-term memory (VSTM), the limited information processing capacity…

  1. Hand Movement Deviations in a Visual Search Task with Cross Modal Cuing

    ERIC Educational Resources Information Center

    Aslan, Asli; Aslan, Hurol

    2007-01-01

    The purpose of this study is to demonstrate the cross-modal effects of an auditory organization on a visual search task and to investigate the influence of the level of detail in instructions describing or hinting at the associations between auditory stimuli and the possible locations of a visual target. In addition to measuring the participants'…

  2. Visual height intolerance and acrophobia: clinical characteristics and comorbidity patterns.

    PubMed

    Kapfhammer, Hans-Peter; Huppert, Doreen; Grill, Eva; Fitz, Werner; Brandt, Thomas

    2015-08-01

    The purpose of this study was to estimate the general population lifetime and point prevalence of visual height intolerance and acrophobia, to define their clinical characteristics, and to determine their anxious and depressive comorbidities. A case-control study was conducted within a German population-based cross-sectional telephone survey. A representative sample of 2,012 individuals aged 14 and above was selected. Defined neurological conditions (migraine, Ménière's disease, motion sickness), symptom pattern, age of first manifestation, precipitating height stimuli, course of illness, psychosocial impairment, and comorbidity patterns (anxiety conditions, depressive disorders according to DSM-IV-TR) for vHI and acrophobia were assessed. The lifetime prevalence of vHI was 28.5% (women 32.4%, men 24.5%). Initial attacks occurred predominantly (36%) in the second decade. A rapid generalization to other height stimuli and a chronic course of illness with at least moderate impairment were observed. A total of 22.5% of individuals with vHI experienced the intensity of panic attacks. The lifetime prevalence of acrophobia was 6.4% (women 8.6%, men 4.1%), and point prevalence was 2.0% (women 2.8%; men 1.1%). vHI and, even more so, acrophobia were associated with high rates of comorbid anxious and depressive conditions. Migraine was both a significant predictor of later acrophobia and a significant consequence of previous acrophobia. vHI affects nearly a third of the general population; in more than 20% of these persons, vHI occasionally develops into panic attacks and in 6.4%, it escalates to acrophobia. Symptoms and degree of social impairment form a continuum of mild to seriously distressing conditions in susceptible subjects. PMID:25262317

  3. Central and Peripheral Vision Loss Differentially Affects Contextual Cueing in Visual Search

    ERIC Educational Resources Information Center

    Geringswald, Franziska; Pollmann, Stefan

    2015-01-01

    Visual search for targets in repeated displays is more efficient than search for the same targets in random distractor layouts. Previous work has shown that this contextual cueing is severely impaired under central vision loss. Here, we investigated whether central vision loss, simulated with gaze-contingent displays, prevents the incidental…

  5. Contextual Cueing in Multiconjunction Visual Search Is Dependent on Color- and Configuration-Based Intertrial Contingencies

    ERIC Educational Resources Information Center

    Geyer, Thomas; Shi, Zhuanghua; Muller, Hermann J.

    2010-01-01

    Three experiments examined memory-based guidance of visual search using a modified version of the contextual-cueing paradigm (Jiang & Chun, 2001). The target, if present, was a conjunction of color and orientation, with target (and distractor) features randomly varying across trials (multiconjunction search). Under these conditions, reaction times…

  6. Cortical Dynamics of Contextually Cued Attentive Visual Learning and Search: Spatial and Object Evidence Accumulation

    ERIC Educational Resources Information Center

    Huang, Tsung-Ren; Grossberg, Stephen

    2010-01-01

    How do humans use target-predictive contextual information to facilitate visual search? How are consistently paired scenic objects and positions learned and used to more efficiently guide search in familiar scenes? For example, humans can learn that a certain combination of objects may define a context for a kitchen and trigger a more efficient…

  7. Serial and Parallel Attentive Visual Searches: Evidence from Cumulative Distribution Functions of Response Times

    ERIC Educational Resources Information Center

    Sung, Kyongje

    2008-01-01

    Participants searched a visual display for a target among distractors. Each of 3 experiments tested a condition proposed to require attention and for which certain models propose a serial search. Serial versus parallel processing was tested by examining effects on response time means and cumulative distribution functions. In 2 conditions, the…

  9. Response Selection Modulates Visual Search within and across Dimensions

    ERIC Educational Resources Information Center

    Mortier, Karen; Theeuwes, Jan; Starreveld, Peter

    2005-01-01

    In feature search tasks, uncertainty about the dimension on which targets differ from the nontargets hampers search performance relative to a situation in which this dimension is known in advance. Typically, these cross-dimensional costs are associated with less efficient guidance of attention to the target. In the present study, participants…

  10. Performance of visual search tasks from various types of contour information.

    PubMed

    Itan, Liron; Yitzhaky, Yitzhak

    2013-03-01

    A recently proposed visual aid for patients with a restricted visual field (tunnel vision) combines a see-through head-mounted display and a simultaneous minified contour view of the wide-field image of the environment. Such a widening of the effective visual field is helpful for tasks, such as visual search, mobility, and orientation. The sufficiency of image contours for performing everyday visual tasks is of major importance for this application, as well as for other applications, and for basic understanding of human vision. This research aims to examine and compare the use of different types of automatically created contours, and contour representations, for practical everyday visual operations using commonly observed images. The visual operations include searching for items such as cutlery, housewares, etc. Considering different recognition levels, identification of an object is distinguished from mere detection (when the object is not necessarily identified). Some nonconventional visual-based contour representations were developed for this purpose. Experiments were performed with normal-vision subjects by superposing contours of the wide field of the scene over a narrow field (see-through) background. From the results, it appears that about 85% success is obtained for searched object identification when the best contour versions are employed. Pilot experiments with video simulations are reported at the end of the paper. PMID:23456115

  11. Mouse Visual Neocortex Supports Multiple Stereotyped Patterns of Microcircuit Activity

    PubMed Central

    Sadovsky, Alexander J.

    2014-01-01

    Spiking correlations between neocortical neurons provide insight into the underlying synaptic connectivity that defines cortical microcircuitry. Here, using two-photon calcium fluorescence imaging, we observed the simultaneous dynamics of hundreds of neurons in slices of mouse primary visual cortex (V1). Consistent with a balance of excitation and inhibition, V1 dynamics were characterized by a linear scaling between firing rate and circuit size. Using lagged firing correlations between neurons, we generated functional wiring diagrams to evaluate the topological features of V1 microcircuitry. We found that circuit connectivity exhibited both cyclic graph motifs, indicating recurrent wiring, and acyclic graph motifs, indicating feedforward wiring. After overlaying the functional wiring diagrams onto the imaged field of view, we found properties consistent with Rentian scaling: wiring diagrams were topologically efficient because they minimized wiring with a modular architecture. Within single imaged fields of view, V1 contained multiple discrete circuits that were overlapping and highly interdigitated but were still distinct from one another. The majority of neurons that were shared between circuits displayed peri-event spiking activity whose timing was specific to the active circuit, whereas spike times for a smaller percentage of neurons were invariant to circuit identity. These data provide evidence that V1 microcircuitry exhibits balanced dynamics, is efficiently arranged in anatomical space, and is capable of supporting a diversity of multineuron spike firing patterns from overlapping sets of neurons. PMID:24899701

  12. Visualization of flow patterning in high-speed centrifugal microfluidics

    NASA Astrophysics Data System (ADS)

    Grumann, Markus; Brenner, Thilo; Beer, Christian; Zengerle, Roland; Ducrée, Jens

    2005-02-01

    This work presents a new experimental setup for image capturing of centrifugally driven flows in disk-based microchannels rotating at high frequencies of up to 150 Hz. To still achieve a micron-scale resolution, smearing effects are minimized by a microscope-mounted CCD camera featuring an extremely short minimum exposure time of only 100 ns. The image capture is controlled by a real-time PC board which sends delayed trigger signals to the CCD camera and to a stroboscopic flash upon receiving the zero-crossing signal of the rotating disk. The common delay of the trigger signals is electronically adjusted according to the spinning frequency. This appreciably improves the stability of the captured image sequences. Another computer is equipped with a fast framegrabber PC board to directly acquire the image data from the CCD camera. A maximum spatial resolution ranging between 4.5 μm at rest and 10 μm at a rotation frequency of 150 Hz is achieved. Even at high frequencies of rotation, image smearing does not significantly impair the contrast. Using this experimental setup, the Coriolis-induced patterning of two liquid flows in 300-μm-wide channels rotating at 100 Hz is visualized at a spatial resolution better than 10 μm.
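
    The frequency-dependent trigger delay amounts to simple arithmetic: the flash must fire when the structure of interest has rotated into the camera's field of view. A sketch (the function name and the fixed-angle setup are illustrative assumptions):

```python
def trigger_delay(angle_deg, freq_hz):
    """Delay after the disk's zero-crossing signal so that the flash
    fires when a channel at angle_deg is under the camera."""
    return (angle_deg / 360.0) / freq_hz  # fraction of one revolution, in seconds

# At 100 Hz rotation, a channel 90 degrees past the marker needs a 2.5 ms delay.
delay = trigger_delay(90.0, 100.0)
```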

  13. Common Visual Pattern Discovery via Nonlinear Mean Shift Clustering.

    PubMed

    Wang, Linbo; Tang, Dong; Guo, Yanwen; Do, Minh N

    2015-12-01

    Discovering common visual patterns (CVPs) from two images is a challenging task due to the geometric and photometric deformations as well as noise and clutter. The problem generally boils down to recovering correspondences of local invariant features, and is conventionally addressed by graph-based quadratic optimization approaches, which often suffer from high computational cost. In this paper, we propose an efficient approach by viewing the problem from a novel perspective. In particular, we consider each CVP as a common object in two images with a group of coherently deformed local regions. A geometric space with matrix Lie group structure is constructed by stacking up transformations estimated from initially appearance-matched local interest region pairs. This is followed by a mean shift clustering stage to group together those close transformations in the space. Joining regions associated with transformations of the same group together within each input image forms two large regions sharing similar geometric configuration, which naturally leads to a CVP. To account for the non-Euclidean nature of the matrix Lie group, mean shift vectors are derived in the corresponding Lie algebra vector space with a newly provided effective distance measure. Extensive experiments on single and multiple common object discovery tasks as well as near-duplicate image retrieval verify the robustness and efficiency of the proposed approach. PMID:26415176
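
    The clustering stage can be illustrated with ordinary Euclidean mean shift (a simplification: the paper derives mean shift vectors in the Lie algebra of the transformation group, while this sketch works in plain vector space):

```python
import numpy as np

def mean_shift(points, bandwidth=1.0, iters=50):
    """Plain Euclidean mean shift with a Gaussian kernel: each mode is
    repeatedly shifted toward the kernel-weighted mean of all points,
    so modes from the same cluster converge to the same location."""
    modes = points.copy()
    for _ in range(iters):
        for i, m in enumerate(modes):
            w = np.exp(-np.sum((points - m) ** 2, axis=1) / (2 * bandwidth ** 2))
            modes[i] = w @ points / w.sum()   # shift toward the weighted mean
    return modes

rng = np.random.default_rng(1)
# Two groups of "transformations" stacked as points in a feature space.
pts = np.vstack([rng.normal(0, 0.2, (20, 2)), rng.normal(5, 0.2, (20, 2))])
modes = mean_shift(pts, bandwidth=0.8)
```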

  14. Use of an augmented-vision device for visual search by patients with tunnel vision

    PubMed Central

    Luo, Gang; Peli, Eli

    2006-01-01

    Purpose To study the effect of an augmented-vision device that superimposes minified contour images over natural vision on visual search performance of patients with tunnel vision. Methods Twelve subjects with tunnel vision searched for targets presented outside their visual fields (VF) on a blank background under three cue conditions (with contour cues provided by the device, with auditory cues, and without cues). Three subjects (VF: 8° to 11° wide) carried out the search over a 90°×74° area, and nine subjects (VF: 7° to 16° wide) over a 66°×52° area. Eye and head movements were recorded for performance analyses that included directness of search path, search time, and gaze speed. Results Directness of the search path was greatly and significantly improved when the contour or auditory cues were provided in both the larger and smaller area search. When using the device, a significant reduction in search time (28% to 74%) was demonstrated by all 3 subjects in the larger area search and by subjects with VF wider than 10° in the smaller area search (average 22%). Directness and the gaze speed accounted for 90% of the variability of search time. Conclusions While performance improvement with the device for the larger search area was obvious, whether it was helpful for the smaller search area depended on VF and gaze speed. As improvement in directness was demonstrated, increased gaze speed, which could result from further training and adaptation to the device, might enable patients with small VFs to benefit from the device for visual search tasks. PMID:16936136

  15. How Temporal and Spatial Aspects of Presenting Visualizations Affect Learning about Locomotion Patterns

    ERIC Educational Resources Information Center

    Imhof, Birgit; Scheiter, Katharina; Edelmann, Jorg; Gerjets, Peter

    2012-01-01

    Two studies investigated the effectiveness of dynamic and static visualizations for a perceptual learning task (locomotion pattern classification). In Study 1, seventy-five students viewed either dynamic, static-sequential, or static-simultaneous visualizations. For tasks of intermediate difficulty, dynamic visualizations led to better…

  17. Parametric Modeling of Visual Search Efficiency in Real Scenes

    PubMed Central

    Zhang, Xing; Li, Qingquan; Zou, Qin; Fang, Zhixiang; Zhou, Baoding

    2015-01-01

    How should the efficiency of searching for real objects in real scenes be measured? Traditionally, when searching for artificial targets, e.g., letters or rectangles, among distractors, efficiency is measured by a reaction time (RT) × Set Size function. However, it is not clear whether the set size of real scenes is as effective a parameter for measuring search efficiency as the set size of artificial scenes. The present study investigated search efficiency in real scenes based on a combination of low-level features, e.g., visible size and target-flanker separation factors, and high-level features, e.g., category effect and target template. Visible size refers to the pixel number of visible parts of an object in a scene, whereas separation is defined as the sum of the flank distances from a target to the nearest distractors. During the experiment, observers searched for targets in various urban scenes, using pictures as the target templates. The results indicated that the effect of the set size in real scenes decreased according to the variances of other factors, e.g., visible size and separation. Increasing visible size and separation factors increased search efficiency. Based on these results, an RT × Visible Size × Separation function was proposed. These results suggest that the proposed function is a practicable predictor of search efficiency in real scenes. PMID:26030908
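
    Fitting such a function is a multiple linear regression of RT on visible size and separation; a sketch on synthetic data (coefficients, ranges, and units are made up for illustration):

```python
import numpy as np

# Hypothetical search-time data: RT modeled as a linear function of
# visible size and target-flanker separation.
rng = np.random.default_rng(2)
visible_size = rng.uniform(100, 2000, 50)   # visible pixels of the target
separation = rng.uniform(10, 300, 50)       # summed flank distances to distractors
rt = 2000 - 0.5 * visible_size - 3.0 * separation + rng.normal(0, 20, 50)

# Least-squares fit: intercept, visible-size weight, separation weight.
X = np.column_stack([np.ones(50), visible_size, separation])
coef, *_ = np.linalg.lstsq(X, rt, rcond=None)
```

Negative fitted weights correspond to the finding that larger visible size and larger separation both speed up search.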

  19. Attributes of subtle cues for facilitating visual search in augmented reality.

    PubMed

    Lu, Weiquan; Duh, Henry Been-Lirn; Feiner, Steven; Zhao, Qi

    2014-03-01

    Goal-oriented visual search is performed when a person intentionally seeks a target in the visual environment. In augmented reality (AR) environments, visual search can be facilitated by augmenting virtual cues in the person's field of view. Traditional use of explicit AR cues can potentially degrade visual search performance due to the creation of distortions in the scene. An alternative to explicit cueing, known as subtle cueing, has been proposed as a clutter-neutral method to enhance visual search in video-see-through AR. However, the effects of subtle cueing are still not well understood, and more research is required to determine the optimal methods of applying subtle cueing in AR. We performed two experiments to investigate the variables of scene clutter, subtle cue opacity, size, and shape on visual search performance. We introduce a novel method of experimentally manipulating the scene clutter variable in a natural scene while controlling for other variables. The findings provide supporting evidence for the subtlety of the cue, and show that the clutter conditions of the scene can be used both as a global classifier, as well as a local performance measure. PMID:24434221

  20. Visual Search Performance in the Autism Spectrum II: The Radial Frequency Search Task with Additional Segmentation Cues

    ERIC Educational Resources Information Center

    Almeida, Renita A.; Dickinson, J. Edwin; Maybery, Murray T.; Badcock, Johanna C.; Badcock, David R.

    2010-01-01

    The Embedded Figures Test (EFT) requires detecting a shape within a complex background and individuals with autism or high Autism-spectrum Quotient (AQ) scores are faster and more accurate on this task than controls. This research aimed to uncover the visual processes producing this difference. Previously we developed a search task using radial…

  2. Effects of targets embedded within words in a visual search task

    PubMed Central

    Grabbe, Jeremy W.

    2014-01-01

    Visual search performance can be negatively affected when both targets and distracters share a dimension relevant to the task. This study examined whether visual search performance would be influenced by distracters that affect a dimension irrelevant to the task. In Experiment 1 within the letter string of a letter search task, target letters were embedded within a word. Experiment 2 compared targets embedded in words to targets embedded in nonwords. Experiment 3 compared targets embedded in words to a condition in which a word was present in a letter string, but the target letter, although in the letter string, was not embedded within the word. The results showed that visual search performance was negatively affected when a target appeared within a high frequency word. These results suggest that the interaction and effectiveness of distracters is not merely dependent upon common features of the target and distracters, but can be affected by word frequency (a dimension not related to the task demands). PMID:24855497

  3. The Nature and Process of Development in Averaged Visually Evoked Potentials: Discussion on Pattern Structure.

    ERIC Educational Resources Information Center

    Izawa, Shuji; Mizutani, Tohru

    This paper examines the development of visually evoked EEG patterns in retarded and normal subjects. The paper focuses on the averaged visually evoked potentials (AVEP) in the central and occipital regions of the brain in eyes closed and eyes open conditions. Wave pattern, amplitude, and latency are examined. The first section of the paper reviews…

  4. Computational assessment of visual search strategies in volumetric medical images.

    PubMed

    Wen, Gezheng; Aizenman, Avigael; Drew, Trafton; Wolfe, Jeremy M; Haygood, Tamara Miner; Markey, Mia K

    2016-01-01

    When searching through volumetric images [e.g., computed tomography (CT)], radiologists appear to use two different search strategies: "drilling" (restrict eye movements to a small region of the image while quickly scrolling through slices), or "scanning" (search over large areas at a given depth before moving on to the next slice). To computationally identify the type of image information that is used in these two strategies, 23 naïve observers were instructed with either "drilling" or "scanning" when searching for target T's in 20 volumes of faux lung CTs. We computed saliency maps using both classical two-dimensional (2-D) saliency, and a three-dimensional (3-D) dynamic saliency that captures the characteristics of scrolling through slices. Comparing observers' gaze distributions with the saliency maps showed that search strategy alters the type of saliency that attracts fixations. Drillers' fixations aligned better with dynamic saliency and scanners with 2-D saliency. The computed saliency was greater for detected targets than for missed targets. Similar results were observed in data from 19 radiologists who searched five stacks of clinical chest CTs for lung nodules. Dynamic saliency may be superior to the 2-D saliency for detecting targets embedded in volumetric images, and thus "drilling" may be more efficient than "scanning." PMID:26759815
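    The comparison the authors describe, between observers' gaze distributions and computed saliency, can be illustrated with a toy 2-D case. The local-contrast map below is a simplified stand-in for the classical saliency models cited in the abstract, not the study's actual pipeline, and the scene and fixation coordinates are hypothetical:

```python
from statistics import pstdev

def contrast_saliency(image, win=4):
    """Crude 2-D saliency map: local intensity standard deviation,
    a stand-in for the classical saliency models cited in the study."""
    h, w = len(image), len(image[0])
    sal = []
    for y in range(h):
        row = []
        for x in range(w):
            patch = [image[yy][xx]
                     for yy in range(max(0, y - win), min(h, y + win + 1))
                     for xx in range(max(0, x - win), min(w, x + win + 1))]
            row.append(pstdev(patch))
        sal.append(row)
    return sal

def mean_saliency(sal, fixations):
    """Mean saliency sampled at (row, col) fixation coordinates."""
    return sum(sal[y][x] for y, x in fixations) / len(fixations)

# Hypothetical 32x32 scene: uniform background with one bright patch
img = [[0.0] * 32 for _ in range(32)]
for y in range(12, 18):
    for x in range(12, 18):
        img[y][x] = 1.0
sal = contrast_saliency(img)
# Fixations near the patch should land on higher computed saliency
# than fixations on the empty background
print(mean_saliency(sal, [(14, 14)]) > mean_saliency(sal, [(3, 3)]))
```

    In the study's terms, "computed saliency was greater for detected targets than for missed targets" corresponds to comparing these sampled values across fixation groups.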

  5. Performance in a Visual Search Task Uniquely Predicts Reading Abilities in Third-Grade Hong Kong Chinese Children

    ERIC Educational Resources Information Center

    Liu, Duo; Chen, Xi; Chung, Kevin K. H.

    2015-01-01

    This study examined the relation between the performance in a visual search task and reading ability in 92 third-grade Hong Kong Chinese children. The visual search task, which is considered a measure of visual-spatial attention, accounted for unique variance in Chinese character reading after controlling for age, nonverbal intelligence,…

  7. Theta burst stimulation improves overt visual search in spatial neglect independently of attentional load.

    PubMed

    Cazzoli, Dario; Rosenthal, Clive R; Kennard, Christopher; Zito, Giuseppe A; Hopfner, Simone; Müri, René M; Nyffeler, Thomas

    2015-12-01

    Visual neglect is considerably exacerbated by increases in visual attentional load. These detrimental effects of attentional load are hypothesised to be dependent on an interplay between dysfunctional inter-hemispheric inhibitory dynamics and load-related modulation of activity in cortical areas such as the posterior parietal cortex (PPC). Continuous Theta Burst Stimulation (cTBS) over the contralesional PPC reduces neglect severity. It is unknown, however, whether such positive effects also operate in the presence of the detrimental effects of heightened attentional load. Here, we examined the effects of cTBS on neglect severity in overt visual search (i.e., with eye movements), as a function of high and low visual attentional load conditions. Performance was assessed on the basis of target detection rates and eye movements, in a computerised visual search task and in two paper-pencil tasks. cTBS significantly ameliorated target detection performance, independently of attentional load. These ameliorative effects were significantly larger in the high than the low load condition, thereby equating target detection across both conditions. Eye movement analyses revealed that the improvements were mediated by a redeployment of visual fixations to the contralesional visual field. These findings represent a substantive advance, because cTBS led to an unprecedented amelioration of overt search efficiency that was independent of visual attentional load. PMID:26547867

  8. Adaptation of video game UVW mapping to 3D visualization of gene expression patterns

    NASA Astrophysics Data System (ADS)

    Vize, Peter D.; Gerth, Victor E.

    2007-01-01

    Analysis of gene expression patterns within an organism plays a critical role in associating genes with biological processes in both health and disease. During embryonic development the analysis and comparison of different gene expression patterns allows biologists to identify candidate genes that may regulate the formation of normal tissues and organs and to search for genes associated with congenital diseases. No two individual embryos, or organs, are exactly the same shape or size, so comparing spatial gene expression in one embryo to that in another is difficult. We will present our efforts in comparing gene expression data collected using both volumetric and projection approaches. Volumetric data is highly accurate but difficult to process and compare. Projection methods use UV mapping to align texture maps to standardized spatial frameworks. This approach is less accurate but is very rapid and requires very little processing. We have built a database of over 180 3D models depicting gene expression patterns mapped onto the surface of spline based embryo models. Gene expression data in different models can easily be compared to determine common regions of activity. Visualization software, written in Java and OpenGL and optimized for viewing 3D gene expression data, will also be demonstrated.

  9. Markov Models of Search State Patterns in a Hypertext Information Retrieval System.

    ERIC Educational Resources Information Center

    Qiu, Liwen

    1993-01-01

    Describes research that was conducted to determine the search state patterns through which users retrieve information in hypertext systems. Use of the Markov model to describe users' search behavior is discussed, and search patterns of different user groups were studied by comparing transition probability matrices. (Contains 25 references.) (LRW)
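    The transition probability matrices used to compare user groups in such Markov models can be estimated by simple maximum-likelihood counting over observed state sequences. The search states and sequences below are hypothetical placeholders, not the study's data:

```python
def transition_matrix(sequences, states):
    """Estimate a first-order Markov transition probability matrix
    from observed search-state sequences (maximum-likelihood counts)."""
    idx = {s: i for i, s in enumerate(states)}
    counts = [[0] * len(states) for _ in states]
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):          # consecutive state pairs
            counts[idx[a]][idx[b]] += 1
    matrix = []
    for row in counts:
        total = sum(row)
        matrix.append([c / total if total else 0.0 for c in row])
    return matrix

# Hypothetical states: B = browse links, Q = query, E = examine record
seqs = [list("BQEEB"), list("BQQE"), list("QEB")]
P = transition_matrix(seqs, ["B", "Q", "E"])
print(P[0])   # transition probabilities out of state "B"
```

    Comparing two user groups then amounts to comparing the matrices estimated from each group's sequences.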

  10. Sensitive tint visualization of resonance patterns in glass plate

    NASA Astrophysics Data System (ADS)

    Yamamoto, Ken; Izuno, Kana; Aoyanagi, Masafumi

    2012-05-01

    Photoelastic visualization can be used to establish the vibrational modes of solid transparent materials having complicated longitudinal and shear strains. On the other hand, determining the sign of a stress field by photoelastic visualization is difficult. Color visualization of the resonance vibrational modes of a glass plate using stroboscopic photoelasticity with a sensitive tint plate is described. This technique makes it possible to determine the sign of the stress in acoustic fields.

  11. A ground-like surface facilitates visual search in chimpanzees (Pan troglodytes)

    PubMed Central

    Imura, Tomoko; Tomonaga, Masaki

    2013-01-01

    Ground surfaces play an important role in terrestrial species' locomotion and ability to manipulate objects. In humans, ground surfaces have been found to offer significant advantages in distance perception and visual-search tasks (“ground dominance”). The present study used a comparative perspective to investigate the ground-dominance effect in chimpanzees, a species that spends time both on the ground and in trees. During the experiments chimpanzees and humans engaged in a search for a cube on a computer screen; the target cube was darker than other cubes. The search items were arranged on a ground-like or ceiling-like surface, which was defined by texture gradients and shading. The findings indicate that a ground-like, but not a ceiling-like, surface facilitated the search for a difference in luminance among both chimpanzees and humans. Our findings suggest the operation of a ground-dominance effect on visual search in both species. PMID:23917381

  12. Playing shooter and driving videogames improves top-down guidance in visual search.

    PubMed

    Wu, Sijing; Spence, Ian

    2013-05-01

    Playing action videogames is known to improve visual spatial attention and related skills. Here, we showed that playing action videogames also improves classic visual search, as well as the ability to locate targets in a dual search that mimics certain aspects of an action videogame. In Experiment 1A, first-person shooter (FPS) videogame players were faster than nonplayers in both feature search and conjunction search, and in Experiment 1B, they were faster and more accurate in a peripheral search and identification task while simultaneously performing a central search. In Experiment 2, we showed that 10 h of play could improve the performance of nonplayers on each of these tasks. Three different genres of videogames were used for training: two action games and a 3-D puzzle game. Participants who played an action game (either an FPS or a driving game) achieved greater gains on all search tasks than did those who trained using the puzzle game. Feature searches were faster after playing an action videogame, suggesting that players developed a better target template to guide search in a top-down manner. The results of the dual search suggest that, in addition to enhancing the ability to divide attention, playing an action game improves the top-down guidance of attention to possible target locations. The results have practical implications for the development of training tools to improve perceptual and cognitive skills. PMID:23460295

  13. Predicting search time in visually cluttered scenes using the fuzzy logic approach

    NASA Astrophysics Data System (ADS)

    Meitzler, Thomas J.; Sohn, Euijung; Singh, Harpreet; Elgarhi, Abdelakrim; Nam, Deok H.

    2001-09-01

    The mean search time of observers searching for targets in visual scenes with clutter is computed using the fuzzy logic approach (FLA). The FLA is presented as a robust method for the computation of search times and/or probabilities of detection for treated vehicles. The Mamdani/Assilian and Sugeno models have been investigated and are compared. The Search_2 dataset from TNO is used to build and validate the fuzzy logic approach for target detection modeling. The input parameters are local luminance, range, aspect, width, and wavelet edge points; the single output is search time. The Mamdani/Assilian model gave predicted mean search times for data not used in the training set that had a 0.957 correlation to the field search times. The data set is reduced using a clustering method, then modeled using the FLA, and the results are compared to experiment.
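    A Mamdani-style inference step of the kind the abstract describes can be sketched with triangular membership functions, min-max rule aggregation, and centroid defuzzification. The single input (scene clutter), the membership functions, and the rules below are illustrative assumptions, not the paper's fitted multi-input model:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def predict_search_time(clutter):
    """Mamdani-style inference: one input (clutter, 0-1), one output
    (search time, s).  Rules: IF clutter is low/medium/high THEN
    time is short/medium/long.  All sets here are illustrative."""
    low = tri(clutter, -0.5, 0.0, 0.5)
    med = tri(clutter, 0.0, 0.5, 1.0)
    high = tri(clutter, 0.5, 1.0, 1.5)
    num = den = 0.0
    for t in [i * 0.1 for i in range(0, 201)]:      # 0..20 s output grid
        # clip each output set by its rule strength, take the max
        mu = max(min(low, tri(t, 0, 2, 6)),
                 min(med, tri(t, 4, 8, 12)),
                 min(high, tri(t, 10, 15, 20)))
        num += mu * t
        den += mu
    return num / den if den else 0.0                 # centroid defuzzification

# More clutter should yield a longer predicted search time
print(predict_search_time(0.2) < predict_search_time(0.9))
```

    A Sugeno variant would replace the output sets with crisp (often linear) consequents and skip the centroid step.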

  14. Compensatory strategies following visual search training in patients with homonymous hemianopia: an eye movement study

    PubMed Central

    Pambakian, Alidz L. M.; Kennard, Christopher

    2010-01-01

    A total of 29 patients with homonymous visual field defects without neglect practised visual search in 20 daily sessions, over a period of 4 weeks. Patients searched for a single randomly positioned target amongst distractors displayed for 3 s. After training patients demonstrated significantly shorter reaction times for search stimuli (Pambakian et al. in J Neurol Neurosurg Psychiatry 75:1443–1448, 2004). In this study, patients achieved improved search efficiency after training by altering their oculomotor behaviour in the following ways: (1) patients directed a higher proportion of fixations into the hemispace containing the target, (2) patients were quicker to saccade into the hemifield containing the target if the initial saccade had been made into the opposite hemifield, (3) patients made fewer transitions from one hemifield to another before locating the target, (4) patients made a larger initial saccade, although the direction of the initial saccade did not change as a result of training, (5) patients acquired a larger visual lobe in their blind hemifield after training. Patients also required fewer saccades to locate the target after training reflecting improved search efficiency. All these changes were confined to the training period and maintained at follow-up. Taken together these results suggest that visual training facilitates the development of specific compensatory eye movement strategies in patients with homonymous visual field defects. PMID:20556413

  15. Supplementary eye field during visual search: Salience, cognitive control, and performance monitoring

    PubMed Central

    Purcell, Braden A.; Weigand, Pauline K.; Schall, Jeffrey D.

    2012-01-01

    How supplementary eye field (SEF) contributes to visual search is unknown. Inputs from cortical and subcortical structures known to represent visual salience suggest that SEF may serve as an additional node in this network. This hypothesis was tested by recording action potentials and local field potentials (LFP) in two monkeys performing an efficient pop-out visual search task. Target selection modulation, tuning width, and response magnitude of spikes and LFP in SEF were compared with those in frontal eye field. Surprisingly, only ~2% of SEF neurons and ~8% of SEF LFP sites selected the location of the search target. The absence of salience in SEF may be due to an absence of appropriate visual afferents, which suggests that these inputs are a necessary anatomical feature of areas representing salience. We also tested whether SEF contributes to overcoming the automatic tendency to respond to a primed color when the target identity switches during priming of pop-out. Very few SEF neurons or LFP sites modulated in association with performance deficits following target switches. However, a subset of SEF neurons and LFP exhibited strong modulation following erroneous saccades to a distractor. Altogether, these results suggest that SEF plays a limited role in controlling ongoing visual search behavior, but may play a larger role in monitoring search performance. PMID:22836261

  16. The effects of distractors and spatial precues on covert visual search in macaque.

    PubMed

    Lee, Byeong-Taek; McPeek, Robert M

    2013-01-14

    Covert visual search has been studied extensively in humans, and has been used as a tool for understanding visual attention and cueing effects. In contrast, much less is known about covert search performance in monkeys, despite the fact that much of our understanding of the neural mechanisms of attention is based on these animals. In this study, we characterize the covert visual search performance of monkeys by training them to discriminate the orientation of a briefly-presented, peripheral Landolt-C target embedded within an array of distractor stimuli while maintaining fixation. We found that target discrimination performance declined steeply as the number of distractors increased when the target and distractors were of the same color, but not when the target was an odd color (color pop-out). Performance was also strongly affected by peripheral spatial precues presented before target onset, with better performance seen when the precue coincided with the target location (valid precue) than when it did not (invalid precue). Moreover, the effectiveness of valid precues was greatest when the delay between precue and target was short (~80-100 ms), and gradually declined with longer delays, consistent with a transient component to the cueing effect. Discrimination performance was also significantly affected by prior knowledge of the target location in the absence of explicit visual precues. These results demonstrate that covert visual search performance in macaques is very similar to that of humans, indicating that the macaque provides an appropriate model for understanding the neural mechanisms of covert search. PMID:23099048

  18. Acute exercise and aerobic fitness influence selective attention during visual search.

    PubMed

    Bullock, Tom; Giesbrecht, Barry

    2014-01-01

    Successful goal directed behavior relies on a human attention system that is flexible and able to adapt to different conditions of physiological stress. However, the effects of physical activity on multiple aspects of selective attention and whether these effects are mediated by aerobic capacity, remains unclear. The aim of the present study was to investigate the effects of a prolonged bout of physical activity on visual search performance and perceptual distraction. Two groups of participants completed a hybrid visual search flanker/response competition task in an initial baseline session and then at 17-min intervals over a 2 h 16 min test period. Participants assigned to the exercise group engaged in steady-state aerobic exercise between completing blocks of the visual task, whereas participants assigned to the control group rested in between blocks. The key result was a correlation between individual differences in aerobic capacity and visual search performance, such that those individuals that were more fit performed the search task more quickly. Critically, this relationship only emerged in the exercise group after the physical activity had begun. The relationship was not present in either group at baseline and never emerged in the control group during the test period, suggesting that under these task demands, aerobic capacity may be an important determinant of visual search performance under physical stress. The results enhance current understanding about the relationship between exercise and cognition, and also inform current models of selective attention. PMID:25426094

  19. On the application of evolutionary pattern search algorithms

    SciTech Connect

    Hart, W.E.

    1997-02-01

    This paper presents an experimental evaluation of evolutionary pattern search algorithms (EPSAs), which indicates that EPSAs can achieve performance similar to that of evolutionary algorithms (EAs) on challenging global optimization problems. Additionally, we describe a stopping rule for EPSAs that reliably terminates them near a stationary point of the objective function. The ability of EPSAs to reliably terminate near stationary points offers a practical advantage over other EAs, which are typically stopped by heuristic stopping rules or simple bounds on the number of iterations. Our experiments also illustrate how the rate of the crossover operator can influence the tradeoff between the number of iterations before termination and the quality of the solution found by an EPSA.
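    The pattern-search core that EPSAs build on can be sketched as a plain compass search: poll the coordinate directions, move on improvement, halve the step otherwise. The evolutionary operators the paper combines with this polling scheme are omitted here, and the test function is a standard sphere, not one of the paper's benchmarks:

```python
def pattern_search(f, x, step=1.0, tol=1e-6, max_iter=1000):
    """Compass (pattern) search: poll the 2*n coordinate directions,
    accept any improving trial point, and halve the step size when a
    full poll fails to improve.  Terminates when step < tol."""
    n = len(x)
    fx = f(x)
    for _ in range(max_iter):
        improved = False
        for i in range(n):
            for sign in (+1, -1):
                trial = list(x)
                trial[i] += sign * step
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
        if not improved:
            step /= 2.0                 # contract the pattern
            if step < tol:
                break
    return x, fx

def sphere(v):
    return sum(c * c for c in v)

x_best, f_best = pattern_search(sphere, [3.0, -2.0])
print(x_best, f_best)
```

    Stopping when the step size falls below a tolerance is what gives pattern searches their stationarity guarantees, the property the paper's stopping rule exploits.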

  20. Age-related interference from irrelevant distracters in visual feature search among heterogeneous distracters.

    PubMed

    Merrill, Edward C; Conners, Frances A

    2013-08-01

    We evaluated age-related variations in the influence of heterogeneous distracters during visual search for a feature target. Participants in three age groups (6-year-old children, 9-year-old children, and young adults) completed three conditions of search. In a singleton search condition, participants searched for a circle among squares of the same color. In two feature mode search conditions, participants searched for a gray circle or a black circle among gray and black squares. Singleton search was performed at the same level of efficiency for all age groups. In contrast, both feature mode search conditions yielded age-related performance differences. Younger children exhibited a steeper slope than young adults when searching for a gray or black circle. Older children exhibited a steeper slope than young adults when searching for a gray circle but not when searching for a black circle. We concluded that these differences revealed age-related improvements in the relative abilities of adults and children to execute attentional control processes during visual search. In particular, it appears that children found it more difficult to maintain the goal of searching for a circle target than adults and were distracted by the presence of the irrelevant feature dimension of color. PMID:23708126
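    The search slopes compared across age groups above reduce to an ordinary least-squares fit of reaction time against display set size; a steeper slope means less efficient search. The reaction-time data below are hypothetical, chosen only to contrast an efficient (near pop-out) search with an inefficient one:

```python
def search_slope(set_sizes, rts):
    """Least-squares slope of reaction time (ms) against display set
    size: the standard index of search efficiency (ms/item)."""
    n = len(set_sizes)
    mx = sum(set_sizes) / n
    my = sum(rts) / n
    num = sum((x - mx) * (y - my) for x, y in zip(set_sizes, rts))
    den = sum((x - mx) ** 2 for x in set_sizes)
    return num / den

# Hypothetical data: an efficient vs an inefficient search
print(search_slope([4, 8, 16], [500, 510, 530]))   # ~2.5 ms/item
print(search_slope([4, 8, 16], [520, 640, 880]))   # ~30 ms/item
```

    Slopes near zero indicate parallel (pop-out) search; slopes of tens of ms/item indicate serial, attention-demanding search.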

  1. Visualizing Document Classification: A Search Aid for the Digital Library.

    ERIC Educational Resources Information Center

    Lieu, Yew-Huey; Dantzig, Paul; Sachs, Martin; Corey, James T.; Hinnebusch, Mark T.; Damashek, Marc; Cohen, Jonathan

    2000-01-01

    Discusses access to digital libraries on the World Wide Web via Web browsers and describes the design of a language-independent document classification system to help users of the Florida Center for Library Automation analyze search query results. Highlights include similarity scores, clustering, graphical representation of document similarity,…

  3. Mapping the Color Space of Saccadic Selectivity in Visual Search

    ERIC Educational Resources Information Center

    Xu, Yun; Higgins, Emily C.; Xiao, Mei; Pomplun, Marc

    2007-01-01

    Color coding is used to guide attention in computer displays for such critical tasks as baggage screening or air traffic control. It has been shown that a display object attracts more attention if its color is more similar to the color for which one is searching. However, what does "similar" precisely mean? Can we predict the amount of attention…

  4. Learning from demonstrations: the role of visual search during observational learning from video and point-light models.

    PubMed

    Horn, Robert R; Williams, A Mark; Scott, Mark A

    2002-03-01

    In this study, we examined the visual search strategies used during observation of video and point-light display models. We also assessed the relative effectiveness of video and point-light models in facilitating the learning of task outcomes and movement patterns. Twenty-one female novice soccer players were divided equally into video, point-light display and no-model (control) groups. Participants chipped a soccer ball onto a target area from which radial and variable error scores were taken. Kinematic data were also recorded using an opto-electrical system. Both a pre- and post-test were performed, interspersed with three periods of acquisition and observation of the model. A retention test was completed 2 days after the post-test. There was a significant main effect for test period for outcome accuracy and variability, but observation of a model did not facilitate outcome-based learning. Participants observing the models acquired a global movement pattern that was closer to that of the model than the controls, although they did not acquire the local relations in the movement pattern, evidenced by joint range of motion and angle-angle plots. There were no significant differences in learning between the point-light display and video groups. The point-light display model group used a more selective visual search pattern than the video model group, while both groups became more selective with successive trials and observation periods. The results are discussed in the context of Newell's hierarchy of coordination and control and Scully and Newell's visual perception perspective. PMID:11999480

  5. Animal visual systems and the evolution of color patterns: sensory processing illuminates signal evolution.

    PubMed

    Endler, John A; Westcott, David A; Madden, Joah R; Robson, Tim

    2005-08-01

    Animal color pattern phenotypes evolve rapidly. What influences their evolution? Because color patterns are used in communication, selection for signal efficacy, relative to the intended receiver's visual system, may explain and predict the direction of evolution. We investigated this in bowerbirds, whose color patterns consist of plumage, bower structure, and ornaments and whose visual displays are presented under predictable visual conditions. We used data on avian vision, environmental conditions, color pattern properties, and an estimate of the bowerbird phylogeny to test hypotheses about evolutionary effects of visual processing. Different components of the color pattern evolve differently. Plumage sexual dimorphism increased and then decreased, while overall (plumage plus bower) visual contrast increased. The use of bowers allows relative crypsis of the bird but increased efficacy of the signal as a whole. Ornaments do not elaborate existing plumage features but instead are innovations (new color schemes) that increase signal efficacy. Isolation between species could be facilitated by plumage but not ornaments, because we observed character displacement only in plumage. Bowerbird color pattern evolution is at least partially predictable from the function of the visual system and from knowledge of different functions of different components of the color patterns. This provides clues to how more constrained visual signaling systems may evolve. PMID:16329248

  6. Searching Through the Hierarchy: How Level of Target Categorization Affects Visual Search

    PubMed Central

    Maxfield, Justin T.; Zelinsky, Gregory J.

    2012-01-01

    Does the same basic-level advantage commonly observed in the categorization literature also hold for targets in a search task? We answered this question by first conducting a category verification task to define a set of categories showing a standard basic-level advantage, which we then used as stimuli in a search experiment. Participants were cued with a picture preview of the target or its category name at either the superordinate, basic, or subordinate level, then shown a target-present/absent search display. Although search guidance and target verification were best using pictorial cues, the effectiveness of the categorical cues depended on the hierarchical level. Search guidance was best for the specific subordinate level cues, while target verification showed a standard basic-level advantage. These findings demonstrate different hierarchical advantages for guidance and verification in categorical search. We interpret these results as evidence for a common target representation underlying categorical search guidance and verification. PMID:23565048

  7. Visual Servoing: A technology in search of an application

    SciTech Connect

    Feddema, J.T.

    1994-05-01

    Considerable research has been performed on Robotic Visual Servoing (RVS) over the past decade. Using real-time visual feedback, researchers have demonstrated that robotic systems can pick up moving parts, insert bolts, apply sealant, and guide vehicles. With the rapid improvements being made in computing and image processing hardware, one would expect that every robot manufacturer would have an RVS option by the end of the 1990s. So why aren't the Fanucs, ABBs, Adepts, and Motomans of the world investing heavily in RVS? I would suggest four reasons: cost, complexity, reliability, and lack of demand. Solutions to the first three are approaching the point where RVS could be commercially available; however, the lack of demand is keeping RVS from becoming a reality in the near future. A new set of applications is needed to focus near term RVS development. These must be applications which currently do not have solutions. Once developed and working in one application area, the technology is more likely to quickly spread to other areas. DOE has several applications that are looking for technological solutions, such as agile weapons production, weapons disassembly, decontamination and dismantlement of nuclear facilities, and hazardous waste remediation. This paper will examine a few of these areas and suggest directions for application-driven visual servoing research.

  8. Disturbance of visual search by stimulating to posterior parietal cortex in the brain using transcranial magnetic stimulation

    NASA Astrophysics Data System (ADS)

    Iramina, Keiji; Ge, Sheng; Hyodo, Akira; Hayami, Takehito; Ueno, Shoogo

    2009-04-01

    In this study, we applied transcranial magnetic stimulation (TMS) to investigate the temporal aspect of the functional processing of visual attention. Although it is known that the right posterior parietal cortex (PPC) plays a role in certain visual search tasks, little is known about the temporal aspect of this area. Three visual search tasks of differing difficulty were carried out: the "easy feature task," the "hard feature task," and the "conjunction task." To investigate the temporal aspect of PPC involvement in visual search, we applied various stimulus onset asynchronies (SOAs) and measured the reaction time of the visual search. The magnetic stimulation was applied to the right PPC or the left PPC by a figure-eight coil. The results show that the reaction times of the hard feature task were longer than those of the easy feature task. At SOA = 150 ms, there was a significant increase in target-present reaction time when TMS pulses were applied, compared with the no-TMS condition. We conclude that the right PPC is involved in visual search at about 150 ms after visual stimulus presentation: magnetic stimulation to the right PPC disturbed the processing of the visual search, whereas magnetic stimulation to the left PPC had no effect.

  9. Analysis of microsaccades and pupil dilation reveals a common decisional origin during visual search.

    PubMed

    Privitera, Claudio M; Carney, Thom; Klein, Stanley; Aguilar, Mario

    2014-02-01

    During free-viewing visual search, observers often refixate the same locations several times before and after target detection is reported with a button press. We analyzed the rate of microsaccades in the sequence of refixations made during visual search and found two important components. One is related to the visual content of the region being fixated: fixations on targets generate more microsaccades, and more microsaccades are generated for targets that are more difficult to disambiguate. The other emphasizes non-visual decisional processes: fixations containing the button press generate more microsaccades than those made on the same target but without the button press. Pupil dilation during the same refixations reveals a similar modulation. We inferred that generic sympathetic arousal mechanisms are part of the articulated complex of perceptual processes governing fixational eye movements. PMID:24333280

  10. Abnormal early brain responses during visual search are evident in schizophrenia but not bipolar affective disorder.

    PubMed

    VanMeerten, Nicolaas J; Dubke, Rachel E; Stanwyck, John J; Kang, Seung Suk; Sponheim, Scott R

    2016-01-01

    People with schizophrenia show deficits in processing visual stimuli but neural abnormalities underlying the deficits are unclear and it is unknown whether such functional brain abnormalities are present in other severe mental disorders or in individuals who carry genetic liability for schizophrenia. To better characterize brain responses underlying visual search deficits and test their specificity to schizophrenia we gathered behavioral and electrophysiological responses during visual search (i.e., Span of Apprehension [SOA] task) from 38 people with schizophrenia, 31 people with bipolar disorder, 58 biological relatives of people with schizophrenia, 37 biological relatives of people with bipolar disorder, and 65 non-psychiatric control participants. Through subtracting neural responses associated with purely sensory aspects of the stimuli we found that people with schizophrenia exhibited reduced early posterior task-related neural responses (i.e., Span Endogenous Negativity [SEN]) while other groups showed normative responses. People with schizophrenia exhibited longer reaction times than controls during visual search but nearly identical accuracy. Those individuals with schizophrenia who had larger SENs performed more efficiently (i.e., shorter reaction times) on the SOA task suggesting that modulation of early visual cortical responses facilitated their visual search. People with schizophrenia also exhibited a diminished P300 response compared to other groups. Unaffected first-degree relatives of people with bipolar disorder and schizophrenia showed an amplified N1 response over posterior brain regions in comparison to other groups. Diminished early posterior brain responses are associated with impaired visual search in schizophrenia and appear to be specifically associated with the neuropathology of schizophrenia. PMID:26603466

  11. The role of highlighting in visual search through maps.

    PubMed

    Wickens, Christopher D; Alexander, Amy L; Ambinder, Michael S; Martens, Marieke

    2004-01-01

    Two experiments were conducted in which participants performed a vehicle dispatching task. The intensity of one information source (vehicles in Experiment 1, destinations in Experiment 2) was varied to examine the effects of salience and discrimination on both searching for and processing the information in a cluttered display. Response times were recorded for questions either requiring focused attention on or divided attention between the different information domains in the map. The results of the present experiments indicate that it is possible to declutter a display without erasing any information. By 'lowlighting' one information domain and keeping the other domain at a fairly high intensity level, dividing attention between the information sources is optimal, as is focusing attention on either of the information domains exclusively. These results are discussed in conjunction with a computational model of confusion and salience which serves to predict search and integration performance in a cluttered display with separate domains of information displayed at different intensities. PMID:15559110

  12. Two visual observations of relevance to the search for optical counterparts of gamma-ray sources

    NASA Astrophysics Data System (ADS)

    Warner, B.

    1986-05-01

    The author draws attention to a visual observation of a brief flash from ? Lyrae, observed by Heis in 1850, which resembles the optical burst detected electronically by Wdowiak and Clifton (1985) from ? Cam in 1969. A visual observation by the author of a second-magnitude flash of very short duration is shown to originate from planar reflection from a very distant satellite. Such flashes will contribute to the "noise" in all-sky searches for optical counterparts of γ-ray bursters.

  13. Quantifying the performance limits of human saccadic targeting during visual search

    NASA Technical Reports Server (NTRS)

    Eckstein, M. P.; Beutter, B. R.; Stone, L. S.

    2001-01-01

    Previous studies of saccadic targeting have examined how visually guided saccades to unambiguous targets are programmed and executed, finding different degrees of guidance for saccades depending on the task and task difficulty. In this study, we use ideal-observer analysis to estimate the visual information used for the first saccade during a search for a target disk in noise. We quantitatively compare the performance of the first saccadic decision to that of the ideal observer (i.e., the absolute efficiency of the first saccade) and to that of the associated final perceptual decision at the end of the search (i.e., the relative efficiency of the first saccade). Our results show, first, that at all levels of salience tested, the first saccade is based on visual information from the stimulus display, and its highest absolute efficiency is approximately 20%. Second, the efficiency of the first saccade is lower than that of the final perceptual decision after active search (with eye movements) and has a minimum relative efficiency of 19% at the lowest level of saliency investigated. Third, we found that requiring observers to maintain central fixation (no saccades allowed) decreased the absolute efficiency of their perceptual decision by up to a factor of two, but that the magnitude of this effect depended on target salience. Our results demonstrate that ideal-observer analysis can be extended to measure the visual information mediating saccadic target-selection decisions during visual search, which enables direct comparison of saccadic and perceptual efficiencies.
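    Absolute efficiency in ideal-observer analysis is conventionally the squared ratio of human to ideal sensitivity (d'). A minimal sketch of that calculation, assuming a two-alternative forced-choice design and hypothetical percent-correct values (not the study's data):

```python
from statistics import NormalDist

def dprime_2afc(pc: float) -> float:
    """Sensitivity d' from percent correct in a 2AFC task: d' = sqrt(2) * Phi^-1(pc)."""
    return 2 ** 0.5 * NormalDist().inv_cdf(pc)

def absolute_efficiency(pc_human: float, pc_ideal: float) -> float:
    """Efficiency as the squared ratio of human d' to ideal-observer d'."""
    return (dprime_2afc(pc_human) / dprime_2afc(pc_ideal)) ** 2

# Hypothetical values: a human at 70% correct vs. an ideal observer at 95%
print(round(absolute_efficiency(0.70, 0.95), 3))
```

    An efficiency of roughly 0.2 (20%), as reported for the first saccade, would mean the human extracts about a fifth of the statistically available information.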

  14. Age-Related Preservation of Top-Down Control over Distraction in Visual Search

    PubMed Central

    Costello, Matthew C.; Madden, David J.; Shepler, Anne M.; Mitroff, Stephen R.; Leber, Andrew B.

    2009-01-01

    Visual search studies have demonstrated that older adults can have preserved or even increased top-down control over distraction. However, the results are mixed as to the extent of this age-related preservation. The present experiment assesses group differences in younger and older adults during visual search, with a task featuring two conditions offering varying degrees of top-down control over distraction. After controlling for generalized slowing, the analyses revealed that the age groups were equally capable of utilizing top-down control to minimize distraction. Furthermore, for both age groups, the distraction effect was manifested in a sustained manner across the reaction time distribution. PMID:20544447

  15. Eye Movements, Visual Search and Scene Memory, in an Immersive Virtual Environment

    PubMed Central

    Sullivan, Brian; Snyder, Kat; Ballard, Dana; Hayhoe, Mary

    2014-01-01

    Visual memory has been demonstrated to play a role in both visual search and attentional prioritization in natural scenes. However, it has been studied predominantly in experimental paradigms using multiple two-dimensional images. Natural experience, however, entails prolonged immersion in a limited number of three-dimensional environments. The goal of the present experiment was to recreate circumstances comparable to natural visual experience in order to evaluate the role of scene memory in guiding eye movements in a natural environment. Subjects performed a continuous visual-search task within an immersive virtual-reality environment over three days. We found that, similar to two-dimensional contexts, viewers rapidly learn the location of objects in the environment over time, and use spatial memory to guide search. Incidental fixations did not provide obvious benefit to subsequent search, suggesting that semantic contextual cues may often be just as efficient, or that many incidentally fixated items are not held in memory in the absence of a specific task. On the third day of the experience in the environment, previous search items changed in color. These items were fixated upon with increased probability relative to control objects, suggesting that memory-guided prioritization (or Surprise) may be a robust mechanism for attracting gaze to novel features of natural environments, in addition to task factors and simple spatial saliency. PMID:24759905

  16. Visual search in typically developing toddlers and toddlers with Fragile X or Williams syndrome.

    PubMed

    Scerif, Gaia; Cornish, Kim; Wilding, John; Driver, Jon; Karmiloff-Smith, Annette

    2004-02-01

    Visual selective attention is the ability to attend to relevant visual information and ignore irrelevant stimuli. Little is known about its typical and atypical development in early childhood. Experiment 1 investigates typically developing toddlers' visual search for multiple targets on a touch-screen. Time to hit a target, distance between successively touched items, accuracy and error types revealed changes in 2- and 3-year-olds' vulnerability to manipulations of the search display. Experiment 2 examined search performance by toddlers with Fragile X syndrome (FXS) or Williams syndrome (WS). Both of these groups produced mean time and distance per touch equivalent to typically developing toddlers matched by chronological or mental age, but both produced a larger number of errors. Toddlers with WS confused distractors with targets more than the other groups, while toddlers with FXS perseverated on previously found targets. These findings provide information on how visual search typically develops in toddlers, and reveal distinct search deficits for atypically developing toddlers. PMID:15323123

  17. Raster-based visualization of abnormal association patterns in marine environments

    NASA Astrophysics Data System (ADS)

    Li, Lianwei; Xue, Cunjin; Liu, Jian; Wang, Zhenjie; Qin, Lijuan

    2014-01-01

    The visualization in a single view of abnormal association patterns obtained from mining lengthy marine raster datasets presents a great challenge for traditional visualization techniques. On the basis of the representation model of marine abnormal association patterns, an interactive visualization framework is designed with three complementary components: three-dimensional pie charts, two-dimensional variation maps, and triple-layer mosaics; the details of their implementation steps are given. The combination of the three components allows users to request visualization of the association patterns from global to detailed scales. The three-dimensional pie chart component visualizes the locations where more marine environmental parameters are interrelated and shows the parameters that are involved. The two-dimensional variation map component gives the spatial distribution of interactions between each marine environmental parameter and other parameters. The triple-layer mosaics component displays the detailed association patterns at locations specified by the users. Finally, the effectiveness and the efficiency of the proposed visualization framework are demonstrated using a prototype system with three visualization interfaces based on ArcEngine 10.0, and the abnormal association patterns among marine environmental parameters in the Pacific Ocean are visualized.

  18. Examining perceptual and conceptual set biases in multiple-target visual search.

    PubMed

    Biggs, Adam T; Adamo, Stephen H; Dowd, Emma Wu; Mitroff, Stephen R

    2015-04-01

    Visual search is a common practice conducted countless times every day, and one important aspect of visual search is that multiple targets can appear in a single search array. For example, an X-ray image of airport luggage could contain both a water bottle and a gun. Searchers are more likely to miss additional targets after locating a first target in multiple-target searches, which presents a potential problem: If airport security officers were to find a water bottle, would they then be more likely to miss a gun? One hypothetical cause of multiple-target search errors is that searchers become biased to detect additional targets that are similar to a found target, and therefore become less likely to find additional targets that are dissimilar to the first target. This particular hypothesis has received theoretical, but little empirical, support. In the present study, we tested the bounds of this idea by utilizing "big data" obtained from the mobile application Airport Scanner. Multiple-target search errors were substantially reduced when the two targets were identical, suggesting that the first-found target did indeed create biases during subsequent search. Further analyses delineated the nature of the biases, revealing both a perceptual set bias (i.e., a bias to find additional targets with features similar to those of the first-found target) and a conceptual set bias (i.e., a bias to find additional targets with a conceptual relationship to the first-found target). These biases are discussed in terms of the implications for visual-search theories and applications for professional visual searchers. PMID:25678271

  19. Aging and performance on an everyday-based visual search task.

    PubMed

    Potter, Lauren M; Grealy, Madeleine A; Elliott, Mark A; Andrés, Pilar

    2012-07-01

    Research on aging and visual search often requires older people to search computer screens for target letters or numbers. The aim of this experiment was to investigate age-related differences using an everyday-based visual search task in a large participant sample (n=261) aged 20-88 years. Our results show that: (1) old-old adults have more difficulty with triple conjunction searches with one highly distinctive feature compared to young-old and younger adults; (2) age-related declines in conjunction searches emerge in middle age then progress throughout older age; (3) age-related declines are evident in feature searches on target absent trials, as older people seem to exhaustively and serially search the whole display to determine a target's absence. Together, these findings suggest that declines emerge in middle age then progress throughout older age in feature integration, guided search, perceptual grouping and/or spreading suppression processes. Discussed are implications for enhancing everyday functioning throughout adulthood. PMID:22664318

  20. The Visual Hemifield Asymmetry in the Spatial Blink during Singleton Search and Feature Search

    ERIC Educational Resources Information Center

    Burnham, Bryan R.; Rozell, Cassandra A.; Kasper, Alex; Bianco, Nicole E.; Delliturri, Antony

    2011-01-01

    The present study examined a visual field asymmetry in the contingent capture of attention that was previously observed by Du and Abrams (2010). In our first experiment, color singleton distractors that matched the color of a to-be-detected target produced a stronger capture of attention when they appeared in the left visual hemifield than in the…

  1. The right hemisphere is dominant in organization of visual search-A study in stroke patients.

    PubMed

    Ten Brink, Antonia F; Matthijs Biesbroek, J; Kuijf, Hugo J; Van der Stigchel, Stefan; Oort, Quirien; Visser-Meily, Johanna M A; Nijboer, Tanja C W

    2016-05-01

    Cancellation tasks are widely used for diagnosis of lateralized attentional deficits in stroke patients. A disorganized fashion of target cancellation has been hypothesized to reflect disturbed spatial exploration. In the current study we aimed to examine which lesion locations result in disorganized visual search during cancellation tasks, in order to determine which brain areas are involved in search organization. A computerized shape cancellation task was administered in 78 stroke patients. As an index for search organization, the number of intersections of paths between consecutive crossed targets was computed (i.e., intersections rate). This measure is known to accurately depict disorganized visual search in a stroke population. Ischemic lesions were delineated on CT or MRI images. Assumption-free voxel-based lesion-symptom mapping and region of interest-based analyses were used to determine the grey and white matter anatomical correlates of the intersections rate as a continuous measure. The right lateral occipital cortex, superior parietal lobule, postcentral gyrus, superior temporal gyrus, middle temporal gyrus, supramarginal gyrus, inferior longitudinal fasciculus, first branch of the superior longitudinal fasciculus (SLF I), and the inferior fronto-occipital fasciculus, were related to search organization. To conclude, a clear right hemispheric dominance for search organization was revealed. Further, the correlates of disorganized search overlap with regions that have previously been associated with conjunctive search and spatial working memory. This suggests that disorganized visual search is caused by disturbed spatial processes, rather than deficits in high level executive function or planning, which would be expected to be more related to frontal regions. PMID:26876010
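    An intersections-rate index of this kind can be computed from the sequence of cancelled-target coordinates by counting crossings among non-adjacent segments of the cancellation path. A minimal sketch under that assumption (the strict-crossing test and the coordinates are illustrative, not the authors' exact implementation):

```python
def _ccw(a, b, c):
    """Signed area test: >0 if a->b->c turns counter-clockwise."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def segments_intersect(p1, p2, p3, p4):
    """True if segment p1-p2 strictly crosses segment p3-p4."""
    d1, d2 = _ccw(p3, p4, p1), _ccw(p3, p4, p2)
    d3, d4 = _ccw(p1, p2, p3), _ccw(p1, p2, p4)
    return d1 * d2 < 0 and d3 * d4 < 0

def intersections_rate(points):
    """Count crossings among paths linking consecutively cancelled targets.
    Adjacent segments share an endpoint, so only pairs j >= i + 2 are tested."""
    segs = list(zip(points, points[1:]))
    count = 0
    for i in range(len(segs)):
        for j in range(i + 2, len(segs)):
            if segments_intersect(*segs[i], *segs[j]):
                count += 1
    return count

# Disorganized zigzag (one crossing) vs. an organized row-by-row scan (none)
print(intersections_rate([(0, 0), (10, 10), (10, 0), (0, 10)]))   # 1
print(intersections_rate([(0, 0), (1, 0), (2, 0), (0, 1), (1, 1), (2, 1)]))  # 0
```

    Higher counts indicate a more disorganized search path, matching the index's use here as a continuous measure.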

  2. Keep on rolling: Visual search asymmetries in 3D scenes with motion-defined targets.

    PubMed

    Cain, Matthew; Josephs, Emilie; Wolfe, Jeremy

    2015-09-01

    Many simple feature searches are asymmetric; that is, finding a target defined by feature value A among distractors with value B is more efficient than finding B among A. In motion, for example, finding moving targets among stationary distractors is more efficient than finding stationary among moving (but see Rosenholtz, 2001). Most previous work involves simple motions in the 2D plane including manipulations of speed (Ivry & Cohen, 1992), rotation (Thornton & Gilden, 2001), expansion, contraction (Takeuchi, 1997), and randomness (Horowitz et al., 2007). Here, we extend this work to environments with depth, using objects rotating around different axes in 3D environments. Observers searched for targets that were "rolling" about a horizontal axis among distractors "spinning" about a vertical axis or vice versa. Objects appeared to rest on a slanted plane and did not translate along this surface. Set sizes were 4, 8, and 12. Search for rolling targets among spinning distractors was markedly more efficient than search for spinning among rolling (RT × set size slopes: 12 vs 36 msec/item). Half of observers had target-present slopes < 10 msec/item, suggesting that "rolling" may behave like a fundamental feature such as color or orientation. More broadly, these results suggest that more features that guide attention may be waiting to be discovered as we move beyond simple stimuli in the frontal plane. Horowitz et al. (2007). Visual search for type of motion is based on simple motion primitives. Perception, 36, 1624-1634. Ivry & Cohen (1992). Asymmetry in visual search for targets defined by differences in movement speed. JEP:HPP, 18, 1045-1057. Rosenholtz (2001). Search asymmetries? What search asymmetries? Perception and Psychophysics, 63, 476-489. Takeuchi (1997). Visual search of expansion and contraction. Vision Research, 37(15), 2083-2090. Thornton & Gilden (2001). Attentional limitations in the sensing of motion direction. Cognitive Psychology, 43, 23-52.
Meeting abstract presented at VSS 2015. PMID:26327053
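    Search slopes such as the "12 vs 36 msec/item" figures above are the least-squares slope of mean reaction time against set size. A small sketch with hypothetical mean RTs at the set sizes used in the study (4, 8, 12):

```python
def search_slope(set_sizes, mean_rts):
    """Least-squares slope of mean RT (ms) against set size, in ms/item."""
    n = len(set_sizes)
    mx = sum(set_sizes) / n
    my = sum(mean_rts) / n
    num = sum((x - mx) * (y - my) for x, y in zip(set_sizes, mean_rts))
    den = sum((x - mx) ** 2 for x in set_sizes)
    return num / den

# Hypothetical mean RTs rising linearly with set size
print(search_slope([4, 8, 12], [548.0, 644.0, 740.0]))  # 24.0 ms/item
```

    Shallow slopes (under roughly 10 ms/item) are conventionally taken as evidence of efficient, feature-like search, which is why the sub-10 ms/item slopes for "rolling" targets are notable.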

  3. Failures of perception in the low-prevalence effect: Evidence from active and passive visual search.

    PubMed

    Hout, Michael C; Walenchok, Stephen C; Goldinger, Stephen D; Wolfe, Jeremy M

    2015-08-01

    In visual search, rare targets are missed disproportionately often. This low-prevalence effect (LPE) is a robust problem with demonstrable societal consequences. What is the source of the LPE? Is it a perceptual bias against rare targets or a later process, such as premature search termination or motor response errors? In 4 experiments, we examined the LPE using standard visual search (with eye tracking) and 2 variants of rapid serial visual presentation (RSVP) in which observers made present/absent decisions after sequences ended. In all experiments, observers looked for 2 target categories (teddy bear and butterfly) simultaneously. To minimize simple motor errors, caused by repetitive absent responses, we held overall target prevalence at 50%, with 1 low-prevalence and 1 high-prevalence target type. Across conditions, observers either searched for targets among other real-world objects or searched for specific bears or butterflies among within-category distractors. We report 4 main results: (a) In standard search, high-prevalence targets were found more quickly and accurately than low-prevalence targets. (b) The LPE persisted in RSVP search, even though observers never terminated search on their own. (c) Eye-tracking analyses showed that high-prevalence targets elicited better attentional guidance and faster perceptual decisions. And (d) even when observers looked directly at low-prevalence targets, they often (12%-34% of trials) failed to detect them. These results strongly argue that low-prevalence misses represent failures of perception when early search termination or motor errors are controlled. PMID:25915073

  4. Contrasting vertical and horizontal representations of affect in emotional visual search.

    PubMed

    Damjanovic, Ljubica; Santiago, Julio

    2016-02-01

    Independent lines of evidence suggest that the representation of emotional evaluation recruits both vertical and horizontal spatial mappings. These two spatial mappings differ in their experiential origins and their productivity, and available data suggest that they differ in their saliency. Yet, no study has so far compared their relative strength in an attentional orienting reaction time task that affords the simultaneous manifestation of both types of mapping. Here, we investigated this question using a visual search task with emotional faces. We presented angry and happy face targets and neutral distracter faces in top, bottom, left, and right locations on the computer screen. Conceptual congruency effects were observed along the vertical dimension supporting the 'up = good' metaphor, but not along the horizontal dimension. This asymmetrical processing pattern was observed when faces were presented in a cropped (Experiment 1) and whole (Experiment 2) format. These findings suggest that the 'up = good' metaphor is more salient and readily activated than the 'right = good' metaphor, and that the former outcompetes the latter when the task context affords the simultaneous activation of both mappings. PMID:26106061

  5. Searching for a major locus for male pattern baldness (MPB)

    SciTech Connect

    Anker, R.; Eisen, A.Z.; Donis-Keller, H.

    1994-09-01

    Male pattern baldness (MPB) is a common trait in post-pubertal males. Approximately 50% of adult males present some degree of MPB by age 50. According to the classification provided by Hamilton in 1951 and modified by Norwood in 1975, the trait itself is a continuum that ranges from mild (Type I) to severe (Type VII) cases. In addition, there is extensive variability for the age of onset. The role of androgens in allowing the expression of this trait in males has been well established. This phenotype is uncommonly expressed in females. The high prevalence of the trait, the distribution of MPB as a continuous trait, and several non-allelic mutations identified in the mouse capable of affecting hair pattern, suggest that MPB is genetically heterogeneous. In order to reduce the probability of multiple non-allelic MPB genes within a pedigree, we selected 9 families in which MPB appears to segregate exclusively through the paternal lineage as compared to bilineal pedigrees. There are 32 males expressing this phenotype and females are treated as phenotype unknown. In general, affected individuals expressed the trait before 30 years of age with a severity of at least Type III or IV. We assumed an autosomal dominant model, with a gene frequency of 1/20 for the affected allele, and 90% penetrance. Simulation studies using the SLINK program with these pedigrees showed that these families would be sufficient to detect linkage under the assumption of a single major locus. If heterogeneity is present, the current resource does not have sufficient power to detect linkage at a statistically significant level, although candidate regions of the genome could be identified for further studies with additional pedigrees. Using 53 highly informative microsatellite markers, and a subset of 7 families, we have screened 30% of the genome. This search included several regions where candidate genes for MPB are located.
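    Linkage detection of the kind simulated here with SLINK is typically summarized by a two-point LOD score: the log10 likelihood ratio of a recombination fraction θ against free recombination (θ = 0.5). A minimal sketch for phase-known, fully informative meioses (the counts are hypothetical, not the study's data):

```python
import math

def lod(theta: float, recomb: int, nonrecomb: int) -> float:
    """Two-point LOD score for phase-known, fully informative meioses:
    log10 of the likelihood at recombination fraction theta vs. theta = 0.5."""
    n = recomb + nonrecomb
    likelihood_ratio = (theta ** recomb) * ((1 - theta) ** nonrecomb) / (0.5 ** n)
    return math.log10(likelihood_ratio)

# Hypothetical data: 1 recombinant among 20 informative meioses
print(round(lod(0.05, 1, 19), 2))
```

    A LOD of 3 or more is the conventional threshold for declaring linkage, which is the "statistically significant level" the simulation studies assess.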

  6. Visual Intelligence: Using the Deep Patterns of Visual Language to Build Cognitive Skills

    ERIC Educational Resources Information Center

    Sibbet, David

    2008-01-01

    Thirty years of work as a graphic facilitator listening visually to people in every kind of organization has convinced the author that visual intelligence is a key to navigating an information economy rich with multimedia. He also believes that theory and disciplines developed by practitioners in this new field hold special promise for educators…

  8. The Mouse Model of Down Syndrome Ts65Dn Presents Visual Deficits as Assessed by Pattern Visual Evoked Potentials

    PubMed Central

    Scott-McKean, Jonah Jacob; Chang, Bo; Hurd, Ronald E.; Nusinowitz, Steven; Schmidt, Cecilia; Davisson, Muriel T.

    2010-01-01

    Purpose. The Ts65Dn mouse is the most complete widely available animal model of Down syndrome (DS). Quantitative information was generated about visual function in the Ts65Dn mouse by investigating their visual capabilities by means of electroretinography (ERG) and patterned visual evoked potentials (pVEPs). Methods. pVEPs were recorded directly from specific regions of the binocular visual cortex of anesthetized mice in response to horizontal sinusoidal gratings of different spatial frequency, contrast, and luminance generated by a specialized video card and presented on a 21-in. computer display suitably linearized by gamma correction. Results. ERG assessments indicated no significant deficit in retinal physiology in Ts65Dn mice compared with euploid control mice. The Ts65Dn mice were found to exhibit deficits in luminance threshold, spatial resolution, and contrast threshold, compared with the euploid control mice. The behavioral counterparts of these parameters are luminance sensitivity, visual acuity, and the inverse of contrast sensitivity, respectively. Conclusions. DS includes various phenotypes associated with the visual system, including deficits in visual acuity, accommodation, and contrast sensitivity. The present study provides electrophysiological evidence of visual deficits in Ts65Dn mice that are similar to those reported in persons with DS. These findings strengthen the role of the Ts65Dn mouse as a model for DS. Also, given the historical assumption of integrity of the visual system in most behavioral assessments of Ts65Dn mice, such as the hidden-platform component of the Morris water maze, the visual deficits described herein may represent a significant confounding factor in the interpretation of results from such experiments. PMID:20130276

  9. RF antenna-pattern visual aids for field use

    NASA Technical Reports Server (NTRS)

    Williams, J. H.

    1973-01-01

    Series of plots must be made of antenna pattern on polar-coordinate sheet depicting vertical planes. Separate sheets are plotted depicting antenna patterns in vertical plane at azimuth positions. After all polar plots are drawn, they are labeled according to their azimuthal positions. Transparencies are then stiffened with regular wire, cardboard, or molded plastic.

  10. Use Patterns of Visual Cues in Computer-Mediated Communication

    ERIC Educational Resources Information Center

    Bolliger, Doris U.

    2009-01-01

    Communication in the virtual environment can be challenging for participants because it lacks physical presence and nonverbal elements. Participants may have difficulties expressing their intentions and emotions in a primarily text-based course. Therefore, the use of visual communication elements such as pictographic and typographic marks can be…

  11. Pattern search algorithms for mixed variable general constrained optimization problems

    NASA Astrophysics Data System (ADS)

    Abramson, Mark Aaron

    A new class of algorithms for solving nonlinearly constrained mixed variable optimization problems is presented. The Audet-Dennis Generalized Pattern Search (GPS) algorithm for bound constrained mixed variable optimization problems is extended to problems with general nonlinear constraints by incorporating a filter, in which new iterates are accepted whenever they decrease the incumbent objective function value or constraint violation function value. Additionally, the algorithm can exploit any available derivative information (or rough approximation thereof) to speed convergence without sacrificing the flexibility often employed by GPS methods to find better local optima. In generalizing existing GPS algorithms, the new theoretical convergence results presented here reduce seamlessly to existing results for more specific classes of problems. While no local continuity or smoothness assumptions are made, a hierarchy of theoretical convergence results is given, in which the assumptions dictate what can be proved about certain limit points of the algorithm. A new MATLAB software package was developed to implement these algorithms. Numerical results are provided for several nonlinear optimization problems from the CUTE test set, as well as a difficult nonlinearly constrained mixed variable optimization problem in the design of a load-bearing thermal insulation system used in cryogenic applications.
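    The polling core that GPS methods share can be sketched in a few lines. This unconstrained toy version omits the filter and mixed-variable machinery described above and is illustrative only: poll along the positive and negative coordinate directions, move on any improvement, and otherwise halve the mesh size.

```python
def pattern_search(f, x0, delta=1.0, tol=1e-6, max_iter=1000):
    """Minimal unconstrained GPS sketch: poll +/- coordinate directions,
    accept the first improving point, else shrink the mesh size delta."""
    x = list(x0)
    fx = f(x)
    for _ in range(max_iter):
        if delta < tol:
            break
        improved = False
        for i in range(len(x)):
            for sign in (1.0, -1.0):
                trial = x[:]
                trial[i] += sign * delta
                ft = f(trial)
                if ft < fx:          # successful poll: move the incumbent
                    x, fx, improved = trial, ft, True
                    break
            if improved:
                break
        if not improved:             # unsuccessful poll: refine the mesh
            delta *= 0.5
    return x, fx

# Minimize a simple quadratic from the origin
best_x, best_f = pattern_search(lambda v: (v[0] - 1) ** 2 + (v[1] + 2) ** 2, [0.0, 0.0])
print(best_x, best_f)
```

    The full algorithm additionally maintains a filter of (objective, constraint-violation) pairs so that infeasible trial points can still be accepted when they reduce constraint violation.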

  12. Assessing the benefits of stereoscopic displays to visual search: methodology and initial findings

    NASA Astrophysics Data System (ADS)

    Godwin, Hayward J.; Holliman, Nick S.; Menneer, Tamaryn; Liversedge, Simon P.; Cave, Kyle R.; Donnelly, Nicholas

    2015-03-01

    Visual search is a task that is carried out in a number of important security and health related scenarios (e.g., X-ray baggage screening, radiography). With recent and ongoing developments in the technology available to present images to observers in stereoscopic depth, there has been increasing interest in assessing whether depth information can be used in complex search tasks to improve search performance. Here we outline the methodology that we developed, along with both software and hardware information, in order to assess visual search performance in complex, overlapping stimuli that also contained depth information. In doing so, our goal is to foster further research along these lines in the future. We also provide an overview with initial results of the experiments that we have conducted involving participants searching stimuli that contain overlapping objects presented on different depth planes to one another. Thus far, we have found that depth information does improve the speed (but not accuracy) of search, but only when the stimuli are highly complex and contain a significant degree of overlap. Depth information may therefore aid real-world search tasks that involve the examination of complex, overlapping stimuli.

  13. Low Target Prevalence Is a Stubborn Source of Errors in Visual Search Tasks

    ERIC Educational Resources Information Center

    Wolfe, Jeremy M.; Horowitz, Todd S.; Van Wert, Michael J.; Kenner, Naomi M.; Place, Skyler S.; Kibbi, Nour

    2007-01-01

    In visual search tasks, observers look for targets in displays containing distractors. Likelihood that targets will be missed varies with target prevalence, the frequency with which targets are presented across trials. Miss error rates are much higher at low target prevalence (1%-2%) than at high prevalence (50%). Unfortunately, low prevalence is…

  14. How You Move Is What You See: Action Planning Biases Selection in Visual Search

    ERIC Educational Resources Information Center

    Wykowska, Agnieszka; Schubo, Anna; Hommel, Bernhard

    2009-01-01

    Three experiments investigated the impact of planning and preparing a manual grasping or pointing movement on feature detection in a visual search task. The authors hypothesized that action planning may prime perceptual dimensions that provide information for the open parameters of that action. Indeed, preparing for grasping facilitated detection…

  15. Visual Search Asymmetries within Color-Coded and Intensity-Coded Displays

    ERIC Educational Resources Information Center

    Yamani, Yusuke; McCarley, Jason S.

    2010-01-01

    Color and intensity coding provide perceptual cues to segregate categories of objects within a visual display, allowing operators to search more efficiently for needed information. Even within a perceptually distinct subset of display elements, however, it may often be useful to prioritize items representing urgent or task-critical information.…

  16. Visual Search and Emotion: How Children with Autism Spectrum Disorders Scan Emotional Scenes

    ERIC Educational Resources Information Center

    Maccari, Lisa; Pasini, Augusto; Caroli, Emanuela; Rosa, Caterina; Marotta, Andrea; Martella, Diana; Fuentes, Luis J.; Casagrande, Maria

    2014-01-01

    This study assessed visual search abilities, tested through the flicker task, in children diagnosed with autism spectrum disorders (ASDs). Twenty-two children diagnosed with ASD and 22 matched typically developing (TD) children were told to detect changes in objects of central interest or objects of marginal interest (MI) embedded in either…

  17. What Are the Shapes of Response Time Distributions in Visual Search?

    ERIC Educational Resources Information Center

    Palmer, Evan M.; Horowitz, Todd S.; Torralba, Antonio; Wolfe, Jeremy M.

    2011-01-01

    Many visual search experiments measure response time (RT) as their primary dependent variable. Analyses typically focus on mean (or median) RT. However, given enough data, the RT distribution can be a rich source of information. For this paper, we collected about 500 trials per cell per observer for both target-present and target-absent displays…

  20. Can a short nap and bright light function as implicit learning and visual search enhancers?

    PubMed

    Kaida, Kosuke; Takeda, Yuji; Tsuzuki, Kazuyo

    2012-01-01

    The present study examined the effects of a short nap (20 min) and/or bright light (2000 lux) on visual search and implicit learning in a contextual cueing task. Fifteen participants performed a contextual cueing task twice a day (1200-1330 h and 1430-1600 h) and rated their subjective sleepiness before and after a short afternoon nap or a break period. Participants completed four experimental conditions (control, short nap, bright light, and short nap with bright light). During the second task, bright light treatment (BLT) was applied in two of the four conditions. Participants performed both tasks in a dimly lit environment except during the light treatment. Results showed that a short nap reduced subjective sleepiness and improved visual search time, but it did not affect implicit learning. Bright light reduced subjective sleepiness. A short afternoon nap could thus serve as a countermeasure against sleepiness and an enhancer of visual search. Practitioner Summary: The study examined the effects of a short afternoon nap (20 min) and/or bright light (2000 lux) on visual search and implicit learning. A short nap is a more powerful countermeasure against afternoon sleepiness than bright light exposure. PMID:22928470

  3. Predicting search time in visual scenes using the fuzzy logic approach

    NASA Astrophysics Data System (ADS)

    Meitzler, Thomas J.; Sohn, Eui J.; Singh, Harpreet; Elgarhi, Abdelakrim

    1999-07-01

    The mean search time of observers looking for targets in visual scenes with clutter is computed using the Fuzzy Logic Approach (FLA). The FLA is presented by the authors as a robust method for computing search times and/or probabilities of detection for signature management decisions. The Mamdani/Assilian and Sugeno models have been investigated and are compared. A 44-image data set from TNO is used to build and validate the fuzzy logic model for detection. The input parameters are local luminance, range, aspect, width, and wavelet edge points; the single output is search time. The Mamdani/Assilian model gave predicted mean search times from data not used in the training set that had a 0.957 correlation to the field search times. The data set is reduced using a clustering method, then modeled using the FLA, and the results are compared to experiment.
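
    A Mamdani/Assilian model of the kind described maps fuzzified inputs through rules to a fuzzy output set, then defuzzifies (here by centroid). The sketch below uses two invented inputs, two invented rules, and invented membership ranges purely to illustrate the inference style; it does not reproduce the paper's five-input model.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def predict_search_time(clutter, contrast):
    """Toy Mamdani-style model with two rules. Inputs are normalized to
    [0, 1]; output is search time in seconds. All membership ranges and
    rule choices are invented for illustration."""
    # rule 1: low clutter AND high contrast -> fast search
    fast = min(tri(clutter, -0.5, 0.0, 0.5), tri(contrast, 0.5, 1.0, 1.5))
    # rule 2: high clutter AND low contrast -> slow search
    slow = min(tri(clutter, 0.5, 1.0, 1.5), tri(contrast, -0.5, 0.0, 0.5))
    # clip the output sets by each rule's strength, then take the centroid
    num = den = 0.0
    t = 0.0
    while t <= 10.0:
        mu = max(min(fast, tri(t, 0.0, 2.0, 4.0)),
                 min(slow, tri(t, 6.0, 8.0, 10.0)))
        num += mu * t
        den += mu
        t += 0.1
    return num / den if den else 5.0

fast_case = predict_search_time(0.0, 1.0)   # easy scene
slow_case = predict_search_time(1.0, 0.0)   # cluttered, low-contrast scene
```

    An easy scene defuzzifies near the "fast" set's center (~2 s) and a hard scene near the "slow" set's center (~8 s); fitting such a model to the 44-image data set would amount to tuning the membership functions and rules.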

  4. Display format and highlight validity effects on search performance using complex visual displays

    NASA Technical Reports Server (NTRS)

    Donner, Kimberly A.; Mckay, Tim; O'Brien, Kevin M.; Rudisill, Marianne

    1991-01-01

    Prior studies showed that display format and highlight validity affect visual display search performance; however, those studies used small, artificial displays of alphanumeric stimuli. A study manipulating these variables was conducted using realistic, complex Space Shuttle information displays. A 2x2x3 within-subjects analysis of variance found that search times were faster for items in reformatted displays than in current displays. The significant format by highlight validity interaction showed that there was little difference in response time between current and reformatted displays when highlighting was valid; however, under the no-highlight or invalid-highlight conditions, search times were faster with reformatted displays. The benefits of highlighting and reformatting displays for enhancing search, and the necessity of considering highlight validity and format characteristics in tandem when predicting search performance, are discussed.

  5. Decoding Visual Location From Neural Patterns in the Auditory Cortex of the Congenitally Deaf.

    PubMed

    Almeida, Jorge; He, Dongjun; Chen, Quanjing; Mahon, Bradford Z; Zhang, Fan; Gonçalves, Óscar F; Fang, Fang; Bi, Yanchao

    2015-11-01

    Sensory cortices of individuals who are congenitally deprived of a sense can exhibit considerable plasticity and be recruited to process information from the senses that remain intact. Here, we explored whether the auditory cortex of congenitally deaf individuals represents visual field location of a stimulus-a dimension that is represented in early visual areas. We used functional MRI to measure neural activity in auditory and visual cortices of congenitally deaf and hearing humans while they observed stimuli typically used for mapping visual field preferences in visual cortex. We found that the location of a visual stimulus can be successfully decoded from the patterns of neural activity in auditory cortex of congenitally deaf but not hearing individuals. This is particularly true for locations within the horizontal plane and within peripheral vision. These data show that the representations stored within neuroplastically changed auditory cortex can align with dimensions that are typically represented in visual cortex. PMID:26423461
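
    The decoding approach described (classifying stimulus location from multivoxel activity patterns) can be illustrated with a minimal cross-validated nearest-centroid classifier on synthetic data. The data dimensions, signal strength, and classifier choice below are invented for illustration; the study's actual analysis pipeline is not specified here.

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic "voxel patterns": 40 trials x 50 voxels, 2 stimulus locations,
# each location adding a fixed spatial signal on top of trial noise
n_trials, n_voxels = 40, 50
labels = np.repeat([0, 1], n_trials // 2)
signal = rng.normal(0, 1, (2, n_voxels))
patterns = rng.normal(0, 1, (n_trials, n_voxels)) + 1.5 * signal[labels]

def nearest_centroid_cv(X, y, n_folds=5):
    """Leave-fold-out decoding: classify each held-out trial by the
    closer class-mean pattern computed from the training folds."""
    order = rng.permutation(len(y))
    folds = np.array_split(order, n_folds)
    correct = 0
    for fold in folds:
        train = np.setdiff1d(order, fold)
        means = [X[train][y[train] == c].mean(axis=0) for c in (0, 1)]
        for i in fold:
            d = [np.linalg.norm(X[i] - m) for m in means]
            correct += int(np.argmin(d) == y[i])
    return correct / len(y)

acc = nearest_centroid_cv(patterns, labels)
```

    Above-chance cross-validated accuracy is the evidence that location information is present in the patterns; in the study this succeeded in the auditory cortex of deaf but not hearing participants.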

  6. Modeling cognitive effects on visual search for targets in cluttered backgrounds

    NASA Astrophysics Data System (ADS)

    Snorrason, Magnus; Ruda, Harald; Hoffman, James

    1998-07-01

    To understand how a human operator performs visual search in complex scenes, it is necessary to take into account top-down cognitive biases in addition to bottom-up visual saliency effects. We constructed a model to elucidate the relationship between saliency and cognitive effects in the domain of visual search for distant targets in photo-realistic images of cluttered scenes. In this domain, detecting targets is difficult and requires high visual acuity. Sufficient acuity is only available near the fixation point, i.e. in the fovea. Hence, the choice of fixation points is the most important determinant of whether targets get detected. We developed a model that predicts the 2D distribution of fixation probabilities directly from an image. Fixation probabilities were computed as a function of local contrast (saliency effect) and proximity to the horizon (cognitive effect: distant targets are more likely to be found close to the horizon). For validation, the model's predictions were compared to ensemble statistics of subjects' actual fixation locations, collected with an eye-tracker. The model's predictions correlated well with the observed data. Disabling the horizon-proximity functionality of the model significantly degraded prediction accuracy, demonstrating that cognitive effects must be accounted for when modeling visual search.
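
    The two-factor structure of that model (bottom-up local contrast weighted by a top-down horizon-proximity bias) can be sketched directly. The Gaussian bias and the 3x3 contrast window below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def fixation_probability_map(image, horizon_row, sigma=10.0):
    """Sketch of a two-factor fixation model: saliency (local contrast)
    multiplied by a bias that decays with distance from the horizon."""
    img = np.asarray(image, float)
    # local contrast: absolute deviation from the 3x3 neighborhood mean
    padded = np.pad(img, 1, mode="edge")
    neigh = sum(padded[r:r + img.shape[0], c:c + img.shape[1]]
                for r in range(3) for c in range(3)) / 9.0
    contrast = np.abs(img - neigh)
    # cognitive bias: Gaussian falloff with row distance from the horizon
    rows = np.arange(img.shape[0])[:, None]
    horizon_bias = np.exp(-((rows - horizon_row) ** 2) / (2 * sigma ** 2))
    p = contrast * horizon_bias
    total = p.sum()
    return p / total if total > 0 else p

# two equally salient spots; only one sits near the assumed horizon
img = np.zeros((50, 50))
img[20, 10] = 1.0   # near the horizon (row 20)
img[45, 10] = 1.0   # far below it
p = fixation_probability_map(img, horizon_row=20)
```

    Disabling the bias (setting `horizon_bias` to 1 everywhere) would make the two spots equally likely fixation targets, mirroring the paper's ablation of horizon proximity.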

  7. Earthdata Search: Combining New Services and Technologies for Earth Science Data Discovery, Visualization, and Access

    NASA Astrophysics Data System (ADS)

    Quinn, P.; Pilone, D.

    2014-12-01

    A host of new services are revolutionizing discovery, visualization, and access of NASA's Earth science data holdings. At the same time, web browsers have become far more capable and open source libraries have grown to take advantage of these capabilities. Earthdata Search is a web application which combines modern browser features with the latest Earthdata services from NASA to produce a cutting-edge search and access client with features far beyond what was possible only a couple of years ago. Earthdata Search provides data discovery through the Common Metadata Repository (CMR), which provides a high-speed REST API for searching across hundreds of millions of data granules using temporal, spatial, and other constraints. It produces data visualizations by combining CMR data with Global Imagery Browse Services (GIBS) image tiles. Earthdata Search renders its visualizations using custom plugins built on Leaflet.js, a lightweight mobile-friendly open source web mapping library. The client further features an SVG-based interactive timeline view of search results. For data access, Earthdata Search provides easy temporal and spatial subsetting as well as format conversion by making use of OPeNDAP. While the client hopes to drive adoption of these services and standards, it provides fallback behavior for working with data that has not yet adopted them. This allows the client to remain on the cutting-edge of service offerings while still boasting a catalog containing thousands of data collections. In this session, we will walk through Earthdata Search and explain how it incorporates these new technologies and service offerings.
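
    The CMR granule search mentioned above is driven by a REST API that takes temporal, spatial, and collection constraints as query parameters. The sketch below only composes such a request URL (no network call); the parameter names follow the public CMR API, while the collection short name and the bounding box are example values.

```python
from urllib.parse import urlencode

CMR_GRANULES = "https://cmr.earthdata.nasa.gov/search/granules.json"

def build_granule_query(short_name, bbox, start, end, page_size=20):
    """Compose a CMR granule-search URL with spatial and temporal
    constraints.  bbox is (west, south, east, north) in degrees."""
    params = {
        "short_name": short_name,
        "bounding_box": ",".join(str(v) for v in bbox),
        "temporal": f"{start},{end}",
        "page_size": page_size,
    }
    return f"{CMR_GRANULES}?{urlencode(params)}"

url = build_granule_query("MOD021KM", (-10, 35, 5, 45),
                          "2014-01-01T00:00:00Z", "2014-01-31T23:59:59Z")
```

    A client like Earthdata Search issues requests of this shape and renders the returned granule metadata over GIBS imagery tiles.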

  8. Effect of pattern complexity on the visual span for Chinese and alphabet characters.

    PubMed

    Wang, Hui; He, Xuanzi; Legge, Gordon E

    2014-01-01

    The visual span for reading is the number of letters that can be recognized without moving the eyes and is hypothesized to impose a sensory limitation on reading speed. Factors affecting the size of the visual span have been studied using alphabet letters. There may be common constraints applying to recognition of other scripts. The aim of this study was to extend the concept of the visual span to Chinese characters and to examine the effect of the greater complexity of these characters. We measured visual spans for Chinese characters and alphabet letters in the central vision of bilingual subjects. Perimetric complexity was used as a metric to quantify the pattern complexity of binary character images. The visual span tests were conducted with four sets of stimuli differing in complexity--lowercase alphabet letters and three groups of Chinese characters. We found that the size of visual spans decreased with increasing complexity, ranging from 10.5 characters for alphabet letters to 4.5 characters for the most complex Chinese characters studied. A decomposition analysis revealed that crowding was the dominant factor limiting the size of the visual span, and the amount of crowding increased with complexity. Errors in the spatial arrangement of characters (mislocations) had a secondary effect. We conclude that pattern complexity has a major effect on the size of the visual span, mediated in large part by crowding. Measuring the visual span for Chinese characters is likely to have high relevance to understanding visual constraints on Chinese reading performance. PMID:24993020
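
    Perimetric complexity, the metric used above, relates a pattern's perimeter to its ink area; one common normalization is perimeter squared divided by 4*pi times ink area, which equals 1 for a disk. A minimal pixel-based approximation (edge counting on a binary image, an assumption of this sketch rather than the paper's exact measurement) looks like this:

```python
import math
import numpy as np

def perimetric_complexity(img):
    """Perimetric complexity of a binary character image, normalized as
    perimeter**2 / (4 * pi * ink_area).  The perimeter is approximated by
    counting ink/background edge pairs, plus ink pixels on the border."""
    img = np.asarray(img, bool)
    area = int(img.sum())
    if area == 0:
        return 0.0
    perim = (np.count_nonzero(img[:, 1:] != img[:, :-1]) +
             np.count_nonzero(img[1:, :] != img[:-1, :]) +
             int(img[0, :].sum() + img[-1, :].sum() +
                 img[:, 0].sum() + img[:, -1].sum()))
    return perim ** 2 / (4 * math.pi * area)

# a solid square: complexity 4/pi regardless of its size
square = np.zeros((10, 10), bool)
square[2:8, 2:8] = True

# scattered single pixels: far more complex, like a dense character
dots = np.zeros((10, 10), bool)
dots[2:8:2, 2:8:2] = True

pc_square = perimetric_complexity(square)
pc_dots = perimetric_complexity(dots)
```

    The scale invariance (a square scores 4/pi at any size) is what makes the metric a useful complexity measure across characters of different sizes and scripts.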

  9. Is There a Weekly Pattern for Health Searches on Wikipedia and Is the Pattern Unique to Health Topics?

    PubMed Central

    Lau, Annie YS; Wynn, Rolf

    2015-01-01

    Background Online health information–seeking behaviors have been reported to be more common at the beginning of the workweek. This behavior pattern has been interpreted as a kind of “healthy new start” or “fresh start” due to regrets or attempts to compensate for unhealthy behavior or poor choices made during the weekend. However, the observations regarding the most common health information–seeking day were based only on analyses of users’ behaviors on health websites or of online health-related searches. We wanted to confirm whether this pattern could be found in searches of Wikipedia on health-related topics, and also whether this search pattern was unique to health-related topics or whether it could represent a more general pattern of online information searching—which could be of relevance even beyond the health sector. Objective The aim was to examine the degree to which the search pattern described previously was specific to health-related information seeking or whether similar patterns could be found in other types of information-seeking behavior. Methods We extracted the number of searches performed on Wikipedia in the Norwegian language over 911 days for the most common sexually transmitted diseases (chlamydia, gonorrhea, herpes, human immunodeficiency virus [HIV], and acquired immune deficiency syndrome [AIDS]), other health-related topics (influenza, diabetes, and menopause), and 2 nonhealth-related topics (footballer Lionel Messi and pop singer Justin Bieber). The search dates were classified according to the day of the week, and ANOVA tests were used to compare the average number of hits per day of the week. Results The ANOVA tests showed that the sexually transmitted disease queries had their highest peaks on Tuesdays (P<.001) and the fewest searches on Saturdays. The other health topics also showed a weekly pattern, with the highest peaks early in the week and lower numbers on Saturdays (P<.001). Footballer Lionel Messi had the highest mean number of hits on Tuesdays and Wednesdays, whereas pop singer Justin Bieber had the most hits on Tuesdays. Both these tracked search queries also showed significantly lower numbers on Saturdays (P<.001). Conclusions Our study supports prior studies finding an increase in health information searching at the beginning of the workweek. However, we also found a similar pattern for 2 randomly chosen nonhealth-related terms, which may suggest that the search pattern is not unique to health-related searches. The results are potentially relevant beyond the field of health, and our preliminary findings need to be further explored in future studies involving a broader range of nonhealth-related searches. PMID:26693859
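
    The core analysis (grouping daily hit counts by weekday and comparing group means with a one-way ANOVA) is easy to sketch. The daily counts below are synthetic, with invented weekday means that mimic the reported Tuesday peak and Saturday dip; the F statistic is computed from first principles.

```python
from datetime import date, timedelta
import numpy as np

rng = np.random.default_rng(1)

# synthetic daily hit counts for 911 days (means invented for illustration)
start = date(2013, 1, 1)
days = [start + timedelta(d) for d in range(911)]
weekday_mean = {0: 110, 1: 130, 2: 115, 3: 105, 4: 100, 5: 80, 6: 90}
hits = [int(rng.poisson(weekday_mean[d.weekday()])) for d in days]

def weekday_anova(days, hits):
    """One-way ANOVA F statistic for hit counts grouped by day of week
    (Monday = 0 ... Sunday = 6)."""
    groups = {}
    for d, h in zip(days, hits):
        groups.setdefault(d.weekday(), []).append(h)
    g = [np.array(v, float) for v in groups.values()]
    grand = np.concatenate(g).mean()
    ss_between = sum(len(v) * (v.mean() - grand) ** 2 for v in g)
    ss_within = sum(((v - v.mean()) ** 2).sum() for v in g)
    df_b = len(g) - 1
    df_w = sum(len(v) for v in g) - len(g)
    return (ss_between / df_b) / (ss_within / df_w)

F = weekday_anova(days, hits)
```

    A large F indicates that mean hits differ reliably across weekdays, which is the statistical basis for the weekly pattern reported above.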

  10. Cortical dynamics of contextually cued attentive visual learning and search: spatial and object evidence accumulation.

    PubMed

    Huang, Tsung-Ren; Grossberg, Stephen

    2010-10-01

    How do humans use target-predictive contextual information to facilitate visual search? How are consistently paired scenic objects and positions learned and used to more efficiently guide search in familiar scenes? For example, humans can learn that a certain combination of objects may define a context for a kitchen and trigger a more efficient search for a typical object, such as a sink, in that context. The ARTSCENE Search model is developed to illustrate the neural mechanisms of such memory-based context learning and guidance and to explain challenging behavioral data on positive-negative, spatial-object, and local-distant cueing effects during visual search, as well as related neuroanatomical, neurophysiological, and neuroimaging data. The model proposes how global scene layout at a first glance rapidly forms a hypothesis about the target location. This hypothesis is then incrementally refined as a scene is scanned with saccadic eye movements. The model simulates the interactive dynamics of object and spatial contextual cueing and attention in the cortical What and Where streams starting from early visual areas through medial temporal lobe to prefrontal cortex. After learning, model dorsolateral prefrontal cortex (area 46) primes possible target locations in posterior parietal cortex based on goal-modulated percepts of spatial scene gist that are represented in parahippocampal cortex. Model ventral prefrontal cortex (area 47/12) primes possible target identities in inferior temporal cortex based on the history of viewed objects represented in perirhinal cortex. PMID:21038974

  11. Electrophysiological evidence that top-down knowledge controls working memory processing for subsequent visual search.

    PubMed

    Kawashima, Tomoya; Matsumoto, Eriko

    2016-03-23

    Items in working memory guide visual attention toward a memory-matching object. Recent studies have shown that when searching for an object this attentional guidance can be modulated by knowing the probability that the target will match an item in working memory. Here, we recorded the P3 and contralateral delay activity to investigate how top-down knowledge controls the processing of working memory items. Participants performed a memory task (recognition only) and a memory-or-search task (recognition or visual search) in which they were asked to maintain two colored oriented bars in working memory. For visual search, we manipulated the probability that the target had the same color as the memorized items (0, 50, or 100%). Participants knew these probabilities before the task. Target detection in the 100% match condition was faster than in the 50% match condition, indicating that participants used their knowledge of the probabilities. We found that the P3 amplitude in the 100% condition was larger than in the other conditions and that contralateral delay activity amplitude did not vary across conditions. These results suggest that more attention was allocated to the memory items when observers knew in advance that their color would likely match a target. This led to better search performance despite using qualitatively equal working memory representations. PMID:26872100

  12. VisualRank: applying PageRank to large-scale image search.

    PubMed

    Jing, Yushi; Baluja, Shumeet

    2008-11-01

    Because of the relative ease in understanding and processing text, commercial image-search systems often rely on techniques that are largely indistinguishable from text-search. Recently, academic studies have demonstrated the effectiveness of employing image-based features to provide alternative or additional signals. However, it remains uncertain whether such techniques will generalize to a large number of popular web queries, and whether the potential improvement to search quality warrants the additional computational cost. In this work, we cast the image-ranking problem into the task of identifying "authority" nodes on an inferred visual similarity graph and propose VisualRank to analyze the visual link structures among images. The images found to be "authorities" are chosen as those that answer the image-queries well. To understand the performance of such an approach in a real system, we conducted a series of large-scale experiments based on the task of retrieving images for 2000 of the most popular product queries. Our experimental results show significant improvement, in terms of user satisfaction and relevancy, in comparison to the most recent Google Image Search results. Maintaining modest computational cost is vital to ensuring that this procedure can be used in practice; we describe the techniques required to make this system practical for large scale deployment in commercial search engines. PMID:18787237
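
    Identifying "authority" images on a visual similarity graph works like PageRank: column-normalize the similarity matrix and run power iteration with a damping factor. The toy graph below is invented for illustration, with one image similar to all the others.

```python
import numpy as np

def visualrank(S, damping=0.85, iters=100):
    """Power-iteration ranking on a non-negative visual-similarity matrix
    S.  Column-normalizing S makes each column a probability distribution
    over neighbors, as in PageRank."""
    S = np.asarray(S, float).copy()
    np.fill_diagonal(S, 0.0)        # ignore self-similarity
    col = S.sum(axis=0)
    P = S / np.where(col > 0, col, 1.0)
    n = len(S)
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        r = (1 - damping) / n + damping * P @ r
    return r / r.sum()

# toy similarity graph: image 0 resembles every other image (an authority)
S = np.array([[0.0, 1.0, 1.0, 1.0],
              [1.0, 0.0, 0.2, 0.0],
              [1.0, 0.2, 0.0, 0.0],
              [1.0, 0.0, 0.0, 0.0]])
scores = visualrank(S)
```

    In a production setting the similarity matrix would come from image features (e.g., local descriptor matches) computed over the result set of a query, and the top-ranked "authorities" would be surfaced first.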

  13. The Importance of the Eye Area in Face Identification Abilities and Visual Search Strategies in Persons with Asperger Syndrome

    ERIC Educational Resources Information Center

    Falkmer, Marita; Larsson, Matilda; Bjallmark, Anna; Falkmer, Torbjorn

    2010-01-01

    Partly claimed to explain social difficulties observed in people with Asperger syndrome, face identification and visual search strategies become important. Previous research findings are, however, disparate. In order to explore face identification abilities and visual search strategies, with special focus on the importance of the eye area, 24…

  14. Active visual search in non-stationary scenes: coping with temporal variability and uncertainty

    NASA Astrophysics Data System (ADS)

    Ušćumlić, Marija; Blankertz, Benjamin

    2016-02-01

    Objective. State-of-the-art experiments for studying neural processes underlying visual cognition often constrain sensory inputs (e.g., static images) and our behavior (e.g., fixed eye-gaze, long eye fixations), isolating or simplifying the interaction of neural processes. Motivated by the non-stationarity of our natural visual environment, we investigated the electroencephalography (EEG) correlates of visual recognition while participants overtly performed visual search in non-stationary scenes. We hypothesized that visual effects (such as those typically used in human–computer interfaces) may increase temporal uncertainty (with reference to fixation onset) of cognition-related EEG activity in an active search task and therefore require novel techniques for single-trial detection. Approach. We addressed fixation-related EEG activity in an active search task with respect to stimulus-appearance styles and dynamics. Alongside popping-up stimuli, our experimental study embraces two composite appearance styles based on fading-in, enlarging, and motion effects. Additionally, we explored whether the knowledge obtained in the pop-up experimental setting can be exploited to boost the EEG-based intention-decoding performance when facing transitional changes of visual content. Main results. The results confirmed our initial hypothesis that the dynamic of visual content can increase temporal uncertainty of the cognition-related EEG activity in active search with respect to fixation onset. This temporal uncertainty challenges the pivotal aim to keep the decoding performance constant irrespective of visual effects. Importantly, the proposed approach for EEG decoding based on knowledge transfer between the different experimental settings gave a promising performance. Significance. Our study demonstrates that the non-stationarity of visual scenes is an important factor in the evolution of cognitive processes, as well as in the dynamic of ocular behavior (i.e., dwell time and fixation duration) in an active search task. In addition, our method to improve single-trial detection performance in this adverse scenario is an important step in making brain–computer interfacing technology available for human–computer interaction applications.

  15. Visual cluster analysis and pattern recognition template and methods

    SciTech Connect

    Osbourn, G.C.; Martinez, R.F.

    1993-12-31

    This invention comprises a method of clustering using a novel template to define a region of influence. Using neighboring approximation methods, computation times can be significantly reduced. The template and method are applicable and improve pattern recognition techniques.

  16. Visual cluster analysis and pattern recognition template and methods

    DOEpatents

    Osbourn, Gordon Cecil; Martinez, Rubel Francisco

    1999-01-01

    A method of clustering using a novel template to define a region of influence. Using neighboring approximation methods, computation times can be significantly reduced. The template and method are applicable and improve pattern recognition techniques.

  17. Visual cluster analysis and pattern recognition template and methods

    DOEpatents

    Osbourn, G.C.; Martinez, R.F.

    1999-05-04

    A method of clustering using a novel template to define a region of influence is disclosed. Using neighboring approximation methods, computation times can be significantly reduced. The template and method are applicable and improve pattern recognition techniques. 30 figs.

  18. Studying visual search using systems factorial methodology with target–distractor similarity as the factor

    PubMed Central

    Fifić, Mario; Townsend, James T.; Eidels, Ami

    2008-01-01

    Systems factorial technology (SFT) is a theory-driven set of methodologies oriented toward identification of basic mechanisms, such as parallel versus serial processing, of perception and cognition. Studies employing SFT in visual search with small display sizes have repeatedly shown decisive evidence for parallel processing. The first strong evidence for serial processing was recently found in short-term memory search, using target–distractor (T–D) similarity as a key experimental variable (Townsend & Fifić, 2004). One of the major goals of the present study was to employ T–D similarity in visual search to learn whether this mode of manipulating processing speed would affect the parallel versus serial issue in that domain. The result was a surprising and regular departure from ordinary parallel or serial processing. The most plausible account at present relies on the notion of positively interacting parallel channels. PMID:18556921

  19. Feature-based attention in the frontal eye field and area V4 during visual search.

    PubMed

    Zhou, Huihui; Desimone, Robert

    2011-06-23

    When we search for a target in a crowded visual scene, we often use the distinguishing features of the target, such as color or shape, to guide our attention and eye movements. To investigate the neural mechanisms of feature-based attention, we simultaneously recorded neural responses in the frontal eye field (FEF) and area V4 while monkeys performed a visual search task. The responses of cells in both areas were modulated by feature attention, independent of spatial attention, and the magnitude of response enhancement was inversely correlated with the number of saccades needed to find the target. However, an analysis of the latency of sensory and attentional influences on responses suggested that V4 provides bottom-up sensory information about stimulus features, whereas the FEF provides a top-down attentional bias toward target features that modulates sensory processing in V4 and that could be used to guide the eyes to a searched-for target. PMID:21689605

  20. Hypothesis Support Mechanism for Mid-Level Visual Pattern Recognition

    NASA Technical Reports Server (NTRS)

    Amador, Jose J (Inventor)

    2007-01-01

    A method of mid-level pattern recognition provides for a pose invariant Hough Transform by parametrizing pairs of points in a pattern with respect to at least two reference points, thereby providing a parameter table that is scale- or rotation-invariant. A corresponding inverse transform may be applied to test hypothesized matches in an image and a distance transform utilized to quantify the level of match.
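
    The key idea of parametrizing points relative to two reference points can be illustrated with a small sketch: expressing each point in the frame defined by the reference pair yields coordinates invariant to translation, rotation, and uniform scaling, suitable for indexing a pose-invariant parameter table. This is a generic illustration of that idea (akin to geometric hashing), not the patented method itself.

```python
import math

def invariant_coords(points, r0, r1):
    """Coordinates of each point in the frame defined by two reference
    points: origin at r0, x-axis toward r1, scaled by |r1 - r0|."""
    bx = (r1[0] - r0[0], r1[1] - r0[1])
    L = math.hypot(bx[0], bx[1])
    ux = (bx[0] / L, bx[1] / L)
    uy = (-ux[1], ux[0])  # perpendicular unit vector
    coords = []
    for p in points:
        d = (p[0] - r0[0], p[1] - r0[1])
        coords.append(((d[0] * ux[0] + d[1] * ux[1]) / L,
                       (d[0] * uy[0] + d[1] * uy[1]) / L))
    return coords

# the same pattern after rotating 90 degrees, scaling by 2, and shifting
def transform(p):
    return (4.0 - 2.0 * p[1], 5.0 + 2.0 * p[0])

pattern = [(2.0, 3.0), (5.0, 1.0)]
c1 = invariant_coords(pattern, (1.0, 1.0), (3.0, 1.0))
c2 = invariant_coords([transform(p) for p in pattern],
                      transform((1.0, 1.0)), transform((3.0, 1.0)))
```

    Because `c1` and `c2` agree, a parameter table keyed on such coordinates can vote for hypothesized matches regardless of the pattern's pose or scale in the image.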

  1. A Globally Convergent Augmented Lagrangian Pattern Search Algorithm for Optimization with General Constraints and Simple Bounds

    NASA Technical Reports Server (NTRS)

    Lewis, Robert Michael; Torczon, Virginia

    1998-01-01

    We give a pattern search adaptation of an augmented Lagrangian method due to Conn, Gould, and Toint. The algorithm proceeds by successive bound constrained minimization of an augmented Lagrangian. In the pattern search adaptation we solve this subproblem approximately using a bound constrained pattern search method. The stopping criterion proposed by Conn, Gould, and Toint for the solution of this subproblem requires explicit knowledge of derivatives. Such information is presumed absent in pattern search methods; however, we show how we can replace this with a stopping criterion based on the pattern size in a way that preserves the convergence properties of the original algorithm. In this way we proceed by successive, inexact, bound constrained minimization without knowing exactly how inexact the minimization is. So far as we know, this is the first provably convergent direct search method for general nonlinear programming.
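
    The scheme above alternates inexact pattern-search minimization of an augmented Lagrangian with multiplier updates. The toy version below handles a single equality constraint with a fixed penalty and a first-order multiplier update; the pattern-size stopping rule for the inner solve mirrors the surrogate criterion discussed in the abstract. All problem data and parameter values are invented for illustration, and this is a sketch of the structure, not the Lewis-Torczon algorithm.

```python
def pattern_search(f, x0, step=0.5, tol=1e-4):
    """Derivative-free coordinate pattern search; the inner solve stops
    when the pattern (mesh) size falls below tol."""
    x, fx = list(x0), f(x0)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for s in (step, -step):
                trial = list(x)
                trial[i] += s
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
        if not improved:
            step *= 0.5  # no improving poll point: shrink the pattern
    return x

def augmented_lagrangian(f, g, x0, lam=0.0, mu=10.0, outer=15):
    """Successive inexact minimization of the augmented Lagrangian for a
    single equality constraint g(x) = 0."""
    x = list(x0)
    for _ in range(outer):
        L = lambda v: f(v) + lam * g(v) + 0.5 * mu * g(v) ** 2
        x = pattern_search(L, x)
        lam += mu * g(x)  # first-order multiplier update
    return x

# minimize x^2 + y^2 subject to x + y = 1  (optimum at (0.5, 0.5))
f = lambda v: v[0] ** 2 + v[1] ** 2
g = lambda v: v[0] + v[1] - 1.0
sol = augmented_lagrangian(f, g, [0.0, 0.0])
```

    Each outer iteration solves the bound-free subproblem only approximately, yet the multiplier estimate still converges; that is the "successive, inexact minimization" the abstract proves convergent in the general setting.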

  2. Epistemic Beliefs, Online Search Strategies, and Behavioral Patterns While Exploring Socioscientific Issues

    NASA Astrophysics Data System (ADS)

    Hsu, Chung-Yuan; Tsai, Meng-Jung; Hou, Huei-Tse; Tsai, Chin-Chung

    2014-06-01

Online information searching tasks are usually implemented in a technology-enhanced science curriculum or merged into an inquiry-based science curriculum. The purpose of this study was to examine the role students' different levels of scientific epistemic beliefs (SEBs) play in their online information searching strategies and behaviors. Based on the measurement of an SEB survey, 42 undergraduate and graduate students in Taiwan were recruited from a pool of 240 students and were divided into sophisticated and naïve SEB groups. The students' self-perceived online searching strategies were evaluated by the Online Information Searching Strategies Inventory, and their search behaviors were recorded by screen-capture videos. A sequential analysis was further used to analyze the students' searching behavioral patterns. The results showed that those students with more sophisticated SEBs tended to employ more advanced online searching strategies and to demonstrate a more metacognitive searching pattern.


  3. Learning From Data: Recognizing Glaucomatous Defect Patterns and Detecting Progression From Visual Field Measurements

    PubMed Central

    Yousefi, Siamak; Goldbaum, Michael H.; Balasubramanian, Madhusudhanan; Medeiros, Felipe A.; Zangwill, Linda M.; Liebmann, Jeffrey M.; Girkin, Christopher A.; Weinreb, Robert N.

    2014-01-01

    A hierarchical approach to learn from visual field data was adopted to identify glaucomatous visual field defect patterns and to detect glaucomatous progression. The analysis pipeline included three stages, namely, clustering, glaucoma boundary limit detection, and glaucoma progression detection testing. First, cross-sectional visual field tests collected from each subject were clustered using a mixture of Gaussians and model parameters were estimated using expectation maximization. The visual field clusters were further estimated to recognize glaucomatous visual field defect patterns by decomposing each cluster into several axes. The glaucoma visual field defect patterns along each axis then were identified. To derive a definition of progression, the longitudinal visual fields of stable glaucoma eyes on the abnormal cluster axes were projected and the slope was approximated using linear regression (LR) to determine the confidence limit of each axis. For glaucoma progression detection, the longitudinal visual fields of each eye on the abnormal cluster axes were projected and the slope was approximated by LR. Progression was assigned if the progression rate was greater than the boundary limit of the stable eyes; otherwise, stability was assumed. The proposed method was compared to a recently developed progression detection method and to clinically available glaucoma progression detection software. The clinical accuracy of the proposed pipeline was as good as or better than the currently available methods. PMID:24710816
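The progression-detection step of the pipeline — project each eye's longitudinal fields on an axis, regress against time, and compare the slope with a confidence limit derived from stable glaucoma eyes — can be sketched as follows. The percentile choice and the sign convention (worsening = more negative) are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def slope(times, values):
    """Linear-regression slope of projected visual-field values over time."""
    return np.polyfit(times, values, 1)[0]

def progression_limit(stable_series, pct=5.0):
    """Confidence limit from stable eyes: the pct-th percentile of their
    slopes (assuming worsening is more negative on this axis)."""
    slopes = [slope(t, v) for t, v in stable_series]
    return np.percentile(slopes, pct)

def is_progressing(times, values, limit):
    """Flag progression when the eye's slope falls below the stable-eye limit."""
    return slope(times, values) < limit
```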

  4. Binocular advantage for prehension movements performed in visually enriched environments requiring visual search

    PubMed Central

    Gnanaseelan, Roshani; Gonzalez, Dave A.; Niechwiej-Szwedo, Ewa

    2014-01-01

The purpose of this study was to examine the role of binocular vision during a prehension task performed in a visually enriched environment where the target object was surrounded by distractors/obstacles. Fifteen adults reached and grasped for a cylindrical peg while eye movements and upper limb kinematics were recorded. The complexity of the visual environment was manipulated by varying the number of distractors and by varying the saliency of the target. Gaze behavior (i.e., the latency of the primary gaze shift and frequency of gaze shifts prior to reach initiation) was comparable between viewing conditions. In contrast, a binocular advantage was evident in performance accuracy. Specifically, participants picked up the wrong object twice as often during monocular viewing when the complexity of the environment increased. Reach performance was more efficient during binocular viewing, which was demonstrated by shorter reach reaction time and overall movement time. Reaching movements during the approach phase had higher peak velocity during binocular viewing. During monocular viewing reach trajectories exhibited a direction bias during the acceleration phase, which was leftward during left eye viewing and rightward during right eye viewing. This bias can be explained by the presence of esophoria in the covered eye. The grasping interval was also extended by ~20% during monocular viewing; however, the duration of the return phase after the target was picked up was comparable across viewing conditions. In conclusion, binocular vision provides important input for planning and execution of prehension movements in visually enriched environments. Binocular advantage was evident, regardless of set size or target saliency, indicating that adults plan their movements more cautiously during monocular viewing, even in relatively simple environments with a highly salient target. Nevertheless, in visually-normal adults monocular input provides sufficient information to engage in online control to correct the initial errors in movement planning. PMID:25506323

  5. The downside of choice: Having a choice benefits enjoyment, but at a cost to efficiency and time in visual search.

    PubMed

    Kunar, Melina A; Ariyabandu, Surani; Jami, Zaffran

    2016-04-01

    The efficiency of how people search for an item in visual search has, traditionally, been thought to depend on bottom-up or top-down guidance cues. However, recent research has shown that the rate at which people visually search through a display is also affected by cognitive strategies. In this study, we investigated the role of choice in visual search, by asking whether giving people a choice alters both preference for a cognitively neutral task and search behavior. Two visual search conditions were examined: one in which participants were given a choice of visual search task (the choice condition), and one in which participants did not have a choice (the no-choice condition). The results showed that the participants in the choice condition rated the task as both more enjoyable and likeable than did the participants in the no-choice condition. However, despite their preferences, actual search performance was slower and less efficient in the choice condition than in the no-choice condition (Exp. 1). Experiment 2 showed that the difference in search performance between the choice and no-choice conditions disappeared when central executive processes became occupied with a task-switching task. These data concur with a choice-impaired hypothesis of search, in which having a choice leads to more motivated, active search involving executive processes. PMID:26892010

  6. Through The Looking (Google) Glass: Attentional Costs in Distracted Visual Search.

    PubMed

    Lewis, Joanna; Neider, Mark

    2015-01-01

Devices using a Heads-Up-Display (HUD), such as Google Glass (GG), provide users with a wide range of informational content, often while that user is engaged in a concurrent task. It is unclear, however, how such information might interfere with attentional processes. Here, we evaluated how a secondary task load presented on GG affects selective attention mechanisms. Participants completed a visual search task for an oriented T target among L distractors (50 or 80 set size) on a computer screen. Our primary manipulation was the nature of a secondary task via the use (or non-use) of GG. More specifically, participants performed the search task while they either did not wear GG (control condition), wore GG with no information presented on it, or wore GG with a word presented on it. Additionally, we also manipulated the instructions given to the participant regarding the relevance of the information presented on the GG (e.g., useful, irrelevant, or ignore). When words were presented on the GG, we tested for recognition memory with a surprise recognition task composed of 50% new and old words following the visual search task. We found an RT cost during visual search associated with simply wearing GG compared to when participants searched without wearing GG (~258ms) and when secondary information was presented as compared to wearing GG with no information presented (~225ms). We found no interaction of search set size and GG condition, nor was there an effect of GG condition on search accuracy. Recognition memory was significantly above chance in all instruction conditions; even when participants were instructed that information presented on the GG should be ignored, there was still evidence that the information was processed. Overall, our findings suggest that information presented on HUDs, such as GG, may induce performance costs on concurrent tasks requiring selective attention. Meeting abstract presented at VSS 2015. PMID:26327048

  7. Increased Vulnerability to Pattern-Related Visual Stress in Myalgic Encephalomyelitis.

    PubMed

    Wilson, Rachel L; Paterson, Kevin B; Hutchinson, Claire V

    2015-12-01

    The objective of this study was to determine vulnerability to pattern-related visual stress in Myalgic Encephalomyelitis/Chronic Fatigue Syndrome (ME/CFS). A total of 20 ME/CFS patients and 20 matched (age, gender) controls were recruited to the study. Pattern-related visual stress was determined using the Pattern Glare Test. Participants viewed three patterns, the spatial frequencies (SF) of which were 0.3 (low-SF), 2.3 (mid-SF), and 9.4 (high-SF) cycles per degree (c/deg). They reported the number of distortions they experienced when viewing each pattern. ME/CFS patients exhibited significantly higher pattern glare scores than controls for the mid-SF pattern. Mid-high SF differences were also significantly higher in patients than controls. These findings provide evidence of altered visual perception in ME/CFS. Pattern-related visual stress may represent an identifiable clinical feature of ME/CFS that will prove useful in its diagnosis. However, further research is required to establish if these symptoms reflect ME/CFS-related changes in the functioning of sensory neural pathways. PMID:26562880

  8. A modified mirror projection visual evoked potential stimulator for presenting patterns in different orientations.

    PubMed

    Taylor, P K; Wynn-Williams, G M

    1986-07-01

    Modifications to a standard mirror projection visual evoked potential stimulator are described to enable projection of patterns in varying orientations. The galvanometer-mirror assembly is mounted on an arm which can be rotated through 90 degrees. This enables patterns in any orientation to be deflected perpendicular to their axes. PMID:2424725

  9. Visualizing a High Recall Search Strategy Output for Undergraduates in an Exploration Stage of Researching a Term Paper.

    ERIC Educational Resources Information Center

    Cole, Charles; Mandelblatt, Bertie; Stevenson, John

    2002-01-01

    Discusses high recall search strategies for undergraduates and how to overcome information overload that results. Highlights include word-based versus visual-based schemes; five summarization and visualization schemes for presenting information retrieval citation output; and results of a study that recommend visualization schemes geared toward…

  11. Does focused endogenous attention prevent attentional capture in pop-out visual search?

    PubMed Central

    Seiss, Ellen; Kiss, Monika; Eimer, Martin

    2009-01-01

    To investigate whether salient visual singletons capture attention when they appear outside the current endogenous attentional focus, we measured the N2pc component as a marker of attentional capture in a visual search task where target or nontarget singletons were presented at locations previously cued as task-relevant, or in the uncued irrelevant hemifield. In two experiments, targets were either defined by colour, or by a combination of colour and shape. The N2pc was elicited both for attended singletons and for singletons on the uncued side, demonstrating that focused endogenous attention cannot prevent attentional capture by salient unattended visual events. However, N2pc amplitudes were larger for attended and unattended singletons that shared features with the current target, suggesting that top-down task sets modulate the capacity of visual singletons to capture attention both within and outside the current attentional focus. PMID:19473304

  12. Production and perception rules underlying visual patterns: effects of symmetry and hierarchy

    PubMed Central

    Westphal-Fitch, Gesche; Huber, Ludwig; Gómez, Juan Carlos; Fitch, W. Tecumseh

    2012-01-01

    Formal language theory has been extended to two-dimensional patterns, but little is known about two-dimensional pattern perception. We first examined spontaneous two-dimensional visual pattern production by humans, gathered using a novel touch screen approach. Both spontaneous creative production and subsequent aesthetic ratings show that humans prefer ordered, symmetrical patterns over random patterns. We then further explored pattern-parsing abilities in different human groups, and compared them with pigeons. We generated visual plane patterns based on rules varying in complexity. All human groups tested, including children and individuals diagnosed with autism spectrum disorder (ASD), were able to detect violations of all production rules tested. Our ASD participants detected pattern violations with the same speed and accuracy as matched controls. Children's ability to detect violations of a relatively complex rotational rule correlated with age, whereas their ability to detect violations of a simple translational rule did not. By contrast, even with extensive training, pigeons were unable to detect orientation-based structural violations, suggesting that, unlike humans, they did not learn the underlying structural rules. Visual two-dimensional patterns offer a promising new formally-grounded way to investigate pattern production and perception in general, widely applicable across species and age groups. PMID:22688636

  13. Visual search performance of patients with vision impairment: Effect of JPEG image enhancement

    PubMed Central

    Luo, Gang; Satgunam, PremNandhini; Peli, Eli

    2012-01-01

Purpose To measure natural image search performance in patients with central vision impairment. To evaluate the performance effect for a JPEG based image enhancement technique using the visual search task. Method 150 JPEG images were presented on a touch screen monitor in either an enhanced or original version to 19 patients (visual acuity 0.4 to 1.2 logMAR, 6/15 to 6/90, 20/50 to 20/300) and 7 normally sighted controls (visual acuity −0.12 to 0.1 logMAR, 6/4.5 to 6/7.5, 20/15 to 20/25). Each image fell into one of three categories: faces, indoors, and collections. The enhancement was realized by moderately boosting a mid-range spatial frequency band in the discrete cosine transform (DCT) coefficients of the image luminance component. Participants pointed to an object in a picture that matched a given target displayed at the upper-left corner of the monitor. Search performance was quantified by the percentage of correct responses, the median search time of correct responses, and an “integrated performance” measure – the area under the curve of cumulative correct response rate over search time. Results Patients were able to perform the search tasks but their performance was substantially worse than the controls. Search performances for the 3 image categories were significantly different (p ≤ 0.001) for all the participants, with searching for faces being the most difficult. When search time and correct response were analyzed separately, the effect of enhancement led to increase in one measure but decrease in another for many patients. Using the integrated performance, it was found that search performance declined with decrease in acuity (p=0.005). An improvement with enhancement was found mainly for the patients whose acuity ranged from 0.4 to 0.8 logMAR (6/15 to 6/38, 20/50 to 20/125). Enhancement conferred a small but significant improvement in integrated performance for indoor and collection images (p=0.025) in the patients. Conclusion Search performance for natural images can be measured in patients with impaired vision to evaluate the effect of image enhancement. Patients with moderate vision loss might benefit from the moderate level of enhancement used here. PMID:22540926
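The "integrated performance" measure described above — the area under the curve of cumulative correct response rate over search time — could be computed along these lines; the grid resolution and normalization are assumptions, not the paper's exact procedure.

```python
import numpy as np

def integrated_performance(times, correct, t_max):
    """Area under the cumulative correct-response-rate curve up to t_max,
    normalized so a perfect, instantaneous observer scores 1.0."""
    times = np.asarray(times, float)
    correct = np.asarray(correct, bool)
    grid = np.linspace(0.0, t_max, 201)
    # fraction of all trials answered correctly by time t (step function)
    rate = np.array([(correct & (times <= t)).mean() for t in grid])
    # trapezoidal area, normalized by the time window
    dt = grid[1] - grid[0]
    return ((rate[:-1] + rate[1:]) / 2 * dt).sum() / t_max
```

The measure rewards both speed and accuracy in one number: slow correct responses contribute less area than fast ones, and incorrect responses contribute nothing.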

  14. The NLP Swish Pattern: An Innovative Visualizing Technique.

    ERIC Educational Resources Information Center

    Masters, Betsy J.; And Others

    1991-01-01

    Describes swish pattern, one of many innovative therapeutic interventions that developers of neurolinguistic programing (NLP) have contributed to counseling profession. Presents brief overview of NLP followed by an explanation of the basic theory and expected outcomes of the swish. Presents description of the intervention process and case studies…

  15. PATTERN REVERSAL VISUAL EVOKED POTENTIALS IN AWAKE RATS

    EPA Science Inventory

    A method for recording pattern reversal evoked potentials (PREPs) from awake restrained rats has been developed. The procedure of Onofrj et al. was modified to eliminate the need for anesthetic, thereby avoiding possible interactions of the anesthetic with other manipulations of ...

  16. Time Curves: Folding Time to Visualize Patterns of Temporal Evolution in Data.

    PubMed

    Bach, Benjamin; Shi, Conglei; Heulot, Nicolas; Madhyastha, Tara; Grabowski, Tom; Dragicevic, Pierre

    2016-01-01

    We introduce time curves as a general approach for visualizing patterns of evolution in temporal data. Examples of such patterns include slow and regular progressions, large sudden changes, and reversals to previous states. These patterns can be of interest in a range of domains, such as collaborative document editing, dynamic network analysis, and video analysis. Time curves employ the metaphor of folding a timeline visualization into itself so as to bring similar time points close to each other. This metaphor can be applied to any dataset where a similarity metric between temporal snapshots can be defined, thus it is largely datatype-agnostic. We illustrate how time curves can visually reveal informative patterns in a range of different datasets. PMID:26529718
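One way to realize the time-curve idea — any similarity metric between temporal snapshots, then a projection that places similar time points close together — is classical multidimensional scaling; the paper's own projection method may differ from this sketch.

```python
import numpy as np

def time_curve(snapshots, metric):
    """Project temporal snapshots to 2-D with classical MDS so that
    similar time points land near each other; connecting the rows in
    time order yields the 'time curve'."""
    n = len(snapshots)
    D2 = np.array([[metric(a, b) ** 2 for b in snapshots] for a in snapshots])
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ D2 @ J                    # double-centred Gram matrix
    vals, vecs = np.linalg.eigh(B)
    top = np.argsort(vals)[::-1][:2]         # two largest eigenvalues
    return vecs[:, top] * np.sqrt(np.clip(vals[top], 0, None))
```

Because only a pairwise metric is required, the same code applies to document revisions, network snapshots, or video frames, which is what makes the approach largely datatype-agnostic.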

  17. Task-dependent modulation of word processing mechanisms during modified visual search tasks.

    PubMed

    Dampure, Julien; Benraiss, Abdelrhani; Vibert, Nicolas

    2016-06-01

    During visual search for words, the impact of the visual and semantic features of words varies as a function of the search task. This event-related potential (ERP) study focused on the way these features of words are used to detect similarities between the distractor words that are glanced at and the target word, as well as to then reject the distractor words. The participants had to search for a target word that was either given literally or defined by a semantic clue among words presented sequentially. The distractor words included words that resembled the target and words that were semantically related to the target. The P2a component was the first component to be modulated by the visual and/or semantic similarity of distractors to the target word, and these modulations varied according to the task. The same held true for the later N300 and N400 components, which confirms that, depending on the task, distinct processing pathways were sensitized through attentional modulation. Hence, the process that matches what is perceived with the target acts during the first 200 ms after word presentation, and both early detection and late rejection processes of words depend on the search task and on the representation of the target stored in memory. PMID:26176489

  18. Prevalence learning and decision making in a visual search task: an equivalent ideal observer approach

    NASA Astrophysics Data System (ADS)

    He, Xin; Samuelson, Frank; Zeng, Rongping; Sahiner, Berkman

    2015-03-01

Research studies have observed an influence of target prevalence on observer performance for visual search tasks. The goal of this work is to develop models for prevalence effects on visual search. In a recent study by Wolfe et al., a large-scale observer study was conducted to understand the effects of varying target prevalence on visual search. Particularly, a total of 12 observers were recruited to perform 1000 trials of simulated baggage search as target prevalence varied sinusoidally from high to low and back to high. We attempted to model observers' behavior in prevalence learning and decision making. We modeled the observer as an equivalent ideal observer (EIO) with a prior belief about the signal prevalence. The use of an EIO allows the application of ideal-observer mathematics to characterize real observers' performance when reading real-life images. For each new image, the observer updates the belief about prevalence and adjusts his or her decision threshold according to utility theory. The model results agree well with the experimental results from the Wolfe study. The proposed models allow theoretical insights into observer behavior in learning prevalence and adjusting decision thresholds.
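The prevalence-learning scheme described — a prior belief over prevalence updated after every trial, with the decision criterion set by utility theory — can be sketched with a Beta-Bernoulli prior and the standard likelihood-ratio criterion. The utility values below are illustrative assumptions, not parameters from the study.

```python
from dataclasses import dataclass

@dataclass
class PrevalenceObserver:
    """Equivalent-ideal-observer sketch: Beta prior over target
    prevalence, updated per trial; the likelihood-ratio criterion
    follows from expected-utility maximization."""
    a: float = 1.0        # Beta pseudo-counts: target-present trials seen
    b: float = 1.0        # target-absent trials seen
    u_hit: float = 1.0    # illustrative payoffs
    u_miss: float = -1.0
    u_cr: float = 1.0
    u_fa: float = -1.0

    def prevalence(self):
        """Posterior mean of the target prevalence."""
        return self.a / (self.a + self.b)

    def criterion(self):
        """Respond 'target present' when the likelihood ratio exceeds
        ((1 - p) / p) * (u_cr - u_fa) / (u_hit - u_miss)."""
        p = self.prevalence()
        return (1 - p) / p * (self.u_cr - self.u_fa) / (self.u_hit - self.u_miss)

    def update(self, target_present):
        """Incorporate the outcome of one trial into the prior."""
        if target_present:
            self.a += 1
        else:
            self.b += 1
```

As prevalence falls, the criterion rises, so the model observer becomes more conservative about calling a target — the qualitative signature of the low-prevalence effect.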

  19. Incidental Learning Speeds Visual Search by Lowering Response Thresholds, Not by Improving Efficiency: Evidence from Eye Movements

    ERIC Educational Resources Information Center

    Hout, Michael C.; Goldinger, Stephen D.

    2012-01-01

    When observers search for a target object, they incidentally learn the identities and locations of "background" objects in the same display. This learning can facilitate search performance, eliciting faster reaction times for repeated displays. Despite these findings, visual search has been successfully modeled using architectures that maintain no…

  1. On the Role of Consonants and Vowels in Visual-Word Processing: Evidence with a Letter Search Paradigm

    ERIC Educational Resources Information Center

    Acha, Joana; Perea, Manuel

    2010-01-01

    Prior research has shown that the search function in the visual letter search task may reflect the regularities of the orthographic structure of a given script. In the present experiment, we examined whether the search function of letter detection was sensitive to consonant-vowel status of a pre-cued letter. Participants had to detect the…

  2. The role of selective attention during visual search using random dot motion stimuli.

    PubMed

    Bolandnazar, Zeinab; Lennarz, Bianca; Mirpour, Koorosh; Bisley, James

    2015-01-01

    Finding objects among distractors is an essential everyday skill, which is often tested with visual search tasks using static items in the display. Although these kinds of displays are ideal for studying search behavior, the neural encoding of the visual stimuli can occur rapidly, which limits the analysis that can be done on the accumulation of evidence. Searching for a target among multiple random dot motion (RDM) stimuli should allow us to study the effect of attention on the accumulation of information during visual search. We trained an animal to make a saccade to a RDM stimulus with motion in a particular direction (the target). The animal began the task by fixating a central square. After a short delay, it changed to a dotted hollow square and one, two or four RDM stimuli appeared equally spaced in the periphery for 700 ms. The animal was rewarded for looking at the target, if present, or for maintaining fixation if the target was absent from the display. In the spread attention condition, all the dots in the RDM stimuli were the same color. In the focused attention condition, the color of the fixation square and the dotted hollow square matched the color of the dots in one RDM stimulus, which was a 100% valid cue. We varied the coherence of the RDM stimuli for each condition from 65 to 100% (100 ms limited lifetime). At the lower coherences, there were strong effects of set size and attention condition on both performance and reaction time. Our data show that using a RDM visual search task allows us to clearly illustrate the role of attention in the accumulation of perceptual evidence, which increases response accuracy and shortens reaction time. Meeting abstract presented at VSS 2015. PMID:26327054

  3. Visual Search with Image Modification in Age-Related Macular Degeneration

    PubMed Central

    Wiecek, Emily; Jackson, Mary Lou; Dakin, Steven C.; Bex, Peter

    2012-01-01

    Purpose. AMD results in loss of central vision and a dependence on low-resolution peripheral vision. While many image enhancement techniques have been proposed, there is a lack of quantitative comparison of the effectiveness of enhancement. We developed a natural visual search task that uses patients' eye movements as a quantitative and functional measure of the efficacy of image modification. Methods. Eye movements of 17 patients (mean age = 77 years) with AMD were recorded while they searched for target objects in natural images. Eight different image modification methods were implemented and included manipulations of local image or edge contrast, color, and crowding. In a subsequent task, patients ranked their preference of the image modifications. Results. Within individual participants, there was no significant difference in search duration or accuracy across eight different image manipulations. When data were collapsed across all image modifications, a multivariate model identified six significant predictors for normalized search duration including scotoma size and acuity, as well as interactions among scotoma size, age, acuity, and contrast (P < 0.05). Additionally, an analysis of image statistics showed no correlation with search performance across all image modifications. Rank ordering of enhancement methods based on participants' preference revealed a trend that participants preferred the least modified images (P < 0.05). Conclusions. There was no quantitative effect of image modification on search performance. A better understanding of low- and high-level components of visual search in natural scenes is necessary to improve future attempts at image enhancement for low vision patients. Different search tasks may require alternative image modifications to improve patient functioning and performance. PMID:22930725

  4. Spatial properties of objects predict patterns of neural response in the ventral visual pathway.

    PubMed

    Watson, David M; Young, Andrew W; Andrews, Timothy J

    2016-02-01

    Neuroimaging studies have revealed topographically organised patterns of response to different objects in the ventral visual pathway. These patterns are thought to be based on the form of the object. However, it is not clear what dimensions of object form are important. Here, we determined the extent to which spatial properties (energy across the image) could explain patterns of response in these regions. We compared patterns of fMRI response to images from different object categories presented at different retinal sizes. Although distinct neural patterns were evident to different object categories, changing the size (and thus the spatial properties) of the images had a significant effect on these patterns. Next, we used a computational approach to determine whether more fine-grained differences in the spatial properties can explain the patterns of neural response to different objects. We found that the spatial properties of the image were able to predict patterns of neural response, even when categorical factors were removed from the analysis. We also found that the effect of spatial properties on the patterns of response varies across the ventral visual pathway. These results show how spatial properties can be an important organising principle in the topography of the ventral visual pathway. PMID:26619786
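A simple way to quantify "energy across the image" as a spatial-property descriptor is the energy in radial spatial-frequency bands of the Fourier amplitude spectrum; the study's actual feature computation may differ from this sketch.

```python
import numpy as np

def band_energies(image, n_bands=4):
    """Spatial-property descriptor: energy in radial spatial-frequency
    bands of the image's Fourier amplitude spectrum, from low to high
    frequency."""
    f = np.fft.fftshift(np.fft.fft2(image - image.mean()))  # remove DC
    amp = np.abs(f)
    h, w = image.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - h / 2, x - w / 2)       # radial frequency per bin
    edges = np.linspace(0, r.max() + 1e-9, n_bands + 1)
    return np.array([amp[(r >= lo) & (r < hi)].sum()
                     for lo, hi in zip(edges[:-1], edges[1:])])
```

Descriptors like this, computed per image, can then be regressed against voxel-wise response patterns to ask how much of the categorical structure is explained by spatial properties alone.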

  5. The Dynamics of Visual Experience, an EEG Study of Subjective Pattern Formation

    PubMed Central

    Elliott, Mark A.; Twomey, Deirdre; Glennon, Mark

    2012-01-01

    Background Since the origin of psychological science a number of studies have reported visual pattern formation in the absence of either physiological stimulation or direct visual-spatial references. Subjective patterns range from simple phosphenes to complex patterns but are highly specific and reported reliably across studies. Methodology/Principal Findings Using independent-component analysis (ICA) we report a reduction in amplitude variance consistent with subjective-pattern formation in ventral posterior areas of the electroencephalogram (EEG). The EEG exhibits significantly increased power at delta/theta and gamma-frequencies (point and circle patterns) or a series of high-frequency harmonics of a delta oscillation (spiral patterns). Conclusions/Significance Subjective-pattern formation may be described in a way entirely consistent with identical pattern formation in fluids or granular flows. In this manner, we propose subjective-pattern structure to be represented within a spatio-temporal lattice of harmonic oscillations which bind topographically organized visual-neuronal assemblies by virtue of low frequency modulation. PMID:22292053

  6. Adding a Visualization Feature to Web Search Engines: It’s Time

    SciTech Connect

    Wong, Pak C.

    2008-11-11

    Since the first world wide web (WWW) search engine quietly entered our lives in 1994, the “information need” behind web searching has rapidly grown into a multi-billion dollar business that dominates the internet landscape, drives e-commerce traffic, propels global economy, and affects the lives of the whole human race. Today’s search engines are faster, smarter, and more powerful than those released just a few years ago. With the vast investment pouring into research and development by leading web technology providers and the intense emotion behind corporate slogans such as “win the web” or “take back the web,” I can’t help but ask why are we still using the very same “text-only” interface that was used 13 years ago to browse our search engine results pages (SERPs)? Why has the SERP interface technology lagged so far behind in the web evolution when the corresponding search technology has advanced so rapidly? In this article I explore some current SERP interface issues, suggest a simple but practical visual-based interface design approach, and argue why a visual approach can be a strong candidate for tomorrow’s SERP interface.

  7. Visualization and analysis of 3D gene expression patterns in zebrafish using web services

    NASA Astrophysics Data System (ADS)

    Potikanond, D.; Verbeek, F. J.

    2012-01-01

    The analysis of gene expression patterns plays an important role in developmental biology and molecular genetics. Visualizing both quantitative and spatio-temporal aspects of gene expression patterns together with referenced anatomical structures of a model organism in 3D can help identify how a group of genes is expressed at a certain location at a particular developmental stage of an organism. In this paper, we present an approach to providing an online visualization of gene expression data in zebrafish (Danio rerio) within a 3D reconstruction model of zebrafish at different developmental stages. We developed web services that provide programmable access to the 3D reconstruction data and spatio-temporal gene expression data maintained in our local repositories. To demonstrate this work, we developed a web application that uses these web services to retrieve data from our local information systems. The web application also retrieves relevant analyses of microarray gene expression data from an external community resource, i.e. the ArrayExpress Atlas. All the relevant gene expression pattern data are subsequently integrated with the reconstruction data of the zebrafish atlas using ontology-based mapping. The resulting visualization provides quantitative and spatial information on patterns of gene expression in a 3D graphical representation of the zebrafish atlas at a certain developmental stage. To deliver the visualization to the user, we developed a Java-based 3D viewer client that can be integrated in a web interface, allowing the user to visualize the integrated information over the Internet.

  8. Effects of Individual Health Topic Familiarity on Activity Patterns During Health Information Searches

    PubMed Central

    Moriyama, Koichi; Fukui, Ken-ichi; Numao, Masayuki

    2015-01-01

    Background Non-medical professionals (consumers) are increasingly using the Internet to support their health information needs. However, the cognitive effort required to perform health information searches is affected by the consumer’s familiarity with health topics. Consumers may have different levels of familiarity with individual health topics. This variation in familiarity may cause misunderstandings because the information presented by search engines may not be understood correctly by the consumers. Objective As a first step toward the improvement of the health information search process, we aimed to examine the effects of health topic familiarity on health information search behaviors by identifying the common search activity patterns exhibited by groups of consumers with different levels of familiarity. Methods Each participant completed a health terminology familiarity questionnaire and health information search tasks. The responses to the familiarity questionnaire were used to grade the familiarity of participants with predefined health topics. The search task data were transcribed into a sequence of search activities using a coding scheme. A computational model was constructed from the sequence data using a Markov chain model to identify the common search patterns in each familiarity group. Results Forty participants were classified into L1 (not familiar), L2 (somewhat familiar), and L3 (familiar) groups based on their questionnaire responses. They had different levels of familiarity with four health topics. The video data obtained from all of the participants were transcribed into 4595 search activities (mean 28.7, SD 23.27 per session). The most frequent search activities and transitions in all the familiarity groups were related to evaluations of the relevancy of selected web pages in the retrieval results. However, the next most frequent transitions differed in each group and a chi-squared test confirmed this finding (P<.001). 
Next, according to the results of a perplexity evaluation, the health information search patterns were best represented as a 5-gram sequence pattern. The most common patterns in group L1 were frequent query modifications, with relatively low search efficiency, and accessing and evaluating selected results from a health website. Group L2 performed frequent query modifications, but with better search efficiency, and accessed and evaluated selected results from a health website. Finally, the members of group L3 successfully discovered relevant results from the first query submission, performed verification by accessing several health websites after they discovered relevant results, and directly accessed consumer health information websites. Conclusions Familiarity with health topics affects health information search behaviors. Our analysis of state transitions in search activities detected unique behaviors and common search activity patterns in each familiarity group during health information searches. PMID:25783222
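The activity-transition analysis described in this abstract can be sketched as maximum-likelihood estimation of first-order Markov transition probabilities from coded activity sequences. The activity codes below (Q = query, S = scan results, C = click result, E = evaluate page) are hypothetical stand-ins for the paper's actual coding scheme:

```python
from collections import Counter, defaultdict

def transition_probs(sequences):
    """Maximum-likelihood first-order Markov transition probabilities
    estimated from coded search-activity sequences."""
    counts = defaultdict(Counter)
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):   # count each adjacent pair
            counts[a][b] += 1
    probs = {}
    for a, c in counts.items():
        total = sum(c.values())
        probs[a] = {b: n / total for b, n in c.items()}
    return probs

# Three hypothetical search sessions, one character per activity
sessions = ["QSCEQSCE", "QSSCE", "QSCECE"]
probs = transition_probs(sessions)
# e.g., every query (Q) here is followed by scanning results (S)
```

Comparing such transition tables across familiarity groups (and extending the counts to 5-grams, as the perplexity evaluation suggests) is the flavor of analysis the paper reports.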

  9. Digital Pattern Search and Its Hybridization with Genetic Algorithms for Bound Constrained Global Optimization

    NASA Astrophysics Data System (ADS)

    Kim, Nam-Geun; Park, Youngsu; Kim, Jong-Wook; Kim, Eunsu; Kim, Sang Woo

    In this paper, we present a recently developed pattern search method called the Genetic Pattern Search algorithm (GPSA) for the global optimization of a cost function subject to simple bounds. GPSA is a combined global optimization method using a genetic algorithm (GA) and the Digital Pattern Search (DPS) method, which has a digital structure represented by binary strings and guarantees convergence to stationary points from arbitrary starting points. The performance of GPSA is validated through extensive numerical experiments on a number of well-known functions and on a robot-walking application. The optimization results confirm that GPSA is a robust and efficient global optimization method.
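The local half of such a hybrid can be illustrated with a generic compass-style pattern search for bound-constrained minimization. This is a minimal sketch of the pattern-search idea only; it is not the authors' binary-coded DPS, and the GA layer is omitted:

```python
def pattern_search(f, x, lo, hi, step=0.5, tol=1e-6):
    """Compass (pattern) search: poll +/- step along each coordinate,
    clip moves to the bounds, and halve the step when no move improves."""
    fx = f(x)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for d in (+step, -step):
                y = list(x)
                y[i] = min(hi[i], max(lo[i], y[i] + d))  # respect bounds
                fy = f(y)
                if fy < fx:
                    x, fx, improved = y, fy, True
        if not improved:
            step *= 0.5   # refine the pattern
    return x, fx

sphere = lambda v: sum(t * t for t in v)
x_best, f_best = pattern_search(sphere, [0.9, -0.7], [-1, -1], [1, 1])
```

In a GA hybrid like GPSA, a population-based global step would propose starting points and a pattern search of this kind would refine them toward stationary points.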

  10. The evaluation of display symbology - A chronometric study of visual search. [on cathode ray tubes

    NASA Technical Reports Server (NTRS)

    Remington, R.; Williams, D.

    1984-01-01

    Three single-target visual search tasks were used to evaluate a set of CRT symbols for a helicopter traffic display. The search tasks were representative of the kinds of information extraction required in practice, and reaction time was used to measure the efficiency with which symbols could be located and identified. The results show that familiar numeric symbols were responded to more quickly than graphic symbols. The addition of modifier symbols such as a nearby flashing dot or surrounding square had a greater disruptive effect on the graphic symbols than the alphanumeric characters. The results suggest that a symbol set is like a list that must be learned. Factors that affect the time to respond to items in a list, such as familiarity and visual discriminability, and the division of list items into categories, also affect the time to identify symbols.

  11. On the selection and evaluation of visual display symbology Factors influencing search and identification times

    NASA Technical Reports Server (NTRS)

    Remington, Roger; Williams, Douglas

    1986-01-01

    Three single-target visual search tasks were used to evaluate a set of cathode-ray tube (CRT) symbols for a helicopter situation display. The search tasks were representative of the information extraction required in practice, and reaction time was used to measure the efficiency with which symbols could be located and identified. Familiar numeric symbols were responded to more quickly than graphic symbols. The addition of modifier symbols, such as a nearby flashing dot or surrounding square, had a greater disruptive effect on the graphic symbols than did the numeric characters. The results suggest that a symbol set is, in some respects, like a list that must be learned. Factors that affect the time to identify items in a memory task, such as familiarity and visual discriminability, also affect the time to identify symbols. This analogy has broad implications for the design of symbol sets. An attempt was made to model information access with this class of display.

  12. Visual search for emotional expressions: Effect of stimulus set on anger and happiness superiority.

    PubMed

    Savage, Ruth A; Becker, Stefanie I; Lipp, Ottmar V

    2016-06-01

    Prior reports of preferential detection of emotional expressions in visual search have yielded inconsistent results, even for face stimuli that avoid obvious expression-related perceptual confounds. The current study investigated inconsistent reports of anger and happiness superiority effects using face stimuli drawn from the same database. Experiment 1 excluded procedural differences as a potential factor, replicating a happiness superiority effect in a procedure that previously yielded an anger superiority effect. Experiments 2a and 2b confirmed that image colour or poser gender did not account for prior inconsistent findings. Experiments 3a and 3b identified stimulus set as the critical variable, revealing happiness or anger superiority effects for two partially overlapping sets of face stimuli. The current results highlight the critical role of stimulus selection for the observation of happiness or anger superiority effects in visual search, even for face stimuli that avoid obvious expression-related perceptual confounds and are drawn from a single database. PMID:25861807

  13. Visual search strategies of soccer players in one-on-one defensive situations on the field.

    PubMed

    Nagano, Tomohisa; Kato, Takaaki; Fukuda, Tadahiko

    2004-12-01

    This study analyzed the visual search strategies of soccer players in one-on-one defensive situations on the field. The 8 subjects were 4 experts and 4 novices. While subjects tackled an offensive player for possession of the ball, their eye movements were measured and analyzed. Statistically significant differences between the visual search strategies of experts and novices showed that experts fixated more often on both the knee and hip regions of opponents than novices did. This suggests that information gained from the movements of these areas was important in anticipating an opponent's next move. The findings suggest that it is important for soccer players not to focus too closely on the ball, but rather on an opponent's knee and hip regions. PMID:15648495

  14. Pretraining Cortical Thickness Predicts Subsequent Perceptual Learning Rate in a Visual Search Task.

    PubMed

    Frank, Sebastian M; Reavis, Eric A; Greenlee, Mark W; Tse, Peter U

    2016-03-01

    We report that preexisting individual differences in the cortical thickness of brain areas involved in a perceptual learning task predict the subsequent perceptual learning rate. Participants trained in a motion-discrimination task involving visual search for a "V"-shaped target motion trajectory among inverted "V"-shaped distractor trajectories. Motion-sensitive area MT+ (V5) was functionally identified as critical to the task: after 3 weeks of training, activity increased in MT+ during task performance, as measured by functional magnetic resonance imaging. We computed the cortical thickness of MT+ from anatomical magnetic resonance imaging volumes collected before training started, and found that it significantly predicted subsequent perceptual learning rates in the visual search task. Participants with thicker neocortex in MT+ before training learned faster than those with thinner neocortex in that area. A similar association between cortical thickness and training success was also found in posterior parietal cortex (PPC). PMID:25576537

  15. Analysis and modeling of fixation point selection for visual search in cluttered backgrounds

    NASA Astrophysics Data System (ADS)

    Snorrason, Magnus; Hoffman, James; Ruda, Harald

    2000-07-01

    Hard-to-see targets are generally only detected by human observers once they have been fixated. Hence, understanding how the human visual system allocates fixation locations is necessary for predicting target detectability. Visual search experiments were conducted in which observers searched for military vehicles in cluttered terrain. Instantaneous eye position measurements were collected using an eye tracker. The resulting data were partitioned into fixations and saccades, and analyzed for correlation with various image properties. The fixation data were used to validate our model for predicting fixation locations. This model generates a saliency map from bottom-up image features, such as local contrast. To account for top-down scene understanding effects, a separate cognitive bias map is generated. The combination of these two maps provides a fixation probability map, from which sequences of fixation points were generated.
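The two-map combination described above can be sketched as follows. The multiplicative combination, the random saliency values, and the uniform bias map with a downweighted region are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

def fixation_probability(saliency, bias):
    """Combine a bottom-up saliency map with a top-down cognitive-bias
    map into a fixation probability map (normalized to sum to 1)."""
    m = saliency * bias            # elementwise combination (assumed)
    return m / m.sum()

rng = np.random.default_rng(0)
sal = rng.random((4, 4))           # stand-in for local-contrast saliency
bias = np.ones((4, 4))
bias[0, :] = 0.1                   # e.g., downweight an unlikely region
p = fixation_probability(sal, bias)
```

Sampling fixation points from `p` (or taking successive maxima with inhibition of return) would then yield predicted fixation sequences for comparison with the eye-tracker data.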

  16. Search Strategies of Visually Impaired Persons using a Camera Phone Wayfinding System

    PubMed Central

    Manduchi, R.; Coughlan, J.; Ivanchenko, V.

    2016-01-01

    We report new experiments conducted using a camera phone wayfinding system, which is designed to guide a visually impaired user to machine-readable signs (such as barcodes) labeled with special color markers. These experiments specifically investigate search strategies of such users detecting, localizing and touching color markers that have been mounted in various ways in different environments: in a corridor (either flush with the wall or mounted perpendicular to it) or in a large room with obstacles between the user and the markers. The results show that visually impaired users are able to reliably find color markers in all the conditions that we tested, using search strategies that vary depending on the environment in which they are placed.

  17. Towards a framework for analysis of eye-tracking studies in the three dimensional environment: a study of visual search by experienced readers of endoluminal CT colonography

    PubMed Central

    Helbren, E; Phillips, P; Boone, D; Fanshawe, T R; Taylor, S A; Manning, D; Gale, A; Altman, D G; Mallett, S

    2014-01-01

    Objective: Eye tracking in three dimensions is novel, but established descriptors derived from two-dimensional (2D) studies are not transferable. We aimed to develop metrics suitable for statistical comparison of eye-tracking data obtained from readers of three-dimensional (3D) “virtual” medical imaging, using CT colonography (CTC) as a typical example. Methods: Ten experienced radiologists were eye tracked while observing eight 3D endoluminal CTC videos. Subsequently, we developed metrics that described their visual search patterns based on concepts derived from 2D gaze studies. Statistical methods were developed to allow analysis of the metrics. Results: Eye tracking was possible for all readers. Visual dwell on the moving region of interest (ROI) was defined as pursuit of the moving object across multiple frames. Using this concept of pursuit, five categories of metrics were defined that allowed characterization of reader gaze behaviour. These were time to first pursuit, identification and assessment time, pursuit duration, ROI size and pursuit frequency. Additional subcategories allowed us to further characterize visual search between readers in the test population. Conclusion: We propose metrics for the characterization of visual search of 3D moving medical images. These metrics can be used to compare readers' visual search patterns and provide a reproducible framework for the analysis of gaze tracking in the 3D environment. Advances in knowledge: This article describes a novel set of metrics that can be used to describe gaze behaviour when eye tracking readers during interpretation of 3D medical images. These metrics build on those established for 2D eye tracking and are applicable to increasingly common 3D medical image displays. PMID:24689842
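The pursuit-based metrics named in this abstract can be derived from per-frame gaze samples roughly as follows. The metric names follow the paper, but the boolean per-frame representation and the fixed frame duration are assumptions for illustration:

```python
def pursuit_metrics(on_roi, frame_ms):
    """Derive simple pursuit metrics from per-frame gaze samples.
    `on_roi[k]` is True when gaze lies on the moving ROI in frame k;
    a pursuit is a maximal run of consecutive on-ROI frames."""
    first = next((k for k, v in enumerate(on_roi) if v), None)
    time_to_first = None if first is None else first * frame_ms
    runs, n = [], 0
    for v in on_roi:
        if v:
            n += 1
        elif n:
            runs.append(n)
            n = 0
    if n:
        runs.append(n)
    return {"time_to_first_pursuit_ms": time_to_first,
            "pursuit_frequency": len(runs),
            "pursuit_durations_ms": [r * frame_ms for r in runs]}

m = pursuit_metrics([False, False, True, True, False, True], frame_ms=40)
```

Aggregating such per-reader values across ROIs would give the reader-level summaries that the paper's statistical comparisons operate on.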

  18. Improvement in Visual Search with Practice: Mapping Learning-Related Changes in Neurocognitive Stages of Processing

    PubMed Central

    Clark, Kait; Appelbaum, L. Gregory; van den Berg, Berry; Mitroff, Stephen R.

    2015-01-01

    Practice can improve performance on visual search tasks; the neural mechanisms underlying such improvements, however, are not clear. Response time typically shortens with practice, but which components of the stimulus–response processing chain facilitate this behavioral change? Improved search performance could result from enhancements in various cognitive processing stages, including (1) sensory processing, (2) attentional allocation, (3) target discrimination, (4) motor-response preparation, and/or (5) response execution. We measured event-related potentials (ERPs) as human participants completed a five-day visual-search protocol in which they reported the orientation of a color popout target within an array of ellipses. We assessed changes in behavioral performance and in ERP components associated with various stages of processing. After practice, response time decreased in all participants (while accuracy remained consistent), and electrophysiological measures revealed modulation of several ERP components. First, amplitudes of the early sensory-evoked N1 component at 150 ms increased bilaterally, indicating enhanced visual sensory processing of the array. Second, the negative-polarity posterior–contralateral component (N2pc, 170–250 ms) was earlier and larger, demonstrating enhanced attentional orienting. Third, the amplitude of the sustained posterior contralateral negativity component (SPCN, 300–400 ms) decreased, indicating facilitated target discrimination. Finally, faster motor-response preparation and execution were observed after practice, as indicated by latency changes in both the stimulus-locked and response-locked lateralized readiness potentials (LRPs). These electrophysiological results delineate the functional plasticity in key mechanisms underlying visual search with high temporal resolution and illustrate how practice influences various cognitive and neural processing stages leading to enhanced behavioral performance. PMID:25834059

  19. How does lesion conspicuity affect visual search strategy in mammogram reading?

    NASA Astrophysics Data System (ADS)

    Mello-Thoms, Claudia; Hardesty, Lara

    2005-04-01

    In Mammography, gaze duration at given locations has been shown to positively correlate with decision outcome in those locations. Furthermore, most locations that contain an unreported malignant lesion attract the eye of experienced radiologists for almost as long as locations that contain correctly reported cancers. This suggests that faulty detection is not the main reason why cancers are missed; rather, failures in the perceptual and decision-making processes at the locations of these findings may be of significance as well. Models of medical image perception advocate that the decision to report or to dismiss a perceived finding depends not only on the finding itself but also on the background areas selected by the observer to compare the finding with, in order to determine its uniqueness. In this paper we studied the visual search strategy of experienced mammographers as they examined a case set containing cancer cases and lesion-free cases. For the cancer cases, two sets of mammograms were used: the ones in which the lesion was reported in the clinical practice, and the most recent prior mammogram. We determined how changes in lesion conspicuity from the prior mammogram to the most recent mammogram affected the visual search strategy of the observers. We represented the changes in visual search using spatial frequency analysis, and determined whether there were any significant differences between the prior and the most recent mammograms.

  20. Influence of being videotaped on the prevalence effect during visual search

    PubMed Central

    Miyazaki, Yuki

    2015-01-01

    Video monitoring modifies the task performance of those who are being monitored. The current study aims to prevent rare target-detection failures during visual search through the use of video monitoring. Targets are sometimes missed when their prevalence during visual search is extremely low (e.g., in airport baggage screenings). Participants performed a visual search in which they were required to discern the presence of a tool in the midst of other objects. The participants were monitored via video cameras as they performed the task in one session (the videotaped condition), and they performed the same task in another session without being monitored (the non-videotaped condition). The results showed that fewer miss errors occurred in the videotaped condition, regardless of target prevalence. It appears that the decrease in misses in the video monitoring condition resulted from a shift in criterion location. Video monitoring is considered useful in inducing accurate scanning. It is possible that the potential for evaluation involved in being observed motivates the participants to perform well and is related to the shift in criterion. PMID:25999895

  1. Model of visual contrast gain control and pattern masking

    NASA Technical Reports Server (NTRS)

    Watson, A. B.; Solomon, J. A.

    1997-01-01

    We have implemented a model of contrast gain control in human vision that incorporates a number of key features, including a contrast sensitivity function, multiple oriented bandpass channels, accelerating nonlinearities, and a divisive inhibitory gain control pool. The parameters of this model have been optimized through a fit to the recent data that describe masking of a Gabor function by cosine and Gabor masks [J. M. Foley, "Human luminance pattern mechanisms: masking experiments require a new model," J. Opt. Soc. Am. A 11, 1710 (1994)]. The model achieves a good fit to the data. We also demonstrate how the concept of recruitment may accommodate a variant of this model in which excitatory and inhibitory paths have a common accelerating nonlinearity, but which includes multiple channels tuned to different levels of contrast.

  2. Use of a twin dataset to identify AMD-related visual patterns controlled by genetic factors

    NASA Astrophysics Data System (ADS)

    Quellec, Gwénolé; Abràmoff, Michael D.; Russell, Stephen R.

    2010-03-01

    The mapping of genotype to the phenotype of age-related macular degeneration (AMD) is expected to improve the diagnosis and treatment of the disease in the near future. In this study, we focused on the first step to discover this mapping: we identified visual patterns related to AMD which seem to be controlled by genetic factors, without explicitly relating them to the genes. For this purpose, we used a dataset of eye fundus photographs from 74 twin pairs, either monozygotic twins, who have the same genotype, or dizygotic twins, whose genes responsible for AMD are less likely to be identical. If we are able to differentiate monozygotic twins from dizygotic twins, based on a given visual pattern, then this pattern is likely to be controlled by genetic factors. The main visible consequence of AMD is the appearance of drusen between the retinal pigment epithelium and Bruch's membrane. We developed two automated drusen detectors based on the wavelet transform: a shape-based detector for hard drusen, and a texture- and color-based detector for soft drusen. Forty visual features were evaluated at the location of the automatically detected drusen. These features characterize the texture, the shape, the color, the spatial distribution, or the amount of drusen. A distance measure between twin pairs was defined for each visual feature; a smaller distance should be measured between monozygotic twins for visual features controlled by genetic factors. The predictions of several visual features (75.7% accuracy) are comparable to or better than the predictions of human experts.
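The within-pair distance logic can be sketched for a single scalar feature. The feature values below (drusen-area fractions for monozygotic and dizygotic pairs) are invented for illustration; the paper's forty features and its actual distance measure are not specified here:

```python
def pair_distance(f1, f2):
    """Absolute difference of a scalar visual feature within a twin pair."""
    return abs(f1 - f2)

# Hypothetical drusen-area fractions per twin pair. A feature under
# genetic control should yield smaller within-pair distances for
# monozygotic (MZ) than for dizygotic (DZ) twins.
mz = [(0.12, 0.11), (0.30, 0.28), (0.05, 0.06)]
dz = [(0.12, 0.25), (0.30, 0.10), (0.05, 0.20)]
mz_mean = sum(pair_distance(a, b) for a, b in mz) / len(mz)
dz_mean = sum(pair_distance(a, b) for a, b in dz) / len(dz)
```

Classifying pairs as MZ or DZ by thresholding such distances is one simple way the "differentiate monozygotic from dizygotic twins" test could be operationalized.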

  3. Visual search in natural scenes: a double-dissociation paradigm for comparing observer models.

    PubMed

    Abrams, Jared; Geisler, Wilson

    2015-01-01

    Search is a fundamental and ubiquitous visual behavior. Here, we aim to model fixation search under naturalistic conditions and develop a strong test for comparing observer models. Previous work has identified the entropy limit minimization (ELM) observer as an optimal fixation selection model.1 The ELM observer selects fixations that maximally reduce uncertainty about the location of the target. However, this rule is optimal only if the detectability of the target falls off in the same way for every possible fixation (e.g., as in a uniform noise field). Most natural scenes do not satisfy this assumption; they are highly non-stationary. By combining empirical measurements of target detectability with a simple mathematical analysis, we arrive at a generalized ELM rule (nELM) that is optimal for non-stationary backgrounds. Then, we used the nELM rule to generate search time predictions for Gaussian blob targets embedded in hundreds of natural images. We also simulated a maximum a posteriori (MAP) observer, which is a common model in the search literature. To examine which model is more similar to human performance, we developed a double-dissociation search paradigm, selecting pairs of target locations where the nELM and the MAP observer made opposite predictions regarding search speed. By comparing the difference in human search times for each pair with the different model predictions, we can determine which model predictions are more similar to human behavior. Preliminary data from two observers show that human observers behave more like the nELM than the MAP. We conclude that the nELM observer is a useful normative model of fixation search and appears to be a good model of human search in natural scenes. Additionally, the proposed double-dissociation paradigm provides a strong test for comparing competing models. 1Najemnik, J. & Geisler W.S. (2009) Vis. Res., 49, 1286-1294. Meeting abstract presented at VSS 2015. PMID:26326443
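The contrast between the two fixation rules can be sketched on a toy grid. Following the ELM idea of Najemnik and Geisler, expected uncertainty reduction is approximated as 0.5 * Σ p_i * d'(fix, i)²; the specific posterior and detectability values below are invented to show how the two rules can dissociate, and are not data from the paper:

```python
import numpy as np

def elm_fixation(posterior, dprime):
    """ELM-style choice: fixate where the approximate expected entropy
    reduction, 0.5 * sum_i p_i * d'(fix, i)**2, is largest.
    `dprime[f, i]` is detectability of a target at location i when
    fixating f; letting it vary with f models non-stationary backgrounds."""
    gain = 0.5 * (dprime ** 2) @ posterior
    return int(np.argmax(gain))

def map_fixation(posterior):
    """MAP rule: simply fixate the most probable target location."""
    return int(np.argmax(posterior))

# Location 2 is most probable, but fixating location 0 gives far
# better detectability over the likely locations, so the rules differ.
post = np.array([0.3, 0.2, 0.5])
dp = np.array([[3.0, 2.5, 0.5],
               [0.5, 1.0, 0.5],
               [0.5, 0.5, 1.0]])
```

Target-location pairs on which the two rules disagree in this way are exactly the cases the double-dissociation paradigm selects for human testing.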

  4. Speed versus accuracy in visual search: Optimal performance and neural architecture.

    PubMed

    Chen, Bo; Perona, Pietro

    2015-12-01

    Searching for objects among clutter is a key ability of the visual system. Speed and accuracy are the crucial performance criteria. How can the brain trade off these competing quantities for optimal performance in different tasks? Can a network of spiking neurons carry out such computations, and what is its architecture? We propose a new model that takes input from V1-type orientation-selective spiking neurons and detects a target in the shortest time that is compatible with a given acceptable error rate. Subject to the assumption that the output of the primary visual cortex comprises Poisson neurons with known properties, our model is an ideal observer. The model has only five free parameters: the signal-to-noise ratio in a hypercolumn, the costs of false-alarm and false-reject errors versus the cost of time, and two parameters accounting for nonperceptual delays. Our model postulates two gain-control mechanisms-one local to hypercolumns and one global to the visual field-to handle variable scene complexity. Error rate and response time predictions match psychophysics data as we vary stimulus discriminability, scene complexity, and the uncertainty associated with each of these quantities. A five-layer spiking network closely approximates the optimal model, suggesting that known cortical mechanisms are sufficient for implementing visual search efficiently. PMID:26675879

  5. Toddlers' language-mediated visual search: they need not have the words for it.

    PubMed

    Johnson, Elizabeth K; McQueen, James M; Huettig, Falk

    2011-09-01

    Eye movements made by listeners during language-mediated visual search reveal a strong link between visual processing and conceptual processing. For example, upon hearing the word for a missing referent with a characteristic colour (e.g., "strawberry"), listeners tend to fixate a colour-matched distractor (e.g., a red plane) more than a colour-mismatched distractor (e.g., a yellow plane). We ask whether these shifts in visual attention are mediated by the retrieval of lexically stored colour labels. Do children who do not yet possess verbal labels for the colour attribute that spoken and viewed objects have in common exhibit language-mediated eye movements like those made by older children and adults? That is, do toddlers look at a red plane when hearing "strawberry"? We observed that 24-month-olds lacking colour term knowledge nonetheless recognized the perceptual-conceptual commonality between named and seen objects. This indicates that language-mediated visual search need not depend on stored labels for concepts. PMID:21812709

  6. Modeling the Effect of Selection History on Pop-Out Visual Search

    PubMed Central

    Tseng, Yuan-Chi; Glaser, Joshua I.; Caddigan, Eamon; Lleras, Alejandro

    2014-01-01

    While attentional effects in visual selection tasks have traditionally been assigned “top-down” or “bottom-up” origins, more recently it has been proposed that there are three major factors affecting visual selection: (1) physical salience, (2) current goals and (3) selection history. Here, we look further into selection history by investigating Priming of Pop-out (POP) and the Distractor Preview Effect (DPE), two inter-trial effects that demonstrate the influence of recent history on visual search performance. Using the Ratcliff diffusion model, we model observed saccadic selections from an oddball search experiment that included a mix of both POP and DPE conditions. We find that the Ratcliff diffusion model can effectively model the manner in which selection history affects current attentional control in visual inter-trial effects. The model evidence shows that bias regarding the current trial's most likely target color is the most critical parameter underlying the effect of selection history. Our results are consistent with the view that the 3-item color-oddball task used for POP and DPE experiments is best understood as an attentional decision making task. PMID:24595032

  7. The Development of Visual Search in Infants and Very Young Children.

    ERIC Educational Resources Information Center

    Gerhardstein, Peter; Rovee-Collier, Carolyn

    2002-01-01

    Trained 1- to 3-year-olds to touch a video screen displaying a unique target and appearing among varying numbers of distracters; correct responses triggered a sound and four animated objects on the screen. Found that children's reaction time patterns resembled those from adults in corresponding search tasks, suggesting that basic perceptual…

  8. Visual Search and Attention in Blue Jays (Cyanocitta cristata): Associative Cuing and Sequential Priming

    PubMed Central

    Goto, Kazuhiro; Bond, Alan B.; Burks, Marianna; Kamil, Alan C.

    2014-01-01

    Visual search for complex natural targets requires focal attention, either cued by predictive stimulus associations or primed by a representation of the most recently detected target. Since both processes can focus visual attention, cuing and priming were compared in an operant search task to evaluate their relative impacts on performance and to determine the nature of their interaction in combined treatments. Blue jays were trained to search for pairs of alternative targets among distractors. Informative or ambiguous color cues were provided prior to each trial, and targets were presented either in homogeneous blocked sequences or in constrained random order. Initial task acquisition was facilitated by priming in general, but was significantly retarded when targets were both cued and primed, indicating that the two processes interfered with each other during training. At asymptote, attentional effects were manifested mainly in inhibition, increasing latency in miscued trials and decreasing accuracy on primed trials following an unexpected target switch. A combination of cuing and priming was found to interfere with performance in such unexpected trials, apparently a result of the limited capacity of working memory. Because the ecological factors that promote priming and cuing are rather disparate, it is not clear whether they ever jointly and simultaneously contribute to natural predatory search. PMID:24893217

  9. Visual search and attention in blue jays (Cyanocitta cristata): Associative cuing and sequential priming.

    PubMed

    Goto, Kazuhiro; Bond, Alan B; Burks, Marianna; Kamil, Alan C

    2014-04-01

    Visual search for complex natural targets requires focal attention, either cued by predictive stimulus associations or primed by a representation of the most recently detected target. Because both processes can focus visual attention, cuing and priming were compared in an operant search task to evaluate their relative impacts on performance and to determine the nature of their interaction in combined treatments. Blue jays were trained to search for pairs of alternative targets among distractors. Informative or ambiguous color cues were provided before each trial, and targets were presented either in homogeneous blocked sequences or in constrained random order. Initial task acquisition was facilitated by priming in general, but was significantly retarded when targets were both cued and primed, indicating that the two processes interfered with each other during training. At asymptote, attentional effects were manifested mainly in inhibition, increasing latency in miscued trials and decreasing accuracy on primed trials following an unexpected target switch. A combination of cuing and priming was found to interfere with performance in such unexpected trials, apparently a result of the limited capacity of working memory. Because the ecological factors that promote priming or cuing are rather disparate, it is not clear whether they ever simultaneously contribute to natural predatory search. PMID:24893217

  10. How much agreement is there in the visual search strategy of experts reading mammograms?

    NASA Astrophysics Data System (ADS)

    Mello-Thoms, Claudia

    2008-03-01

    Previously we have shown that the eyes of expert breast imagers are attracted to the location of a malignant mass in a mammogram in less than 2 seconds after image onset. Moreover, the longer they take to visually fixate the location of the mass, the less likely it is that they will report it. We conjectured that this behavior was due to the formation of the initial hypothesis about the image (i.e., 'normal' - no lesions to report, or 'abnormal' - possible lesions to report). This initial hypothesis is formed as a result of a difference template between the experts' expectations of the image and the actual image. Hence, when the image is displayed, the expert detects the areas that do not correspond to their 'a priori expectation', and these areas get assigned weights according to the magnitude of the perturbation. The radiologist then uses eye movements to guide the high resolution fovea to each of these locations, in order to resolve each perturbation. To accomplish this task successfully the radiologist uses not only the local features in the area but also lateral comparisons with selected background locations, and this comprises the radiologist's visual search strategy. Eye-position tracking studies seem to suggest that no two radiologists search the breast parenchyma alike, which makes one wonder whether successful search models can be developed. In this study we show that there is more to the experts' search strategy than meets the eye.

  11. Training shortens search times in children with visual impairment accompanied by nystagmus

    PubMed Central

    Huurneman, Bianca; Boonstra, F. Nienke

    2014-01-01

Perceptual learning (PL) can improve near visual acuity (NVA) in 4–9 year old children with visual impairment (VI). However, the mechanisms underlying improved NVA are unknown. The present study compares feature search and oculomotor measures in 4–9 year old children with VI accompanied by nystagmus (VI+nys [n = 33]) and children with normal vision (NV [n = 29]). Children in the VI+nys group were divided into three training groups: an experimental PL group, a control PL group, and a magnifier group. They were seen before (baseline) and after 6 weeks of training. Children with NV were only seen at baseline. The feature search task entailed finding a target E among distractor E's (pointing right) with element spacing varied in four steps: 0.04°, 0.5°, 1°, and 2°. At baseline, children with VI+nys showed longer search times, shorter fixation durations, and larger saccade amplitudes than children with NV. After training, all training groups showed shorter search times. Only the experimental PL group showed prolonged fixation duration after training at 0.5° and 2° spacing (p = 0.033 and p = 0.021, respectively). Prolonged fixation duration was associated with reduced crowding and improved crowded NVA. One of the mechanisms underlying improved crowded NVA after PL in children with VI+nys seems to be prolonged fixation duration. PMID:25309473

  12. White matter hyperintensities are associated with visual search behavior independent of generalized slowing in aging.

    PubMed

    Lockhart, Samuel N; Roach, Alexandra E; Luck, Steven J; Geng, Joy; Beckett, Laurel; Carmichael, Owen; DeCarli, Charles

    2014-01-01

    A fundamental controversy is whether cognitive decline with advancing age can be entirely explained by decreased processing speed, or whether specific neural changes can elicit cognitive decline, independent of slowing. These hypotheses are anchored by studies of healthy older individuals where age is presumed the sole influence. Unfortunately, advancing age is also associated with asymptomatic brain white matter injury. We hypothesized that differences in white matter injury extent, manifest by MRI white matter hyperintensities (WMH), mediate differences in visual attentional control in healthy aging, beyond processing speed differences. We tested young and cognitively healthy older adults on search tasks indexing speed and attentional control. Increasing age was associated with generally slowed performance. WMH were also associated with slowed search times independent of processing speed differences. Consistent with evidence attributing reduced network connectivity to WMH, these results conclusively demonstrate that clinically silent white matter injury contributes to slower search performance indicative of compromised cognitive control, independent of generalized slowing of processing speed. PMID:24183716

  13. Patterned-string tasks: relation between fine motor skills and visual-spatial abilities in parrots.

    PubMed

    Krasheninnikova, Anastasia

    2013-01-01

    String-pulling and patterned-string tasks are often used to analyse perceptual and cognitive abilities in animals. In addition, the paradigm can be used to test the interrelation between visual-spatial and motor performance. Two Australian parrot species, the galah (Eolophus roseicapilla) and the cockatiel (Nymphicus hollandicus), forage on the ground, but only the galah uses its feet to manipulate food. I used a set of string pulling and patterned-string tasks to test whether usage of the feet during foraging is a prerequisite for solving the vertical string pulling problem. Indeed, the two species used techniques that clearly differed in the extent of beak-foot coordination but did not differ in terms of their success in solving the string pulling task. However, when the visual-spatial skills of the subjects were tested, the galahs outperformed the cockatiels. This supports the hypothesis that the fine motor skills needed for advanced beak-foot coordination may be interrelated with certain visual-spatial abilities needed for solving patterned-string tasks. This pattern was also found within each of the two species on the individual level: higher motor abilities positively correlated with performance in patterned-string tasks. This is the first evidence of an interrelation between visual-spatial and motor abilities in non-mammalian animals. PMID:24376885

  14. Patterned-String Tasks: Relation between Fine Motor Skills and Visual-Spatial Abilities in Parrots

    PubMed Central

    Krasheninnikova, Anastasia

    2013-01-01

    String-pulling and patterned-string tasks are often used to analyse perceptual and cognitive abilities in animals. In addition, the paradigm can be used to test the interrelation between visual-spatial and motor performance. Two Australian parrot species, the galah (Eolophus roseicapilla) and the cockatiel (Nymphicus hollandicus), forage on the ground, but only the galah uses its feet to manipulate food. I used a set of string pulling and patterned-string tasks to test whether usage of the feet during foraging is a prerequisite for solving the vertical string pulling problem. Indeed, the two species used techniques that clearly differed in the extent of beak-foot coordination but did not differ in terms of their success in solving the string pulling task. However, when the visual-spatial skills of the subjects were tested, the galahs outperformed the cockatiels. This supports the hypothesis that the fine motor skills needed for advanced beak-foot coordination may be interrelated with certain visual-spatial abilities needed for solving patterned-string tasks. This pattern was also found within each of the two species on the individual level: higher motor abilities positively correlated with performance in patterned-string tasks. This is the first evidence of an interrelation between visual-spatial and motor abilities in non-mammalian animals. PMID:24376885

  15. Student Written Errors and Teacher Marking: A Search for Patterns.

    ERIC Educational Resources Information Center

    Belanger, J. F.

    A study examined whether patterns exist in the kinds and amounts of writing errors students make and whether teachers follow any sort of pattern in correcting these errors. Sixty compositions, gathered from a twelfth grade class taught by one teacher, were analyzed using the "McGraw-Hill Handbook of English." Student written errors were classified…

  16. Searching for Truth: Internet Search Patterns as a Method of Investigating Online Responses to a Russian Illicit Drug Policy Debate

    PubMed Central

    Gillespie, James A; Quinn, Casey

    2012-01-01

Background: This is a methodological study investigating the online responses to a national debate over an important health and social problem in Russia. Russia is the largest Internet market in Europe, exceeding Germany in the absolute number of users. However, Russia is unusual in that the main search provider is not Google, but Yandex. Objective: This study had two main objectives. First, to validate Yandex search patterns against those provided by Google, and second, to test this method's adequacy for investigating online interest in a 2010 national debate over Russian illicit drug policy. We hoped to learn what search patterns and specific search terms could reveal about the relative importance and geographic distribution of interest in this debate. Methods: A national drug debate, centering on the anti-drug campaigner Egor Bychkov, was one of the main Russian domestic news events of 2010. Public interest in this episode was accompanied by increased Internet search. First, we measured the search patterns for 13 search terms related to the Bychkov episode and concurrent domestic events by extracting data from Google Insights for Search (GIFS) and Yandex WordStat (YaW). We conducted Spearman rank correlation of GIFS and YaW search data series. Second, we coded all 420 primary posts from Bychkov's personal blog between March 2010 and March 2012 to identify the main themes. Third, we compared GIFS and Yandex policies concerning the public release of search volume data. Finally, we established the relationship between salient drug issues and the Bychkov episode. Results: We found a consistent pattern of strong to moderate positive correlations between Google and Yandex for the terms "Egor Bychkov" (rs = 0.88, P < .001), "Bychkov" (rs = 0.78, P < .001), and "Khimki" (rs = 0.92, P < .001). Peak search volumes for the Bychkov episode were comparable to other prominent domestic political events during 2010. Monthly search counts were 146,689 for "Bychkov" and 48,084 for "Egor Bychkov", compared to 53,403 for "Khimki" in Yandex. We found Google potentially provides timely search results, whereas Yandex provides more accurate geographic localization. The correlation was moderate to strong between search terms representing the Bychkov episode and terms representing salient drug issues in Yandex: "illicit drug treatment" (rs = 0.90, P < .001), "illicit drugs" (rs = 0.76, P < .001), and "drug addiction" (rs = 0.74, P < .001). Google correlations were weaker or absent: "illicit drug treatment" (rs = 0.12, P = .58), "illicit drugs" (rs = -0.29, P = .17), and "drug addiction" (rs = 0.68, P < .001). Conclusions: This study contributes to the methodological literature on the analysis of search patterns for public health. This paper investigated the relationship between Google and Yandex, and contributed to the broader methods literature by highlighting both the potential and limitations of these two search providers. We believe that Yandex WordStat is a potentially valuable and underused data source for researchers working on Russian-related illicit drug policy and other public health problems. The Russian Federation, with its large, geographically dispersed, and politically engaged online population, presents unique opportunities for studying the evolving influence of the Internet on politics and policy, using low-cost methods resilient against potential increases in censorship. PMID:23238600
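The validation step in this record, Spearman rank correlation between two providers' search-volume series, can be sketched in a few lines. The weekly counts below are invented for illustration and do not come from the study; the coefficient is computed from scratch to make the rank-based calculation explicit.

```python
def rank(xs):
    # Assign 1-based ranks, averaging over tied values.
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1          # average rank of the tied run
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    # Spearman's rs is Pearson correlation applied to the ranks.
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Invented weekly counts for one term from two providers.
google = [120, 340, 2900, 1500, 800, 400, 250, 180]
yandex = [90, 310, 3100, 1400, 700, 500, 200, 230]
print(f"rs = {spearman(google, yandex):.2f}")  # rs = 0.98
```

In practice `scipy.stats.spearmanr` gives the same coefficient along with the P value the authors report.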

  17. The dynamics of attentional sampling during visual search revealed by Fourier analysis of periodic noise interference.

    PubMed

    Dugué, Laura; Vanrullen, Rufin

    2014-01-01

    What are the temporal dynamics of perceptual sampling during visual search tasks, and how do they differ between a difficult (or inefficient) and an easy (or efficient) task? Does attention focus intermittently on the stimuli, or are the stimuli processed continuously over time? We addressed these questions by way of a new paradigm using periodic fluctuations of stimulus information during a difficult (color-orientation conjunction) and an easy (+ among Ls) search task. On each stimulus, we applied a dynamic visual noise that oscillated at a given frequency (2-20 Hz, 2-Hz steps) and phase (four cardinal phase angles) for 500 ms. We estimated the dynamics of attentional sampling by computing an inverse Fourier transform on subjects' d-primes. In both tasks, the sampling function presented a significant peak at 2 Hz; we showed that this peak could be explained by nonperiodic search strategies such as increased sensitivity to stimulus onset and offset. Specifically in the difficult task, however, a second, higher-frequency peak was observed at 9 to 10 Hz, with a similar phase for all subjects; this isolated frequency component necessarily entails oscillatory attentional dynamics. In a second experiment, we presented difficult search arrays with dynamic noise that was modulated by the previously obtained grand-average attention sampling function or by its converse function (in both cases omitting the 2 Hz component to focus on genuine oscillatory dynamics). We verified that performance was higher in the latter than in the former case, even for subjects who had not participated in the first experiment. This study supports the idea of a periodic sampling of attention during a difficult search task. Although further experiments will be needed to extend these findings to other search tasks, the present report validates the usefulness of this novel paradigm for measuring the temporal dynamics of attention. PMID:24525262
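The analysis idea in this record, an inverse Fourier transform that turns phase-tagged d' measurements into a time-domain sampling curve, can be illustrated schematically. The data layout and the per-frequency coefficient estimate below are simplifying assumptions for illustration, not the study's actual pipeline.

```python
import cmath
import math

FREQS = range(2, 21, 2)                                 # 2-20 Hz, 2-Hz steps
PHASES = [0.0, math.pi / 2, math.pi, 3 * math.pi / 2]   # four cardinal phases

def sampling_function(dprime, times):
    """Rough inverse-Fourier reconstruction of attentional sampling.

    dprime[(f, phi)] holds sensitivity (d') measured when the noise
    oscillated at frequency f (Hz) with phase phi. Each frequency's four
    phase conditions are combined into one complex coefficient, and the
    coefficients are summed back into a time-domain curve.
    """
    coeffs = {
        f: sum(dprime[(f, p)] * cmath.exp(1j * p) for p in PHASES) / len(PHASES)
        for f in FREQS
    }
    return [
        sum((coeffs[f] * cmath.exp(2j * math.pi * f * t)).real for f in FREQS)
        for t in times
    ]

# Synthetic data: flat d' everywhere except a 10-Hz phase-locked
# modulation, mimicking the high-frequency peak in the difficult task.
dprime = {(f, p): 1.0 + (0.5 * math.cos(p) if f == 10 else 0.0)
          for f in FREQS for p in PHASES}
curve = sampling_function(dprime, [t / 100 for t in range(50)])
```

With a pure 10-Hz modulation in d', the reconstructed curve oscillates at 10 Hz, which is the sense in which an isolated frequency component in this analysis entails oscillatory attentional dynamics.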

  18. Job Search Patterns of College Graduates: The Role of Social Capital

    ERIC Educational Resources Information Center

    Coonfield, Emily S.

    2012-01-01

    This dissertation addresses job search patterns of college graduates and the implications of social capital by race and class. The purpose of this study is to explore (1) how the job search transpires for recent college graduates, (2) how potential social networks in a higher educational context, like KU, may make a difference for students with…

  19. Visual illusions in predator-prey interactions: birds find moving patterned prey harder to catch.

    PubMed

    Hämäläinen, Liisa; Valkonen, Janne; Mappes, Johanna; Rojas, Bibiana

    2015-09-01

    Several antipredator strategies are related to prey colouration. Some colour patterns can create visual illusions during movement (such as motion dazzle), making it difficult for a predator to capture moving prey successfully. Experimental evidence about motion dazzle, however, is still very scarce and comes only from studies using human predators capturing moving prey items in computer games. We tested a motion dazzle effect using for the first time natural predators (wild great tits, Parus major). We used artificial prey items bearing three different colour patterns: uniform brown (control), black with elongated yellow pattern and black with interrupted yellow pattern. The last two resembled colour patterns of the aposematic, polymorphic dart-poison frog Dendrobates tinctorius. We specifically tested whether an elongated colour pattern could create visual illusions when combined with straight movement. Our results, however, do not support this hypothesis. We found no differences in the number of successful attacks towards prey items with different patterns (elongated/interrupted) moving linearly. Nevertheless, both prey types were significantly more difficult to catch compared to the uniform brown prey, indicating that both colour patterns could provide some benefit for a moving individual. Surprisingly, no effect of background (complex vs. plain) was found. This is the first experiment with moving prey showing that some colour patterns can affect avian predators' ability to capture moving prey, but the mechanisms lowering the capture rate are still poorly understood. PMID:25947086

  20. Pattern identification or 3D visualization? How best to learn topographic map comprehension

    NASA Astrophysics Data System (ADS)

    Atit, Kinnari

    Science, Technology, Engineering, and Mathematics (STEM) experts employ many representations that novices find hard to use because they require a critical STEM skill, interpreting two-dimensional (2D) diagrams that represent three-dimensional (3D) information. The current research focuses on learning to interpret topographic maps. Understanding topographic maps requires knowledge of how to interpret the conventions of contour lines, and skill in visualizing that information in 3D (e.g. shape of the terrain). Novices find both tasks difficult. The present study compared two interventions designed to facilitate understanding for topographic maps to minimal text-only instruction. The 3D Visualization group received instruction using 3D gestures and models to help visualize three topographic forms. The Pattern Identification group received instruction using pointing and tracing gestures to help identify the contour patterns associated with the three topographic forms. The Text-based Instruction group received only written instruction explaining topographic maps. All participants then completed a measure of topographic map use. The Pattern Identification group performed better on the map use measure than participants in the Text-based Instruction group, but no significant difference was found between the 3D Visualization group and the other two groups. These results suggest that learning to identify meaningful contour patterns is an effective strategy for learning how to comprehend topographic maps. Future research should address if learning strategies for how to interpret the information represented on a diagram (e.g. identify patterns in the contour lines), before trying to visualize the information in 3D (e.g. visualize the 3D structure of the terrain), also facilitates students' comprehension of other similar types of diagrams.

  1. Gender Differences in Patterns of Searching the Web

    ERIC Educational Resources Information Center

    Roy, Marguerite; Chi, Michelene T. H.

    2003-01-01

    There has been a national call for increased use of computers and technology in schools. Currently, however, little is known about how students use and learn from these technologies. This study explores how eighth-grade students use the Web to search for, browse, and find information in response to a specific prompt (how mosquitoes find their…

  2. Optimization of boiling water reactor control rod patterns using linear search

    SciTech Connect

    Kiguchi, T.; Doi, K.; Fikuzaki, T.; Frogner, B.; Lin, C.; Long, A.B.

    1984-10-01

A computer program for searching for the optimal control rod pattern has been developed. The program is able to find a control rod pattern for which the resulting power distribution is optimal, in the sense that it is the closest to the desired power distribution while satisfying all operational constraints. The search procedure iterates two steps: sensitivity analyses of local power and thermal margins, performed with a three-dimensional reactor simulator, to build a simplified prediction model; and a linear search for the optimal control rod pattern using that simplified model. The optimal control rod pattern is found along the direction in which the performance index gradient is steepest. The program has been verified to find the optimal control rod pattern in simulations using operational data from the Oyster Creek Reactor.
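The inner optimization this abstract describes, a search along the steepest-descent direction of a linearized power-distribution model, might look roughly like the sketch below. The sensitivity matrix, target distribution, and adjustment bounds are toy stand-ins, not reactor data, and the real program would refresh the linear model from the 3-D simulator between searches.

```python
def steepest_descent_search(p0, S, target, bounds, step=0.1, iters=200):
    """Minimize ||p0 + S x - target||^2 over box-constrained x.

    p0     : baseline nodal powers
    S      : sensitivity matrix (d power / d rod adjustment)
    target : desired power distribution
    bounds : (lo, hi) limits per rod adjustment
    """
    m, n = len(S), len(S[0])
    x = [0.0] * n
    for _ in range(iters):
        # Residual between predicted and desired power.
        r = [p0[i] + sum(S[i][j] * x[j] for j in range(n)) - target[i]
             for i in range(m)]
        # Gradient of the squared error: g = 2 S^T r.
        g = [2 * sum(S[i][j] * r[i] for i in range(m)) for j in range(n)]
        # Step against the gradient, clipped to the operational bounds.
        x = [min(max(x[j] - step * g[j], bounds[j][0]), bounds[j][1])
             for j in range(n)]
    return x

# Toy example: two rods, identity sensitivity, small target shift.
x = steepest_descent_search([1.0, 1.0], [[1.0, 0.0], [0.0, 1.0]],
                            [1.2, 0.9], [(-1.0, 1.0), (-1.0, 1.0)])
# x converges to roughly [0.2, -0.1]
```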

  3. Visual search in ecological and non-ecological displays: evidence for a non-monotonic effect of complexity on performance.

    PubMed

    Chassy, Philippe; Gobet, Fernand

    2013-01-01

    Considerable research has been carried out on visual search, with single or multiple targets. However, most studies have used artificial stimuli with low ecological validity. In addition, little is known about the effects of target complexity and expertise in visual search. Here, we investigate visual search in three conditions of complexity (detecting a king, detecting a check, and detecting a checkmate) with chess players of two levels of expertise (novices and club players). Results show that the influence of target complexity depends on level of structure of the visual display. Different functional relationships were found between artificial (random chess positions) and ecologically valid (game positions) stimuli: With artificial, but not with ecologically valid stimuli, a "pop out" effect was present when a target was visually more complex than distractors but could be captured by a memory chunk. This suggests that caution should be exercised when generalising from experiments using artificial stimuli with low ecological validity to real-life stimuli. PMID:23320084

  4. Visual Search in Ecological and Non-Ecological Displays: Evidence for a Non-Monotonic Effect of Complexity on Performance

    PubMed Central

    Chassy, Philippe; Gobet, Fernand

    2013-01-01

    Considerable research has been carried out on visual search, with single or multiple targets. However, most studies have used artificial stimuli with low ecological validity. In addition, little is known about the effects of target complexity and expertise in visual search. Here, we investigate visual search in three conditions of complexity (detecting a king, detecting a check, and detecting a checkmate) with chess players of two levels of expertise (novices and club players). Results show that the influence of target complexity depends on level of structure of the visual display. Different functional relationships were found between artificial (random chess positions) and ecologically valid (game positions) stimuli: With artificial, but not with ecologically valid stimuli, a “pop out” effect was present when a target was visually more complex than distractors but could be captured by a memory chunk. This suggests that caution should be exercised when generalising from experiments using artificial stimuli with low ecological validity to real-life stimuli. PMID:23320084

  5. Frames of reference for the light-from-above prior in visual search and shape judgements.

    PubMed

    Adams, Wendy J

    2008-04-01

Faced with highly complex and ambiguous visual input, human observers must rely on prior knowledge and assumptions to efficiently determine the structure of their surroundings. One of these assumptions is the 'light-from-above' prior. In the absence of explicit light-source information, the visual system assumes that the light-source is roughly overhead. A simple, low-cost strategy would place this 'light-from-above' prior in a retinal frame of reference. A more complex, but optimal strategy would be to assume that the light-source is gravitationally up, and compensate for observer orientation. Evidence to support one or other strategy from psychophysics and neurophysiology has been mixed. This paper pits the gravitational and retinal frames against each other in two different visual tasks that relate to the light-from-above prior. In the first task, observers had to report the presence or absence of a target where distractors and target were defined purely by shading. In the second task, observers made explicit shape judgements of similar stimuli. The orientation of the stimuli varied across trials and the observer's head was fixed at 0, ±45 or ±60 degrees. In both tasks the retinal frame of reference dominated. Visual search behaviour with shape-from-shading (SFS) stimuli was modulated purely by stimulus orientation relative to the retina. However, the gravitational frame of reference had a significant effect on shape judgements, with a 30% correction for observer orientation. In other words, shading information is processed quite differently depending on the demands of the current task. When a 'quick and dirty' representation is required to drive fast, efficient search, that is what the visual system provides. In contrast, when the task is to explicitly estimate shape, extra processing to compensate for head orientation precedes the perceptual judgement.
These results are consistent with current neurophysiological data on SFS if we re-frame compensation for observer orientation as a cue-combination problem. PMID:17950264

  6. The impact of clinical indications on visual search behaviour in skeletal radiographs

    NASA Astrophysics Data System (ADS)

    Rutledge, A.; McEntee, M. F.; Rainford, L.; O'Grady, M.; McCarthy, K.; Butler, M. L.

    2011-03-01

The hazards associated with ionizing radiation have been documented in the literature, and the justification of X-ray examinations has therefore come to the forefront of the radiation safety debate in recent years [1]. International legislation states that the referrer is responsible for the provision of sufficient clinical information to enable the justification of the medical exposure. Clinical indications are a set of systematically developed statements to assist in accurate diagnosis and appropriate patient management [2]. In this study, the impact of clinical indications upon fracture detection for musculoskeletal radiographs is analyzed. A group of radiographers (n=6) interpreted musculoskeletal radiology cases (n=33) with and without clinical indications. Radiographic images were selected to represent common trauma presentations of extremities and pelvis. Detection of the fracture was measured using ROC methodology. An eye-tracking device was employed to record the radiographers' search behaviour by analysing distinct fixation points and search patterns, providing greater insight into the influence of clinical indications on observers' interpretation of radiographs. The influence of clinical information on fracture detection and search patterns was assessed. The findings of this study demonstrate that the inclusion of clinical indications results in impressionable search behaviour; differences in eye-tracking parameters were also noted. This study also attempts to uncover fundamental observer search strategies and behaviour with and without clinical indications, thus providing a greater understanding of and insight into the image interpretation process. The results of this study suggest that the availability of adequate clinical data should be emphasized when interpreting trauma radiographs.

  7. Identification of the ideal clutter metric to predict time dependence of human visual search

    NASA Astrophysics Data System (ADS)

    Cartier, Joan F.; Hsu, David H.

    1995-05-01

The Army Night Vision and Electronic Sensors Directorate (NVESD) has recently performed a human perception experiment in which eye-tracker measurements were made on trained military observers searching for targets in infrared images. These data offered an important opportunity to evaluate a new technique for search modeling. Following the approach taken by Jeff Nicoll, this model treats search as a random walk in which the observers are in one of two states until they quit: they are either examining a point of interest or wandering around looking for one. When wandering they skip rapidly from point to point. When examining they move more slowly, reflecting the fact that target discrimination requires additional thought processes. In this paper we simulate the random walk, using a clutter metric to assign relative attractiveness to the points of interest within the image that compete for the observer's attention. The NVESD data indicate that a number of standard clutter metrics are good estimators of the apportionment of the observer's time between wandering and examining. Conversely, the apportionment of observer time spent wandering and examining could be used to reverse-engineer the ideal clutter metric that most perfectly describes the behavior of the group of observers. It may be possible to use this technique to design the optimal clutter metric to predict the performance of visual search.
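The two-state model sketched in this abstract can be simulated directly. Every number here (the dwell times, the examine probability, and the attractiveness weights) is an illustrative assumption rather than the NVESD model's calibrated values.

```python
import random

def simulate_search(attractiveness, p_examine=0.3, wander_dwell=0.1,
                    examine_dwell=0.5, time_budget=10.0, seed=0):
    """Two-state random-walk search over weighted points of interest.

    Each step picks a point with probability proportional to its
    clutter-metric attractiveness, then dwells briefly (wandering) or
    longer (examining). Returns time spent in each state plus
    per-point visit counts.
    """
    rng = random.Random(seed)
    total = sum(attractiveness)
    visits = [0] * len(attractiveness)
    t = wander_t = examine_t = 0.0
    while t < time_budget:
        # Weighted choice of the next point of interest.
        r = rng.uniform(0, total)
        acc = 0.0
        for i, w in enumerate(attractiveness):
            acc += w
            if r <= acc:
                visits[i] += 1
                break
        if rng.random() < p_examine:   # slow examining state
            examine_t += examine_dwell
        else:                          # fast wandering state
            wander_t += wander_dwell
        t = wander_t + examine_t
    return wander_t, examine_t, visits

wander, examine, visits = simulate_search([1.0, 4.0, 2.0], seed=42)
```

Feeding candidate clutter metrics in as the attractiveness weights and comparing the simulated wander/examine split against eye-tracker data is one way to read the reverse-engineering idea the paper suggests.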

  8. Are visual search procedures adapted to the nature of the script?

    PubMed

    Green, D W; Liow, S J; Tng, S K; Zielinski, S

    1996-05-01

    Letters are processed differently from other shapes in a visual search task where subjects have to decide whether or not a predesignated target symbol is present in a subsequently presented string of five such symbols. If the M-shaped letter search function, which relates correct reaction time to target position, reflects an efficient strategy used in word recognition, it should be produced by skilled readers of English who also read a logographic script. A cross-linguistic study of biscriptal Mandarin/English and monoscriptal English readers (Expt 1) provided evidence of the generality of a basic search strategy for alphabetic targets. Hand-of-response affected the search function in an asymmetric fashion for both groups of readers, and although case differences between target and string increased reaction times overall, the classic M-shaped function remained. In Expt 2, we used a within-subjects design and examined the extent to which biscriptal Mandarin/English readers produced different search functions for letters and logographs. Consistent with expectation, these readers showed an M-shaped function for letters but a more U-shaped function for logographs. Hand-of-response exerted a consistent effect for both types of material. Taken together, these experiments support the view that skilled readers develop script-specific procedures. PMID:8673360

  9. STATIONARY PATTERN ADAPTATION AND THE EARLY COMPONENTS IN HUMAN VISUAL EVOKED POTENTIALS

    EPA Science Inventory

    Pattern-onset visual evoked potentials were elicited from humans by sinusoidal gratings of 0.5., 1, 2 and 4 cpd (cycles/degree) following adaptation to a blank field or one of the gratings. The wave forms recorded after blank field adaptation showed an early positive component, P...

  10. Patterns of Visual Attention to Faces and Objects in Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    McPartland, James C.; Webb, Sara Jane; Keehn, Brandon; Dawson, Geraldine

    2011-01-01

    This study used eye-tracking to examine visual attention to faces and objects in adolescents with autism spectrum disorder (ASD) and typical peers. Point of gaze was recorded during passive viewing of images of human faces, inverted human faces, monkey faces, three-dimensional curvilinear objects, and two-dimensional geometric patterns

  11. Flexibility and Coordination among Acts of Visualization and Analysis in a Pattern Generalization Activity

    ERIC Educational Resources Information Center

    Nilsson, Per; Juter, Kristina

    2011-01-01

    This study aims at exploring processes of flexibility and coordination among acts of visualization and analysis in students' attempt to reach a general formula for a three-dimensional pattern generalizing task. The investigation draws on a case-study analysis of two 15-year-old girls working together on a task in which they are asked to calculate…

  12. Characteristics of Empirically Derived Subgroups Based on Intelligence and Visual-Motor Score Patterns.

    ERIC Educational Resources Information Center

    Snow, Jeffrey H.; Desch, Larry W.

    1989-01-01

    Cluster-analyzed results from intelligence and visual-motor measures of children (N=1,204) referred for academic and/or behavior problems. Found five subgroups with three of the five showing more dysfunctional patterns than other two. Results suggest influence of physiological/developmental factors with development of learning difficulties.…

  13. Nurses' Behaviors and Visual Scanning Patterns May Reduce Patient Identification Errors

    ERIC Educational Resources Information Center

    Marquard, Jenna L.; Henneman, Philip L.; He, Ze; Jo, Junghee; Fisher, Donald L.; Henneman, Elizabeth A.

    2011-01-01

    Patient identification (ID) errors occurring during the medication administration process can be fatal. The aim of this study is to determine whether differences in nurses' behaviors and visual scanning patterns during the medication administration process influence their capacities to identify patient ID errors. Nurse participants (n = 20)…

  14. Patterns of Visual Attention to Faces and Objects in Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    McPartland, James C.; Webb, Sara Jane; Keehn, Brandon; Dawson, Geraldine

    2011-01-01

    This study used eye-tracking to examine visual attention to faces and objects in adolescents with autism spectrum disorder (ASD) and typical peers. Point of gaze was recorded during passive viewing of images of human faces, inverted human faces, monkey faces, three-dimensional curvilinear objects, and two-dimensional geometric patterns.…

  15. Visualization of Flow Patterns in the Bonneville 2nd Powerhouse Forebay

    SciTech Connect

    Serkowski, John A.; Rakowski, Cynthia L.; Ebner, Laurie L.

    2002-12-31

    Three-dimensional (3D) computational fluid dynamics (CFD) models are increasingly being used to study forebay and tailrace flow systems associated with hydroelectric projects. This paper describes the fundamentals of creating effective 3D data visualizations from CFD model results using a case study from the Bonneville Dam. These visualizations enhance the utility of CFD models by helping the researcher and end user better understand the model results. To develop visualizations for the Bonneville Dam forebay model, we used specialized, but commonly available software and a standard high-end microprocessor workstation. With these tools we were able to compare flow patterns among several operational scenarios by producing a variety of contour, vector, stream-trace, and vortex-core plots. The differences in flow patterns we observed could impact efforts to divert downstream-migrating fish around powerhouse turbines.

  16. On Assisting a Visual-Facial Affect Recognition System with Keyboard-Stroke Pattern Information

    NASA Astrophysics Data System (ADS)

    Stathopoulou, I.-O.; Alepis, E.; Tsihrintzis, G. A.; Virvou, M.

    Towards realizing a multimodal affect recognition system, we are considering the advantages of assisting a visual-facial expression recognition system with keyboard-stroke pattern information. Our work is based on the assumption that the visual-facial and keyboard modalities are complementary to each other and that their combination can significantly improve the accuracy in affective user models. Specifically, we present and discuss the development and evaluation process of two corresponding affect recognition subsystems, with emphasis on the recognition of 6 basic emotional states, namely happiness, sadness, surprise, anger and disgust as well as the emotion-less state which we refer to as neutral. We find that emotion recognition by the visual-facial modality can be aided greatly by keyboard-stroke pattern information and the combination of the two modalities can lead to better results towards building a multimodal affect recognition system.

  17. Determinants of dwell time in visual search: similarity or perceptual difficulty?

    PubMed

    Becker, Stefanie I

    2011-01-01

    The present study examined the factors that determine dwell times in a visual search task, that is, the duration for which the gaze remains fixated on an object. It has been suggested that an item's similarity to the search target should be an important determinant of dwell times, because dwell times are taken to reflect the time needed to reject the item as a distractor, and such discriminations are supposed to be harder the more similar an item is to the search target. In line with this similarity view, a previous study showed that, in search for a target ring of thin line-width, dwell times on thin line-width Landolt C distractors were longer than dwell times on Landolt Cs with thick or medium line-width. However, dwell times may have been longer on thin Landolt Cs because the thin line-width made it harder to detect whether the stimuli had a gap or not. Thus, it is an open question whether dwell times on thin line-width distractors were longer because they were similar to the target or because the perceptual decision was more difficult. The present study de-coupled similarity from perceptual difficulty by measuring dwell times on thin, medium and thick line-width distractors when the target had thin, medium or thick line-width. The results showed that dwell times were longer on target-similar than on target-dissimilar stimuli across all target conditions and regardless of line-width. It is concluded that prior findings of longer dwell times on thin line-width distractors can clearly be attributed to target similarity. As discussed towards the end, the finding of similarity effects on dwell times has important implications for current theories of visual search and eye movement control. PMID:21408139

  18. User-assisted visual search and tracking across distributed multi-camera networks

    NASA Astrophysics Data System (ADS)

    Raja, Yogesh; Gong, Shaogang; Xiang, Tao

    2011-11-01

    Human CCTV operators face several challenges in their task which can lead to missed events, people or associations, including: (a) data overload in large distributed multi-camera environments; (b) short attention span; (c) limited knowledge of what to look for; and (d) lack of access to non-visual contextual intelligence to aid search. Developing a system to aid human operators and alleviate such burdens requires addressing the problem of automatic re-identification of people across disjoint camera views, a matching task made difficult by factors such as lighting, viewpoint and pose changes and for which absolute scoring approaches are not best suited. Accordingly, we describe a distributed multi-camera tracking (MCT) system to visually aid human operators in associating people and objects effectively over multiple disjoint camera views in a large public space. The system comprises three key novel components: (1) relative measures of ranking rather than absolute scoring to learn the best features for matching; (2) multi-camera behaviour profiling as higher-level knowledge to reduce the search space and increase the chance of finding correct matches; and (3) human-assisted data mining to interactively guide search and in the process recover missing detections and discover previously unknown associations. We provide an extensive evaluation of the greater effectiveness of the system as compared to existing approaches on industry-standard i-LIDS multi-camera data.

  19. Searching for patterns in remote sensing image databases using neural networks

    NASA Technical Reports Server (NTRS)

    Paola, Justin D.; Schowengerdt, Robert A.

    1995-01-01

    We have investigated a method, based on a successful neural network multispectral image classification system, of searching for single patterns in remote sensing databases. While defining the pattern to search for and the feature to be used for that search (spectral, spatial, temporal, etc.) is challenging, a more difficult task is selecting competing patterns to train against the desired pattern. Schemes for competing pattern selection, including random selection and human interpreted selection, are discussed in the context of an example detection of dense urban areas in Landsat Thematic Mapper imagery. When applying the search to multiple images, a simple normalization method can alleviate the problem of inconsistent image calibration. Another potential problem, that of highly compressed data, was found to have a minimal effect on the ability to detect the desired pattern. The neural network algorithm has been implemented using the PVM (Parallel Virtual Machine) library and nearly-optimal speedups have been obtained that help alleviate the long process of searching through imagery.

  20. Visual Signals Vertically Extend the Perceptual Span in Searching a Text: A Gaze-Contingent Window Study

    ERIC Educational Resources Information Center

    Cauchard, Fabrice; Eyrolle, Helene; Cellier, Jean-Marie; Hyona, Jukka

    2010-01-01

    This study investigated the effect of visual signals on perceptual span in text search and the kinds of signal information that facilitate the search. Participants were asked to find answers to specific questions in chapter-length texts in either a normal or a window condition, where the text disappeared beyond a vertical 3 degrees gaze-contingent…

  1. An instructive role for patterned spontaneous retinal activity in mouse visual map development

    PubMed Central

    Xu, Hong-ping; Furman, Moran; Mineur, Yann S.; Chen, Hui; King, Sarah L.; Zenisek, David; Zhou, Z. Jimmy; Butts, Daniel A.; Tian, Ning; Picciotto, Marina R.; Crair, Michael C.

    2011-01-01

    Complex neural circuits in the mammalian brain develop through a combination of genetic instruction and activity-dependent refinement. The relative role of these factors and the form of neuronal activity responsible for circuit development is a matter of significant debate. In the mammalian visual system, retinal ganglion cell projections to the brain are mapped with respect to retinotopic location and eye of origin. We manipulated the pattern of spontaneous retinal waves present during development without changing overall activity levels through the transgenic expression of β2-nicotinic acetylcholine receptors in retinal ganglion cells of mice. We used this manipulation to demonstrate that spontaneous retinal activity is not just permissive, but instructive in the emergence of eye-specific segregation and retinotopic refinement in the mouse visual system. This suggests that specific patterns of spontaneous activity throughout the developing brain are essential in the emergence of specific and distinct patterns of neuronal connectivity. PMID:21689598

  2. Topographical patterns of ERP elicited by different visual information-processing tasks.

    PubMed

    Kotani, K; Freivalds, A; Horii, K

    1993-08-01

    Topographical patterns of event-related potentials were compared on a visual display terminal for a data-input task and a number-comparison task. Maximum negative peaks were found in the frontal and central regions for the former but at midline locations for the latter. Latencies were shorter in the occipital regions than in the frontal regions for the former and the opposite pattern was found for the latter. An analysis of variance indicated that hemispheric location significantly affected the amplitude of peaks. On the other hand, latencies were affected by the task, frontal and occipital regions, and their interaction. These results suggest that a pattern of the topographic display of event-related potentials can be used as an objective means for classifying visual tasks. PMID:8367255

  3. Ideal and visual-search observers: accounting for anatomical noise in search tasks with planar nuclear imaging

    NASA Astrophysics Data System (ADS)

    Sen, Anando; Gifford, Howard C.

    2015-03-01

    Model observers have frequently been used for hardware optimization of imaging systems. For model observers to reliably mimic human performance it is important to account for the sources of variations in the images. Detection-localization tasks are complicated by anatomical noise present in the images. Several scanning observers have been proposed for such tasks. The most popular of these, the channelized Hotelling observer (CHO) incorporates anatomical variations through covariance matrices. We propose the visual-search (VS) observer as an alternative to the CHO to account for anatomical noise. The VS observer is a two-step process which first identifies suspicious tumor candidates and then performs a detailed analysis on them. The identification of suspicious candidates (search) implicitly accounts for anatomical noise. In this study we present a comparison of these two observers with human observers. The application considered is collimator optimization for planar nuclear imaging. Both observers show similar trends in performance with the VS observer slightly closer to human performance.

  4. Visual search strategies of baseball batters: eye movements during the preparatory phase of batting.

    PubMed

    Kato, Takaaki; Fukuda, Tadahiko

    2002-04-01

    The aim of this study was to analyze the visual search strategies of baseball batters during the viewing period of the pitcher's motion. The 18 subjects were 9 experts and 9 novices. While subjects viewed a videotape which, from a right-handed batter's perspective, showed a pitcher throwing a series of 10 types of pitches, their eye movements were measured and analyzed. Novices moved their eyes faster than experts, and their distribution area of viewing points was also wider than that of the experts. Experts viewed the pitching arm for longer than novices did during the last two pitching phases. These results indicate that experts set their visual pivot on the pitcher's elbow and used peripheral vision to evaluate the pitcher's motion and the ball trajectory. PMID:12027326

  5. The interplay of attention and consciousness in visual search, attentional blink and working memory consolidation

    PubMed Central

    Raffone, Antonino; Srinivasan, Narayanan; van Leeuwen, Cees

    2014-01-01

    Despite the acknowledged relationship between consciousness and attention, theories of the two have mostly been developed separately. Moreover, these theories have independently attempted to explain phenomena in which both are likely to interact, such as the attentional blink (AB) and working memory (WM) consolidation. Here, we make an effort to bridge the gap between, on the one hand, a theory of consciousness based on the notion of global workspace (GW) and, on the other, a synthesis of theories of visual attention. We offer a theory of attention and consciousness (TAC) that provides a unified neurocognitive account of several phenomena associated with visual search, AB and WM consolidation. TAC assumes multiple processing stages between early visual representation and conscious access, and extends the dynamics of the global neuronal workspace model to a visual attentional workspace (VAW). The VAW is controlled by executive routers, higher-order representations of executive operations in the GW, without the need for explicit saliency or priority maps. TAC leads to newly proposed mechanisms for illusory conjunctions, AB, inattentional blindness and WM capacity, and suggests neural correlates of phenomenal consciousness. Finally, the theory reconciles the all-or-none and graded perspectives on conscious representation. PMID:24639586

  6. The interplay of attention and consciousness in visual search, attentional blink and working memory consolidation.

    PubMed

    Raffone, Antonino; Srinivasan, Narayanan; van Leeuwen, Cees

    2014-05-01

    Despite the acknowledged relationship between consciousness and attention, theories of the two have mostly been developed separately. Moreover, these theories have independently attempted to explain phenomena in which both are likely to interact, such as the attentional blink (AB) and working memory (WM) consolidation. Here, we make an effort to bridge the gap between, on the one hand, a theory of consciousness based on the notion of global workspace (GW) and, on the other, a synthesis of theories of visual attention. We offer a theory of attention and consciousness (TAC) that provides a unified neurocognitive account of several phenomena associated with visual search, AB and WM consolidation. TAC assumes multiple processing stages between early visual representation and conscious access, and extends the dynamics of the global neuronal workspace model to a visual attentional workspace (VAW). The VAW is controlled by executive routers, higher-order representations of executive operations in the GW, without the need for explicit saliency or priority maps. TAC leads to newly proposed mechanisms for illusory conjunctions, AB, inattentional blindness and WM capacity, and suggests neural correlates of phenomenal consciousness. Finally, the theory reconciles the all-or-none and graded perspectives on conscious representation. PMID:24639586

  7. The importance of being expert: top-down attentional control in visual search with photographs.

    PubMed

    Hershler, Orit; Hochstein, Shaul

    2009-10-01

    Two observers looking at the same picture may not see the same thing. To avoid sensory overload, visual information is actively selected for further processing by bottom-up processes, originating within the visual image, and top-down processes, reflecting the motivation and past experiences of the observer. The latter processes could grant categories of significance to the observer a permanent attentional advantage. Nevertheless, evidence for a generalized top-down advantage for specific categories has been limited. In this study, bird and car experts searched for face, car, or bird photographs in a heterogeneous display of photographs of real objects. Bottom-up influences were ruled out by presenting both groups of experts with identical displays. Faces and targets of expertise had a clear advantage over novice targets, indicating a permanent top-down preference for favored categories. A novel type of analysis of reaction times over the visual field suggests that the advantage for expert objects is achieved by broader detection windows, allowing observers to scan greater parts of the visual field for the presence of favored targets during each fixation. PMID:19801608

  8. Low target prevalence is a stubborn source of errors in visual search tasks

    PubMed Central

    Wolfe, Jeremy M.; Horowitz, Todd S.; Van Wert, Michael J.; Kenner, Naomi M.; Place, Skyler S.; Kibbi, Nour

    2009-01-01

    In visual search tasks, observers look for targets in displays containing distractors. The likelihood that targets will be missed varies with target prevalence, the frequency with which targets are presented across trials. Miss error rates are much higher at low target prevalence (1–2%) than at high prevalence (50%). Unfortunately, low prevalence is characteristic of important search tasks like airport security and medical screening, where miss errors are dangerous. A series of experiments shows that this prevalence effect is very robust. In signal detection terms, the prevalence effect can be explained as a criterion shift and not a change in sensitivity. Several efforts to induce observers to adopt a better criterion fail. However, a regime of brief retraining periods with high prevalence and full feedback allows observers to hold a good criterion during periods of low prevalence with no feedback. PMID:17999575
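    The distinction drawn above between a criterion shift and a sensitivity change can be made concrete with the standard equal-variance Gaussian signal detection formulas. The hit and false-alarm rates below are made-up numbers for illustration, not data from the study:

```python
from statistics import NormalDist

def dprime_and_criterion(hit_rate, fa_rate):
    """Equal-variance signal detection: sensitivity d' and criterion c
    from hit and false-alarm rates (rates must lie strictly in (0, 1))."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    d_prime = z(hit_rate) - z(fa_rate)
    c = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, c

# Hypothetical numbers: at low prevalence, observers miss more targets
# (lower hit rate) but also false-alarm less -- a conservative shift in
# criterion c, while sensitivity d' stays roughly unchanged.
d_hi, c_hi = dprime_and_criterion(0.93, 0.07)   # high prevalence
d_lo, c_lo = dprime_and_criterion(0.70, 0.01)   # low prevalence
```

    With these illustrative rates, d' barely moves between conditions while c shifts substantially positive at low prevalence, which is the signature the authors describe.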

  9. Examining wide-arc digital breast tomosynthesis: optimization using a visual-search model observer

    NASA Astrophysics Data System (ADS)

    Das, Mini; Liang, Zhihua; Gifford, Howard C.

    2015-03-01

    Mathematical model observers are expected to assist in preclinical optimization of image acquisition and reconstruction parameters. A clinically realistic and robust model-observer platform could help in multiparameter optimizations without requiring frequent human-observer validations. We are developing search-capable visual-search (VS) model observers with this potential. In this work, we present initial results on optimization of the DBT scan angle and the number of projection views for low-contrast mass detection. Comparison with human-observer results shows very good agreement. These results point towards the benefits of using relatively wider arcs and a low number of projection angles per arc degree for improved mass detection. These results are particularly interesting given that FDA-approved DBT systems such as the Hologic Selenia Dimensions use a narrow (15-degree) acquisition arc and one projection per arc degree.

  10. Gaze in Visual Search Is Guided More Efficiently by Positive Cues than by Negative Cues.

    PubMed

    Kugler, Günter; 't Hart, Bernard Marius; Kohlbecher, Stefan; Einhäuser, Wolfgang; Schneider, Erich

    2015-01-01

    Visual search can be accelerated when properties of the target are known. Such knowledge allows the searcher to direct attention to items sharing these properties. Recent work indicates that information about properties of non-targets (i.e., negative cues) can also guide search. In the present study, we examine whether negative cues lead to different search behavior compared to positive cues. We asked observers to search for a target defined by a certain shape singleton (broken line among solid lines). Each line was embedded in a colored disk. In "positive cue" blocks, participants were informed about possible colors of the target item. In "negative cue" blocks, the participants were informed about colors that could not contain the target. Search displays were designed such that with both the positive and negative cues, the same number of items could potentially contain the broken line ("relevant items"). Thus, both cues were equally informative. We measured response times and eye movements. Participants exhibited longer response times when provided with negative cues compared to positive cues. Although negative cues did guide the eyes to relevant items, there were marked differences in eye movements. Negative cues resulted in smaller proportions of fixations on relevant items, longer duration of fixations and in higher rates of fixations per item as compared to positive cues. The effectiveness of both cue types, as measured by fixations on relevant items, increased over the course of each search. In sum, a negative color cue can guide attention to relevant items, but it is less efficient than a positive cue of the same informational value. PMID:26717307

  11. Gaze in Visual Search Is Guided More Efficiently by Positive Cues than by Negative Cues

    PubMed Central

    Kohlbecher, Stefan; Einhäuser, Wolfgang; Schneider, Erich

    2015-01-01

    Visual search can be accelerated when properties of the target are known. Such knowledge allows the searcher to direct attention to items sharing these properties. Recent work indicates that information about properties of non-targets (i.e., negative cues) can also guide search. In the present study, we examine whether negative cues lead to different search behavior compared to positive cues. We asked observers to search for a target defined by a certain shape singleton (broken line among solid lines). Each line was embedded in a colored disk. In “positive cue” blocks, participants were informed about possible colors of the target item. In “negative cue” blocks, the participants were informed about colors that could not contain the target. Search displays were designed such that with both the positive and negative cues, the same number of items could potentially contain the broken line (“relevant items”). Thus, both cues were equally informative. We measured response times and eye movements. Participants exhibited longer response times when provided with negative cues compared to positive cues. Although negative cues did guide the eyes to relevant items, there were marked differences in eye movements. Negative cues resulted in smaller proportions of fixations on relevant items, longer duration of fixations and in higher rates of fixations per item as compared to positive cues. The effectiveness of both cue types, as measured by fixations on relevant items, increased over the course of each search. In sum, a negative color cue can guide attention to relevant items, but it is less efficient than a positive cue of the same informational value. PMID:26717307

  12. Beam angle optimization for intensity-modulated radiation therapy using a guided pattern search method

    NASA Astrophysics Data System (ADS)

    Rocha, Humberto; Dias, Joana M.; Ferreira, Brígida C.; Lopes, Maria C.

    2013-05-01

    Generally, the inverse planning of radiation therapy consists mainly of fluence optimization. Beam angle optimization (BAO) in intensity-modulated radiation therapy (IMRT) consists of selecting appropriate radiation incidence directions and may influence the quality of IMRT plans, both by enhancing organ sparing and by improving tumor coverage. However, in clinical practice, beam directions continue most of the time to be manually selected by the treatment planner without objective and rigorous criteria. The goal of this paper is to introduce a novel approach that uses beam's-eye-view dose ray tracing metrics within a pattern search method framework in the optimization of the highly non-convex BAO problem. Pattern search methods are derivative-free optimization methods that require few function evaluations to progress and converge and have the ability to better avoid local entrapment. The pattern search framework is composed of a search step and a poll step at each iteration. The poll step performs a local search in a mesh neighborhood and ensures convergence to a local minimizer or stationary point. The search step provides the flexibility for a global search, since it allows searches away from the neighborhood of the current iterate. Beam's-eye-view dose metrics assign a score to each radiation beam direction and can be used within the pattern search framework, furnishing a priori knowledge of the problem so that directions with larger dosimetric scores are tested first. A set of clinical cases of head-and-neck tumors treated at the Portuguese Institute of Oncology of Coimbra is used to discuss the potential of this approach in the optimization of the BAO problem.
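    The poll-step structure described above can be illustrated with a stripped-down coordinate pattern search. This is a generic sketch of the derivative-free method, not the authors' BAO implementation, and the quadratic test function is arbitrary:

```python
def pattern_search(f, x0, step=1.0, tol=1e-6, max_iter=1000):
    """Minimal coordinate pattern search (poll step only): poll the 2n
    mesh neighbors of the current point; move on an improving neighbor,
    otherwise shrink the mesh until it falls below the tolerance."""
    x = list(x0)
    n = len(x)
    fx = f(x)
    for _ in range(max_iter):
        if step < tol:
            break
        improved = False
        for i in range(n):                   # poll the mesh neighborhood
            for delta in (step, -step):
                y = x[:]
                y[i] += delta
                fy = f(y)
                if fy < fx:                  # accept first improving move
                    x, fx, improved = y, fy, True
                    break
            if improved:
                break
        if not improved:
            step *= 0.5                      # unsuccessful poll: refine mesh
    return x, fx

# Minimize a smooth convex function; no derivatives are needed.
xmin, fmin = pattern_search(lambda v: (v[0] - 3) ** 2 + (v[1] + 1) ** 2,
                            [0.0, 0.0])
```

    A full pattern search method adds the search step on top of this loop; in the paper's approach, that is where the beam's-eye-view dosimetric scores would steer which candidate directions are evaluated first.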

  13. Case study of visualizing global user download patterns using Google Earth and NASA World Wind

    NASA Astrophysics Data System (ADS)

    Zong, Ziliang; Job, Joshua; Zhang, Xuesong; Nijim, Mais; Qin, Xiao

    2012-01-01

    Geo-visualization is significantly changing the way we view spatial data and discover information. On the one hand, a large number of spatial data are generated every day. On the other hand, these data are not well utilized due to the lack of free and easily used data-visualization tools. This becomes even worse when most of the spatial data remains in the form of plain text such as log files. This paper describes a way of visualizing massive plain-text spatial data at no cost by utilizing Google Earth and NASA World Wind. We illustrate our methods by visualizing over 170,000 global download requests for satellite images maintained by the Earth Resources Observation and Science (EROS) Center of U.S. Geological Survey (USGS). Our visualization results identify the most popular satellite images around the world and discover the global user download patterns. The benefits of this research are: 1. assisting in improving the satellite image downloading services provided by USGS, and 2. providing a proxy for analyzing the "hot spot" areas of research. Most importantly, our methods demonstrate an easy way to geo-visualize massive textual spatial data, which is highly applicable to mining spatially referenced data and information on a wide variety of research domains (e.g., hydrology, agriculture, atmospheric science, natural hazard, and global climate change).
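    As a flavor of the approach, plain-text log records can be turned into an overlay by emitting KML, the XML format that Google Earth opens directly. The records below are fabricated examples, not USGS download data:

```python
# Convert plain-text log records (name, lat, lon, count) into a minimal
# KML document. The field layout of the log records is hypothetical.
records = [
    ("Sioux Falls, SD", 43.55, -96.70, 1250),
    ("Canberra, AU", -35.28, 149.13, 430),
]

placemarks = "\n".join(
    f"""  <Placemark>
    <name>{name} ({count} downloads)</name>
    <Point><coordinates>{lon},{lat},0</coordinates></Point>
  </Placemark>"""
    for name, lat, lon, count in records
)

kml = f"""<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
<Document>
{placemarks}
</Document>
</kml>"""
```

    Note that KML coordinates are ordered longitude,latitude,altitude; swapping them is the most common mistake when generating overlays from log files.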

  14. Mining patterns in persistent surveillance systems with smart query and visual analytics

    NASA Astrophysics Data System (ADS)

    Habibi, Mohammad S.; Shirkhodaie, Amir

    2013-05-01

    In Persistent Surveillance Systems (PSS), the ability to detect and characterize events geospatially helps analysts take pre-emptive steps to counter an adversary's actions. An interactive Visual Analytics (VA) model offers a platform for pattern investigation and reasoning to comprehend and/or predict such occurrences. Identifying and offsetting these threats requires collecting information from diverse sources, which brings with it increasingly abstract data. These abstract semantic data have a degree of inherent uncertainty and imprecision, and require filtering before being processed further. In this paper, we introduce an approach based on the Vector Space Modeling (VSM) technique for classification of spatiotemporal sequential patterns of group activities. The feature vectors consist of an array of attributes extracted from semantically annotated sensor messages. To facilitate proper similarity matching and detection of time-varying spatiotemporal patterns, a temporal Dynamic Time Warping (DTW) method with a Gaussian Mixture Model (GMM) fitted by Expectation Maximization (EM) is introduced. DTW is intended for detection of event patterns from neighborhood-proximity semantic frames derived from an established ontology. GMM with EM, on the other hand, is employed as a Bayesian probabilistic model to estimate the probability of events associated with a detected spatiotemporal pattern. We present a new visual analytic tool for testing and evaluating group activities detected under this scheme. Experimental results demonstrate the effectiveness of the proposed approach for discovery and matching of subsequences within the sequentially generated pattern space of our experiments.
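    The DTW component can be illustrated independently of the semantic-frame pipeline. Below is the textbook dynamic-programming formulation for 1-D sequences; the paper's temporal DTW over semantic frames would replace the absolute-difference cost with a frame-similarity measure:

```python
def dtw_distance(a, b):
    """Classic dynamic-programming DTW distance between two 1-D sequences.
    dp[i][j] holds the minimal cumulative cost of aligning a[:i] with b[:j]."""
    INF = float("inf")
    n, m = len(a), len(b)
    dp = [[INF] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            dp[i][j] = cost + min(dp[i - 1][j],      # insertion
                                  dp[i][j - 1],      # deletion
                                  dp[i - 1][j - 1])  # match
    return dp[n][m]

# A time-stretched copy of a pattern stays close under DTW,
# while an unrelated pattern does not.
base      = [0, 1, 2, 3, 2, 1, 0]
stretched = [0, 0, 1, 1, 2, 2, 3, 3, 2, 2, 1, 1, 0, 0]
other     = [3, 0, 3, 0, 3, 0, 3]
```

    This invariance to time-axis stretching is what makes DTW suitable for matching activity patterns whose events unfold at varying speeds.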

  15. You can detect the trees as well as the forest when adding the leaves: evidence from visual search tasks containing three-level hierarchical stimuli.

    PubMed

    Krakowski, Claire-Sara; Borst, Grégoire; Pineau, Arlette; Houdé, Olivier; Poirel, Nicolas

    2015-05-01

    The present study investigated how multiple levels of hierarchical stimuli (i.e., global, intermediate and local) are processed during a visual search task. Healthy adults participated in a visual search task in which a target was either present or not at one of the three levels of hierarchical stimuli (global geometrical form made by intermediate forms themselves constituted by local forms). By varying the number of distractors, the results showed that targets presented at global and intermediate levels were detected efficiently (i.e., the detection times did not vary with the number of distractors) whereas local targets were processed less efficiently (i.e., the detection times increased with the number of distractors). Additional experiments confirmed that these results were not due to the size of the target elements or to the spatial proximity among the structural levels. Taken together, these results show that the most local level is always processed less efficiently, suggesting that it is disadvantaged during the competition for attentional resources compared to higher structural levels. The present study thus supports the view that the processing occurring in visual search acts dichotomously rather than continuously. Given that pure structuralist and pure space-based models of attention cannot account for the pattern of our findings, we discuss the implication for perception, attentional selection and executive control of target position on hierarchical stimuli. PMID:25796055

  16. For better or worse: Prior trial accuracy affects current trial accuracy in visual search.

    PubMed

    Winkle, Jonathan; Biggs, Adam; Ericson, Justin; Mitroff, Stephen

    2015-01-01

    Life is not a series of independent events, but rather, each event is influenced by what just happened and what might happen next. However, many research studies treat any given trial as an independent and isolated event. Some research fields explicitly test trial-to-trial influences (e.g., repetition priming, task switching), but many, including visual search, largely ignore potential inter-trial effects. While trial-order effects could wash out with random presentation orders, this does not diminish their potential impact (e.g., would you want your radiologist to be negatively affected by his/her prior success in screening for cancer?). To examine biases related to prior trial performance, data were analyzed from airport security officers and Duke University participants who had completed a visual search task. Participants searched for a target "T" amongst "pseudo-L" distractors with 50% of trials containing a target. Four set sizes were used (8,16,24,32), and participants completed the search task without feedback. Inter-trial analyses revealed that accuracy for the current trial was related to the outcome of the previous trial, with trials following successful searches being approximately 10% more accurate than trials following failed searches. Pairs of target-absent or target-present trials predominantly drove this effect; specifically, accuracy on target-present trials was contingent on a previous hit or miss (i.e., other target-present trials), while accuracy on target-absent trials was contingent on a previous correct rejection or false alarm (i.e., other target-absent trials). Inter-trial effects arose in both population samples and were not driven by individual differences, as assessed by mixed-effects linear modeling. These results have both theoretical and practical implications. Theoretically, it is worth considering how to control for inter-trial variance in statistical models of behavior. 
Practically, characterizing the conditions that modulate inter-trial effects might help professional searchers perform more accurately, which can have life-saving consequences. Meeting abstract presented at VSS 2015. PMID:26327059
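The core inter-trial analysis in this record is a conditional accuracy: mean performance on trials that follow a correct outcome versus trials that follow an error. A minimal sketch of that grouping (function name and data layout are illustrative, not from the study):

```python
def intertrial_accuracy(outcomes):
    # `outcomes` is a sequence of booleans (correct / incorrect), one per
    # trial in presentation order. Pair each trial with its predecessor and
    # average accuracy separately by the predecessor's outcome.
    after_hit = [cur for prev, cur in zip(outcomes, outcomes[1:]) if prev]
    after_miss = [cur for prev, cur in zip(outcomes, outcomes[1:]) if not prev]

    def mean(xs):
        return sum(xs) / len(xs) if xs else float("nan")

    return mean(after_hit), mean(after_miss)
```

The study's reported ~10% gap corresponds to the difference between these two means; the mixed-effects modeling mentioned above additionally controls for participant-level variation, which this sketch does not.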

  17. Static Magnetic Field Stimulation over the Visual Cortex Increases Alpha Oscillations and Slows Visual Search in Humans.

    PubMed

    Gonzalez-Rosa, Javier J; Soto-Leon, Vanesa; Real, Pablo; Carrasco-Lopez, Carmen; Foffani, Guglielmo; Strange, Bryan A; Oliviero, Antonio

    2015-06-17

    Transcranial static magnetic field stimulation (tSMS) was recently introduced as a promising tool to modulate human cerebral excitability in a noninvasive and portable way. However, a demonstration that static magnetic fields can influence human brain activity and behavior is currently lacking, despite evidence that static magnetic fields interfere with neuronal function in animals. Here we show that transcranial application of a static magnetic field (120-200 mT at 2-3 cm from the magnet surface) over the human occiput produces a focal increase in the power of alpha oscillations in underlying cortex. Critically, this neurophysiological effect of tSMS is paralleled by slowed performance in a visual search task, selectively for the most difficult target detection trials. The typical relationship between prestimulus alpha power over posterior cortical areas and reaction time (RT) to targets during tSMS is altered such that tSMS-dependent increases in alpha power are associated with longer RTs for difficult, but not easy, target detection trials. Our results directly demonstrate that a powerful magnet placed on the scalp modulates normal brain activity and induces behavioral changes in humans. PMID:26085640

  18. Pattern drilling exploration: Optimum pattern types and hole spacings when searching for elliptical shaped targets

    USGS Publications Warehouse

    Drew, L.J.

    1979-01-01

    In this study the selection of the optimum type of drilling pattern to be used when exploring for elliptical shaped targets is examined. The rhombic pattern is optimal when the targets are known to have a preferred orientation. Situations can also be found where a rectangular pattern is as efficient as the rhombic pattern. A triangular or square drilling pattern should be used when the orientations of the targets are unknown. The way in which the optimum hole spacing varies as a function of (1) the cost of drilling, (2) the value of the targets, (3) the shape of the targets, and (4) the target occurrence probabilities was determined for several examples. Bayes' rule was used to show how target occurrence probabilities can be revised within a multistage pattern drilling scheme. © 1979 Plenum Publishing Corporation.
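The Bayesian revision mentioned above can be sketched for the simplest case: after each stage of dry holes, the prior probability that a cell contains a target is discounted by the chance those holes would have missed a target that was actually there. The function name and parameterization are illustrative, not Drew's notation:

```python
def revised_probability(prior, hit_prob, dry_holes):
    # Bayes' rule for a multistage drilling scheme: `prior` is the initial
    # probability a cell contains a target, `hit_prob` the chance a single
    # hole intersects a target that is present, `dry_holes` the number of
    # holes drilled so far without a hit.
    miss = (1.0 - hit_prob) ** dry_holes          # P(all holes dry | target)
    return prior * miss / (prior * miss + (1.0 - prior))
```

Each unsuccessful stage lowers the posterior, which in turn shifts the economic balance between drilling cost and expected target value that sets the optimum hole spacing.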

  19. Timing of saccadic eye movements during visual search for multiple targets.

    PubMed

    Wu, Chia-Chien; Kowler, Eileen

    2013-01-01

    Visual search requires sequences of saccades. Many studies have focused on spatial aspects of saccadic decisions, while relatively few (e.g., Hooge & Erkelens, 1999) consider timing. We studied saccadic timing during search for targets (thin circles containing tilted lines) located among nontargets (thicker circles). Tasks required either (a) estimating the mean tilt of the lines, or (b) looking at targets without a concurrent psychophysical task. The visual similarity of targets and nontargets affected both the probability of hitting a target and the saccade rate in both tasks. Saccadic timing also depended on immediate conditions, specifically, (a) the type of currently fixated location (dwell time was longer on targets than nontargets), (b) the type of goal (dwell time was shorter prior to saccades that hit targets), and (c) the ordinal position of the saccade in the sequence. The results show that timing decisions take into account the difficulty of finding targets, as well as the cost of delays. Timing strategies may be a compromise between the attempt to find and locate targets, or other suitable landing locations, using eccentric vision (at the cost of increased dwell times) versus a strategy of exploring less selectively at a rapid rate. PMID:24049045

  20. Timing of saccadic eye movements during visual search for multiple targets

    PubMed Central

    Wu, Chia-Chien; Kowler, Eileen

    2013-01-01

    Visual search requires sequences of saccades. Many studies have focused on spatial aspects of saccadic decisions, while relatively few (e.g., Hooge & Erkelens, 1999) consider timing. We studied saccadic timing during search for targets (thin circles containing tilted lines) located among nontargets (thicker circles). Tasks required either (a) estimating the mean tilt of the lines, or (b) looking at targets without a concurrent psychophysical task. The visual similarity of targets and nontargets affected both the probability of hitting a target and the saccade rate in both tasks. Saccadic timing also depended on immediate conditions, specifically, (a) the type of currently fixated location (dwell time was longer on targets than nontargets), (b) the type of goal (dwell time was shorter prior to saccades that hit targets), and (c) the ordinal position of the saccade in the sequence. The results show that timing decisions take into account the difficulty of finding targets, as well as the cost of delays. Timing strategies may be a compromise between the attempt to find and locate targets, or other suitable landing locations, using eccentric vision (at the cost of increased dwell times) versus a strategy of exploring less selectively at a rapid rate. PMID:24049045

  1. Multimodal neuroimaging evidence linking memory and attention systems during visual search cued by context.

    PubMed

    Kasper, Ryan W; Grafton, Scott T; Eckstein, Miguel P; Giesbrecht, Barry

    2015-03-01

    Visual search can be facilitated by the learning of spatial configurations that predict the location of a target among distractors. Neuropsychological and functional magnetic resonance imaging (fMRI) evidence implicates the medial temporal lobe (MTL) memory system in this contextual cueing effect, and electroencephalography (EEG) studies have identified the involvement of visual cortical regions related to attention. This work investigated two questions: (1) how memory and attention systems are related in contextual cueing; and (2) how these systems are involved in both short- and long-term contextual learning. In one session, EEG and fMRI data were acquired simultaneously in a contextual cueing task. In a second session conducted 1 week later, EEG data were recorded in isolation. The fMRI results revealed MTL contextual modulations that were correlated with short- and long-term behavioral context enhancements and attention-related effects measured with EEG. An fMRI-seeded EEG source analysis revealed that the MTL contributed the most variance to the variability in the attention enhancements measured with EEG. These results support the notion that memory and attention systems interact to facilitate search when spatial context is implicitly learned. PMID:25586959

  2. A convergence analysis of unconstrained and bound constrained evolutionary pattern search.

    PubMed

    Hart, W E

    2001-01-01

    We present and analyze a class of evolutionary algorithms for unconstrained and bound constrained optimization on R(n): evolutionary pattern search algorithms (EPSAs). EPSAs adaptively modify the step size of the mutation operator in response to the success of previous optimization steps. The design of EPSAs is inspired by recent analyses of pattern search methods. We show that EPSAs can be cast as stochastic pattern search methods, and we use this observation to prove that EPSAs have a probabilistic, weak stationary point convergence theory. This convergence theory is distinguished by the fact that the analysis does not approximate the stochastic process of EPSAs, and hence it exactly characterizes their convergence properties. PMID:11290281
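The defining feature of EPSAs described above is an adaptive mutation step: grow the step after a successful move, shrink it after a failed generation. A toy single-parent sketch of that control loop for unconstrained minimization (names, the coordinate-wise mutation, and the expand/contract factors are illustrative simplifications, not Hart's exact algorithm):

```python
import random

def pattern_search_minimize(f, x0, step=1.0, min_step=1e-6,
                            expand=2.0, contract=0.5):
    # Perturb the current point along a random coordinate by +/- step.
    # An improving trial is accepted and the step size expands; if both
    # trials fail, the step size contracts, mimicking EPSA step adaptation.
    x = list(x0)
    fx = f(x)
    while step > min_step:
        i = random.randrange(len(x))
        for sign in (+1.0, -1.0):
            y = list(x)
            y[i] += sign * step
            fy = f(y)
            if fy < fx:
                x, fx = y, fy
                step *= expand
                break
        else:
            step *= contract
    return x, fx
```

The convergence theory in the record concerns exactly this kind of process: because steps shrink only on failure and the trial directions span the search space, the iterates accumulate (probabilistically) at a stationary point.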

  3. A Convergence Analysis of Unconstrained and Bound Constrained Evolutionary Pattern Search

    SciTech Connect

    Hart, W.E.

    1999-04-22

    The authors present and analyze a class of evolutionary algorithms for unconstrained and bound constrained optimization on R{sup n}: evolutionary pattern search algorithms (EPSAs). EPSAs adaptively modify the step size of the mutation operator in response to the success of previous optimization steps. The design of EPSAs is inspired by recent analyses of pattern search methods. They show that EPSAs can be cast as stochastic pattern search methods, and they use this observation to prove that EpSAs have a probabilistic weak stationary point convergence theory. This work provides the first convergence analysis for a class of evolutionary algorithms that guarantees convergence almost surely to a stationary point of a nonconvex objective function.

  4. Incidental learning speeds visual search by lowering response thresholds, not by improving efficiency: Evidence from eye movements

    PubMed Central

    Hout, Michael C.; Goldinger, Stephen D.

    2011-01-01

    When observers search for a target object, they incidentally learn the identities and locations of “background” objects in the same display. This learning can facilitate search performance, eliciting faster reaction times for repeated displays (Hout & Goldinger, 2010). Despite these findings, visual search has been successfully modeled using architectures that maintain no history of attentional deployments; they are amnesic (e.g., Guided Search Theory; Wolfe, 2007). In the current study, we asked two questions: 1) under what conditions does such incidental learning occur? And 2) what does viewing behavior reveal about the efficiency of attentional deployments over time? In two experiments, we tracked eye movements during repeated visual search, and we tested incidental memory for repeated non-target objects. Across conditions, the consistency of search sets and spatial layouts were manipulated to assess their respective contributions to learning. Using viewing behavior, we contrasted three potential accounts for faster searching with experience. The results indicate that learning does not result in faster object identification or greater search efficiency. Instead, familiar search arrays appear to allow faster resolution of search decisions, whether targets are present or absent. PMID:21574743

  5. Comparison of visualized turbine endwall secondary flows and measured heat transfer patterns

    NASA Technical Reports Server (NTRS)

    Gaugler, R. E.; Russell, L. M.

    1983-01-01

    Various flow visualization techniques were used to define the secondary flows near the endwall in a large heat transfer data. A comparison of the visualized flow patterns and the measured Stanton number distribution was made for cases where the inlet Reynolds number and exit Mach number were matched. Flows were visualized by using neutrally buoyant helium-filled soap bubbles, by using smoke from oil soaked cigars, and by a few techniques using permanent marker pen ink dots and synthetic wintergreen oil. Details of the horseshoe vortex and secondary flows can be directly compared with heat transfer distribution. Near the cascade entrance there is an obvious correlation between the two sets of data, but well into the passage the effect of secondary flow is not as obvious.

  6. Comparison of visualized turbine endwall secondary flows and measured heat transfer patterns

    NASA Astrophysics Data System (ADS)

    Gaugler, R. E.; Russell, L. M.

    1983-03-01

    Various flow visualization techniques were used to define the secondary flows near the endwall in a large heat transfer data. A comparison of the visualized flow patterns and the measured Stanton number distribution was made for cases where the inlet Reynolds number and exit Mach number were matched. Flows were visualized by using neutrally buoyant helium-filled soap bubbles, by using smoke from oil soaked cigars, and by a few techniques using permanent marker pen ink dots and synthetic wintergreen oil. Details of the horseshoe vortex and secondary flows can be directly compared with heat transfer distribution. Near the cascade entrance there is an obvious correlation between the two sets of data, but well into the passage the effect of secondary flow is not as obvious.

  7. Comparison of visualized turbine endwall secondary flows and measured heat transfer patterns

    NASA Technical Reports Server (NTRS)

    Gaugler, R. E.; Russell, L. M.

    1984-01-01

    Various flow visualization techniques were used to define the secondary flows near the endwall in a large heat transfer data. A comparison of the visualized flow patterns and the measured Stanton number distribution was made for cases where the inlet Reynolds number and exit Mach number were matched. Flows were visualized by using neutrally buoyant helium-filled soap bubbles, by using smoke from oil soaked cigars, and by a few techniques using permanent marker pen ink dots and synthetic wintergreen oil. Details of the horseshoe vortex and secondary flows can be directly compared with heat transfer distribution. Near the cascade entrance there is an obvious correlation between the two sets of data, but well into the passage the effect of secondary flow is not as obvious. Previously announced in STAR as N83-14435

  8. Autism spectrum disorder, but not amygdala lesions, impairs social attention in visual search.

    PubMed

    Wang, Shuo; Xu, Juan; Jiang, Ming; Zhao, Qi; Hurlemann, Rene; Adolphs, Ralph

    2014-10-01

    People with autism spectrum disorders (ASD) have pervasive impairments in social interactions, a diagnostic component that may have its roots in atypical social motivation and attention. One of the brain structures implicated in the social abnormalities seen in ASD is the amygdala. To further characterize the impairment of people with ASD in social attention, and to explore the possible role of the amygdala, we employed a series of visual search tasks with both social (faces and people with different postures, emotions, ages, and genders) and non-social stimuli (e.g., electronics, food, and utensils). We first conducted trial-wise analyses of fixation properties and elucidated visual search mechanisms. We found that an attentional mechanism of initial orientation could explain the detection advantage of non-social targets. We then zoomed into fixation-wise analyses. We defined target-relevant effects as the difference in the percentage of fixations that fell on target-congruent vs. target-incongruent items in the array. In Experiment 1, we tested 8 high-functioning adults with ASD, 3 adults with focal bilateral amygdala lesions, and 19 controls. Controls rapidly oriented to target-congruent items and showed a strong and sustained preference for fixating them. Strikingly, people with ASD oriented significantly less and more slowly to target-congruent items, an attentional deficit especially with social targets. By contrast, patients with amygdala lesions performed indistinguishably from controls. In Experiment 2, we recruited a different sample of 13 people with ASD and 8 healthy controls, and tested them on the same search arrays but with all array items equalized for low-level saliency. The results replicated those of Experiment 1. In Experiment 3, we recruited 13 people with ASD, 8 healthy controls, 3 amygdala lesion patients and another group of 11 controls and tested them on a simpler array. 
Here our group effect for ASD strongly diminished and all four subject groups showed similar target-relevant effects. These findings argue for an attentional deficit in ASD that is disproportionate for social stimuli, cannot be explained by low-level visual properties of the stimuli, and is more severe with high-load top-down task demands. Furthermore, this deficit appears to be independent of the amygdala, and not evident from general social bias independent of the target-directed search. PMID:25218953

  9. Autism spectrum disorder, but not amygdala lesions, impairs social attention in visual search

    PubMed Central

    Wang, Shuo; Xu, Juan; Jiang, Ming; Zhao, Qi; Hurlemann, Rene; Adolphs, Ralph

    2015-01-01

    People with autism spectrum disorders (ASD) have pervasive impairments in social interactions, a diagnostic component that may have its roots in atypical social motivation and attention. One of the brain structures implicated in the social abnormalities seen in ASD is the amygdala. To further characterize the impairment of people with ASD in social attention, and to explore the possible role of the amygdala, we employed a series of visual search tasks with both social (faces and people with different postures, emotions, ages, and genders) and non-social stimuli (e.g., electronics, food, and utensils). We first conducted trial-wise analyses of fixation properties and elucidated visual search mechanisms. We found that an attentional mechanism of initial orientation could explain the detection advantage of non-social targets. We then zoomed into fixation-wise analyses. We defined target-relevant effects as the difference in the percentage of fixations that fell on target-congruent vs. target-incongruent items in the array. In Experiment 1, we tested 8 high-functioning adults with ASD, 3 adults with focal bilateral amygdala lesions, and 19 controls. Controls rapidly oriented to target-congruent items and showed a strong and sustained preference for fixating them. Strikingly, people with ASD oriented significantly less and more slowly to target-congruent items, an attentional deficit especially with social targets. By contrast, patients with amygdala lesions performed indistinguishably from controls. In Experiment 2, we recruited a different sample of 13 people with ASD and 8 healthy controls, and tested them on the same search arrays but with all array items equalized for low-level saliency. The results replicated those of Experiment 1. In Experiment 3, we recruited 13 people with ASD, 8 healthy controls, 3 amygdala lesion patients and another group of 11 controls and tested them on a simpler array. 
Here our group effect for ASD strongly diminished and all four subject groups showed similar target-relevant effects. These findings argue for an attentional deficit in ASD that is disproportionate for social stimuli, cannot be explained by low-level visual properties of the stimuli, and is more severe with high-load top-down task demands. Furthermore, this deficit appears to be independent of the amygdala, and not evident from general social bias independent of the target-directed search. PMID:25218953

  10. Visual Scanning Patterns during the Dimensional Change Card Sorting Task in Children with Autism Spectrum Disorder

    PubMed Central

    Yi, Li; Liu, Yubing; Li, Yunyi; Fan, Yuebo; Huang, Dan; Gao, Dingguo

    2012-01-01

    Impaired cognitive flexibility in children with autism spectrum disorder (ASD) has been reported in previous literature. The present study explored ASD children's visual scanning patterns during the Dimensional Change Card Sorting (DCCS) task using an eye-tracking technique. ASD and typically developing (TD) children completed the standardized DCCS procedure on the computer while their eye movements were tracked. Behavioral results confirmed previous findings on ASD children's deficits in executive function. ASD children's visual scanning patterns also showed some specific underlying processes in the DCCS task compared to TD children. For example, ASD children looked for a shorter time at the correct card in the postswitch phase and spent longer at blank areas than TD children did. ASD children did not show a bias to the color dimension as TD children did. The correlations between the behavioral performance and eye movements were also discussed. PMID:23050145

  11. Visual Circuit Development Requires Patterned Activity Mediated by Retinal Acetylcholine Receptors

    PubMed Central

    Burbridge, Timothy J.; Xu, Hong-Ping; Ackman, James B.; Ge, Xinxin; Zhang, Yueyi; Ye, Mei-Jun; Zhou, Z. Jimmy; Xu, Jian; Contractor, Anis; Crair, Michael C.

    2014-01-01

    The elaboration of nascent synaptic connections into highly ordered neural circuits is an integral feature of the developing vertebrate nervous system. In sensory systems, patterned spontaneous activity before the onset of sensation is thought to influence this process, but this conclusion remains controversial largely due to the inherent difficulty recording neural activity in early development. Here, we describe novel genetic and pharmacological manipulations of spontaneous retinal activity, assayed in vivo, that demonstrate a causal link between retinal waves and visual circuit refinement. We also report a de-coupling of downstream activity in retinorecipient regions of the developing brain after retinal wave disruption. Significantly, we show that the spatiotemporal characteristics of retinal waves affect the development of specific visual circuits. These results conclusively establish retinal waves as necessary and instructive for circuit refinement in the developing nervous system and reveal how neural circuits adjust to altered patterns of activity prior to experience. PMID:25466916

  12. Multi-voxel patterns of visual category representation during episodic encoding are predictive of subsequent memory

    PubMed Central

    Kuhl, Brice A.; Rissman, Jesse; Wagner, Anthony D.

    2012-01-01

    Successful encoding of episodic memories is thought to depend on contributions from prefrontal and temporal lobe structures. Neural processes that contribute to successful encoding have been extensively explored through univariate analyses of neuroimaging data that compare mean activity levels elicited during the encoding of events that are subsequently remembered vs. those subsequently forgotten. Here, we applied pattern classification to fMRI data to assess the degree to which distributed patterns of activity within prefrontal and temporal lobe structures elicited during the encoding of word-image pairs were diagnostic of the visual category (Face or Scene) of the encoded image. We then assessed whether representation of category information was predictive of subsequent memory. Classification analyses indicated that temporal lobe structures contained information robustly diagnostic of visual category. Information in prefrontal cortex was less diagnostic of visual category, but was nonetheless associated with highly reliable classifier-based evidence for category representation. Critically, trials associated with greater classifier-based estimates of category representation in temporal and prefrontal regions were associated with a higher probability of subsequent remembering. Finally, consideration of trial-by-trial variance in classifier-based measures of category representation revealed positive correlations between prefrontal and temporal lobe representations, with the strength of these correlations varying as a function of the category of image being encoded. Together, these results indicate that multi-voxel representations of encoded information can provide unique insights into how visual experiences are transformed into episodic memories. PMID:21925190

  13. Giant honeybees (Apis dorsata) mob wasps away from the nest by directed visual patterns.

    PubMed

    Kastberger, Gerald; Weihmann, Frank; Zierler, Martina; Hötzl, Thomas

    2014-11-01

    The open nesting behaviour of giant honeybees (Apis dorsata) accounts for the evolution of a series of defence strategies to protect the colonies from predation. In particular, the concerted action of shimmering behaviour is known to effectively confuse and repel predators. In shimmering, bees on the nest surface flip their abdomens in a highly coordinated manner to generate Mexican wave-like patterns. The paper documents a further-going capacity of this kind of collective defence: the visual patterns of shimmering waves align regarding their directional characteristics with the projected flight manoeuvres of the wasps when preying in front of the bees' nest. The honeybees take here advantage of a threefold asymmetry intrinsic to the prey-predator interaction: (a) the visual patterns of shimmering turn faster than the wasps on their flight path, (b) they "follow" the wasps more persistently (up to 100 ms) than the wasps "follow" the shimmering patterns (up to 40 ms) and (c) the shimmering patterns align with the wasps' flight in all directions at the same strength, whereas the wasps have some preference for horizontal correspondence. The findings give evidence that shimmering honeybees utilize directional alignment to enforce their repelling power against preying wasps. This phenomenon can be identified as predator driving which is generally associated with mobbing behaviour (particularly known in selfish herds of vertebrate species), which is, until now, not reported in insects. PMID:25169944

  14. Giant honeybees ( Apis dorsata) mob wasps away from the nest by directed visual patterns

    NASA Astrophysics Data System (ADS)

    Kastberger, Gerald; Weihmann, Frank; Zierler, Martina; Hötzl, Thomas

    2014-11-01

    The open nesting behaviour of giant honeybees ( Apis dorsata) accounts for the evolution of a series of defence strategies to protect the colonies from predation. In particular, the concerted action of shimmering behaviour is known to effectively confuse and repel predators. In shimmering, bees on the nest surface flip their abdomens in a highly coordinated manner to generate Mexican wave-like patterns. The paper documents a further-going capacity of this kind of collective defence: the visual patterns of shimmering waves align regarding their directional characteristics with the projected flight manoeuvres of the wasps when preying in front of the bees' nest. The honeybees take here advantage of a threefold asymmetry intrinsic to the prey-predator interaction: (a) the visual patterns of shimmering turn faster than the wasps on their flight path, (b) they "follow" the wasps more persistently (up to 100 ms) than the wasps "follow" the shimmering patterns (up to 40 ms) and (c) the shimmering patterns align with the wasps' flight in all directions at the same strength, whereas the wasps have some preference for horizontal correspondence. The findings give evidence that shimmering honeybees utilize directional alignment to enforce their repelling power against preying wasps. This phenomenon can be identified as predator driving which is generally associated with mobbing behaviour (particularly known in selfish herds of vertebrate species), which is, until now, not reported in insects.

  15. Giant honeybees (Apis dorsata) mob wasps away from the nest by directed visual patterns

    NASA Astrophysics Data System (ADS)

    Kastberger, Gerald; Weihmann, Frank; Zierler, Martina; Hötzl, Thomas

    2014-08-01

    The open nesting behaviour of giant honeybees (Apis dorsata) accounts for the evolution of a series of defence strategies to protect the colonies from predation. In particular, the concerted action of shimmering behaviour is known to effectively confuse and repel predators. In shimmering, bees on the nest surface flip their abdomens in a highly coordinated manner to generate Mexican wave-like patterns. The paper documents a further-going capacity of this kind of collective defence: the visual patterns of shimmering waves align regarding their directional characteristics with the projected flight manoeuvres of the wasps when preying in front of the bees' nest. The honeybees take here advantage of a threefold asymmetry intrinsic to the prey-predator interaction: (a) the visual patterns of shimmering turn faster than the wasps on their flight path, (b) they "follow" the wasps more persistently (up to 100 ms) than the wasps "follow" the shimmering patterns (up to 40 ms) and (c) the shimmering patterns align with the wasps' flight in all directions at the same strength, whereas the wasps have some preference for horizontal correspondence. The findings give evidence that shimmering honeybees utilize directional alignment to enforce their repelling power against preying wasps. This phenomenon can be identified as predator driving which is generally associated with mobbing behaviour (particularly known in selfish herds of vertebrate species), which is, until now, not reported in insects.

  16. Multilinear model for spatial pattern analysis of the Measurement of Haze and Visual Effects project.

    PubMed

    Chueinta, Wanna; Hopke, Philip K; Paatero, Pentti

    2004-01-15

    A multilinear model was developed for the analysis of the spatial patterns and possible sources affecting haze and its visual effects in the southwestern United States. The data from the project Measurement of Haze and Visual Effects (MOHAVE), collected during the late winter and mid-summer of 1992 at monitoring sites in four states (i.e., California, Arizona, Nevada and Utah), were used in the study. The three-way data array was analyzed by a four-product-term model. This study makes a direct effort to include wind patterns as a component in the model in order to obtain information on the spatial patterns of source contributions. The solution is computed using the conjugate gradient algorithm with applied non-negativity constraints. For the winter data set, reasonable solutions contained six sources and six wind patterns. The analysis of summer data required seven sources and seven wind patterns. The ME results are compared to prior single-species empirical orthogonal function analysis results and to prior work describing the transport pathways. PMID:14750732

  17. iPixel: a visual content-based and semantic search engine for retrieving digitized mammograms by using collective intelligence.

    PubMed

    Alor-Hernández, Giner; Pérez-Gallardo, Yuliana; Posada-Gómez, Rubén; Cortes-Robles, Guillermo; Rodríguez-González, Alejandro; Aguilar-Laserre, Alberto A

    2012-09-01

    Nowadays, traditional search engines such as Google, Yahoo and Bing facilitate the retrieval of information in the form of images, but the results are not always useful for the users. This is mainly due to two problems: (1) semantic keywords are not taken into consideration and (2) it is not always possible to establish a query using the image features. This issue has been addressed in different domains in order to develop content-based image retrieval (CBIR) systems. The expert community has focussed its attention on the healthcare domain, where a wealth of visual information for medical analysis is available. This paper provides a solution called the iPixel Visual Search Engine, which involves semantics and content issues in order to search for digitized mammograms. iPixel offers the possibility of retrieving mammogram features using collective intelligence and implementing a CBIR algorithm. Our proposal compares not only features with similar semantic meaning, but also visual features. In this sense, the comparisons are made in different ways: by the number of regions per image, by the maximum and minimum size of regions per image and by the average intensity level of each region. The iPixel Visual Search Engine supports the medical community in differential diagnoses related to diseases of the breast. The iPixel Visual Search Engine has been validated by experts in the healthcare domain, such as radiologists, in addition to experts in digital image analysis. PMID:22656866

  18. Visual Search Strategies of Soccer Players Executing a Power vs. Placement Penalty Kick

    PubMed Central

    Timmis, Matthew A.; Turner, Kieran; van Paridon, Kjell N.

    2014-01-01

    Introduction: When taking a soccer penalty kick, there are two distinct kicking techniques that can be adopted: a ‘power’ penalty or a ‘placement’ penalty. The current study investigated how the type of penalty kick being taken affected the kicker’s visual search strategy and where the ball hit the goal (end ball location). Method: Wearing a portable eye tracker, 12 university footballers executed two power and two placement penalty kicks, indoors, both with and without the presence of a goalkeeper. Video cameras were used to determine initial ball velocity and end ball location. Results: When taking the power penalty, the football was kicked significantly harder and more centrally in the goal compared to the placement penalty. During the power penalty, players fixated longer on the football and more often at the goalkeeper (and by implication the middle of the goal), whereas in the placement penalty they fixated longer at the goal, specifically its edges. Findings remained consistent irrespective of goalkeeper presence. Discussion/conclusion: Findings indicate differences in visual search strategy and end ball location as a function of the type of penalty kick. When taking the placement penalty, players fixated on and kicked the football to the edges of the goal in an attempt to direct the ball to an area that the goalkeeper would have difficulty reaching and saving. Fixating significantly longer on the football when taking the power compared to the placement penalty indicates a greater importance of obtaining visual information from the football. This can be attributed to ensuring accurate foot-to-ball contact and the subsequent generation of ball velocity. Aligning gaze with and kicking the football centrally in the goal when executing the power compared to the placement penalty may have been a strategy to reduce the risk of kicking wide of the goal altogether. PMID:25517405

  19. Modeling peripheral visual acuity enables discovery of gaze strategies at multiple time scales during natural scene search

    PubMed Central

    Ramkumar, Pavan; Fernandes, Hugo; Kording, Konrad; Segraves, Mark

    2015-01-01

    Like humans, monkeys make saccades nearly three times a second. To understand the factors guiding this frequent decision, computational models of vision attempt to predict fixation locations using bottom-up visual features and top-down goals. How do the relative influences of these factors evolve over multiple time scales? Here we analyzed visual features at fixations using a retinal transform that provides realistic visual acuity by suitably degrading visual information in the periphery. In a task in which monkeys searched for a Gabor target in natural scenes, we characterized the relative importance of bottom-up and task-relevant influences by decoding fixated from nonfixated image patches based on visual features. At fast time scales, we found that search strategies can vary over the course of a single trial, with locations of higher saliency, target-similarity, edge–energy, and orientedness looked at later on in the trial. At slow time scales, we found that search strategies can be refined over several weeks of practice, and the influence of target orientation was significant only in the latter of two search tasks. Critically, these results were not observed without applying the retinal transform. Our results suggest that saccade-guidance strategies become apparent only when models take into account degraded visual representation in the periphery. PMID:25814545
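The retinal transform's core idea, progressively discarding detail with eccentricity, can be mimicked with a deliberately crude stand-in: a box blur whose radius grows with distance from fixation. The study's actual transform degrades peripheral spatial frequencies in an acuity-matched way; everything below, including the scale constant, is an illustrative assumption:

```python
import numpy as np

def retinal_transform(img, fix, scale=0.2):
    """Crude eccentricity-dependent degradation: average each pixel
    over a window whose radius grows with distance from the fixation
    point `fix`.  A cheap stand-in for an acuity-matched retinal
    transform; not the transform used in the study."""
    H, W = img.shape
    out = np.empty((H, W), dtype=float)
    fy, fx = fix
    for y in range(H):
        for x in range(W):
            r = int(scale * np.hypot(y - fy, x - fx))  # radius ~ eccentricity
            y0, y1 = max(0, y - r), min(H, y + r + 1)
            x0, x1 = max(0, x - r), min(W, x + r + 1)
            out[y, x] = img[y0:y1, x0:x1].mean()
    return out
```

At fixation the window is a single pixel (no degradation), while peripheral structure is averaged away, which is the property the decoding analyses above depend on.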

  1. Micro and regular saccades across the lifespan during a visual search of "Where's Waldo" puzzles.

    PubMed

    Port, Nicholas L; Trimberger, Jane; Hitzeman, Steve; Redick, Bryan; Beckerman, Stephen

    2016-01-01

    Despite the fact that different aspects of visual-motor control mature at different rates and aging is associated with declines in both sensory and motor function, little is known about the relationship between microsaccades and either development or aging. Using a sample of 343 individuals ranging in age from 4 to 66 and a task that has been shown to elicit a high frequency of microsaccades (solving Where's Waldo puzzles), we explored microsaccade frequency and kinematics (main sequence curves) as a function of age. Taking advantage of the large size of our dataset (183,893 saccades), we also address (a) the saccade amplitude limit at which video eye trackers are able to accurately measure microsaccades and (b) the degree and consistency of saccade kinematics at varying amplitudes and directions. Using a modification of the Engbert-Mergenthaler saccade detector, we found that even the smallest amplitude movements (0.25-0.5°) demonstrate basic saccade kinematics. With regard to development and aging, both microsaccade and regular saccade frequency exhibited a very small increase across the lifespan. Visual search ability, as with many other aspects of visual performance, exhibited a U-shaped function over the lifespan. Finally, both large horizontal and moderate vertical directional biases were detected for all saccade sizes. PMID:26049037
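The Engbert-Mergenthaler family of detectors referenced above flags (micro)saccades with an adaptive velocity threshold set at a multiple of a median-based velocity spread, so the threshold tracks each trace's noise level. A simplified sketch in that spirit (the study used a modified version; parameter values here are illustrative):

```python
import numpy as np

def detect_saccades(x, y, fs=500.0, lam=6.0, min_len=3):
    """Velocity-threshold (micro)saccade detection in the spirit of
    Engbert & Kliegl (2003): samples whose 2-D velocity exceeds an
    elliptical threshold of lam median-based SDs, for at least
    min_len consecutive samples, count as one saccade."""
    def vel(s):  # 5-point moving-window velocity estimate (deg/s)
        v = fs * (np.roll(s, -2) + np.roll(s, -1)
                  - np.roll(s, 1) - np.roll(s, 2)) / 6.0
        v[:2] = v[-2:] = 0.0  # edges undefined (wrap-around removed)
        return v
    vx, vy = vel(np.asarray(x, float)), vel(np.asarray(y, float))
    # median-based SD is robust to the saccades themselves
    sx = np.sqrt(max(np.median(vx**2) - np.median(vx)**2, 1e-12))
    sy = np.sqrt(max(np.median(vy**2) - np.median(vy)**2, 1e-12))
    hot = (vx / (lam * sx))**2 + (vy / (lam * sy))**2 > 1.0
    sacs, start = [], None
    for i, h in enumerate(hot):  # group runs of supra-threshold samples
        if h and start is None:
            start = i
        elif not h and start is not None:
            if i - start >= min_len:
                sacs.append((start, i - 1))
            start = None
    if start is not None and len(hot) - start >= min_len:
        sacs.append((start, len(hot) - 1))
    return sacs
```

Because the threshold scales with each trace's own noise, the same code applies across eye trackers and subjects, which is what makes this detector usable on a 343-subject dataset.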

  2. Are summary statistics enough? Evidence for the importance of shape in guiding visual search

    PubMed Central

    Alexander, Robert G.; Schmidt, Joseph; Zelinsky, Gregory J.

    2015-01-01

    Peripheral vision outside the focus of attention may rely on summary statistics. We used a gaze-contingent paradigm to directly test this assumption by asking whether search performance differed between targets and statistically-matched visualizations of the same targets. Four-object search displays included one statistically-matched object that was replaced by an unaltered version of the object during the first eye movement. Targets were designated by previews, which were never altered. Two types of statistically-matched objects were tested: one that maintained global shape and one that did not. Differences in guidance were found between targets and statistically-matched objects when shape was not preserved, suggesting that they were not informationally equivalent. Responses were also slower after target fixation when shape was not preserved, suggesting extrafoveal processing of the target that again used shape information. We conclude that summary statistics must include some global shape information to approximate the peripheral information used during search. PMID:26180505

  3. SVM-based visual-search model observers for PET tumor detection

    NASA Astrophysics Data System (ADS)

    Gifford, Howard C.; Sen, Anando; Azencott, Robert

    2015-03-01

    Many search-capable model observers follow task paradigms that specify clinically unrealistic prior knowledge about the anatomical backgrounds in study images. Visual-search (VS) observers, which implement distinct, feature-based candidate search and analysis stages, may provide a means of avoiding such paradigms. However, VS observers that conduct single-feature analysis have not been reliable in the absence of any background information. We investigated whether a VS observer based on multifeature analysis can overcome this background dependence. The testbed was a localization ROC (LROC) study with simulated whole-body PET images. Four target-dependent morphological features were defined in terms of 2D cross-correlations involving a known tumor profile and the test image. The feature values at the candidate locations in a set of training images were fed to a support-vector machine (SVM) to compute a linear discriminant that classified locations as tumor-present or tumor-absent. The LROC performance of this SVM-based VS observer was compared against the performances of human observers and a pair of existing model observers.
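The feature-then-discriminant pipeline this entry describes is easy to sketch: cross-correlation features at a candidate location are fed to a linear classifier. The code below uses a ridge-regularized least-squares discriminant as a stand-in for the study's SVM (scikit-learn's LinearSVC would be the obvious drop-in), and the template bank and data are invented for illustration:

```python
import numpy as np

def crosscorr_features(img, loc, templates):
    """Feature vector at one candidate location: cross-correlation of
    the local patch with each template in a bank.  (The study used
    four tumor-profile-based morphological features; these templates
    are stand-ins.)"""
    r, c = loc
    h = templates[0].shape[0] // 2
    patch = img[r - h:r + h + 1, c - h:c + h + 1]
    return np.array([float(np.sum(patch * t)) for t in templates])

def train_linear(F, y, ridge=1e-3):
    """Ridge-regularized least-squares linear discriminant, standing
    in for the study's SVM.  y is +1 (tumor-present) / -1 (absent)."""
    A = np.hstack([F, np.ones((len(F), 1))])  # append bias column
    return np.linalg.solve(A.T @ A + ridge * np.eye(A.shape[1]), A.T @ y)

def classify(F, w):
    """Positive score -> 'tumor-present' at that candidate location."""
    return np.hstack([F, np.ones((len(F), 1))]) @ w
```

In the LROC setting, the per-location scores from `classify` would then be reduced to a single rating and best location per image.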

  4. Self-Organization of Spatio-Temporal Hierarchy via Learning of Dynamic Visual Image Patterns on Action Sequences

    PubMed Central

    Jung, Minju; Hwang, Jungsik; Tani, Jun

    2015-01-01

    It is well known that the visual cortex efficiently processes high-dimensional spatial information by using a hierarchical structure. Recently, computational models that were inspired by the spatial hierarchy of the visual cortex have shown remarkable performance in image recognition. Up to now, however, most biological and computational modeling studies have mainly focused on the spatial domain and have not addressed temporal-domain processing in the visual cortex. Several studies on the visual cortex and other brain areas associated with motor control support the view that the brain also uses its hierarchical structure as a processing mechanism for temporal information. Based on the success of previous computational models using the spatial and temporal hierarchies observed in the brain, the current report introduces a novel neural network model for the recognition of dynamic visual image patterns based solely on the learning of exemplars. This model is characterized by the application of both spatial and temporal constraints on local neural activities, resulting in the self-organization of a spatio-temporal hierarchy necessary for the recognition of complex dynamic visual image patterns. The evaluation with the Weizmann dataset in recognition of a set of prototypical human movement patterns showed that the proposed model is significantly robust in recognizing dynamically occluded visual patterns compared to other baseline models. Furthermore, an evaluation test for the recognition of concatenated sequences of those prototypical movement patterns indicated that the model is endowed with a remarkable capability for the contextual recognition of long-range dynamic visual image patterns. PMID:26147887

  5. Parietal blood oxygenation level-dependent response evoked by covert visual search reflects set-size effect in monkeys.

    PubMed

    Atabaki, A; Marciniak, K; Dicke, P W; Karnath, H-O; Thier, P

    2014-03-01

    Distinguishing a target from distractors during visual search is crucial for goal-directed behaviour. The more distractors that are presented with the target, the larger is the subject's error rate. This observation defines the set-size effect in visual search. Neurons in areas related to attention and eye movements, like the lateral intraparietal area (LIP) and frontal eye field (FEF), diminish their firing rates when the number of distractors increases, in line with the behavioural set-size effect. Furthermore, human imaging studies that have tried to delineate cortical areas modulating their blood oxygenation level-dependent (BOLD) response with set size have yielded contradictory results. In order to test whether BOLD imaging of the rhesus monkey cortex yields results consistent with the electrophysiological findings and, moreover, to clarify whether additional cortical regions beyond the two hitherto implicated are involved in this process, we studied monkeys while they performed a covert visual search task. When varying the number of distractors in the search task, we observed a monotonic increase in error rates when search time was kept constant, as expected if the monkeys resorted to a serial search strategy. Visual search consistently evoked robust BOLD activity in the monkey FEF and a region in the lateral and middle intraparietal sulcus, probably involving area LIP. Whereas the BOLD response in the FEF did not depend on set size, the LIP signal increased in parallel with set size. These results demonstrate the virtue of BOLD imaging in monkeys when trying to delineate cortical areas underlying a cognitive process like visual search. However, they also demonstrate the caution needed when inferring neural activity from BOLD activity. PMID:24279771

  6. Case study of visualizing global user download patterns using Google Earth and NASA World Wind

    SciTech Connect

    Zong, Ziliang; Job, Joshua; Zhang, Xuesong; Nijim, Mais; Qin, Xiao

    2012-10-09

    Geo-visualization is significantly changing the way we view spatial data and discover information. On the one hand, large volumes of spatial data are generated every day. On the other hand, these data are not well utilized due to the lack of free and easy-to-use data-visualization tools. This becomes even worse when most of the spatial data remain in the form of plain text, such as log files. This paper describes a way of visualizing massive plain-text spatial data at no cost by utilizing Google Earth and NASA World Wind. We illustrate our methods by visualizing over 170,000 global download requests for satellite images maintained by the Earth Resources Observation and Science (EROS) Center of the U.S. Geological Survey (USGS). Our visualization results identify the most popular satellite images around the world and reveal global user download patterns. The benefits of this research are: 1. assisting in improving the satellite-image downloading services provided by USGS, and 2. providing a proxy for analyzing hot-spot areas of research. Most importantly, our methods demonstrate an easy way to geo-visualize massive textual spatial data, which is highly applicable to mining spatially referenced data and information in a wide variety of research domains (e.g., hydrology, agriculture, atmospheric science, natural hazards, and global climate change).
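Getting plain-text log records into Google Earth needs nothing more than a small KML writer. The record layout below is an assumption for illustration (the EROS logs in the study have their own format), but the KML structure is the standard one both Google Earth and World Wind's KML support can display:

```python
import xml.sax.saxutils as su

def log_to_kml(rows):
    """Turn (name, lat, lon, download_count) records parsed from a
    plain-text log into a minimal KML document.  The field layout is
    hypothetical; only the KML skeleton is standard."""
    placemarks = []
    for name, lat, lon, count in rows:
        placemarks.append(
            "<Placemark><name>{}</name>"
            "<description>{} downloads</description>"
            # KML coordinates are longitude,latitude,altitude
            "<Point><coordinates>{},{},0</coordinates></Point>"
            "</Placemark>".format(su.escape(name), count, lon, lat))
    return ('<?xml version="1.0" encoding="UTF-8"?>'
            '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>'
            + "".join(placemarks) + "</Document></kml>")
```

Writing the returned string to a `.kml` file and opening it in Google Earth renders one pin per record; styling (icon scale by download count, say) is a small extension of the same pattern.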

  7. Visual recognition based on temporal cortex cells: viewer-centred processing of pattern configuration.

    PubMed

    Perrett, D I; Oram, M W

    1998-01-01

    A model of recognition is described based on cell properties in the ventral cortical stream of visual processing in the primate brain. At a critical intermediate stage in this system, 'Elaborate' feature sensitive cells respond selectively to visual features in a way that depends on size (+/- 1 octave) and orientation (+/- 45 degrees) but does not depend on position within central vision (+/- 5 degrees). These features are simple conjunctions of 2-D elements (e.g. a horizontal dark area above a dark smoothly convex area). They can arise either as elements of an object's surface pattern or as a 3-D component bounded by an object's external contour. By requiring a combination of several such features without regard to their position within the central region of the visual image, 'Pattern' sensitive cells at higher levels can exhibit selectivity for complex configurations that typify objects seen under particular viewing conditions. Given that input features to such Pattern sensitive cells are specified in approximate size and orientation, initial cellular 'representations' of the visual appearance of object type (or object example) are also selective for orientation and size. At this level, sensitivity to object view (+/- 60 degrees) arises because visual features disappear as objects are rotated in perspective. Processing is thus viewer-centred and the neurones only respond to objects seen from particular viewing conditions or 'object instances'. Combined sensitivity to multiple features (conjunctions of elements) independent of their position establishes selectivity for the configurations of object parts (from one view) because rearranged configurations of the same parts yield images lacking some of the 2-D visual features present in the normal configuration. Different neural populations appear to be selectively tuned to particular components of the same biological object (e.g. face, eyes, hands, legs), perhaps because the independent articulation of these components gives rise to correlated activity in different sets of input visual features. Generalisation over viewing conditions for a given object can be established by hierarchically pooling outputs of view-condition specific cells, with pooling operations dependent on the continuity in experience across viewing conditions. Different object parts are seen together and different views are seen in succession when the observer walks around the object. The view specific coding that characterises the selectivity of cells in the temporal lobe can be seen as a natural consequence of selective experience of objects from particular vantage points. View specific coding for the face and body also has great utility in understanding complex social signals, a property that may not be feasible with object-centred processing. PMID:9755511

  8. Responses of neurones in the cat's visual cerebral cortex to relative movement of patterns

    PubMed Central

    Burns, B. Delisle; Gassanov, U.; Webb, A. C.

    1972-01-01

    1. We have investigated the responses of single neurones in the visual cerebral cortex of the unanaesthetized, isolated cat's forebrain to excitation of one retina with patterned light. The responses of twenty-six cells to the relative movement of two patterns in the visual field have been recorded. 2. We used several forms of relative movement for stimulation, but all of them involved a change in the separation of two parallel and straight light-dark edges. 3. Responses to this form of stimulation were compared with the responses of the same cells to simple movement, that is, movement of the same patterns without change of distance between their borders. 4. All cells showed a response to relative movement that differed from their response to simple movement. 5. The time-locked phasic response differed in 54% of the cells tested. Of cells responding in this way, 83% of tests produced an increased phasic response. 6. Relative movement brought about changes in the mean frequency of discharge in 96% of the cells tested. 82% of these cells responded with an increased rate of firing. 7. Movement relative to a coarse background pattern affected more neurones and produced a greater change in their behaviour than did movement relative to a fine-grained pattern. 8. The neurones tested represented the central part of the visual field (0-10°); while all were affected by relative movement, those representing points furthest from the optic axis appeared to be most susceptible (we found no correlation between size of receptive field and distance from the optic axis). PMID:5083167

  9. Flexible Feature-Based Inhibition in Visual Search Mediates Magnified Impairments of Selection: Evidence from Carry-Over Effects under Dynamic Preview-Search Conditions

    ERIC Educational Resources Information Center

    Andrews, Lucy S.; Watson, Derrick G.; Humphreys, Glyn W.; Braithwaite, Jason J.

    2011-01-01

    Evidence for inhibitory processes in visual search comes from studies using preview conditions, where responses to new targets are delayed if they carry a featural attribute belonging to the old distractor items that are currently being ignored--the negative carry-over effect (Braithwaite, Humphreys, & Hodsoll, 2003). We examined whether…

  11. Urinary oxytocin positively correlates with performance in facial visual search in unmarried males, without specific reaction to infant face

    PubMed Central

    Saito, Atsuko; Hamada, Hiroki; Kikusui, Takefumi; Mogi, Kazutaka; Nagasawa, Miho; Mitsui, Shohei; Higuchi, Takashi; Hasegawa, Toshikazu; Hiraki, Kazuo

    2014-01-01

    The neuropeptide oxytocin plays a central role in prosocial and parental behavior in non-human mammals as well as humans. It has been suggested that oxytocin may affect visual processing of infant faces and emotional reaction to infants. Healthy male volunteers (N = 13) were tested for their ability to detect infant or adult faces among adult or infant faces (facial visual search task). Urine samples were collected from all participants before the study to measure the concentration of oxytocin. Urinary oxytocin positively correlated with performance in the facial visual search task. However, task performance and its correlation with oxytocin concentration did not differ between infant faces and adult faces. Our data suggest that endogenous oxytocin is related to facial visual cognition, but does not promote infant-specific responses in unmarried men who are not fathers. PMID:25120420

  12. Prognostic significance of the pattern visual evoked potential in ocular hypertension.

    PubMed Central

    Bray, L C; Mitchell, K W; Howe, J W

    1991-01-01

    This paper reports a prospective study on 49 ocular hypertensive patients to evaluate the prognostic significance of transient abnormalities in the pattern visual evoked potential (VEP) in the development of glaucoma. Seven of 24 patients with VEP abnormalities at diagnosis of ocular hypertension developed glaucomatous field defects in the follow-up period as compared with none of 25 patients with normal VEPs at diagnosis. We conclude that appropriately designed pattern VEP testing is a valuable complement to careful (preferably computerised, static) perimetry. In addition, our findings support the contention that, in glaucomatous disease of the optic nerve, rudimentary pattern processing mechanisms--that is 'Y'-type units of the magnocellular pathways--may be affected earlier than luminance processing mechanisms. PMID:1995048

  13. Age and distraction are determinants of performance on a novel visual search task in aged Beagle dogs.

    PubMed

    Snigdha, Shikha; Christie, Lori-Ann; De Rivera, Christina; Araujo, Joseph A; Milgram, Norton W; Cotman, Carl W

    2012-02-01

    Aging has been shown to disrupt performance on tasks that require intact visual search and discrimination abilities in human studies. The goal of the present study was to determine if canines show age-related decline in their ability to perform a novel simultaneous visual search task. Three groups of canines were included: a young group (N = 10; 3 to 4.5 years), an old group (N = 10; 8 to 9.5 years), and a senior group (N = 8; 11 to 15.3 years). Subjects were first tested for their ability to learn a simple two-choice discrimination task, followed by the visual search task. Attentional demands in the task were manipulated by varying the number of distracter items; dogs received an equal number of trials with either zero, one, two, or three distracters. Performance on the two-choice discrimination task varied with age, with senior canines making significantly more errors than the young. Performance accuracy on the visual search task also varied with age; senior animals were significantly impaired compared to both the young and old, and old canines were intermediate in performance between young and senior. Accuracy decreased significantly with added distracters in all age groups. These results suggest that aging impairs the ability of canines to discriminate between task-relevant and -irrelevant stimuli. This is likely to be derived from impairments in cognitive domains such as visual memory and learning and selective attention. PMID:21336566

  14. Neural structures involved in visual search guidance by reward-enhanced contextual cueing of the target location.

    PubMed

    Pollmann, Stefan; Eštočinová, Jana; Sommer, Susanne; Chelazzi, Leonardo; Zinke, Wolf

    2016-01-01

    Spatial contextual cueing reflects an incidental form of learning that occurs when spatial distractor configurations are repeated in visual search displays. Recently, it was reported that the efficiency of contextual cueing can be modulated by reward. We replicated this behavioral finding and investigated its neural basis with fMRI. Reward value was associated with repeated displays in a learning session. The effect of reward value on context-guided visual search was assessed in a subsequent fMRI session without reward. Structures known to support explicit reward valuation, such as ventral frontomedial cortex and posterior cingulate cortex, were modulated by incidental reward learning. Contextual cueing, leading to more efficient search, went along with decreased activation in the visual search network. Retrosplenial cortex played a special role in that it showed both a main effect of reward and a reward×configuration interaction and may thereby be a central structure for the reward modulation of context-guided visual search. PMID:26427645

  15. iRaster: a novel information visualization tool to explore spatiotemporal patterns in multiple spike trains.

    PubMed

    Somerville, J; Stuart, L; Sernagor, E; Borisyuk, R

    2010-12-15

    Over the last few years, simultaneous recordings of multiple spike trains have become widely used by neuroscientists. Therefore, it is important to develop new tools for analysing multiple spike trains in order to gain new insight into the function of neural systems. This paper describes how techniques from the field of visual analytics can be used to reveal specific patterns of neural activity. An interactive raster plot called iRaster has been developed. This software incorporates a selection of statistical procedures for visualizing and flexibly manipulating multiple spike trains. For example, there are several procedures for the re-ordering of spike trains which can be used to unmask activity propagation, spiking synchronization, and many other important features of multiple spike train activity. Additionally, iRaster includes a rate representation of neural activity, a combined representation of rate and spikes, spike train removal and time interval removal. Furthermore, it provides multiple coordinated views, time and spike train zooming windows, a fisheye lens distortion, and dissemination facilities. iRaster is a user-friendly, interactive, flexible tool which supports a broad range of visual representations. This tool has been successfully used to analyse both synthetic and experimentally recorded datasets. In this paper, the main features of iRaster are described and its performance and effectiveness are demonstrated using various types of data including experimental multi-electrode array recordings from the ganglion cell layer in mouse retina. iRaster is part of an ongoing research project called VISA (Visualization of Inter-Spike Associations) at the Visualization Lab in the University of Plymouth. The overall aim of the VISA project is to provide neuroscientists with the ability to freely explore and analyse their data. The software is freely available from the Visualization Lab website (see www.plymouth.ac.uk/infovis). PMID:20875457
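Two of the manipulations listed, spike-train re-ordering and the rate representation, are simple enough to sketch in a few lines. This is purely illustrative of the ideas, not of iRaster's implementation:

```python
def reorder_by_latency(trains):
    """Sort spike trains (lists of spike times in seconds) by first
    spike time: one re-ordering such tools use to unmask activity
    propagation, since a wave sweeping across the population then
    appears as a diagonal in the raster plot.  Silent trains go last."""
    return sorted(trains, key=lambda t: t[0] if t else float("inf"))

def firing_rate(train, t_end, bin_s=0.1):
    """Binned rate representation (spikes/s) of one spike train."""
    counts = [0] * int(round(t_end / bin_s))
    for t in train:
        if 0 <= t < t_end:
            counts[int(t / bin_s)] += 1
    return [c / bin_s for c in counts]
```

Other re-orderings mentioned in the abstract (e.g. by pairwise synchrony) follow the same pattern with a different sort key.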

  16. NABIC: A New Access Portal to Search, Visualize, and Share Agricultural Genomics Data.

    PubMed

    Seol, Young-Joo; Lee, Tae-Ho; Park, Dong-Suk; Kim, Chang-Kug

    2016-01-01

    The National Agricultural Biotechnology Information Center developed an access portal to search, visualize, and share agricultural genomics data with a focus on South Korean information and resources. The portal features an agricultural biotechnology database containing a wide range of omics data from public and proprietary sources. We collected 28.4 TB of data from 162 agricultural organisms, with 10 types of omics data comprising next-generation sequencing sequence read archive, genome, gene, nucleotide, DNA chip, expressed sequence tag, interactome, protein structure, molecular marker, and single-nucleotide polymorphism datasets. Our genomic resources contain information on five animals, seven plants, and one fungus, which can be accessed through a genome browser. We also developed a data submission and analysis system as a web service, with easy-to-use functions and cutting-edge algorithms, including those for handling next-generation sequencing data. PMID:26848255

  17. HSI-Find: A Visualization and Search Service for Terascale Spectral Image Catalogs

    NASA Astrophysics Data System (ADS)

    Thompson, D. R.; Smith, A. T.; Castano, R.; Palmer, E. E.; Xing, Z.

    2013-12-01

    Imaging spectrometers are remote sensing instruments commonly deployed on aircraft and spacecraft. They provide surface reflectance in hundreds of wavelength channels, creating data cubes known as hyperspecrtral images. They provide rich compositional information making them powerful tools for planetary and terrestrial science. These data products can be challenging to interpret because they contain datapoints numbering in the thousands (Dawn VIR) or millions (AVIRIS-C). Cross-image studies or exploratory searches involving more than one scene are rare; data volumes are often tens of GB per image and typical consumer-grade computers cannot store more than a handful of images in RAM. Visualizing the information in a single scene is challenging since the human eye can only distinguish three color channels out of the hundreds available. To date, analysis has been performed mostly on single images using purpose-built software tools that require extensive training and commercial licenses. The HSIFind software suite provides a scalable distributed solution to the problem of visualizing and searching large catalogs of spectral image data. It consists of a RESTful web service that communicates to a javascript-based browser client. The software provides basic visualization through an intuitive visual interface, allowing users with minimal training to explore the images or view selected spectra. Users can accumulate a library of spectra from one or more images and use these to search for similar materials. The result appears as an intensity map showing the extent of a spectral feature in a scene. Continuum removal can isolate diagnostic absorption features. The server-side mapping algorithm uses an efficient matched filter algorithm that can process a megapixel image cube in just a few seconds. This enables real-time interaction, leading to a new way of interacting with the data: the user can launch a search with a single mouse click and see the resulting map in seconds. 
This allows the user to quickly explore each image, ascertain the main units of surface material, localize outliers, and develop an understanding of the various materials' spectral characteristics. The HSIFind software suite is currently in beta testing at the Planetary Science Institute, and a process is underway to release it under an open source license to the broader community. We believe it will benefit instrument operations during remote planetary exploration, where tactical mission decisions demand rapid analysis of each new dataset. The approach also holds potential for public spectral catalogs, where the software's shallow learning curve and portability could make these datasets accessible to a much wider range of researchers. Acknowledgements: The HSIFind project acknowledges the NASA Advanced MultiMission Operating System (AMMOS) and the Multimission Ground Support Services (MGSS). E. Palmer is with the Planetary Science Institute, Tucson, AZ. Other authors are with the Jet Propulsion Laboratory, Pasadena, CA. This work was carried out at the Jet Propulsion Laboratory, California Institute of Technology under a contract with the National Aeronautics and Space Administration. Copyright 2013, California Institute of Technology.
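    The server-side search described above relies on a classical hyperspectral matched filter: the background is modeled by its mean spectrum and covariance, and each pixel is scored by its projection onto the covariance-whitened target direction. The following is a minimal NumPy sketch under those assumptions, not HSIFind's actual implementation; the function name, regularization constant, and normalization are illustrative choices.

    ```python
    import numpy as np

    def matched_filter(cube, target, eps=1e-6):
        """Score every pixel of a hyperspectral cube (H, W, B) against a
        target spectrum (B,) using a classical matched filter:
        w ∝ Σ⁻¹ (t − μ), scaled so a pixel equal to the target scores 1.
        """
        H, W, B = cube.shape
        X = cube.reshape(-1, B).astype(np.float64)
        mu = X.mean(axis=0)                    # background mean spectrum
        Xc = X - mu
        cov = (Xc.T @ Xc) / (X.shape[0] - 1)   # background covariance
        cov += eps * np.eye(B)                 # regularize for invertibility
        d = target - mu
        w = np.linalg.solve(cov, d)            # filter direction Σ⁻¹ d
        w /= d @ w                             # normalize: target pixel -> 1
        return (Xc @ w).reshape(H, W)
    ```

    The cost is dominated by two matrix products and one B-by-B solve, which is consistent with the claim that a megapixel cube can be scored in seconds and that each mouse click can return a fresh intensity map interactively.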

  18. NABIC: A New Access Portal to Search, Visualize, and Share Agricultural Genomics Data

    PubMed Central

    Seol, Young-Joo; Lee, Tae-Ho; Park, Dong-Suk; Kim, Chang-Kug

    2016-01-01

    The National Agricultural Biotechnology Information Center developed an access portal to search, visualize, and share agricultural genomics data, with a focus on South Korean information and resources. The portal features an agricultural biotechnology database containing a wide range of omics data from public and proprietary sources. We collected 28.4 TB of data from 162 agricultural organisms, with 10 types of omics data comprising next-generation sequencing sequence read archive, genome, gene, nucleotide, DNA chip, expressed sequence tag, interactome, protein structure, molecular marker, and single-nucleotide polymorphism datasets. Our genomic resources contain information on five animals, seven plants, and one fungus, which can be accessed through a genome browser. We also developed a data submission and analysis system as a web service, with easy-to-use functions and cutting-edge algorithms, including those for handling next-generation sequencing data. PMID:26848255

  19. Colour and pattern change against visually heterogeneous backgrounds in the tree frog Hyla japonica.

    PubMed

    Kang, Changku; Kim, Ye Eun; Jang, Yikweon

    2016-01-01

    Colour change in animals can be adaptive phenotypic plasticity in heterogeneous environments. Camouflage through background colour matching has been considered a primary force that drives the evolution of colour changing ability. However, the mechanism by which animals change their colour and patterns against visually heterogeneous backgrounds (i.e. consisting of more than one colour) has only been identified in limited taxa. Here, we investigated the colour change process of the Japanese tree frog (Hyla japonica) against patterned backgrounds and elucidated how the expression of dorsal patterns changes against various achromatic/chromatic backgrounds with/without patterns. Our main findings are i) frogs primarily responded to the achromatic differences in background, ii) their contrasting dorsal patterns were conditionally expressed depending on the brightness of backgrounds, iii) against mixed-colour backgrounds, frogs adopted forms intermediate between the two colours. Using predator (avian and snake) vision models, we determined that colour differences against different backgrounds yielded perceptible changes in dorsal colours. We also found substantial individual variation in colour changing ability and in the level of dorsal pattern expression. We discuss the possibility of correlational selection on colour changing ability and resting behaviour that maintains the high variation in colour changing ability within populations. PMID:26932675
