These are representative sample records from Science.gov related to your search topic.
For comprehensive and current results, perform a real-time search at Science.gov.
1

Effect of mammographic breast density on radiologists' visual search pattern  

NASA Astrophysics Data System (ADS)

This study investigates the impact of breast density on visual search patterns. A set of 74 one-view, malignancy-containing mammographic images was examined by 7 radiologists. Eye position was recorded, and visual search parameters such as total time examining a case, time to hit the lesion, dwell time, and number of hits per area were collected. Fixations were calculated in 3 areas of interest: background breast parenchyma, dense areas of parenchyma, and the lesion. Significant increases in dwell time and number of hits in dense areas of parenchyma were noted for high- compared to low-mammographic-density images when the lesion overlaid the fibroglandular tissue (p<0.01). When the lesion was outside the fibroglandular tissue, significant increases in dwell time and number of hits in dense areas of parenchyma were also observed for high- compared to low-mammographic-density images (p<0.01). No significant differences were found in total time examining a case, time to first fixate the lesion, or dwell time and number of hits in the background breast parenchyma and lesion areas. In conclusion, our data suggest that dense areas of breast parenchyma attract radiologists' visual attention. Lesions overlying the fibroglandular tissue were detected faster; therefore lesion location, whether overlying or outside the fibroglandular tissue, appeared to have an impact on radiologists' visual search patterns.

Al Mousa, Dana S.; Brennan, Patrick C.; Ryan, Elaine A.; Lee, Warwick B.; Pietrzyk, Mariusz W.; Reed, Warren M.; Alakhras, Maram M.; Li, Yanpeng; Mello-Thoms, Claudia

2014-03-01

2

Understanding visual search patterns of dermatologists assessing pigmented skin lesions before and after online training.  

PubMed

The goal of this investigation was to explore the feasibility of characterizing the visual search behavior of dermatologists evaluating images of single pigmented skin lesions (PSLs) (close-ups and dermoscopy) as an avenue to improve training programs for dermoscopy. Two board-certified dermatologists and two dermatology residents participated in a phased study. In phase I, they viewed a series of 20 PSL cases ranging from benign nevi to melanoma. The close-up and dermoscopy images of each PSL were evaluated sequentially and rated individually as benign or malignant while eye position was recorded. Subsequently, the participating subjects completed an online dermoscopy training module that included a pre- and post-test assessing their dermoscopy skills (phase II). Three months later, the subjects repeated their assessment of the 20 PSLs presented during phase I of the study. Significant differences in viewing time and eye-position parameters were observed as a function of level of expertise. Overall, dermatologists searched more efficiently than residents, generating fewer fixations with shorter dwells. Fixations and dwells associated with decisions that changed from benign to malignant, or vice versa, between photographic and dermoscopic viewing were longer than those for any other decision, indicating increased visual processing for those decisions. These differences in visual search may have implications for developing tools to teach dermatologists and residents how to better utilize dermoscopy in clinical practice. PMID:24939005

Krupinski, Elizabeth A; Chao, Joseph; Hofmann-Wellenhof, Rainer; Morrison, Lynne; Curiel-Lewandrowski, Clara

2014-12-01

3

Foveated visual search for corners.  

PubMed

We cast the problem of corner detection as a corner search process. We develop principles of foveated visual search and automated fixation selection to accomplish the corner search, supplying a case study of both foveated search and foveated feature detection. The result is a new algorithm for finding corners, which is also a corner-based algorithm for aiming computed foveated visual fixations. In the algorithm, long saccades move the fovea to previously unexplored areas of the image, while short saccades improve the accuracy of putative corner locations. The system is tested on two natural scenes. As a comparison study, we compare fixations generated by the algorithm with those of human subjects viewing the same images while their eye movements were recorded by an eye tracker. The comparison of fixation patterns is made using an information-theoretic measure. Results show that the algorithm is a good locator of corners but does not correlate particularly well with human visual fixations. PMID:17357739

Arnow, Thomas L; Bovik, Alan Conrad

2007-03-01
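
A rough sketch of the long-saccade/short-saccade fixation loop described in record 3 follows. It is a minimal illustration of the idea only, not the authors' algorithm: the Harris corner measure, the Gaussian acuity falloff, and every threshold below are assumptions introduced here.

    # Hypothetical foveated corner-search loop (illustrating record 3).
    # Corner measure, foveation falloff, and saccade rules are simplified
    # placeholders, not the implementation of Arnow & Bovik.
    import numpy as np

    def harris_response(img, k=0.04):
        # Plain Harris corner measure from image gradients.
        gy, gx = np.gradient(img.astype(float))
        ixx, iyy, ixy = gx * gx, gy * gy, gx * gy
        return ixx * iyy - ixy ** 2 - k * (ixx + iyy) ** 2

    def foveated_corner_search(img, n_fixations=20, fovea_sigma=30.0):
        h, w = img.shape
        response = harris_response(img)
        visited = np.zeros_like(response, dtype=bool)
        yy, xx = np.mgrid[0:h, 0:w]
        fix = (h // 2, w // 2)                     # start at the image centre
        fixations = [fix]
        for _ in range(n_fixations):
            # Acuity falls off with eccentricity from the current fixation.
            dist2 = (yy - fix[0]) ** 2 + (xx - fix[1]) ** 2
            foveated = response * np.exp(-dist2 / (2 * fovea_sigma ** 2))
            if foveated.max() > 0.1 * response.max():
                # Short saccade: refine the strongest nearby corner candidate.
                fix = np.unravel_index(np.argmax(foveated), foveated.shape)
            else:
                # Long saccade: jump to the strongest unexplored response.
                remaining = np.where(visited, -np.inf, response)
                fix = np.unravel_index(np.argmax(remaining), remaining.shape)
            visited[max(fix[0] - 10, 0):fix[0] + 10,
                    max(fix[1] - 10, 0):fix[1] + 10] = True
            fixations.append((int(fix[0]), int(fix[1])))
        return fixations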

4

Visual Search Jeremy M. Wolfe  

E-print Network

Visual Search. Jeremy M. Wolfe. Abstract: This chapter considers the range of visual search tasks, from those involving very briefly presented stimuli to those involving search processes that extend ... and red, both features can guide attention. Some of the rules of the human visual search engine ...

5

Foveated Visual Search for Corners  

Microsoft Academic Search

We cast the problem of corner detection as a corner search process. We develop principles of foveated visual search and automated fixation selection to accomplish the corner search, supplying a case study of both foveated search and foveated feature detection. The result is a new algorithm for finding corners which is also a corner-based algorithm for aiming computed foveated visual

Thomas L. Arnow; Alan Conrad Bovik

2007-01-01

6

Introspection during visual search.  

PubMed

Recent advances in the field of metacognition have shown that human participants are introspectively aware of many different cognitive states, such as confidence in a decision. Here we set out to expand the range of experimental introspection by asking whether participants could access, through pure mental monitoring, the nature of the cognitive processes that underlie two visual search tasks: an effortless "pop-out" search, and a difficult, effortful, conjunction search. To this aim, in addition to traditional first order performance measures, we instructed participants to give, on a trial-by-trial basis, an estimate of the number of items scanned before a decision was reached. By controlling response times and eye movements, we assessed the contribution of self-observation of behavior in these subjective estimates. Results showed that introspection is a flexible mechanism and that pure mental monitoring of cognitive processes is possible in elementary tasks. PMID:25286130

Reyes, Gabriel; Sackur, Jérôme

2014-10-01

7

Visual Pattern Discrimination  

Microsoft Academic Search

Visual discrimination experiments were conducted using unfamiliar displays generated by a digital computer. The displays contained two side-by-side fields with different statistical, topological or heuristic properties. Discrimination was defined as that spontaneous visual process which gives the immediate impression of two distinct fields. The condition for such discrimination was found to be based primarily on clusters or lines formed by

B. Julesz

1962-01-01

8

Visual search in patients with left visual hemineglect  

Microsoft Academic Search

In patients with hemi-spatial neglect eye movement patterns during visual search reflect not only inattention for the contralesional hemi-field, but interacting deficits of multiple visuo-spatial and cognitive functions, even in the ipsilesional hemi-field. Evidence for these deficits is presented from the literature and from saccadic scan-path analysis during feature and conjunction search in 10 healthy subjects and in 10 patients

A. Sprenger; D. Kömpf; W. Heide

2002-01-01

9

Visual Search Remains Efficient when Visual Working Memory is Full  

Microsoft Academic Search

Many theories of attention have proposed that visual working memory plays an important role in visual search tasks. The present study examined the involvement of visual working memory in search using a dual-task paradigm in which participants performed a visual search task while maintaining no, two, or four objects in visual working memory. The presence of a working memory load

Geoffrey F. Woodman; Edward K. Vogel; Steven J. Luck

2001-01-01

10

Parallel Processing in Visual Search Asymmetry  

ERIC Educational Resources Information Center

The difficulty of visual search may depend on the assignment of the same visual elements as targets and distractors, a phenomenon known as search asymmetry. Easy C-in-O searches and difficult O-in-C searches are often associated with parallel and serial search, respectively. Here, the time course of visual search was measured for both tasks with speed-accuracy methods. The…

Dosher, Barbara Anne; Han, Songmei; Lu, Zhong-Lin

2004-01-01

11

Learning in repeated visual search  

PubMed Central

Visual search (e.g., finding a specific object in an array of other objects) is performed most effectively when people are able to ignore distracting nontargets. In repeated search, however, incidental learning of object identities may facilitate performance. In three experiments, with over 1,100 participants, we examined the extent to which search could be facilitated by object memory and by memory for spatial layouts. Participants searched for new targets (real-world, nameable objects) embedded among repeated distractors. To make the task more challenging, some participants performed search for multiple targets, increasing demands on visual working memory (WM). Following search, memory for search distractors was assessed using a surprise two-alternative forced choice recognition memory test with semantically matched foils. Search performance was facilitated by distractor object learning and by spatial memory; it was most robust when object identity was consistently tied to spatial locations and weakest (or absent) when object identities were inconsistent across trials. Incidental memory for distractors was better among participants who searched under high WM load, relative to low WM load. These results were observed when visual search included exhaustive-search trials (Experiment 1) or when all trials were self-terminating (Experiment 2). In Experiment 3, stimulus exposure was equated across WM load groups by presenting objects in a single-object stream; recognition accuracy was similar to that in Experiments 1 and 2. Together, the results suggest that people incidentally generate memory for nontarget objects encountered during search and that such memory can facilitate search performance. PMID:20601709

Hout, Michael C.; Goldinger, Stephen D.

2014-01-01

12

Visual similarity effects in categorical search.  

PubMed

We asked how visual similarity relationships affect search guidance to categorically defined targets (no visual preview). Experiment 1 used a web-based task to collect visual similarity rankings between two target categories, teddy bears and butterflies, and random-category objects, from which we created search displays in Experiment 2 having either high-similarity distractors, low-similarity distractors, or "mixed" displays with high-, medium-, and low-similarity distractors. Analysis of target-absent trials revealed faster manual responses and fewer fixated distractors on low-similarity displays compared to high-similarity displays. On mixed displays, first fixations were more frequent on high-similarity distractors (bear = 49%; butterfly = 58%) than on low-similarity distractors (bear = 9%; butterfly = 12%). Experiment 3 used the same high/low/mixed conditions, but now these conditions were created using similarity estimates from a computer vision model that ranked objects in terms of color, texture, and shape similarity. The same patterns were found, suggesting that categorical search can indeed be guided by purely visual similarity. Experiment 4 compared cases where the model and human rankings differed and when they agreed. We found that similarity effects were best predicted by cases where the two sets of rankings agreed, suggesting that both human visual similarity rankings and the computer vision model captured features important for guiding search to categorical targets. PMID:21757505

Alexander, Robert G; Zelinsky, Gregory J

2011-01-01
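
Record 12 refers to a computer vision model that ranked objects by color, texture, and shape similarity to a target category. The sketch below only illustrates that general idea with deliberately crude descriptors; the feature choices, weights, and input format are assumptions, not the model used in the study.

    # Illustrative similarity ranking of distractor images against a target
    # exemplar using coarse color, texture, and shape cues (cf. record 12).
    import numpy as np

    def color_hist(img, bins=8):
        # Joint RGB histogram of an HxWx3 image, normalized to sum to 1.
        h, _ = np.histogramdd(img.reshape(-1, 3), bins=(bins,) * 3,
                              range=((0, 256),) * 3)
        return (h / h.sum()).ravel()

    def texture_hist(img, bins=16):
        # Histogram of gradient magnitudes as a crude texture signature.
        gy, gx = np.gradient(img.mean(axis=2))
        mag = np.hypot(gx, gy)
        h, _ = np.histogram(mag, bins=bins, range=(0, mag.max() + 1e-9))
        return h / h.sum()

    def shape_cues(mask):
        # Coarse shape cues from a binary object mask: filled-area fraction
        # and bounding-box aspect ratio.
        ys, xs = np.nonzero(mask)
        return np.array([mask.mean(), (xs.ptp() + 1) / (ys.ptp() + 1)])

    def dissimilarity(a, b, w=(1.0, 1.0, 1.0)):
        return (w[0] * np.abs(color_hist(a["img"]) - color_hist(b["img"])).sum()
                + w[1] * np.abs(texture_hist(a["img"]) - texture_hist(b["img"])).sum()
                + w[2] * np.abs(shape_cues(a["mask"]) - shape_cues(b["mask"])).sum())

    def rank_by_similarity(target, distractors):
        # Most target-like distractors first (e.g., to build high-similarity displays).
        return sorted(distractors, key=lambda d: dissimilarity(target, d))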

13

Cortical Substrates Supporting Visual Search in Humans  

E-print Network

Cortical Substrates Supporting Visual Search in Humans. Mirjam Eglin, Lynn C. Robertson ... Administration Medical Center, Martinez, California 94553. Serial and parallel visual search tasks were presented ... types of visual search performance can be distinguished. At one extreme, search for simple visual ...

Robertson, Lynn

14

Visual Search and Reading.  

ERIC Educational Resources Information Center

The effect on reading speed of the number of target items being searched for and the number of target occurrences in the text was examined. The subjects, 24 college undergraduate volunteers, were presented with a list of target words, and then they read a passage for comprehension which contained occurrences of the target words (Experiment 1) or…

Calfee, Robert C.; Jameson, Penny

15

Searching for inefficiency in visual search.  

PubMed

The time required to find an object of interest in the visual field often increases as a function of the number of items present. This increase or inefficiency was originally interpreted as evidence for the serial allocation of attention to potential target items, but controversy has ensued for decades. We investigated this issue by recording ERPs from humans searching for a target in displays containing several differently colored items. Search inefficiency was ascribed not to serial search but to the time required to selectively process the target once found. Additionally, less time was required for the target to "pop out" from the rest of the display when the color of the target repeated across trials. These findings indicate that task relevance can cause otherwise inconspicuous items to pop out and highlight the need for direct neurophysiological measures when investigating the causes of search inefficiency. PMID:25203277

Christie, Gregory J; Livingstone, Ashley C; McDonald, John J

2015-01-01

16

Guidance of Visual Search by Preattentive Information  

E-print Network

CHAPTER 17: Guidance of Visual Search by Preattentive Information. Jeremy M. Wolfe. ABSTRACT: When ... and dimensions are on the list on the basis of rather few data. Much evidence comes from visual search tasks ... slope of these functions should be near ...

17

Development of a Computerized Visual Search Test  

ERIC Educational Resources Information Center

Visual attention and visual search are features of visual perception essential for attending to and scanning one's environment while engaging in daily occupations. This study describes the development of a novel web-based test of visual search, including the format of the test. The test was designed…

Reid, Denise; Babani, Harsha; Jon, Eugenia

2009-01-01

18

Innate Visual Learning through Spontaneous Activity Patterns  

Microsoft Academic Search

Patterns of spontaneous activity in the developing retina, LGN, and cortex are necessary for the proper development of visual cortex. With these patterns intact, the primary visual cortices of many newborn animals develop properties similar to those of the adult cortex but without the training benefit of visual experience. Previous models have demonstrated how V1 responses can be initialized through

Mark V. Albert; Adam Schnabel; David J. Field

2008-01-01

19

Do Multielement Visual Tracking and Visual Search Draw Continuously on the Same Visual Attention Resources?  

E-print Network

Do Multielement Visual Tracking and Visual Search Draw Continuously on the Same Visual Attention Resources? ... Jeremy M. Wolfe, Harvard Medical School and Brigham and Women's Hospital. Multielement visual tracking and visual search are 2 tasks that are held to require visual-spatial attention. The authors used ...

20

Visual Analysis of Historic Hotel Visitation Patterns  

Microsoft Academic Search

Understanding the space and time characteristics of human interaction in complex social networks is a critical component of visual tools for intelligence analysis, consumer behavior analysis, and human geography. Visual identification and comparison of patterns of recurring events is an essential feature of such tools. In this paper, we describe a tool for exploring hotel visitation patterns

Chris Weaver; David Fyfe; Anthony Robinson; Deryck Holdsworth; Donna Peuquet; Alan M. MacEachren

2006-01-01

21

Visual function and pattern visual evoked response in optic neuritis.  

PubMed Central

The disparity between clinical visual function and pattern visual evoked response (VER) was studied in 53 patients who had suffered an attack of optic neuritis (ON) more than six months before. The visual functions tested included Snellen visual acuity, colour vision, visual field, and contrast sensitivity. The effect of pattern presentation, check size, and luminance was tested by recording VERs with several stimulus configurations. VER amplitudes were found to be associated with the outcome of all four clinical tests, independently of check size, luminance, or the presentation method used. On the other hand VER latencies were hardly ever related to the results of any of the four clinical visual tests. These findings support the idea that VER amplitude provides information about visual spatial perception, while VER latency is more related to the extent of demyelination. PMID:3651376

Sanders, E A; Volkers, A C; van der Poel, J C; van Lith, G H

1987-01-01

22

Cognitive strategies for the visual search of hierarchical computer displays

E-print Network

Cognitive strategies for the visual search of hierarchical computer displays. Anthony J. Hornof, Department of Computer and Information ... of the target. The models demonstrate that human visual search performance can be explained largely in terms ...

Hornof, Anthony

23

Words, shape, visual search and visual working memory in 3-year-old children.  

PubMed

Do words cue children's visual attention, and if so, what are the relevant mechanisms? Across four experiments, 3-year-old children (N = 163) were tested in visual search tasks in which targets were cued with only a visual preview versus a visual preview and a spoken name. The experiments were designed to determine whether labels facilitated search times and to examine one route through which labels could have their effect: By influencing the visual working memory representation of the target. The targets and distractors were pictures of instances of basic-level known categories and the labels were the common name for the target category. We predicted that the label would enhance the visual working memory representation of the target object, guiding attention to objects that better matched the target representation. Experiments 1 and 2 used conjunctive search tasks, and Experiment 3 varied shape discriminability between targets and distractors. Experiment 4 compared the effects of labels to repeated presentations of the visual target, which should also influence the working memory representation of the target. The overall pattern fits contemporary theories of how the contents of visual working memory interact with visual search and attention, and shows that even in very young children heard words affect the processing of visual information. PMID:24720802

Vales, Catarina; Smith, Linda B

2015-01-01

24

INTRODUCTION Visual search paradigms, in which individuals  

E-print Network

INTRODUCTION: Visual search paradigms, in which individuals search for a pre-defined target ... of their body. The deficit may affect all sensory modalities, including contralateral visual, auditory, somatosensory and olfactory inputs. The presence of neglect may also adversely affect manual and oculomotor ...

Behrmann, Marlene

25

Attentional misguidance in visual search.  

PubMed

Previous research has shown that a task-irrelevant sudden onset of an object will capture an observer's visual attention or draw it to that object (e.g., Yantis & Jonides, 1984). However, further research has demonstrated the apparent inability of an object with a task-irrelevant but unique color or luminance to capture attention (Jonides & Yantis, 1988). In the experiments reported here, we reexplore the question of whether task-irrelevant properties other than sudden onset may capture attention. Our results suggest that uniquely colored or luminous objects, as well as salient though irrelevant boundaries, do not appear to capture attention. However, these irrelevant features do appear to serve as landmarks for a top-down search strategy which becomes increasingly likely with larger display set sizes. These findings are described in terms of stimulus-driven and goal-directed aspects of attentional control. PMID:7971120

Todd, S; Kramer, A F

1994-08-01

26

The impact of item clustering on visual search: It all depends on the nature of the visual search  

E-print Network

The impact of item clustering on visual search: It all depends on the nature of the visual search ... visual search performance. In this study, I manipulated item clustering in search displays. In an easy ... configuration search. Together, these results show that item clustering significantly affects visual search ...

Xu, Yaoda

27

Visual Search for Faces with Emotional Expressions  

ERIC Educational Resources Information Center

The goal of this review is to critically examine contradictory findings in the study of visual search for emotionally expressive faces. Several key issues are addressed: Can emotional faces be processed preattentively and guide attention? What properties of these faces influence search efficiency? Is search moderated by the emotional state of the…

Frischen, Alexandra; Eastwood, John D.; Smilek, Daniel

2008-01-01

28

Visual search and mouse pointing in labeled versus unlabeled two-dimensional visual hierarchies

E-print Network

Visual search and mouse pointing in labeled versus unlabeled two-dimensional visual hierarchies. Anthony J. Hornof, Department of Computer and Information ... models of visual search and manual selection ...

Hornof, Anthony

29

Cascade category-aware visual search.  

PubMed

Incorporating image classification into an image retrieval system brings many attractive advantages. For instance, the search space can be narrowed down by rejecting images in categories irrelevant to the query. The retrieved images can be more consistent in semantics by indexing and returning images in the relevant categories together. However, due to their different goals on recognition accuracy and retrieval scalability, it is hard to efficiently incorporate most image classification work into large-scale image search. To study this problem, we propose cascade category-aware visual search, which utilizes a weak category clue to achieve better retrieval accuracy, efficiency, and memory consumption. To capture the category and visual clues of an image, we first learn category-visual words, which are discriminative and repeatable local features labeled with categories. By identifying category-visual words in database images, we are able to discard noisy local features and extract image visual and category clues, which are then recorded in a hierarchical index structure. Our retrieval system narrows down the search space by: 1) filtering the noisy local features in the query; 2) rejecting irrelevant categories in the database; and 3) performing discriminative visual search in the relevant categories. The proposed algorithm is tested on object search, landmark search, and large-scale similar-image search on the large-scale LSVRC10 data set. Although the category clue introduced is weak, our algorithm still shows substantial advantages in retrieval accuracy, efficiency, and memory consumption over the state-of-the-art. PMID:24760907

Zhang, Shiliang; Tian, Qi; Huang, Qingming; Gao, Wen; Rui, Yong

2014-06-01
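
The cascade idea summarized in record 29 (filter query features against category-labeled visual words, reject irrelevant categories, then search only the surviving categories) could be organized roughly as below. This is a schematic sketch under assumed data structures: quantize, codebook, and the index layout are hypothetical, not the published system.

    # Schematic cascade category-aware retrieval (cf. record 29).
    from collections import Counter, defaultdict

    def cascade_search(query_features, codebook, index, quantize, top_categories=3):
        # 1) Keep only query features that hit a category-visual word.
        hits = []
        for f in query_features:
            word = quantize(f, codebook)        # nearest visual word, or None
            if word is not None and word.category is not None:
                hits.append(word)

        # 2) Reject irrelevant categories by voting with the surviving words.
        votes = Counter(w.category for w in hits)
        relevant = {c for c, _ in votes.most_common(top_categories)}

        # 3) Score only database images indexed under the relevant categories.
        scores = defaultdict(float)
        for word in hits:
            if word.category not in relevant:
                continue
            for image_id in index.get((word.category, word.id), ()):
                scores[image_id] += 1.0         # plain voting; real systems weight terms
        return sorted(scores.items(), key=lambda kv: -kv[1])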

30

Urban camouflage assessment through visual search and computational saliency  

NASA Astrophysics Data System (ADS)

We present a new method to derive a multiscale urban camouflage pattern from a given set of background image samples. We applied this method to design a camouflage pattern for a given (semi-arid) urban environment. We performed a human visual search experiment and a computational evaluation study to assess the effectiveness of this multiscale camouflage pattern relative to the performance of 10 other (multiscale, disruptive and monotonous) patterns that were also designed for deployment in the same operating theater. The results show that the pattern combines the overall lowest detection probability with an average mean search time. We also show that a frequency-tuned saliency metric predicts human observer performance to an appreciable extent. This computational metric can therefore be incorporated in the design process to optimize the effectiveness of camouflage patterns derived from a set of background samples.

Toet, Alexander; Hogervorst, Maarten A.

2013-04-01
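
Record 30 mentions a frequency-tuned saliency metric used to predict observer performance. A minimal map of that general kind (in the spirit of Achanta-style frequency-tuned saliency) can be computed as the per-pixel distance between the mean image color and a lightly blurred image; the sketch below works in RGB for simplicity, whereas published versions typically use CIELab, so treat it as an approximation rather than the metric used in the study.

    # Minimal frequency-tuned saliency map (illustrating record 30).
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def frequency_tuned_saliency(img, sigma=2.0):
        img = img.astype(float)                          # HxWx3 image
        mean_color = img.reshape(-1, img.shape[2]).mean(axis=0)
        blurred = np.stack([gaussian_filter(img[..., c], sigma)
                            for c in range(img.shape[2])], axis=-1)
        # Saliency = distance of each (blurred) pixel from the global mean color.
        return np.linalg.norm(blurred - mean_color, axis=-1)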

31

Pattern Search Algorithms for Bound Constrained Minimization  

NASA Technical Reports Server (NTRS)

We present a convergence theory for pattern search methods for solving bound constrained nonlinear programs. The analysis relies on the abstract structure of pattern search methods and an understanding of how the pattern interacts with the bound constraints. This analysis makes it possible to develop pattern search methods for bound constrained problems while only slightly restricting the flexibility present in pattern search methods for unconstrained problems. We prove global convergence despite the fact that pattern search methods do not have explicit information concerning the gradient and its projection onto the feasible region and consequently are unable to enforce explicitly a notion of sufficient feasible decrease.

Lewis, Robert Michael; Torczon, Virginia

1996-01-01
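
For readers unfamiliar with the method family analyzed in record 31, a bare-bones coordinate (compass) pattern search with bound constraints looks roughly like the following. It is only a sketch of the class of algorithms; the paper's generality, polling rules, and convergence theory go well beyond it.

    # Minimal bound-constrained pattern (compass) search sketch (cf. record 31).
    import numpy as np

    def pattern_search(f, x0, lower, upper, step=0.5, tol=1e-6, max_iter=1000):
        x = np.clip(np.asarray(x0, dtype=float), lower, upper)
        fx = f(x)
        for _ in range(max_iter):
            improved = False
            for i in range(len(x)):
                for direction in (1.0, -1.0):
                    trial = x.copy()
                    trial[i] = np.clip(trial[i] + direction * step,
                                       lower[i], upper[i])     # stay feasible
                    ft = f(trial)
                    if ft < fx:
                        x, fx, improved = trial, ft, True
            if not improved:
                step *= 0.5        # contract the pattern when no poll point improves
                if step < tol:     # the step length doubles as a stationarity measure
                    break
        return x, fx

    # Example: minimize a quadratic over the box [0, 2] x [0, 2].
    # pattern_search(lambda v: (v[0] - 1) ** 2 + (v[1] - 3) ** 2,
    #                [0.0, 0.0], np.array([0.0, 0.0]), np.array([2.0, 2.0]))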

32

Modeling spatial patterns in the visual cortex  

NASA Astrophysics Data System (ADS)

We propose a model for the formation of patterns in the visual cortex. The dynamical units of the model are Kuramoto phase oscillators that interact through a complex network structure embedded in two dimensions. In this way the strength of the interactions takes into account the geographical distance between units. We show that for different parameters, clustered or striped patterns emerge. Using the structure factor as an order parameter we are able to quantitatively characterize these patterns and present a phase diagram. Finally, we show that the model is able to reproduce patterns with cardinal preference, as observed in ferrets.

Daza C., Yudy Carolina; Tauro, Carolina B.; Tamarit, Francisco A.; Gleiser, Pablo M.

2014-10-01
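
The model in record 32 combines Kuramoto phase oscillators with a spatially embedded coupling network. A toy simulation of that general setup is sketched below; the coupling kernel, parameters, and lattice layout are assumptions for illustration, not the paper's network construction.

    # Toy Kuramoto oscillators on a 2-D lattice with distance-dependent coupling
    # (illustrating record 32).
    import numpy as np

    def simulate(n=32, steps=2000, dt=0.05, k=1.0, sigma=3.0, seed=0):
        rng = np.random.default_rng(seed)
        theta = rng.uniform(0, 2 * np.pi, size=n * n)     # oscillator phases
        omega = rng.normal(0.0, 0.1, size=n * n)          # natural frequencies
        ys, xs = np.divmod(np.arange(n * n), n)
        d2 = (xs[:, None] - xs[None, :]) ** 2 + (ys[:, None] - ys[None, :]) ** 2
        w = np.exp(-d2 / (2 * sigma ** 2))                # coupling decays with distance
        np.fill_diagonal(w, 0.0)
        norm = w.sum(axis=1)
        for _ in range(steps):
            # dtheta_i/dt = omega_i + (k / norm_i) * sum_j w_ij * sin(theta_j - theta_i)
            coupling = (w * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
            theta += dt * (omega + k * coupling / norm)
        return theta.reshape(n, n)        # phase map; inspect for clustered or striped patterns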

33

The Effects of Semantic Grouping on Visual Search  

E-print Network

The Effects of Semantic Grouping on Visual Search. Abstract: This paper reports on work ... while searching visual layouts containing words that are either grouped by category (i.e. semantically ... search to better predict users' visual interaction with interfaces. Keywords: Visual search, semantics ...

Hornof, Anthony

34

Predicting Cognitive Strategies and Eye Movements in Hierarchical Visual Search  

E-print Network

Predicting Cognitive Strategies and Eye Movements in Hierarchical Visual Search Anthony J. Hornof cognitive modeling of visual search, and the synergistic relationship between cognitive modeling and eye in the visual search of a hierarchical layout. Two types of visual layouts are searched: unlabeled layouts

Hornof, Anthony

35

An Information Theoretic Model of Saliency and Visual Search  

E-print Network

An Information Theoretic Model of Saliency and Visual Search. Neil D.B. Bruce and John K. Tsotsos. ... visual search tasks, including many for which only specialized models have had success. As a whole ... Attention, Visual Search, Saliency, Information Theory, Fixation, Entropy. 1 Introduction: Visual search ...

36

Predicting Cognitive Strategies and Eye Movements in Hierarchical Visual Search  

E-print Network

Predicting Cognitive Strategies and Eye Movements in Hierarchical Visual Search Anthony J. Hornof cognitive modeling of visual search, and the synergistic relationship between cognitive modeling and eye involved in the visual search of a hierarchical layout. Two types of visual layouts are searched: unlabeled

Hornof, Anthony

37

Large Visual Repository Search with Hash Collision Design Optimization  

E-print Network

Large Visual Repository Search with Hash Collision Design Optimization. Visual search over large ... examples of this new wave of applications enabled by large visual search capabilities. In these applications ... of this technology that enables large-scale visual search: indexing (or hashing). Indexing is the process ...

38

Superior Visual Search in Adults with Autism  

ERIC Educational Resources Information Center

Recent studies have suggested that children with autism perform better than matched controls on visual search tasks and that this stems from a superior visual discrimination ability. This study assessed whether these findings generalize from children to adults with autism. Experiments 1 and 2 showed that, like children, adults with autism were…

O'Riordan, Michelle

2004-01-01

39

Turning visual search time on its head.  

PubMed

Our everyday visual experience frequently involves searching for objects in clutter. Why are some searches easy and others hard? It is generally believed that the time taken to find a target increases as it becomes similar to its surrounding distractors. Here, I show that while this is qualitatively true, the exact relationship is in fact not linear. In a simple search experiment, when subjects searched for a bar differing in orientation from its distractors, search time was inversely proportional to the angular difference in orientation. Thus, rather than taking search reaction time (RT) to be a measure of target-distractor similarity, we can literally turn search time on its head (i.e. take its reciprocal, 1/RT) to obtain a measure of search dissimilarity that varies linearly over a large range of target-distractor differences. I show that this dissimilarity measure has the properties of a distance metric, and report two interesting insights that come from this measure: First, across a large number of searches, search asymmetries are relatively rare, and when they do occur, they differ by a fixed distance. Second, search distances can be used to elucidate the object representations that underlie search - for example, these representations are roughly invariant to three-dimensional view. Finally, search distance has a straightforward interpretation in the context of accumulator models of search, where it is proportional to the discriminative signal that is integrated to produce a response. This is consistent with recent studies that have linked this distance to neuronal discriminability in visual cortex. Thus, while search time remains the more direct measure of visual search, its reciprocal also has the potential for interesting and novel insights. PMID:22561524

Arun, S P

2012-12-01
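
The core quantitative claim in record 39 can be written compactly (the notation below is mine, not the paper's):

    % Reciprocal reaction time as a search dissimilarity (record 39).
    % RT: search reaction time; \Delta: target-distractor feature difference
    % (e.g., angular difference in orientation).
    \mathrm{RT} \;\propto\; \frac{1}{\Delta}
    \qquad\Longrightarrow\qquad
    d(\text{target},\text{distractor}) \;=\; \frac{1}{\mathrm{RT}} \;\propto\; \Delta ,
    % i.e., 1/RT varies approximately linearly with the feature difference and,
    % per the abstract, behaves like a distance metric over the searched objects.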

40

Perceptual Encoding Efficiency in Visual Search  

ERIC Educational Resources Information Center

The authors present 10 experiments that challenge some central assumptions of the dominant theories of visual search. Their results reveal that the complexity (or redundancy) of nontarget items is a crucial but overlooked determinant of search efficiency. The authors offer a new theoretical outline that emphasizes the importance of nontarget…

Rauschenberger, Robert; Yantis, Steven

2006-01-01

41

The Search for Optimal Visual Stimuli  

NASA Technical Reports Server (NTRS)

In 1983, Watson, Barlow and Robson published a brief report in which they explored the relative visibility of targets that varied in size, shape, spatial frequency, speed, and duration (referred to subsequently here as WBR). A novel aspect of that paper was that visibility was quantified in terms of threshold contrast energy, rather than contrast. As they noted, this provides a more direct measure of the efficiency with which various patterns are detected, and may be more edifying as to the underlying detection machinery. For example, under certain simple assumptions, the waveform of the most efficiently detected signal is an estimate of the receptive field of the visual system's most efficient detector. Thus one goal of their experiment was to search for the stimulus that the 'eye sees best'. Parenthetically, the search for optimal stimuli may be seen as the most general and sophisticated variant of the traditional 'subthreshold summation' experiment, in which one measures the effect upon visibility of small probes combined with a base stimulus.

Watson, Andrew B.; Ellis, Stephen R. (Technical Monitor)

1997-01-01

42

Saccadic selectivity during visual search: The influence of central processing difficulty

E-print Network

Saccadic selectivity during visual search: The influence of central processing difficulty ... of central discrimination and the efficiency of peripheral selection in visual search tasks. Participants ... -contingent moving mask (Experiment 3). Although both manipulations substantially degraded the overall visual search ...

Pomplun, Marc

43

Visual scan adaptation during repeated visual search. Christopher W. Myers, Air Force Research Laboratory, Dayton, OH, USA

E-print Network

Visual scan adaptation during repeated visual search Air Force Research Laboratory, Dayton, OH. Gray There is no consensus as to how to characterize eye fixations during visual search. On the one are reported that demonstrate the repetition and adaptation of visual scans during visual search, supporting

Gray, Wayne

44

Visual Testing: Searching for Guidelines.  

ERIC Educational Resources Information Center

An experiment was conducted to investigate the influence of the variables "realism" and "context" on the performance of biology students on a visual test about the anatomy of a rat. The instruction was primarily visual with additional verbal information like Latin names and practical information about the learning task: dissecting a rat to gain…

Van Gendt, Kitty; Verhagen, Plon

45

Universality in visual cortical pattern formation.  

PubMed

During ontogenetic development, the visual cortical circuitry is remodeled by activity-dependent mechanisms of synaptic plasticity. From a dynamical systems perspective this is a process of dynamic pattern formation. The emerging cortical network supports functional activity patterns that are used to guide the further improvement of the network's structure. In this picture, spontaneous symmetry breaking in the developmental dynamics of the cortical network underlies the emergence of cortical selectivities such as orientation preference. Here universal properties of this process depending only on basic biological symmetries of the cortical network are analyzed. In particular, we discuss the description of the development of orientation preference columns in terms of a dynamics of abstract order parameter fields, connect this description to the theory of Gaussian random fields, and show how the theory of Gaussian random fields can be used to obtain quantitative information on the generation and motion of pinwheels, in the two dimensional pattern of visual cortical orientation columns. PMID:14766145

Wolf, F; Geisel, T

2003-01-01

46

RESEARCH REPORT Visual search and foraging compared in a large-scale search task  

E-print Network

RESEARCH REPORT: Visual search and foraging compared in a large-scale search task. Alastair D. Smith ... 2008. Abstract: It has been argued that visual search is a valid model for human foraging. However ... describe a direct comparison between visually guided searches (as studied in visual search tasks ...

Gilchrist, Iain D.

47

Searching through subsets: A test of the Visual Indexing Hypothesis  

E-print Network

Searching through subsets: A test of the Visual Indexing Hypothesis. Burkell, Jacquelyn A. ... in selecting objects for visual processing. ... This paper is concerned with exploring certain visual phenomena (particularly involving visual search) which suggest ...

Pylyshyn, Zenon

48

NOËLLE CARBONELL, SUZANNE KIEFFER: DO ORAL MESSAGES HELP VISUAL SEARCH?

E-print Network

NOËLLE CARBONELL, SUZANNE KIEFFER: DO ORAL MESSAGES HELP VISUAL SEARCH? Abstract: A preliminary ... visual search tasks on crowded visual displays. Results of quantitative and qualitative analyses suggest ... ranking ratings from most subjects. Keywords: multimodal interaction, multimedia presentations, visual search

Paris-Sud XI, Université de

49

Saliency, attention, and visual search: An information theoretic approach  

E-print Network

Saliency, attention, and visual search: An information theoretic approach Department of Computer that a variety of visual search behaviors appear as emergent properties of the model and therefore basic: saliency, visual attention, visual search, eye movements, information theory, efficient coding, pop

50

Self terminating, guided or accumulator models of visual search  

E-print Network

Self terminating, guided or accumulator models of visual search: Evidence from stimulus inversion? ... Efficient search ... Blonde? ... Accumulator models & visual search: Information acquisition hypothesis ... models & visual search: Data from macaque temporal lobe; Individual cell response profile (Oram ...

Oram, Mike

51

Guidance of Visual Search by Memory and Knowledge  

E-print Network

Guidance of Visual Search by Memory and Knowledge Andrew Hollingworth Abstract To behave they inhabit. A growing proportion of the literature on visual search is devoted to understanding this type of natural search. In the present chapter, I review the literature on visual search through natural scenes

Hollingworth, Andrew

52

Short-term perceptual learning in visual conjunction search.  

PubMed

Although some studies showed that training can improve the ability of cross-dimension conjunction search, less is known about the underlying mechanism. Specifically, it remains unclear whether training of visual conjunction search can successfully bind different features of separated dimensions into a new function unit at early stages of visual processing. In the present study, we utilized stimulus specificity and generalization to provide a new approach to investigate the mechanisms underlying perceptual learning (PL) in visual conjunction search. Five experiments consistently showed that after 40 to 50 min of training of color-shape/orientation conjunction search, the ability to search for a certain conjunction target improved significantly and the learning effects did not transfer to a new target that differed from the trained target in both color and shape/orientation features. However, the learning effects were not strictly specific. In color-shape conjunction search, although the learning effect could not transfer to a same-shape different-color target, it almost completely transferred to a same-color different-shape target. In color-orientation conjunction search, the learning effect partly transferred to a new target that shared same color or same orientation with the trained target. Moreover, the sum of transfer effects for the same color target and the same orientation target in color-orientation conjunction search was algebraically equivalent to the learning effect for trained target, showing an additive transfer effect. The different transfer patterns in color-shape and color-orientation conjunction search learning might reflect the different complexity and discriminability between feature dimensions. These results suggested a feature-based attention enhancement mechanism rather than a unitization mechanism underlying the short-term PL of color-shape/orientation conjunction search. PMID:24730740

Su, Yuling; Lai, Yunpeng; Huang, Wanyi; Tan, Wei; Qu, Zhe; Ding, Yulong

2014-08-01

53

Visual Pattern Recognition in Drosophila Is Invariant for  

E-print Network

Visual Pattern Recognition in Drosophila Is Invariant for Retinal Position. Shiming Tang ... of their visual field where they had originally seen them. Tethered flies (Drosophila melanogaster) in a flight simulator can recognize visual patterns. Because their eyes are fixed in space and patterns can ...

Field, David

54

Residual enhanced visual vector as a compact signature for mobile visual search  

E-print Network

Residual enhanced visual vector as a compact signature for mobile visual search. David Chen ... Received in revised form 27 April 2012; Accepted 6 June 2012. Keywords: Mobile visual search; Compact signatures; Database compression. Abstract: Many mobile visual search (MVS) systems transmit query data ...

Girod, Bernd

55

Visual memory for natural scenes: Evidence from change detection and visual search  

E-print Network

Visual memory for natural scenes: Evidence from change detection and visual search. Andrew Hollingworth ... memory in scene perception and visual search. Recent theories in these literatures have held ... to the fore: scene perception and visual search. While viewing natural scenes, the eyes shift (via saccadic ...

Hollingworth, Andrew

56

On the Local Convergence of Pattern Search  

NASA Technical Reports Server (NTRS)

We examine the local convergence properties of pattern search methods, complementing the previously established global convergence properties for this class of algorithms. We show that the step-length control parameter which appears in the definition of pattern search algorithms provides a reliable asymptotic measure of first-order stationarity. This gives an analytical justification for a traditional stopping criterion for pattern search methods. Using this measure of first-order stationarity, we analyze the behavior of pattern search in the neighborhood of an isolated local minimizer. We show that a recognizable subsequence converges r-linearly to the minimizer.

Dolan, Elizabeth D.; Lewis, Robert Michael; Torczon, Virginia; Bushnell, Dennis M. (Technical Monitor)

2000-01-01

57

Pattern Search Methods for Linearly Constrained Minimization  

NASA Technical Reports Server (NTRS)

We extend pattern search methods to linearly constrained minimization. We develop a general class of feasible point pattern search algorithms and prove global convergence to a Karush-Kuhn-Tucker point. As in the case of unconstrained minimization, pattern search methods for linearly constrained problems accomplish this without explicit recourse to the gradient or the directional derivative. Key to the analysis of the algorithms is the way in which the local search patterns conform to the geometry of the boundary of the feasible region.

Lewis, Robert Michael; Torczon, Virginia

1998-01-01

58

Visual search and natural color distributions  

NASA Astrophysics Data System (ADS)

We examined visual search for color within the distributions of colors that characterize natural images, by using a foraging task designed to mimic the problem of finding a fruit among foliage. Color distributions were taken from spectroradiometric measurements of outdoor scenes and used to define the colors of a dense background of ellipses. Search times were measured for locating test colors presented as a superposed circular target. Reaction times varied from high values for target colors within the distribution (where they are limited by serial search based on form) to asymptotically low values for colors far removed from the distribution (where targets pop out). The variation in reaction time follows the distribution of background contrasts but is substantially broader. In further experiments we assessed the color organization underlying visual search, and how search is influenced by contrast adaptation to the colors of the background. Asymmetries between blue-yellow and red-green backgrounds suggest that search times do not depend on the separable L-M and S- (L+M) dimensions of early postreceptoral color vision. Prior adaptation facilitates search over adaptation to a uniform background, while adaptation to an inappropriate background impedes search. Contrast adaptation may therefore enhance the salience of novel stimuli by partially discounting the ambient background.

Webster, Michael A.; Raker, Vincent E.; Malkoc, Gokhan

1998-07-01

59

Visual Search Connecting Your World  

E-print Network

this technology in cloud offerings so you can search your own multimedia. How did the idea hatch? As an innovator of Video and Multimedia Technologies Research at AT&T Labs. In this role, he leads an effort aimed at creating advanced media processing technologies and novel multimedia communications service concepts

Fisher, Kathleen

60

Dynamic Prototypicality Effects in Visual Search  

ERIC Educational Resources Information Center

In recent studies, researchers have discovered a larger neural activation for stimuli that are more extreme exemplars of their stimulus class, compared with stimuli that are more prototypical. This has been shown for faces as well as for familiar and novel shape classes. We used a visual search task to look for a behavioral correlate of these…

Kayaert, Greet; Op de Beeck, Hans P.; Wagemans, Johan

2011-01-01

61

Visual Search of Food Nutrition Labels  

Microsoft Academic Search

Using an eye-tracking methodology, we evaluated food nutrition labels' ability to support rapid and accurate visual search for nutrition information. Participants (5 practiced label readers and 5 nonreaders) viewed 180 trials of nutrition labels on a computer, finding answers to questions (e.g., serving size). Label manipulations included several alternative line arrangements, location of the question target item, and label size.

Joseph H. Goldberg; Claudia K. Probart; Robert E. Zak

1999-01-01

62

Optimal eye movement strategies in visual search.  

PubMed

To perform visual search, humans, like many mammals, encode a large field of view with retinas having variable spatial resolution, and then use high-speed eye movements to direct the highest-resolution region, the fovea, towards potential target locations. Good search performance is essential for survival, and hence mammals may have evolved efficient strategies for selecting fixation locations. Here we address two questions: what are the optimal eye movement strategies for a foveated visual system faced with the problem of finding a target in a cluttered environment, and do humans employ optimal eye movement strategies during a search? We derive the ideal Bayesian observer for search tasks in which a target is embedded at an unknown location within a random background that has the spectral characteristics of natural scenes. Our ideal searcher uses precise knowledge about the statistics of the scenes in which the target is embedded, and about its own visual system, to make eye movements that gain the most information about target location. We find that humans achieve nearly optimal search performance, even though humans integrate information poorly across fixations. Analysis of the ideal searcher reveals that there is little benefit from perfect integration across fixations--much more important is efficient processing of information on each fixation. Apparently, evolution has exploited this fact to achieve efficient eye movement strategies with minimal neural resources devoted to memory. PMID:15772663

Najemnik, Jiri; Geisler, Wilson S

2005-03-17
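
A highly simplified ideal-searcher loop of the kind discussed in record 62 is sketched below: maintain a posterior over candidate target locations, update it after each noisy, eccentricity-limited observation, and fixate where the evidence points. The visibility model, noise model, and the greedy fixation rule are toy assumptions here, not Najemnik and Geisler's derivation (their ideal observer picks the fixation that maximizes the expected gain in localization accuracy).

    # Toy ideal-searcher loop with a foveated visibility map (cf. record 62).
    import numpy as np

    def visibility(locations, fixation, d0=3.0, halfwidth=4.0):
        # Detectability d' falls off with eccentricity from the fixation point.
        ecc = np.linalg.norm(locations - fixation, axis=1)
        return d0 / (1.0 + ecc / halfwidth)

    def ideal_search(locations, target_idx, n_fixations=10, seed=0):
        rng = np.random.default_rng(seed)
        log_post = np.zeros(len(locations))      # flat prior over target locations
        fixation = locations.mean(axis=0)
        for _ in range(n_fixations):
            d = visibility(locations, fixation)
            # Toy observation: mean +d'^2/2 at the true target, -d'^2/2 elsewhere,
            # standard deviation d'; the per-location log-likelihood ratio then
            # reduces to the observation itself.
            mean = np.where(np.arange(len(locations)) == target_idx,
                            0.5 * d ** 2, -0.5 * d ** 2)
            obs = rng.normal(mean, np.clip(d, 1e-6, None))
            log_post += obs
            post = np.exp(log_post - log_post.max())
            post /= post.sum()
            if post.max() > 0.95:                # confident enough: stop searching
                break
            # Greedy stand-in for the ideal fixation-selection rule.
            fixation = locations[np.argmax(post)]
        return int(np.argmax(post)), fixation

    # Example: 25 candidate locations on a 5x5 grid, target at index 12.
    # grid = np.array([(x, y) for x in range(5) for y in range(5)], dtype=float)
    # ideal_search(grid, target_idx=12)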

63

The Stanford Mobile Visual Search Data Set Vijay Chandrasekhar  

E-print Network

The Stanford Mobile Visual Search Data Set. Vijay Chandrasekhar, Stanford University, CA; David M ... in computer vision literature and point out their limitations for mobile visual search applications. To overcome many of the limitations, we propose the Stanford Mobile Visual Search data set. The data set ...

Girod, Bernd

64

Feature Matching Performance of Compact Descriptors for Visual Search  

E-print Network

Feature Matching Performance of Compact Descriptors for Visual Search. Vijay Chandrasekhar ... a standard titled Compact Descriptors for Visual Search (CDVS) for descriptor extraction and compression ... descriptors for visual search. For evaluating different compression schemes, we propose a data set of matching ...

Girod, Bernd

65

Cognitive Strategies for the Visual Search of Hierarchical  

E-print Network

Cognitive Strategies for the Visual Search of Hierarchical Computer Displays. Anthony J. Hornof ... appearance of the target. The models demonstrate that human visual search performance can be explained ... in human-computer interaction, cognitive modeling, visual search, and eye tracking; he is an Assistant ...

Hornof, Anthony

66

Mobile Visual Search: Architectures, Technologies, and the Emerging  

E-print Network

Mobile Visual Search: Architectures, Technologies, and the Emerging MPEG Standard. Modern-era mobile ... to initiate search queries about objects in the user's visual proximity (see Figure 1). Such applications can ... real estate, printed media, or art. First deployments of mobile visual-search systems include Google ...

Girod, Bernd

67

Visual Search and Dual Tasks Reveal Two Distinct Attentional Resources  

E-print Network

Visual Search and Dual Tasks Reveal Two Distinct Attentional Resources. Rufin VanRullen, Lavanya ... will "pop out" from an array of distractors ("parallel" visual search, e.g., color or orientation ... examination is needed in visual search. Attentional requirements are also frequently assessed by measuring ...

Koch, Christof

68

Saccadic selectivity in complex visual search displays Marc Pomplun*  

E-print Network

Saccadic selectivity in complex visual search displays Marc Pomplun* Department of Computer Science September 2005; received in revised form 2 December 2005 Abstract Visual search is a fundamental and routine task of everyday life. Studying visual search promises to shed light on the basic attentional

Pomplun, Marc

69

Vocal Dynamic Visual Pattern for voice characterization  

NASA Astrophysics Data System (ADS)

Voice assessment requires simple and painless exams. Modern technologies provide the necessary resources for voice signal processing. Techniques based on nonlinear dynamics seem to assess the complexity of voice more accurately than other methods. The vocal dynamic visual pattern (VDVP) is based on nonlinear methods and provides qualitative and quantitative information. Here we characterize healthy and Reinke's edema voices by means of perturbation measures and VDVP analysis. VDVP and jitter show different results for the two groups, while amplitude perturbation shows no difference. We suggest that VDVP analysis improves and complements the evaluation methods available to clinicians.

Dajer, M. E.; Andrade, F. A. S.; Montagnoli, A. N.; Pereira, J. C.; Tsuji, D. H.

2011-12-01

70

Personalized online information search and visualization  

PubMed Central

Background: The rapid growth of online publications such as Medline and other sources raises the question of how to get relevant information efficiently. It is important for a bench scientist, for example, to monitor related publications constantly. It is also important for a clinician, for example, to access patient records anywhere and anytime. Although time-consuming, this kind of searching procedure is usually similar and simple. Typically, it involves a search engine and a visualization interface. Different words or combinations of words reflect different research topics. The objective of this study is to automate this tedious procedure by recording those words/terms in a database and online sources, and to use the information for automated search and retrieval. The retrieved information will be available anytime and anywhere through a secure web server. Results: We developed a database that stores search terms, journals, etc., and implemented software for automatically searching medical subject heading-indexed sources such as Medline and other online sources. The returned information was stored locally, as is, on a server and made visible through a Web-based interface. The search was performed daily or on another schedule, and users could log on to the website at any time without typing any search terms. The system has the potential to retrieve information similarly from non-medical subject heading-indexed literature or from a privileged information source such as a clinical information system. Issues such as security, presentation, and visualization of the retrieved information were thus addressed. One presentation issue, wireless access, was also experimented with. A user survey showed that the personalized online searches saved time and increased relevancy. Handheld devices could also be used to access the stored information, but were less satisfactory. Conclusion: The Web-searching software or a similar system has the potential to be an efficient tool for both bench scientists and clinicians for their daily information needs. PMID:15766382

Chen, Dongquan; Orthner, Helmuth F; Sell, Susan M

2005-01-01
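
The scheduled, stored-term searching described in record 70 can be approximated today with a few lines against PubMed's public E-utilities esearch endpoint. The sketch below is not the authors' system: the SQLite schema, table names, and scheduling approach are invented for illustration.

    # Minimal stored-term PubMed search via NCBI E-utilities (cf. record 70).
    import sqlite3
    import requests

    ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

    def run_saved_searches(db_path="searches.db", retmax=20):
        con = sqlite3.connect(db_path)
        con.execute("CREATE TABLE IF NOT EXISTS results "
                    "(term TEXT, pmid TEXT, fetched TEXT DEFAULT CURRENT_TIMESTAMP)")
        # 'saved_terms' holds each user's stored queries (hypothetical schema).
        terms = [row[0] for row in con.execute("SELECT term FROM saved_terms")]
        for term in terms:
            resp = requests.get(ESEARCH, params={"db": "pubmed", "term": term,
                                                 "retmode": "json", "retmax": retmax},
                                timeout=30)
            pmids = resp.json()["esearchresult"]["idlist"]
            con.executemany("INSERT INTO results (term, pmid) VALUES (?, ?)",
                            [(term, p) for p in pmids])
        con.commit()
        con.close()

    # A cron job or OS scheduler would call run_saved_searches() daily, and a small
    # authenticated web page would display the stored rows to the user.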

71

Visual search for objects in a complex visual context: what we wish to see  

E-print Network

Visual search for objects in a complex visual context: what we wish to see. Hugo Boujut, University ... (thesis front matter and table-of-contents fragment: 1.2.1.4 Bag-of-Visual-Words approaches; 1.2.1.5 Improvements of Bag-of-Visual-Words ...)

Paris-Sud XI, Université de

72

Parallel Mechanisms for Visual Search in Zebrafish  

PubMed Central

Parallel visual search mechanisms have been reported previously only in mammals and birds, and not animals lacking an expanded telencephalon such as bees. Here we report the first evidence for parallel visual search in fish using a choice task where the fish had to find a target amongst an increasing number of distractors. Following two-choice discrimination training, zebrafish were presented with the original stimulus within an increasing array of distractor stimuli. We found that zebrafish exhibit no significant change in accuracy and approach latency as the number of distractors increased, providing evidence of parallel processing. This evidence challenges theories of vertebrate neural architecture and the importance of an expanded telencephalon for the evolution of executive function. PMID:25353168

Proulx, Michael J.; Parker, Matthew O.; Tahir, Yasser; Brennan, Caroline H.

2014-01-01

73

Interrupted Visual Searches Reveal Volatile Search Memory Y. Jeremy Shen and Yuhong V. Jiang  

E-print Network

Interrupted Visual Searches Reveal Volatile Search Memory. Y. Jeremy Shen and Yuhong V. Jiang, Harvard University. This study investigated memory from interrupted visual searches. Participants conducted ... was not. The authors suggest that spatial memory aids interrupted visual searches, but the use ...

Jiang, Yuhong

74

Recognition of Facially Expressed Emotions and Visual Search Strategies in Adults with Asperger Syndrome  

ERIC Educational Resources Information Center

Can the disadvantages persons with Asperger syndrome frequently experience with reading facially expressed emotions be attributed to a different visual perception, affecting their scanning patterns? Visual search strategies, particularly regarding the importance of information from the eye area, and the ability to recognise facially expressed…

Falkmer, Marita; Bjallmark, Anna; Larsson, Matilda; Falkmer, Torbjorn

2011-01-01

75

Recognition of facially expressed emotions and visual search strategies in adults with Asperger syndrome  

Microsoft Academic Search

Can the disadvantages persons with Asperger syndrome frequently experience with reading facially expressed emotions be attributed to a different visual perception, affecting their scanning patterns? Visual search strategies, particularly regarding the importance of information from the eye area, and the ability to recognise facially expressed emotions were compared between 24 adults with Asperger syndrome and their matched controls. While wearing

Marita Falkmer; Anna Bjällmark; Matilda Larsson; Torbjörn Falkmer

2011-01-01

76

Guided Text Search Using Adaptive Visual Analytics  

SciTech Connect

This research demonstrates the promise of augmenting interactive visualizations with semi-supervised machine learning techniques to improve the discovery of significant associations and insights in the search and analysis of textual information. More specifically, we have developed a system called Gryffin that hosts a unique collection of techniques facilitating individualized investigative search over an ever-changing set of analytical questions against an indexed collection of open-source documents related to critical national infrastructure. The Gryffin client hosts dynamic displays of the search results via focus+context record listings, temporal timelines, term-frequency views, and multiple coordinated views. Furthermore, as the analyst interacts with the display, the interactions are recorded and used to label the search records. These labeled records are then used to drive semi-supervised machine learning algorithms that re-rank the unlabeled search records such that potentially relevant records are moved to the top of the record listing. Gryffin is described in the context of the daily tasks encountered at the US Department of Homeland Security's Fusion Center, with whom we are collaborating on its development. The resulting system is capable of addressing the analysts' information overload that can be directly attributed to the deluge of information that must be addressed in the search and investigative analysis of textual information.

Steed, Chad A [ORNL; Symons, Christopher T [ORNL; Senter, James K [ORNL; DeNap, Frank A [ORNL

2012-10-01

77

Mining visual collocation patterns via self-supervised subspace learning.  

PubMed

Traditional text data mining techniques are not directly applicable to image data which contain spatial information and are characterized by high-dimensional visual features. It is not a trivial task to discover meaningful visual patterns from images because the content variations and spatial dependence in visual data greatly challenge most existing data mining methods. This paper presents a novel approach to coping with these difficulties for mining visual collocation patterns. Specifically, the novelty of this work lies in the following new contributions: 1) a principled solution to the discovery of visual collocation patterns based on frequent itemset mining and 2) a self-supervised subspace learning method to refine the visual codebook by feeding back discovered patterns via subspace learning. The experimental results show that our method can discover semantically meaningful patterns efficiently and effectively. PMID:22156999
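The frequent-itemset step of this approach can be illustrated with an off-the-shelf miner. The sketch below assumes each image has already been quantized into visual-word IDs (the transactions shown are made up) and omits the paper's self-supervised codebook refinement; it simply reports co-occurring word sets above a support threshold as candidate collocations.

import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import apriori

# Hypothetical visual-word transactions, one list per image.
image_transactions = [
    ["w12", "w47", "w98"],
    ["w12", "w47", "w51"],
    ["w12", "w47", "w98", "w03"],
    ["w51", "w98"],
]

te = TransactionEncoder()
onehot = pd.DataFrame(te.fit_transform(image_transactions), columns=te.columns_)
collocations = apriori(onehot, min_support=0.5, use_colnames=True)
print(collocations.sort_values("support", ascending=False))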

Yuan, Junsong; Wu, Ying

2012-04-01

78

Visual search from lab to clinic and back  

NASA Astrophysics Data System (ADS)

Many of the tasks of medical image perception can be understood as demanding visual search tasks (especially if you happen to be a visual search researcher). Basic research on visual search can tell us quite a lot about how medical image search tasks proceed, because even experts have to use the human "search engine" with all its limitations. Humans can only deploy attention to one or a very few items at any one time. Human search is "guided" search. Humans deploy their attention to likely target objects on the basis of the basic visual features of objects and on the basis of an understanding of the scene containing those objects. This guidance operates in medical images as well as in the mundane scenes of everyday life. The paper reviews some of the dialogue between medical image perception by experts and visual search as studied in the laboratory.

Wolfe, Jeremy M.

2014-03-01

79

Meaning Metaphor for Visualizing Search Results Nicolas Bonnel  

E-print Network

Meaning Metaphor for Visualizing Search Results. Nicolas Bonnel (France Telecom R&D, Rennes) and Annie Morin (IRISA, Rennes, France; amorin@irisa.fr). Keywords: search result visualization, 3D metaphors, Self-Organizing Maps, adaptive interfaces.

Paris-Sud XI, Université de

80

EFFECTIVE ORGANIZATION AND VISUALIZATION OF WEB SEARCH Nicolas Bonnel  

E-print Network

Effective Organization and Visualization of Web Search Results. Nicolas Bonnel (IRISA, Rennes, France), a co-author at France Telecom (cotarmanach@francetelecom.com), and Annie Morin (IRISA, Rennes, France; amorin@irisa.fr). Keywords: search results visualization, 3D metaphors. The 3D metaphor proposed here is a city. While searching the web, the user is often...

Boyer, Edmond

81

Visual pattern discovery in timed event data  

NASA Astrophysics Data System (ADS)

Business processes have tremendously changed the way large companies conduct their business: the integration of information systems into the workflows of their employees ensures a high service level and thus high customer satisfaction. One core aspect of business process engineering is the events that steer workflows and trigger internal processes. Strict requirements on interval-scaled temporal patterns, which are common in time series, are thereby relaxed through the ordinal character of such events. It is this additional degree of freedom that opens unexplored possibilities for visualizing event data. In this paper, we present a flexible and novel system to find significant events, event clusters and event patterns. Each event is represented as a small rectangle, which is colored according to categorical, ordinal or interval-scaled metadata. Depending on the analysis task, different layout functions are used to highlight either the ordinal character of the data or temporal correlations. The system has built-in features for ordering customers or event groups according to the similarity of their event sequences, temporal gap alignment and stacking of co-occurring events. Two characteristically different case studies dealing with business process events and news articles demonstrate the capabilities of our system to explore event data.

Schaefer, Matthias; Wanner, Franz; Mansmann, Florian; Scheible, Christian; Stennett, Verity; Hasselrot, Anders T.; Keim, Daniel A.

2011-01-01

82

VISUAL INTERFACE FOR THE CONCEPT DISTRIBUTION ANALYSIS IN VIDEO SEARCH RESULTS  

E-print Network

Visual Interface for the Concept Distribution Analysis in Video Search Results. Contact: simaclejeune@litii.com. Keywords: Information Retrieval, Visualization, Search Results, Visual Interface. Despite the current performance of 'traditional' search engines, video search engines...

Paris-Sud XI, Université de

83

Pinwheel stability, pattern selection and the geometry of visual space  

Microsoft Academic Search

It has been proposed that the dynamical stability of topological defects in the visual cortex reflects the Euclidean symmetry of the visual world. We analyze defect stability and pattern selection in a generalized Swift-Hohenberg model of visual cortical development symmetric under the Euclidean group E(2). Euclidean symmetry strongly influences the geometry and multistability of model solutions but does not directly

Michael Schnabel; Matthias Kaschube; Fred Wolf

2008-01-01

84

Visualization Design Patterns for Ultra-Resolution Display Environments  

E-print Network

Visualization Design Patterns for Ultra-Resolution Display Environments. Khairi Reda, Jillian Aurisano, Alessandro Febretti, Jason Leigh, and Andrew E. Johnson, Electronic Visualization Laboratory, University of Illinois at Chicago. The past 10 years have seen great advances in visualization...

Johnson, Andrew

85

Visual search is slowed when visuospatial working memory is occupied  

Microsoft Academic Search

Visual working memory plays a central role in most models of visual search. However, a recent study showed that search efficiency was not impaired when working memory was filled to capacity by a concurrent object memory task (Woodman, Vogel, & Luck, 2001). Objects and locations may be stored in separate working memory subsystems, and it is plausible that visual search

Geoffrey F. Woodman; Steven J. Luck

2004-01-01

86

Searching for camouflaged targets: Effects of target-background similarity on visual search  

E-print Network

Searching for camouflaged targets: Effects of target-background similarity on visual search. The study used varying set size and target-background similarity (TBS) conditions; manual errors and RTs increased. Keywords: eye movements; guided search. Visual search, our ability to detect a target among...

Zelinsky, Greg

87

Words, Shape, Visual Search and Visual Working Memory in 3-Year-Old Children  

ERIC Educational Resources Information Center

Do words cue children's visual attention, and if so, what are the relevant mechanisms? Across four experiments, 3-year-old children (N = 163) were tested in visual search tasks in which targets were cued with only a visual preview versus a visual preview and a spoken name. The experiments were designed to determine whether labels facilitated…

Vales, Catarina; Smith, Linda B.

2015-01-01

88

Visualizing Temporal Patterns in Large Multivariate Data using Textual Pattern Matching  

E-print Network

Visualizing Temporal Patterns in Large Multivariate Data using Textual Pattern Matching. Extracting and visualizing temporal patterns in large scientific data is an open problem; application areas include climate modeling research that aims at understanding long-term climate change, and combustion research...

Tennessee, University of

89

Supporting Web Searching of Business Intelligence with Information Visualization  

Microsoft Academic Search

In this research, we proposed and validated an approach to using information visualization to augment search engines in supporting the analysis of business stakeholder information on the Web. We report in this paper findings from a preliminary evaluation comparing a visualization prototype with a traditional method of stakeholder analysis (Web browsing and searching). We found that the prototype achieved a

Wingyan Chung; Ada Leung

2007-01-01

90

Spatial Constraints on Learning in Visual Search: Modeling Contextual Cuing  

ERIC Educational Resources Information Center

Predictive visual context facilitates visual search, a benefit termed contextual cuing (M. M. Chun & Y. Jiang, 1998). In the original task, search arrays were repeated across blocks such that the spatial configuration (context) of all of the distractors in a display predicted an embedded target location. The authors modeled existing results using…

Brady, Timothy F.; Chun, Marvin M.

2007-01-01

91

The Time Course of Similarity Effects in Visual Search  

ERIC Educational Resources Information Center

It is well established that visual search becomes harder when the similarity between target and distractors is increased and the similarity between distractors is decreased. However, in models of visual search, similarity is typically treated as a static, time-invariant property of the relation between objects. Data from other perceptual tasks…

Guest, Duncan; Lamberts, Koen

2011-01-01

92

Global Statistical Learning in a Visual Search Task  

ERIC Educational Resources Information Center

Locating a target in a visual search task is facilitated when the target location is repeated on successive trials. Global statistical properties also influence visual search, but have often been confounded with local regularities (i.e., target location repetition). In two experiments, target locations were not repeated for four successive trials,…

Jones, John L.; Kaschak, Michael P.

2012-01-01

93

Eye Movements Reveal How Task Difficulty Moulds Visual Search  

ERIC Educational Resources Information Center

In two experiments we investigated the relationship between eye movements and performance in visual search tasks of varying difficulty. Experiment 1 provided evidence that a single process is used for search among static and moving items. Moreover, we estimated the functional visual field (FVF) from the gaze coordinates and found that its size…

Young, Angela H.; Hulleman, Johan

2013-01-01

94

Refixation frequency and memory mechanisms in visual search  

Microsoft Academic Search

Visual search—looking for a target object in the presence of a number of distractor items—is an everyday activity for humans (for example, finding the car in a busy car park) and animals (for example, foraging for food). Our understanding of visual search has been enriched by an interdisciplinary effort using a wide range of research techniques including behavioural studies in

Iain D. Gilchrist; Monika Harvey

2000-01-01

95

Visual similarity is stronger than semantic similarity in guiding visual search for numbers.  

PubMed

Using a visual search task, we explored how behavior is influenced by both visual and semantic information. We recorded participants' eye movements as they searched for a single target number in a search array of single-digit numbers (0-9). We examined the probability of fixating the various distractors as a function of two key dimensions: the visual similarity between the target and each distractor, and the semantic similarity (i.e., the numerical distance) between the target and each distractor. Visual similarity estimates were obtained using multidimensional scaling based on the independent observer similarity ratings. A linear mixed-effects model demonstrated that both visual and semantic similarity influenced the probability that distractors would be fixated. However, the visual similarity effect was substantially larger than the semantic similarity effect. We close by discussing the potential value of using this novel methodological approach and the implications for both simple and complex visual search displays. PMID:24347113
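The visual-similarity estimates described above come from multidimensional scaling of observer ratings. The sketch below embeds the ten digits in a two-dimensional similarity space from a rating matrix; the ratings are random stand-ins for illustration, and the mixed-effects fixation model from the study is not reproduced.

import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
sim = rng.uniform(0.1, 0.9, size=(10, 10))     # stand-in for averaged ratings
sim = (sim + sim.T) / 2                         # ratings should be symmetric
np.fill_diagonal(sim, 1.0)

dissimilarity = 1.0 - sim
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(dissimilarity)

# Euclidean distance in this space serves as the visual-similarity predictor.
target, distractor = 3, 8
print(np.linalg.norm(coords[target] - coords[distractor]))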

Godwin, Hayward J; Hout, Michael C; Menneer, Tamaryn

2014-06-01

96

Visual Similarity Effects in Categorical Search Robert G. Alexander1  

E-print Network

Visual Similarity Effects in Categorical Search. Robert G. Alexander. We asked how visual similarity relationships between random-category distractors and two target classes affect search. Visual similarity rankings between these target classes and random objects were collected, from which we created...

Zelinsky, Greg

97

Visual Search Deficits Are Independent of Magnocellular Deficits in Dyslexia  

ERIC Educational Resources Information Center

The aim of this study was to investigate the theory that visual magnocellular deficits seen in groups with dyslexia are linked to reading via the mechanisms of visual attention. Visual attention was measured with a serial search task and magnocellular function with a coherent motion task. A large group of children with dyslexia (n = 70) had slower…

Wright, Craig M.; Conlon, Elizabeth G.; Dyck, Murray

2012-01-01

98

Test of three visual search and detection models  

Microsoft Academic Search

Advance knowledge of the time required by an observer to detect a target visually is of interest, e.g., in preparing flight scenarios, in modeling mission performance, in evaluating camouflage effectiveness, and in visual-scene generator calibration. A wide range of computational models has therefore been developed to predict human visual search and detection performance. This study is performed to test the

Alexander Toet; Piet Bijl; J. Mathieu Valeton

2000-01-01

99

Cortical dynamics of contextually cued attentive visual learning and search: Spatial and object evidence accumulation  

E-print Network

Cortical dynamics of contextually cued attentive visual learning and search: spatial and object evidence accumulation. Keywords: spatial attention; object attention; saliency map; visual search; scene perception; scene memory; implicit learning. How do humans use target-predictive contextual information to facilitate visual search? How are consistently...

Spence, Harlan Ernest

100

Intertrial Temporal Contextual Cuing: Association Across Successive Visual Search Trials Guides Spatial Attention  

E-print Network

Intertrial Temporal Contextual Cuing: Association Across Successive Visual Search Trials Guides Spatial Attention. Hiroshima University. Contextual cuing refers to the facilitation of performance in visual search due to repeated exposure to the same displays across trials; this study tested whether one trial facilitates visual search on the next trial. Participants...

Jiang, Yuhong

101

Visual search behaviour during laparoscopic cadaveric procedures  

NASA Astrophysics Data System (ADS)

Laparoscopic surgery provides a very complex example of medical image interpretation. The task entails visually examining a display that portrays the laparoscopic procedure from a varying viewpoint, eye-hand coordination, complex 3D interpretation of the 2D display imagery, and efficient and safe usage of appropriate surgical tools, as well as other factors. Training in laparoscopic surgery typically entails practice using surgical simulators. Another approach is to use cadavers. Viewing previously recorded laparoscopic operations is a further viable approach, and to examine this a study was undertaken to determine what differences exist between where surgeons look during actual operations and where they look when simply viewing the same pre-recorded operations. It was hypothesised that there would be differences related to the different experimental conditions; however, the relative nature of such differences was unknown. The visual search behaviour of two experienced surgeons was recorded as they performed three types of laparoscopic operations on a cadaver. The operations were also digitally recorded. Subsequently the surgeons viewed the recordings of their operations, again whilst their eye movements were monitored. Differences were found in various eye movement parameters between performing the operations and simply watching the recordings of them. It is argued that this reflects the different perceptual motor skills pertinent to the different situations. The relevance of this for surgical training is explored.

Dong, Leng; Chen, Yan; Gale, Alastair G.; Rees, Benjamin; Maxwell-Armstrong, Charles

2014-03-01

102

Universality in visual cortical pattern formation F. Wolf *, T. Geisel  

E-print Network

Universality in visual cortical pattern formation. F. Wolf, T. Geisel. On the generation and motion of pinwheels in the two-dimensional pattern of visual cortical orientation columns, a paradigmatic process in brain development: the formation of so-called orientation pinwheels.

Timme, Marc

103

Horizontal visual search in a large field by patients with unilateral spatial neglect.  

PubMed

In this study, we investigated the horizontal visual search ability and pattern of horizontal visual search in a large space performed by patients with unilateral spatial neglect (USN). Subjects included nine patients with right hemisphere damage caused by cerebrovascular disease showing left USN, nine patients with right hemisphere damage but no USN, and six healthy individuals with no history of brain damage who were age-matched to the groups with right hemisphere damage. The number of visual search tasks accomplished was recorded in the first experiment. Neck rotation angle was continuously measured during the task and quantitative data of the measurements were collected. There was a strong correlation between the number of visual search tasks accomplished and the total Behavioral Inattention Test Conventional Subtest (BITC) score in subjects with right hemisphere damage. In both USN and control groups, the head position during the visual search task showed a balanced bell-shaped distribution from the central point on the field to the left and right sides. Our results indicate that compensatory strategies, including cervical rotation, may improve visual search capability and achieve balance on the neglected side. PMID:23632293

Nakatani, Ken; Notoya, Masako; Sunahara, Nobuyuki; Takahashi, Shusuke; Inoue, Katsumi

2013-06-01

104

Global image dissimilarity in macaque inferotemporal cortex predicts human visual search efficiency.  

PubMed

Finding a target in a visual scene can be easy or difficult depending on the nature of the distractors. Research in humans has suggested that search is more difficult the more similar the target and distractors are to each other. However, it has not yielded an objective definition of similarity. We hypothesized that visual search performance depends on similarity as determined by the degree to which two images elicit overlapping patterns of neuronal activity in visual cortex. To test this idea, we recorded from neurons in monkey inferotemporal cortex (IT) and assessed visual search performance in humans using pairs of images formed from the same local features in different global arrangements. The ability of IT neurons to discriminate between two images was strongly predictive of the ability of humans to discriminate between them during visual search, accounting overall for 90% of the variance in human performance. A simple physical measure of global similarity--the degree of overlap between the coarse footprints of a pair of images--largely explains both the neuronal and the behavioral results. To explain the relation between population activity and search behavior, we propose a model in which the efficiency of global oddball search depends on contrast-enhancing lateral interactions in high-order visual cortex. PMID:20107054
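The "coarse footprint" overlap mentioned above can be approximated very simply: blur each image heavily and take a normalized dot product. The sketch below is an illustrative approximation under that assumption, not the authors' exact measure; the blur width and the toy images are arbitrary.

import numpy as np
from scipy.ndimage import gaussian_filter

def coarse_overlap(img_a, img_b, sigma=8.0):
    # Heavy Gaussian blurring keeps only the coarse spatial footprint.
    a = gaussian_filter(img_a.astype(float), sigma)
    b = gaussian_filter(img_b.astype(float), sigma)
    return float(np.sum(a * b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(1)
img1 = rng.random((64, 64))
img2 = np.rot90(img1)   # same local content, different global arrangement
print(coarse_overlap(img1, img2))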

Sripati, Arun P; Olson, Carl R

2010-01-27

105

Online Multiple Kernel Similarity Learning for Visual Search.  

PubMed

Recent years have witnessed a number of studies on distance metric learning to improve visual similarity search in Content-Based Image Retrieval (CBIR). Despite their popularity and success, most existing methods on distance metric learning are limited in two aspects. First, they typically assume the target proximity function follows the family of Mahalanobis distances, which limits their capacity of measuring similarity of complex patterns in real applications. Second, they often cannot effectively handle the similarity measure of multi-modal data that may originate from multiple resources. To overcome these limitations, this paper investigates an online kernel ranking framework for learning kernel-based proximity functions, which goes beyond the conventional linear distance metric learning approaches. Based on the framework, we propose a novel Online Multiple Kernel Ranking (OMKR) method, which learns a flexible nonlinear proximity function with multiple kernels to improve visual similarity search in CBIR. We evaluate the proposed technique for CBIR on a variety of image data sets, in which encouraging results show that OMKR outperforms the state-of-the-art techniques significantly. PMID:23959603

Xia, Hao; Hoi, Steven C H; Jin, Rong; Zhao, Peilin

2013-08-13

106

Online multiple kernel similarity learning for visual search.  

PubMed

Recent years have witnessed a number of studies on distance metric learning to improve visual similarity search in content-based image retrieval (CBIR). Despite their successes, most existing methods on distance metric learning are limited in two aspects. First, they usually assume the target proximity function follows the family of Mahalanobis distances, which limits their capacity of measuring similarity of complex patterns in real applications. Second, they often cannot effectively handle the similarity measure of multimodal data that may originate from multiple resources. To overcome these limitations, this paper investigates an online kernel similarity learning framework for learning kernel-based proximity functions which goes beyond the conventional linear distance metric learning approaches. Based on the framework, we propose a novel online multiple kernel similarity (OMKS) learning method which learns a flexible nonlinear proximity function with multiple kernels to improve visual similarity search in CBIR. We evaluate the proposed technique for CBIR on a variety of image data sets in which encouraging results show that OMKS outperforms the state-of-the-art techniques significantly. PMID:24457509
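A drastically simplified version of the multiple-kernel idea can be written as a weighted sum of base kernels whose weights are penalized whenever a kernel mis-ranks a (query, relevant, irrelevant) triplet. The sketch below is an assumption-level simplification, not the OMKS algorithm: the kernels, the multiplicative update and the data are all illustrative.

import numpy as np

def rbf(x, y, gamma):
    return np.exp(-gamma * np.sum((x - y) ** 2))

kernels = [lambda x, y: rbf(x, y, 0.1),
           lambda x, y: rbf(x, y, 1.0),
           lambda x, y: float(x @ y)]            # linear kernel
weights = np.ones(len(kernels)) / len(kernels)

def similarity(x, y):
    # Proximity function: weighted combination of the base kernels.
    return sum(w * k(x, y) for w, k in zip(weights, kernels))

def update(query, relevant, irrelevant, beta=0.8):
    # Shrink the weight of any kernel that ranks the irrelevant image higher.
    for i, k in enumerate(kernels):
        if k(query, relevant) <= k(query, irrelevant):
            weights[i] *= beta
    weights /= weights.sum()

rng = np.random.default_rng(0)
q, pos, neg = rng.random(16), rng.random(16), rng.random(16)
update(q, pos, neg)
print(weights, similarity(q, pos))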

Xia, Hao; Hoi, Steven C H; Jin, Rong; Zhao, Peilin

2014-03-01

107

Visual pattern encoding with weighted hermite polynomials.  

PubMed

The human visual system is spatially inhomogeneous, and this property should be included in models of visual processing. We used weighted Hermite polynomials (WHPs) to encode and to characterize such inhomogeneous processing. Simulations using an order-transfer-function (OTF) defined for each WHP order, at three spatial scales, provide elegant predictions of spatial frequency discrimination, of WHP order discrimination, and of variations in two-point resolution and detection sensitivity with retinal eccentricity. PMID:11817742
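A weighted Hermite polynomial here is a Hermite polynomial multiplied by a Gaussian window (a Hermite function). The sketch below, under that assumption, encodes a one-dimensional luminance profile by projecting it onto the first few orthonormal Hermite functions and reconstructing it; the paper's order-transfer functions and multiple spatial scales are not modeled.

import numpy as np
from math import factorial, pi, sqrt
from numpy.polynomial.hermite import hermval

x = np.linspace(-5, 5, 1001)
dx = x[1] - x[0]

def hermite_function(n, x):
    # Orthonormal Hermite function: H_n(x) * exp(-x^2/2) / sqrt(2^n n! sqrt(pi))
    coeffs = np.zeros(n + 1)
    coeffs[n] = 1.0
    norm = 1.0 / sqrt(2 ** n * factorial(n) * sqrt(pi))
    return norm * hermval(x, coeffs) * np.exp(-x ** 2 / 2)

signal = np.exp(-(x - 0.5) ** 2)      # toy luminance profile

# Coefficients are inner products with the basis; the reconstruction sums them back.
coeffs = [np.sum(signal * hermite_function(n, x)) * dx for n in range(8)]
reconstruction = sum(c * hermite_function(n, x) for n, c in enumerate(coeffs))
print(np.max(np.abs(signal - reconstruction)))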

Yang, J; Reeves, A

2001-01-01

108

The Persistent Visual Store as the Locus of Fixation Memory in Visual Search Tasks David Kieras (kieras@umich.edu)  

E-print Network

The Persistent Visual Store as the Locus of Fixation Memory in Visual Search Tasks. Kieras, D. (2009). In A. Howes, D. Peebles, & R. Cooper (Eds.), 9th International Conference on Cognitive Modeling (ICCM 2009), Manchester, UK. Experiments on visual search have...

Kieras, David E.

109

Reinforcing saccadic amplitude variability in a visual search task.  

PubMed

Human observers often adopt rigid scanning strategies in visual search tasks, even though this may lead to suboptimal performance. Here we ask whether specific levels of saccadic amplitude variability may be induced in a visual search task using reinforcement learning. We designed a new gaze-contingent visual foraging task in which finding a target among distractors was made contingent upon specific saccadic amplitudes. When saccades of rare amplitudes led to displaying the target, the U values (measuring uncertainty) increased by 54.89% on average. They decreased by 41.21% when reinforcing frequent amplitudes. In a noncontingent control group no consistent change in variability occurred. A second experiment revealed that this learning transferred to conventional visual search trials. These results provide experimental support for the importance of reinforcement learning for saccadic amplitude variability in visual search. PMID:25413626

Paeye, Céline; Madelain, Laurent

2014-01-01

110

Visual search in a forced-choice paradigm  

NASA Technical Reports Server (NTRS)

The processing of visual information was investigated in the context of two visual search tasks. The first was a forced-choice task in which one of two alternative letters appeared in a visual display of from one to five letters. The second task included trials on which neither of the two alternatives was present in the display. Search rates were estimated from the slopes of best linear fits to response latencies plotted as a function of the number of items in the visual display. These rates were found to be much slower than those estimated in yes-no search tasks. This result was interpreted as indicating that the processes underlying visual search in yes-no and forced-choice tasks are not the same.
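The slope-based estimate described above is just a linear fit of latency against display size: the slope is the per-item search rate and the intercept the display-size-independent time. A minimal sketch follows; the latencies are made-up values, not data from the study.

import numpy as np

set_sizes = np.array([1, 2, 3, 4, 5])
mean_rt_ms = np.array([520, 555, 600, 640, 690])   # hypothetical latencies

# Slope = search rate in ms per item; intercept = non-search (base) time.
slope, intercept = np.polyfit(set_sizes, mean_rt_ms, deg=1)
print(f"search rate: {slope:.1f} ms/item, base time: {intercept:.1f} ms")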

Holmgren, J. E.

1974-01-01

111

Searching for intellectual turning points: Progressive knowledge domain visualization  

PubMed Central

This article introduces a previously undescribed method for progressively visualizing the evolution of a knowledge domain's cocitation network. The method first derives a sequence of cocitation networks from a series of equal-length time interval slices. These time-registered networks are merged and visualized in a panoramic view in such a way that intellectually significant articles can be identified based on their visually salient features. The method is applied to a cocitation study of the superstring field in theoretical physics. The study focuses on the search for articles that triggered two superstring revolutions. Visually salient nodes in the panoramic view are identified, and the nature of their intellectual contributions is validated by leading scientists in the field. The analysis has demonstrated that a search for intellectual turning points can be narrowed down to visually salient nodes in the visualized network. The method provides a promising way to simplify otherwise cognitively demanding tasks to a search for landmarks, pivots, and hubs. PMID:14724295
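The slicing-and-merging step can be sketched as follows: build one cocitation network per time slice (edge weight = number of citing papers in that slice citing both references), then merge the slices by summing edge weights. The citing papers below are invented, and the layout and salience detection used for the panoramic view are not shown.

from itertools import combinations
import networkx as nx

# Hypothetical citing papers: (publication year, list of cited references).
papers = [
    (1995, ["A", "B", "C"]),
    (1995, ["A", "B"]),
    (1996, ["B", "C", "D"]),
    (1997, ["C", "D"]),
]

def cocitation_slice(papers, start, end):
    g = nx.Graph()
    for year, refs in papers:
        if start <= year < end:
            for u, v in combinations(sorted(set(refs)), 2):
                w = g[u][v]["weight"] + 1 if g.has_edge(u, v) else 1
                g.add_edge(u, v, weight=w)
    return g

slices = [cocitation_slice(papers, y, y + 1) for y in (1995, 1996, 1997)]

merged = nx.Graph()                     # panoramic, time-merged network
for g in slices:
    for u, v, d in g.edges(data=True):
        w = merged[u][v]["weight"] + d["weight"] if merged.has_edge(u, v) else d["weight"]
        merged.add_edge(u, v, weight=w)
print(sorted(merged.edges(data="weight")))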

Chen, Chaomei

2004-01-01

112

Supporting the Process of Exploring and Interpreting Space–Time Multivariate Patterns: The Visual Inquiry Toolkit  

PubMed Central

While many data sets carry geographic and temporal references, our ability to analyze these datasets lags behind our ability to collect them because of the challenges posed by both data complexity and tool scalability issues. This study develops a visual analytics approach that leverages human expertise with visual, computational, and cartographic methods to support the application of visual analytics to relatively large spatio-temporal, multivariate data sets. We develop and apply a variety of methods for data clustering, pattern searching, information visualization, and synthesis. By combining both human and machine strengths, this approach has a better chance to discover novel, relevant, and potentially useful information that is difficult to detect by any of the methods used in isolation. We demonstrate the effectiveness of the approach by applying the Visual Inquiry Toolkit we developed to analyze a data set containing geographically referenced, time-varying and multivariate data for U.S. technology industries. PMID:19960096

Chen, Jin; MacEachren, Alan M.; Guo, Diansheng

2009-01-01

113

Reward and Attentional Control in Visual Search  

PubMed Central

It has long been known that the control of attention in visual search depends both on voluntary, top-down deployment according to context-specific goals, and on involuntary, stimulus-driven capture based on the physical conspicuity of perceptual objects. Recent evidence suggests that pairing target stimuli with reward can modulate the voluntary deployment of attention, but there is little evidence that reward modulates the involuntary deployment of attention to task-irrelevant distractors. We report several experiments that investigate the role of reward learning on attentional control. Each experiment involved a training phase and a test phase. In the training phase, different colors were associated with different amounts of monetary reward. In the test phase, color was not task-relevant and participants searched for a shape singleton; in most experiments no reward was delivered in the test phase. We first show that attentional capture by physically salient distractors is magnified by a previous association with reward. In subsequent experiments we demonstrate that physically inconspicuous stimuli previously associated with reward capture attention persistently during extinction—even several days after training. Furthermore, vulnerability to attentional capture by high-value stimuli is negatively correlated across individuals with working memory capacity and positively correlated with trait impulsivity. An analysis of intertrial effects reveals that value-driven attentional capture is spatially specific. Finally, when reward is delivered at test contingent on the task-relevant shape feature, recent reward history modulates value-driven attentional capture by the irrelevant color feature. The influence of learned value on attention may provide a useful model of clinical syndromes characterized by similar failures of cognitive control, including addiction, attention-deficit/hyperactivity disorder, and obesity. PMID:23437631

Anderson, Brian A.; Wampler, Emma K.; Laurent, Patryk A.

2015-01-01

114

MIME: A Framework for Interactive Visual Pattern Mining  

E-print Network

MIME: A Framework for Interactive Visual Pattern Mining. Bart Goethals, Sandy Moens, and Jilles Vreeken. A toolbox consisting of interestingness measures, mining algorithms and post-processing algorithms assists in identifying interesting patterns. By mining interactively, we enable the user to combine...

Antwerpen, Universiteit

115

The development of organized visual search Adam J. Woods a,c,  

E-print Network

The development of organized visual search. Adam J. Woods, Tilbe Göksun, Anjan Chatterjee. Keywords: visual search; search organization; executive function; normal development; search orientation; conjunction search. Visual search plays an important role in guiding behavior. Children have more...

Chatterjee, Anjan

116

Priming and the guidance by visual and categorical templates in visual search  

PubMed Central

Visual search is thought to be guided by top-down templates that are held in visual working memory. Previous studies have shown that a search-guiding template can be rapidly and strongly implemented from a visual cue, whereas templates are less effective when based on categorical cues. Direct visual priming from cue to target may underlie this difference. In two experiments we first asked observers to remember two possible target colors. A postcue then indicated which of the two would be the relevant color. The task was to locate a briefly presented and masked target of the cued color among irrelevant distractor items. Experiment 1 showed that overall search accuracy improved more rapidly on the basis of a direct visual postcue that carried the target color, compared to a neutral postcue that pointed to the memorized color. However, selectivity toward the target feature, i.e., the extent to which observers searched selectively among items of the cued vs. uncued color, was found to be relatively unaffected by the presence of the visual signal. In Experiment 2 we compared search that was based on either visual or categorical information, but now controlled for direct visual priming. This resulted in no differences in either overall performance or selectivity. Altogether the results suggest that perceptual processing of visual search targets is facilitated by priming from visual cues, whereas attentional selectivity is enhanced by a working memory template that can be formed from both visual and categorical input. Furthermore, if priming is controlled for, categorical and visual templates similarly enhance search guidance. PMID:24605105

Wilschut, Anna; Theeuwes, Jan; Olivers, Christian N. L.

2014-01-01

117

Asynchronous parallel pattern search for nonlinear optimization  

SciTech Connect

Parallel pattern search (PPS) can be quite useful for engineering optimization problems characterized by a small number of variables (say 10--50) and by expensive objective function evaluations, such as complex simulations that take from minutes to hours to run. However, PPS, which was originally designed for execution on homogeneous and tightly-coupled parallel machines, is not well suited to the more heterogeneous, loosely-coupled, and even fault-prone parallel systems available today. Specifically, PPS is hindered by synchronization penalties and cannot recover in the event of a failure. The authors introduce a new asynchronous and fault-tolerant parallel pattern search (APPS) method and demonstrate its effectiveness on both simple test problems and some engineering optimization problems.
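For reference, the basic (synchronous) pattern search that the paper builds on can be written in a few lines: evaluate the objective at the current point displaced along each coordinate direction, move to the first improving point, and contract the step when no direction improves. The sketch below is only this baseline compass search on a toy objective, not the asynchronous, fault-tolerant APPS method.

import numpy as np

def pattern_search(f, x0, step=1.0, tol=1e-6, max_iter=10_000):
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    directions = np.vstack([np.eye(len(x)), -np.eye(len(x))])   # +/- each axis
    for _ in range(max_iter):
        improved = False
        for d in directions:
            trial = x + step * d
            ft = f(trial)
            if ft < fx:                  # accept the first improving point
                x, fx, improved = trial, ft, True
                break
        if not improved:
            step *= 0.5                  # contract the pattern
            if step < tol:
                break
    return x, fx

rosenbrock = lambda v: (1 - v[0]) ** 2 + 100 * (v[1] - v[0] ** 2) ** 2
print(pattern_search(rosenbrock, [-1.2, 1.0]))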

P. D. Hough; T. G. Kolda; V. J. Torczon

2000-01-01

118

Aurally aided visual search performance in a dynamic environment  

NASA Astrophysics Data System (ADS)

Previous research has repeatedly shown that people can find a visual target significantly faster if spatial (3D) auditory displays direct attention to the corresponding spatial location. However, previous research has only examined searches for static (non-moving) targets in static visual environments. Since motion has been shown to affect visual acuity, auditory acuity, and visual search performance, it is important to characterize aurally-aided search performance in environments that contain dynamic (moving) stimuli. In the present study, visual search performance in both static and dynamic environments is investigated with and without 3D auditory cues. Eight participants searched for a single visual target hidden among 15 distracting stimuli. In the baseline audio condition, no auditory cues were provided. In the 3D audio condition, a virtual 3D sound cue originated from the same spatial location as the target. In the static search condition, the target and distractors did not move. In the dynamic search condition, all stimuli moved on various trajectories at 10 deg/s. The results showed a clear benefit of 3D audio that was present in both static and dynamic environments, suggesting that spatial auditory displays continue to be an attractive option for a variety of aircraft, motor vehicle, and command & control applications.

McIntire, John P.; Havig, Paul R.; Watamaniuk, Scott N. J.; Gilkey, Robert H.

2008-04-01

119

Learning to see: patterned visual activity and the development of visual  

E-print Network

The visual system adapts to the world by dynamically adapting neural circuit function to ongoing changes in brain circuitry and sensory input, ultimately leading to system-wide functional adaptations. The visual system can adapt rapidly to changes in the visual environment to maintain stable function.

Ruthazer, Edward

120

Visual search for a target changing in synchrony with an auditory signal  

E-print Network

Visual search for a target changing in synchrony with an auditory signal. Waka Fujisaki and colleagues. Using a visual search paradigm, we found that detection of a visual target that changed in synchrony with an auditory signal relies on a matching process. Keywords: cross-modal perception; visual search; audio-visual synchrony.

Johnston, Alan

121

The effects of task difficulty on visual search strategy in virtual 3D displays  

E-print Network

The effects of task difficulty on visual search strategy in virtual 3D displays. Marc Pomplun and colleagues. How task difficulty influences our choice of visual search strategy may shed light on visual behavior in everyday situations. A search task was used to study visual search strategies in stereoscopic search displays with virtual depth induced...

Carrasco, Marisa

122

Image pattern recognition supporting interactive analysis and graphical visualization  

NASA Technical Reports Server (NTRS)

Image Pattern Recognition attempts to infer properties of the world from image data. Such capabilities are crucial for making measurements from satellite or telescope images related to Earth and space science problems. Such measurements can be the required product itself, or the measurements can be used as input to a computer graphics system for visualization purposes. At present, the field of image pattern recognition lacks a unified scientific structure for developing and evaluating image pattern recognition applications. The overall goal of this project is to begin developing such a structure. This report summarizes results of a 3-year research effort in image pattern recognition addressing the following three principal aims: (1) to create a software foundation for the research and identify image pattern recognition problems in Earth and space science; (2) to develop image measurement operations based on Artificial Visual Systems; and (3) to develop multiscale image descriptions for use in interactive image analysis.

Coggins, James M.

1992-01-01

123

Searching for intellectual turning points: Progressive knowledge domain visualization  

E-print Network

Searching for intellectual turning points: Progressive knowledge domain visualization. The primary goal of knowledge domain visualization (KDViz) is to detect articles with significant contributions as a domain advances, simplifying the task to a search for landmarks, pivots, and hubs. Many aspects of a scientific field can be represented...

Indiana University

124

Visual Search by Children with and without ADHD  

ERIC Educational Resources Information Center

Objective: To summarize the literature that has employed visual search tasks to assess automatic and effortful selective visual attention in children with and without ADHD. Method: Seven studies with a combined sample of 180 children with ADHD (M age = 10.9) and 193 normally developing children (M age = 10.8) are located. Results: Using a…

Mullane, Jennifer C.; Klein, Raymond M.

2008-01-01

125

Conjunctive Visual Search in Individuals with and without Mental Retardation  

ERIC Educational Resources Information Center

A comprehensive understanding of the basic visual and cognitive abilities of individuals with mental retardation is critical for understanding the basis of mental retardation and for the design of remediation programs. We assessed visual search abilities in individuals with mild mental retardation and in MA- and CA-matched comparison groups. Our…

Carlin, Michael; Chrysler, Christina; Sullivan, Kate

2007-01-01

126

Sequential pattern data mining and visualization  

DOEpatents

One or more processors (22) are operated to extract a number of different event identifiers therefrom. These processors (22) are further operable to determine a number of display locations, each representative of one of the different identifiers and a corresponding time. The display locations are grouped into sets each corresponding to a different one of several event sequences (330a, 330b, 330c, 330d, 330e). An output is generated corresponding to a visualization (320) of the event sequences (330a, 330b, 330c, 330d, 330e).
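The display described here (one colored rectangle per event, positioned by time and sequence) is easy to sketch. The snippet below is an illustrative rendering on invented data, not the patented system: grouping logic, gap alignment and stacking of co-occurring events are omitted.

import matplotlib.pyplot as plt
import matplotlib.patches as mpatches

# Hypothetical event sequences: lists of (time, event_identifier) pairs.
sequences = {
    "sequence 1": [(1, "login"), (2, "search"), (5, "purchase")],
    "sequence 2": [(1, "login"), (3, "search"), (4, "logout")],
}
colors = {"login": "tab:blue", "search": "tab:orange",
          "purchase": "tab:green", "logout": "tab:red"}

fig, ax = plt.subplots()
for row, (name, events) in enumerate(sequences.items()):
    for t, ident in events:
        # One small rectangle per event, colored by its identifier.
        ax.add_patch(mpatches.Rectangle((t, row), 0.8, 0.8, color=colors[ident]))
ax.set_xlim(0, 7)
ax.set_ylim(-0.5, len(sequences))
ax.set_yticks([r + 0.4 for r in range(len(sequences))])
ax.set_yticklabels(list(sequences))
ax.set_xlabel("time")
plt.show()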

Wong, Pak Chung (Richland, WA); Jurrus, Elizabeth R. (Kennewick, WA); Cowley, Wendy E. (Benton City, WA); Foote, Harlan P. (Richland, WA); Thomas, James J. (Richland, WA)

2009-05-26

127

Sequential pattern data mining and visualization  

DOEpatents

One or more processors (22) are operated to extract a number of different event identifiers therefrom. These processors (22) are further operable to determine a number of display locations, each representative of one of the different identifiers and a corresponding time. The display locations are grouped into sets each corresponding to a different one of several event sequences (330a, 330b, 330c, 330d, 330e). An output is generated corresponding to a visualization (320) of the event sequences (330a, 330b, 330c, 330d, 330e).

Wong, Pak Chung (Richland, WA); Jurrus, Elizabeth R. (Kennewick, WA); Cowley, Wendy E. (Benton City, WA); Foote, Harlan P. (Richland, WA); Thomas, James J. (Richland, WA)

2011-12-06

128

Visual similarity effects in categorical search. Robert G. Alexander, Department of Psychology, Stony Brook University, USA

E-print Network

Visual similarity effects in categorical search. Robert G. Alexander (Department of Psychology, Stony Brook University, USA) and Gregory J. Zelinsky (Stony Brook University, USA). We asked how visual similarity relationships affect search guidance to categorically defined targets (no visual preview). Experiment 1 used a web-based task to collect visual similarity rankings between these target classes and random objects, from which we created...

Zelinsky, Greg

129

Finding what is new in hybrid visual and memory search: a new search asymmetry. Corbin Cunningham

E-print Network

Finding what is new in hybrid visual and memory search: a new search asymmetry. Corbin Cunningham (Brigham and Women's Hospital) and Jeremy Wolfe (Brigham and Women's Hospital, Harvard Medical School). Tasks in which observers look for any of multiple targets held in memory (for example, in a cafeteria) are "hybrid" visual and memory searches. Wolfe (2012) found that RTs in hybrid tasks...

130

Visual Search in a Multi-Element Asynchronous Dynamic (MAD) World  

ERIC Educational Resources Information Center

In visual search tasks participants search for a target among distractors in strictly controlled displays. We show that visual search principles observed in these tasks do not necessarily apply in more ecologically valid search conditions, using dynamic and complex displays. A multi-element asynchronous dynamic (MAD) visual search was developed in…

Kunar, Melina A.; Watson, Derrick G.

2011-01-01

131

Bottom-Up Guidance in Visual Search for Conjunctions  

ERIC Educational Resources Information Center

Understanding the relative role of top-down and bottom-up guidance is crucial for models of visual search. Previous studies have addressed the role of top-down and bottom-up processes in search for a conjunction of features but with inconsistent results. Here, the author used an attentional capture method to address the role of top-down and…

Proulx, Michael J.

2007-01-01

132

Changing Perspective: Zooming in and out during Visual Search  

ERIC Educational Resources Information Center

Laboratory studies of visual search are generally conducted in contexts with a static observer vantage point, constrained by a fixation cross or a headrest. In contrast, in many naturalistic search settings, observers freely adjust their vantage point by physically moving through space. In two experiments, we evaluate behavior during free vantage…

Solman, Grayden J. F.; Cheyne, J. Allan; Smilek, Daniel

2013-01-01

133

The Journal of Neuroscience, February 1994, 14(2): 554-567. Visual Search among Items of Different Salience: Removal of Visual

E-print Network

The Journal of Neuroscience, February 1994, 14(2): 554-567. Visual Search among Items of Different Salience. Search for the most salient item and search for the least salient item in a display are different kinds of visual tasks. As a result, the two types of visual search presented comparable perceptual difficulty...

Braun, Jochen

134

An initial search for visual overshadowing.  

PubMed

A consistent, albeit fragile, finding over the last couple of decades has been that verbalization of hard-to-verbalize stimuli, such as faces, interferes with subsequent recognition of the described target stimulus. We sought to elicit a similar phenomenon whereby visualization interferes with verbal recognition--that is, visual overshadowing. We randomly assigned participants (n = 180) to either concrete (easy to visualize) or abstract (difficult to visualize) sentence conditions. Following presentation, participants were asked to verbalize the sentence, visualize the sentence, or work on a filler task. As predicted, visualization of an abstract verbal stimulus resulted in significantly lower recognition accuracy; unexpectedly, however, so did verbalization. The findings are discussed within the framework of fuzzy-trace theory. PMID:22502741

Harris, Kevin R; Paul, Stephen T; Adams-Price, Carolyn E

2012-01-01

135

The Serial Process in Visual Search  

ERIC Educational Resources Information Center

The conditions for serial search are described. A multiple target search methodology (Thornton & Gilden, 2007) is used to home in on the simplest target/distractor contrast that effectively mandates a serial scheduling of attentional resources. It is found that serial search is required when (a) targets and distractors are mirror twins, and (b)…

Gilden, David L.; Thornton, Thomas L.; Marusich, Laura R.

2010-01-01

136

Design and Implementation of Cancellation Tasks for Visual Search Strategies and Visual Attention in School Children  

ERIC Educational Resources Information Center

We propose a computer-assisted cancellation test system (CACTS) to understand the visual attention performance and visual search strategies in school children. The main aim of this paper is to present our design and development of the CACTS and demonstrate some ways in which computer techniques can allow the educator not only to obtain more…

Wang, Tsui-Ying; Huang, Ho-Chuan; Huang, Hsiu-Shuang

2006-01-01

137

In Visual Search, Do Average Features of a Scene Guide Attention?

E-print Network

In Visual Search, Do Average Features of a Scene Guide Attention? Brigham and Women's Hospital; Harvard Medical School. Finding: average features of a scene lead to modestly faster RTs when the background and the instructions are made explicit.

138

Recognizing patterns of visual field loss using unsupervised machine learning  

PubMed Central

Glaucoma is a potentially blinding optic neuropathy that results in a decrease in visual sensitivity. Visual field abnormalities (decreased visual sensitivity on psychophysical tests) are the primary means of glaucoma diagnosis. One form of visual field testing is Frequency Doubling Technology (FDT) that tests sensitivity at 52 points within the visual field. Like other psychophysical tests used in clinical practice, FDT results yield specific patterns of defect indicative of the disease. We used Gaussian Mixture Model with Expectation Maximization (GEM), (EM is used to estimate the model parameters) to automatically separate FDT data into clusters of normal and abnormal eyes. Principal component analysis (PCA) was used to decompose each cluster into different axes (patterns). FDT measurements were obtained from 1,190 eyes with normal FDT results and 786 eyes with abnormal (i.e., glaucomatous) FDT results, recruited from a university-based, longitudinal, multi-center, clinical study on glaucoma. The GEM input was the 52-point FDT threshold sensitivities for all eyes. The optimal GEM model separated the FDT fields into 3 clusters. Cluster 1 contained 94% normal fields (94% specificity) and clusters 2 and 3 combined, contained 77% abnormal fields (77% sensitivity). For clusters 1, 2 and 3 the optimal number of PCA-identified axes were 2, 2 and 5, respectively. GEM with PCA successfully separated FDT fields from healthy and glaucoma eyes and identified familiar glaucomatous patterns of loss.
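The pipeline described above (Gaussian mixture fit by EM, then PCA within each cluster) maps directly onto standard library calls. The sketch below runs on simulated 52-point sensitivity vectors, fixes the number of clusters at three and the number of axes at two rather than optimizing them, and is not the study's code.

import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
normal_fields = rng.normal(30, 2, size=(1190, 52))     # simulated dB values
abnormal_fields = rng.normal(22, 6, size=(786, 52))
fields = np.vstack([normal_fields, abnormal_fields])

# EM-fitted Gaussian mixture assigns each field to a cluster.
gem = GaussianMixture(n_components=3, covariance_type="full",
                      random_state=0).fit(fields)
labels = gem.predict(fields)

# PCA within each cluster extracts its dominant patterns ("axes").
for c in range(3):
    cluster = fields[labels == c]
    axes = PCA(n_components=2).fit(cluster)
    print(f"cluster {c}: n={len(cluster)}, "
          f"explained variance={axes.explained_variance_ratio_.round(2)}")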

Yousefi, Siamak; Goldbaum, Michael H.; Zangwill, Linda M.; Medeiros, Felipe A.; Bowd, Christopher

2014-01-01

139

How priming in visual search affects response time distributions: Analyses with ex-Gaussian fits.  

PubMed

Although response times (RTs) are the dependent measure of choice in the majority of studies of visual attention, changes in RTs can be hard to interpret. First, they are inherently ambiguous, since they may reflect a change in the central tendency or skew (or both) of a distribution. Second, RT measures may lack sensitivity, since meaningful changes in RT patterns may not be picked up if they reflect two or more processes having opposing influences on mean RTs. Here we describe RT distributions for repetition priming in visual search, fitting ex-Gaussian functions to RT distributions. We focus here on feature and conjunction search tasks, since priming effects in these tasks are often thought to reflect similar mechanisms. As expected, both tasks resulted in strong priming effects when target and distractor identities repeated, but a large difference between feature and conjunction search was also seen, in that the σ parameter (reflecting the standard deviation of the Gaussian component) was far more affected by search repetition in conjunction than in feature search. Although caution should clearly be used when particular parameter estimates are matched to specific functions or processes, our results suggest that analyses of RT distributions can inform theoretical accounts of priming in visual search tasks, in this case showing quite different repetition effects for the two differing search types, suggesting that priming in the two paradigms partly reflects different mechanisms. PMID:25073610
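An ex-Gaussian can be fitted to a set of response times with scipy's exponnorm distribution, whose shape parameter K equals tau/sigma. The sketch below fits simulated RTs and reports the conventional mu, sigma and tau; it is only an illustration of the fitting step, not the analysis from the study.

import numpy as np
from scipy.stats import exponnorm

rng = np.random.default_rng(0)
# Simulated RTs in ms: a Gaussian component plus an exponential tail.
rts = rng.normal(450, 60, size=2000) + rng.exponential(120, size=2000)

K, loc, scale = exponnorm.fit(rts)
mu, sigma, tau = loc, scale, K * scale    # scipy's K equals tau / sigma
print(f"mu={mu:.0f} ms, sigma={sigma:.0f} ms, tau={tau:.0f} ms")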

Kristjánsson, Arni; Jóhannesson, Omar I

2014-11-01

140

Flow pattern visualization of a simulated digester.  

PubMed

Mixing patterns inside a simulated flat bottom digester were imaged using the non-invasive techniques of computer automated radioactive particle tracking (CARPT) and computed tomography (CT). Mixing/agitation was provided using gas (air) recirculation at three different flow rates (Q(g)) of 28.32, 56.64 and 84.96 l/h, corresponding to superficial gas velocities of 0.025, 0.05 and 0.075 cm/s, respectively. Better mixing was observed in the upper zone near the top of the draft tube. However, at the bottom of the digester there was a total stagnancy at all the three gas flow rates. The maximum value of the time-averaged axial velocity inside the draft tube, at a gas flow rate of 84.96 l/h, was observed as 34.4 cm/s. The turbulent kinetic energy was observed to be maximum (724 dyn/cm(2)) inside the draft tube, and decreases radially toward the wall of the digester. The present study showed that the CARPT and CT techniques could be successfully used to identify the flow pattern in the digester and to calculate velocity and turbulence parameters quantitatively. On the other hand, the increase in gas circulation rate from 28.32 to 84.96 l/h did not significantly reduce the dead zones inside the flat bottom digester. To achieve the desired mixing and reactor performance, the operating conditions and reactor configuration need to be optimized. PMID:15350417

Karim, Khursheed; Varma, Rajneesh; Vesvikar, Mehul; al-Dahhan, M H

2004-10-01

141

An Energy Effective SIMD Accelerator for Visual Pattern Matching  

E-print Network

An Energy Effective SIMD Accelerator for Visual Pattern Matching. Calin Bira, Liviu Gugu, and co-authors. The design is based on a Single Instruction Multiple Data (SIMD) accelerator architecture. It consists of an array of efficient processing elements (PEs) which are fed instructions...

Kuzmanov, Georgi

142

Fatigue and Structural Change: Two Consequences of Visual Pattern Adaptation  

E-print Network

Fatigue and Structural Change: Two Consequences of Visual Pattern Adaptation. Jeremy M. Wolfe. Two consequences of visual pattern adaptation are distinguished: (1) short-term fatigue, produced very quickly, and (2) long-term structural change, requiring more extended adaptation. Fatigue reflects reductions in the sensitivity of the mechanisms detecting the stimulus; adaptation fatigues the mechanism...

143

Visual Search (review). Jeremy M. Wolfe. http://search.bwh.harvard.edu/RECENT%20PROJECTS/visual_search_review/Review.html

E-print Network

Visual Search. Jeremy M. Wolfe. Originally published in Attention, H. Pashler (Ed.), London, UK. What defines a basic feature in visual search? Basic features in visual search: color...

144

Impact of Simulated Central Scotomas on Visual Search in Natural Scenes  

PubMed Central

Purpose: In performing search tasks, the visual system encodes information across the visual field at a resolution inversely related to eccentricity and deploys saccades to place visually interesting targets upon the fovea, where resolution is highest. The serial process of fixation, punctuated by saccadic eye movements, continues until the desired target has been located. Loss of central vision restricts the ability to resolve the high spatial information of a target, interfering with this visual search process. We investigate oculomotor adaptations to central visual field loss with gaze-contingent artificial scotomas. Methods: Spatial distortions were placed at random locations in 25° square natural scenes. Gaze-contingent artificial central scotomas were updated at the screen rate (75 Hz) based on a 250 Hz eyetracker. Eight subjects searched the natural scene for the spatial distortion and indicated its location using a mouse-controlled cursor. Results: As the central scotoma size increased, the mean search time increased [F(3,28) = 5.27, p = .05] and the spatial distribution of gaze points during fixation increased significantly along the x [F(3,28) = 6.33, p = .002] and y [F(3,28) = 3.32, p = .034] axes. Oculomotor patterns of fixation duration, saccade size and saccade duration did not change significantly, regardless of scotoma size. Conclusions: There is limited automatic adaptation of the oculomotor system following simulated central vision loss. PMID:22885785

McIlreavy, Lee; Fiser, Jozsef; Bex, Peter J.

2012-01-01

145

A working memory account of refixations in visual search.  

PubMed

We tested the hypothesis that active exploration of the visual environment is mediated not only by visual attention but also by visual working memory (VWM) by examining performance in both a visual search and a change detection task. Subjects rarely fixated previously examined distracters during visual search, suggesting that they successfully retained those items. Change detection accuracy decreased with increasing set size, suggesting that subjects had a limited VWM capacity. Crucially, performance in the change detection task predicted visual search efficiency: Higher VWM capacity was associated with faster and more accurate responses as well as lower probabilities of refixation. We found no temporal delay for return saccades, suggesting that active vision is primarily mediated by VWM rather than by a separate attentional disengagement mechanism commonly associated with the inhibition-of-return (IOR) effect. Taken together with evidence that visual attention, VWM, and the oculomotor system involve overlapping neural networks, these data suggest that there exists a general capacity for cognitive processing. PMID:25527149

Shen, Kelly; McIntosh, Anthony R; Ryan, Jennifer D

2014-01-01

146

Towards Accurate and Practical Predictive Models of Active-Vision-Based Visual Search  

E-print Network

Towards Accurate and Practical Predictive Models of Active-Vision-Based Visual Search. David E. Kieras and Anthony Hornof. Predictive models of tasks such as icon search are improved by incorporating an "active vision" approach which emphasizes eye movements. A model of a classic visual search task demonstrates the value of incorporating visual acuity functions into models...

Hornof, Anthony

147

Textual Difference Visualization of Multiple Search Results utilizing Detail in Context  

E-print Network

Textual Difference Visualization of Multiple Search Results utilizing Detail in Context. We present a comparison visualization and describe a prototype search-engine similarity tool (SES), which visualizes the textual difference of multiple web searches using a combination of multiple views and visual bracketing.

Kent, University of

148

Top-down search strategies determine attentional capture in visual search: behavioral and electrophysiological evidence.  

PubMed

To investigate how attentional capture in visual search is affected by generalized top-down search strategies, ERPs and behavioral performance were measured in two experiments where spatially nonpredictive color singleton cues preceded visual search arrays that contained one of two equally likely color singletons. When both singletons served as targets, irrelevant-color singleton cues produced behavioral attentional capture effects and elicited an N2pc component, indicative of a singleton search mode. When responses were required to only one of the two color singletons, the same cues no longer elicited behavioral spatial cuing effects, and the N2pc to these cues was attenuated and delayed, in line with the hypothesis that search was now guided by a feature-specific search strategy. Results demonstrate that the ability of visual singleton stimuli to capture attention is not simply determined by their bottom-up salience, but strongly modulated by top-down task sets. PMID:20436192

Eimer, Martin; Kiss, Monika

2010-05-01

149

Spacing affects some but not all visual searches: Implications for theories of attention and crowding  

E-print Network

Spacing affects some but not all visual searches: Implications for theories of attention of crowding. The observed spacing effect in visual search suggests that for certain tasks, serial search may the possible relations between this spacing effect in visual search and other forms of crowding. Keywords

VanRullen, Rufin

150

Perceptual load corresponds with factors known to influence visual search  

PubMed Central

One account of the early versus late selection debate in attention proposes that perceptual load determines the locus of selection. Attention selects stimuli at a late processing level under low-load conditions but selects stimuli at an early level under high-load conditions. Despite the successes of perceptual load theory, a non-circular definition of perceptual load remains elusive. We investigated the factors that influence perceptual load by using manipulations that have been studied extensively in visual search, namely target-distractor similarity and distractor-distractor similarity. Consistent with previous work, search was most efficient when targets and distractors were dissimilar and the displays contained homogeneous distractors; search became less efficient when target-distractor similarity increased irrespective of display heterogeneity. Importantly, we used these same stimuli in a typical perceptual load task that measured attentional spill-over to a task-irrelevant flanker. We found a strong correspondence between search efficiency and perceptual load; stimuli that generated efficient searches produced flanker interference effects, suggesting that such displays involved low perceptual load. Flanker interference effects were reduced in displays that produced less efficient searches. Furthermore, our results demonstrate that search difficulty, as measured by search intercept, has little bearing on perceptual load. These results suggest that perceptual load might be defined in part by well-characterized, continuous factors that influence visual search. PMID:23398258
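
Search efficiency in studies like this one is conventionally summarized by regressing response time on display set size (the slope indexes efficiency, the intercept baseline difficulty); the snippet below shows that calculation with invented numbers, not the authors' data.

    # Search slope and intercept from hypothetical mean response times.
    import numpy as np

    set_sizes = np.array([4, 8, 12, 16])
    mean_rt_ms = np.array([620, 700, 790, 870])   # invented values

    slope, intercept = np.polyfit(set_sizes, mean_rt_ms, 1)
    print(f"search slope: {slope:.1f} ms/item, intercept: {intercept:.0f} ms")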

Roper, Zachary J. J.; Cosman, Joshua D.; Vecera, Shaun P.

2014-01-01

151

Appears in ASIST 2004, November 13-18, 2004, Providence, RI, USA. Visual Search Editor for Composing Meta Searches  

E-print Network

Appears in ASIST 2004, November 13-18, 2004, Providence, RI, USA. Visual Search Editor: aspoerri@scils.rutgers.edu. MetaCrystal is a visual tool for creating and editing meta search queries. Users can visually combine the top results retrieved by different search engines to create crystals

Spoerri, Anselm

152

The effect of a visual indicator on rate of visual search Evidence for processing control  

NASA Technical Reports Server (NTRS)

Search rates were estimated from response latencies in a visual search task of the type used by Atkinson et al. (1969), in which a subject searches a small set of letters to determine the presence or absence of a predesignated target. Half of the visual displays contained a marker above one of the letters. The marked letter was the only one that had to be checked to determine whether or not the display contained the target. The presence of a marker in a display significantly increased the estimated rate of search, but the data clearly indicated that subjects did not restrict processing to the marked item. Letters in the vicinity of the marker were also processed. These results were interpreted as showing that subjects are able to exercise some degree of control over the search process in this type of task.

Holmgren, J. E.

1974-01-01

153

Oculomotor correlates of context-guided learning in visual search  

Microsoft Academic Search

Previous studies have shown that context-facilitated visual search can occur through implicit learning. In the present study, we have explored its oculomotor correlates as a step toward unraveling the mechanisms that underlie such learning. Specifically, we examined a number of oculomotor parameters that might accompany the learning of context-guided search. The results showed that a decrease in the number of

Yuan-Chi Tseng; Chiang-Shan Ray Li

2004-01-01

154

Visual Search and the Collapse of Categorization  

ERIC Educational Resources Information Center

Categorization researchers typically present single objects to be categorized. But real-world categorization often involves object recognition within complex scenes. It is unknown how the processes of categorization stand up to visual complexity or why they fail facing it. The authors filled this research gap by blending the categorization and…

Smith, J. David; Redford, Joshua S.; Gent, Lauren C.; Washburn, David A.

2005-01-01

155

Visual Exploratory Search of Relationship Graphs on Smartphones  

PubMed Central

This paper presents a novel framework for Visual Exploratory Search of Relationship Graphs on Smartphones (VESRGS) that is composed of three major components: inference and representation of semantic relationship graphs on the Web via meta-search, visual exploratory search of relationship graphs through both querying and browsing strategies, and human-computer interactions via the multi-touch interface and mobile Internet on smartphones. In comparison with traditional lookup search methodologies, the proposed VESRGS system is characterized with the following perceived advantages. 1) It infers rich semantic relationships between the querying keywords and other related concepts from large-scale meta-search results from Google, Yahoo! and Bing search engines, and represents semantic relationships via graphs; 2) the exploratory search approach empowers users to naturally and effectively explore, adventure and discover knowledge in a rich information world of interlinked relationship graphs in a personalized fashion; 3) it effectively takes the advantages of smartphones’ user-friendly interfaces and ubiquitous Internet connection and portability. Our extensive experimental results have demonstrated that the VESRGS framework can significantly improve the users’ capability of seeking the most relevant relationship information to their own specific needs. We envision that the VESRGS framework can be a starting point for future exploration of novel, effective search strategies in the mobile Internet era. PMID:24223936

Ouyang, Jianquan; Zheng, Hao; Kong, Fanbin; Liu, Tianming

2013-01-01

156

IEEE TRANSACTIONS ON IMAGE PROCESSING 1 Foveated Visual Search for Corners  

E-print Network

IEEE TRANSACTIONS ON IMAGE PROCESSING 1 Foveated Visual Search for Corners Thomas Arnow, Member search process. We develop principles of foveated visual search and automated fixation selection to accomplish the corner search, supplying a case study of both foveated search and foveated feature detection

Texas at Austin, University of

157

Crowded visual search in children with normal vision and children with visual impairment.  

PubMed

This study investigates the influence of oculomotor control, crowding, and attentional factors on visual search in children with normal vision ([NV], n=11), children with visual impairment without nystagmus ([VI-nys], n=11), and children with VI with accompanying nystagmus ([VI+nys], n=26). Exclusion criteria for children with VI were: multiple impairments and visual acuity poorer than 20/400 or better than 20/50. Three search conditions were presented: a row with homogeneous distractors, a matrix with homogeneous distractors, and a matrix with heterogeneous distractors. Element spacing was manipulated in 5 steps from 2 to 32 minutes of arc. Symbols were sized 2 times the threshold acuity to guarantee visibility for the VI groups. During simple row and matrix search with homogeneous distractors children in the VI+nys group were less accurate than children with NV at smaller spacings. Group differences were even more pronounced during matrix search with heterogeneous distractors. Search times were longer in children with VI compared to children with NV. The more extended impairments during serial search reveal greater dependence on oculomotor control during serial compared to parallel search. PMID:24456806

Huurneman, Bianca; Cox, Ralf F A; Vlaskamp, Björn N S; Boonstra, F Nienke

2014-03-01

158

Pattern Visual Evoked Potentials Elicited by Organic Electroluminescence Screen  

PubMed Central

Purpose. To determine whether organic electroluminescence (OLED) screens can be used as visual stimulators to elicit pattern-reversal visual evoked potentials (p-VEPs). Method. Checkerboard patterns were generated on a conventional cathode-ray tube (S710, Compaq Computer Co., USA) screen and on an OLED (17 inches, 320 × 230 mm, PVM-1741, Sony, Tokyo, Japan) screen. The time course of the luminance changes of each monitor was measured with a photodiode. The p-VEPs elicited by these two screens were recorded from 15 eyes of 9 healthy volunteers (22.0 ± 0.8 years). Results. The OLED screen had a constant time delay from the onset of the trigger signal to the start of the luminescence change. The delay during the reversal phase from black to white for the pattern was 1.0 msec on the cathode-ray tube (CRT) screen and 0.5 msec on the OLED screen. No significant differences in the amplitudes of P100 and the implicit times of N75 and P100 were observed in the p-VEPs elicited by the CRT and the OLED screens. Conclusion. The OLED screen can be used as a visual stimulator to elicit p-VEPs; however the time delay and the specific properties in the luminance change must be taken into account. PMID:25197652

Matsumoto, Celso Soiti; Shinoda, Kei; Matsumoto, Harue; Funada, Hideaki; Minoda, Haruka

2014-01-01

159

Bumblebee visual search for multiple learned target types.  

PubMed

Visual search is well studied in human psychology, but we know comparatively little about similar capacities in non-human animals. It is sometimes assumed that animal visual search is restricted to a single target at a time. In bees, for example, this limitation has been evoked to explain flower constancy, the tendency of bees to specialise on a single flower type. Few studies, however, have investigated bee visual search for multiple target types after extended learning and controlling for prior visual experience. We trained colour-naive bumblebees (Bombus terrestris) extensively in separate discrimination tasks to recognise two rewarding colours in interspersed block training sessions. We then tested them with the two colours simultaneously in the presence of distracting colours to examine whether and how quickly they were able to switch between the target colours. We found that bees switched between visual targets quickly and often. The median time taken to switch between targets was shorter than known estimates of how long traces last in bees' working memory, suggesting that their capacity to recall more than one learned target was not restricted by working memory limitations. Following our results, we propose a model of memory and learning that integrates our findings with those of previous studies investigating flower constancy. PMID:23948481

Nityananda, Vivek; Pattrick, Jonathan G

2013-11-15

160

Accurate expectancies diminish perceptual distraction during visual search  

PubMed Central

The load theory of visual attention proposes that efficient selective perceptual processing of task-relevant information during search is determined automatically by the perceptual demands of the display. If the perceptual demands required to process task-relevant information are not enough to consume all available capacity, then the remaining capacity automatically and exhaustively “spills-over” to task-irrelevant information. The spill-over of perceptual processing capacity increases the likelihood that task-irrelevant information will impair performance. In two visual search experiments, we tested the automaticity of the allocation of perceptual processing resources by measuring the extent to which the processing of task-irrelevant distracting stimuli was modulated by both perceptual load and top-down expectations using behavior, functional magnetic resonance imaging, and electrophysiology. Expectations were generated using a trial-by-trial cue that provided information about the likely load of the upcoming visual search task. When the cues were valid, behavioral interference was eliminated and the influence of load on frontoparietal and visual cortical responses was attenuated relative to when the cues were invalid. In conditions in which task-irrelevant information interfered with performance and modulated visual activity, individual differences in mean blood oxygenation level dependent responses measured from the left intraparietal sulcus were negatively correlated with individual differences in the severity of distraction. These results are consistent with the interpretation that a top-down biasing mechanism interacts with perceptual load to support filtering of task-irrelevant information. PMID:24904374

Sy, Jocelyn L.; Guerin, Scott A.; Stegman, Anna; Giesbrecht, Barry

2014-01-01

161

Operator Choice Modeling for Collaborative UAV Visual Search Tasks  

E-print Network

Operator Choice Modeling for Collaborative UAV Visual Search Tasks Luca F. Bertuccelli Member, IEEE, and Mary L. Cummings Senior Member, IEEE Abstract--Unmanned Aerial Vehicles (UAVs) provide unprece- dented is expected to increase with envisaged future missions of one operator controlling mul- tiple UAVs

Cummings, Mary "Missy"

162

Attention Capacity and Task Difficulty in Visual Search  

ERIC Educational Resources Information Center

When a visual search task is very difficult (as when a small feature difference defines the target), even detection of a unique element may be substantially slowed by increases in display set size. This has been attributed to the influence of attentional capacity limits. We examined the influence of attentional capacity limits on three kinds of…

Huang, Liqiang; Pashler, Harold

2005-01-01

163

Enhancing Visual Search Abilities of People with Intellectual Disabilities  

ERIC Educational Resources Information Center

This study aimed to evaluate the effects of cueing in visual search paradigm for people with and without intellectual disabilities (ID). A total of 36 subjects (18 persons with ID and 18 persons with normal intelligence) were recruited using convenient sampling method. A series of experiments were conducted to compare guided cue strategies using…

Li-Tsang, Cecilia W. P.; Wong, Jackson K. K.

2009-01-01

164

Measuring Search Efficiency in Complex Visual Search Tasks: Global and Local Clutter  

ERIC Educational Resources Information Center

Set size and crowding affect search efficiency by limiting attention for recognition and attention against competition; however, these factors can be difficult to quantify in complex search tasks. The current experiments use a quantitative measure of the amount and variability of visual information (i.e., clutter) in highly complex stimuli (i.e.,…

Beck, Melissa R.; Lohrenz, Maura C.; Trafton, J. Gregory

2010-01-01

165

Rapid Resumption of Interrupted Search Is Independent of Age-Related Improvements in Visual Search  

ERIC Educational Resources Information Center

In this study, 7-19-year-olds performed an interrupted visual search task in two experiments. Our question was whether the tendency to respond within 500 ms after a second glimpse of a display (the "rapid resumption" effect ["Psychological Science", 16 (2005) 684-688]) would increase with age in the same way as overall search efficiency. The…

Lleras, Alejandro; Porporino, Mafalda; Burack, Jacob A.; Enns, James T.

2011-01-01

166

Attention Modulates Visual-Tactile Interaction in Spatial Pattern Matching  

PubMed Central

Factors influencing crossmodal interactions are manifold and operate in a stimulus-driven, bottom-up fashion, as well as via top-down control. Here, we evaluate the interplay of stimulus congruence and attention in a visual-tactile task. To this end, we used a matching paradigm requiring the identification of spatial patterns that were concurrently presented visually on a computer screen and haptically to the fingertips by means of a Braille stimulator. Stimulation in our paradigm was always bimodal with only the allocation of attention being manipulated between conditions. In separate blocks of the experiment, participants were instructed to (a) focus on a single modality to detect a specific target pattern, (b) pay attention to both modalities to detect a specific target pattern, or (c) to explicitly evaluate if the patterns in both modalities were congruent or not. For visual as well as tactile targets, congruent stimulus pairs led to quicker and more accurate detection compared to incongruent stimulation. This congruence facilitation effect was more prominent under divided attention. Incongruent stimulation led to behavioral decrements under divided attention as compared to selectively attending a single sensory channel. Additionally, when participants were asked to evaluate congruence explicitly, congruent stimulation was associated with better performance than incongruent stimulation. Our results extend previous findings from audiovisual studies, showing that stimulus congruence also resulted in behavioral improvements in visuotactile pattern matching. The interplay of stimulus processing and attentional control seems to be organized in a highly flexible fashion, with the integration of signals depending on both bottom-up and top-down factors, rather than occurring in an ‘all-or-nothing’ manner. PMID:25203102

Göschl, Florian; Engel, Andreas K.; Friese, Uwe

2014-01-01

167

PatternHunter: faster and more sensitive homology search  

Microsoft Academic Search

Motivation: Genomics and proteomics studies routinely depend on homology searches based on the strategy of finding short seed matches which are then extended. The exploding genomic data growth presents a dilemma for DNA homology search techniques: increasing seed size decreases sensitivity whereas decreasing seed size slows down computation. Results: We present a new homology search algorithm 'PatternHunter' that uses a

Bin Ma; John Tromp; Ming Li

2002-01-01
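
For readers unfamiliar with the seed-and-extend strategy that the PatternHunter record above refers to, the following is a minimal spaced-seed hit finder; the seed pattern, sequences and function names are illustrative assumptions, not PatternHunter's actual algorithm or default seed.

    # Minimal spaced-seed hit finder (illustrative; not PatternHunter itself).
    # '1' positions in the seed must match between sequences; '0' positions are wildcards.
    from collections import defaultdict

    def spaced_keys(seq, seed):
        keys = defaultdict(list)
        for i in range(len(seq) - len(seed) + 1):
            key = "".join(seq[i + j] for j, s in enumerate(seed) if s == "1")
            keys[key].append(i)
        return keys

    def seed_hits(query, target, seed="110101101"):
        qkeys, tkeys = spaced_keys(query, seed), spaced_keys(target, seed)
        return [(qi, ti) for k, qpos in qkeys.items() for qi in qpos for ti in tkeys.get(k, [])]

    print(seed_hits("ACGTACGTTACGGA", "TTACGTACGTAAGC"))

Each reported hit would then be extended into a full alignment; spacing the required match positions is what lets a longer seed keep sensitivity without slowing the scan.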

168

Looking versus seeing: Strategies alter eye movements during visual search.  

PubMed

Visual search can be made more efficient by adopting a passive cognitive strategy (i.e., letting the target "pop" into mind) rather than by trying to actively guide attention. In the present study, we examined how this strategic benefit is linked to eye movements. Results show that participants using a passive strategy wait longer before beginning to move their eyes and make fewer saccades than do active participants. Moreover, the passive advantage stems from more efficient use of the information in a fixation, rather than from a wider attentional window. Individual difference analyses indicate that strategies also change the way eye movements are related to search success, with a rapid saccade rate predicting success among active participants, and fewer and larger amplitude saccades predicting success among passive participants. A change in mindset, therefore, alters how oculomotor behaviors are harnessed in the service of visual search. PMID:20702875

Watson, Marcus R; Brennan, Allison A; Kingstone, Alan; Enns, James T

2010-08-01

169

Irrelevant objects of expertise compete with faces during visual search  

PubMed Central

Prior work suggests that non-face objects of expertise can interfere with the perception of faces when the two categories are alternately presented, suggesting competition for shared perceptual resources. Here we ask whether task-irrelevant distractors from a category of expertise compete when faces are presented in a standard visual search task. Participants searched for a target (face or sofa) in an array containing both relevant and irrelevant distractors. The number of distractors from the target category (face or sofa) remained constant, while the number of distractors from the irrelevant category (cars) varied. Search slopes, calculated as a function of the number of irrelevant cars, were correlated with car expertise. The effect was not due to car distractors grabbing attention because they did not compete with sofa targets. Objects of expertise interfere with face perception even when they are task irrelevant, visually distinct and separated in space from faces. PMID:21264705

McGugin, Rankin W.; McKeeff, Thomas J.; Tong, Frank; Gauthier, Isabel

2010-01-01

170

Personalized online information search and visualization  

Microsoft Academic Search

BACKGROUND: The rapid growth of online publications such as the Medline and other sources raises the question of how to get the relevant information efficiently. It is important, for a bench scientist, e.g., to monitor related publications constantly. It is also important, for a clinician, e.g., to access the patient records anywhere and anytime. Although time-consuming, this kind of searching procedure

Dongquan Chen; Helmuth F Orthner; Susan M Sell

2005-01-01

171

The Search for an Empirical and Theoretical Foundation for Algorithm Visualization  

E-print Network

The Search for an Empirical and Theoretical Foundation for Algorithm Visualization, Christopher D. Hundhausen. Contents excerpt: 2 Algorithm Visualization: Background and Overview; 4.3 Algorithm Visualization Is More Effective Than Conventional Methods in Teaching Algorithms

Hundhausen, Chris

172

Performance of Parkinson's disease patients on the Visual Search and Attention Test: impairment in single-feature but not dual-feature visual search.  

PubMed

Nondemented patients with Parkinson's disease (PD) and a group of age and education matched controls were administered a modified version of the Visual Search and Attention Test (VSAT). This task measures subjects' speed at localizing letter or symbol targets based on either a single-feature or a dual-feature search. Three indices were derived from the VSAT: (a) the amount of time taken to complete each of the trials (completion time), (b) the number of target items not crossed out (omissions), and (c) the number of nontarget items crossed out (commissions). The results indicated that, in terms of completion time, the PD patients were impaired on the single-feature search conditions but not on the dual-feature search conditions, suggesting that PD patients are impaired in selective attention processes. It was also found that the number of target items omitted by the normal controls on the VSAT varied as a function of the nature of the target (letter or form) and the search requirements (single-feature or dual-feature search), whereas the number of targets omitted by the PD patients was not affected by these factors. Correlational analyses suggested that the three measures derived from the VSAT assessed different components of attentional performance in these patients. Overall, the results of this study suggest that the VSAT can be used to detect subtle attentional impairments in nondemented PD patients, and that the pattern of their impairment on this clinical test is similar to that found on experimental attentional measures. PMID:14590656

Filoteo, J V; Williams, B J; Rilling, L M; Roberts, J V

1997-01-01

173

Eye-Search: A web-based therapy that improves visual search in hemianopia  

PubMed Central

Persisting hemianopia frequently complicates lesions of the posterior cerebral hemispheres, leaving patients impaired on a range of key activities of daily living. Practice-based therapies designed to induce compensatory eye movements can improve hemianopic patients' visual function, but are not readily available. We used a web-based therapy (Eye-Search) that retrains visual search saccades into patients' blind hemifield. A group of 78 suitable hemianopic patients took part. After therapy (800 trials over 11 days), search times into their impaired hemifield improved by an average of 24%. Patients also reported improvements in a subset of visually guided everyday activities, suggesting that Eye-Search therapy affects real-world outcomes. PMID:25642437

Ong, Yean-Hoon; Jacquin-Courtois, Sophie; Gorgoraptis, Nikos; Bays, Paul M; Husain, Masud; Leff, Alexander P

2015-01-01

174

Task Specificity and the Influence of Memory on Visual Search: Comment on Võ and Wolfe (2012)

E-print Network

COMMENTARY Task Specificity and the Influence of Memory on Visual Search: Comment on Võ and Wolfe that the application of memory to visual search may be task specific: Previous experience searching for an object the distractor later became the target. Instead of being strongly constrained by task, visual memory is applied

Hollingworth, Andrew

175

Towards a Flexible, Reusable Model for Predicting Eye Movements During Visual Search of Text  

E-print Network

Towards a Flexible, Reusable Model for Predicting Eye Movements During Visual Search of Text Tim and Information Science, 1202 University of Oregon Eugene, OR 97403-1202 USA Abstract Visual search is an integral use to visually search. A cognitive model is evolved in a principled manner based on eye movement data

Hornof, Anthony

176

Exploring the Effects of Group Size and Display Configuration on Visual Search  

E-print Network

Exploring the Effects of Group Size and Display Configuration on Visual Search Clifton Forlines1 University of Toronto Toronto, ON, Canada {dwigdor, ravin@dgp.toronto.edu} ABSTRACT Visual search, visual search is performed not only by individuals, but also by groups ­ a team of doctors may study an x

Balakrishnan, Ravin

177

Perceptual Encoding Efficiency in Visual Search Robert Rauschenberger and Steven Yantis  

E-print Network

Perceptual Encoding Efficiency in Visual Search Robert Rauschenberger and Steven Yantis Johns of the dominant theories of visual search. Their results reveal that the complexity (or redundancy) of nontarget support for the importance of nontarget encoding efficiency in accounting for visual search performance

Yantis, Steven

178

Explicit versus Implicit: An Analysis of a Multiple Search Result Visualization

E-print Network

Explicit versus Implicit: An Analysis of a Multiple Search Result Visualization Edward Suvanaphen results. We have developed the prototype Search Engine Similarity (SES) tool which explicitly visualizes visualizing the relationships between multiple searches will let users browse more effectively. Our results

Kent, University of

179

The role of priming in conjunctive visual search Árni Kristjánsson, DeLiang Wang

E-print Network

The role of priming in conjunctive visual search Árni Kristjánsson, DeLiang Wang, Ken Abstract To assess the role of priming in conjunctive visual search tasks, we systematically varied task. We conclude that the role of priming in visual search is underestimated in current theories

Wang, DeLiang "Leon"

180

A distributed computational model of spatial memory anticipation during a visual search task  

E-print Network

A distributed computational model of spatial memory anticipation during a visual search task J-les-Nancy, France Abstract. Some visual search tasks require the memorization of the location of stimuli that have of works have already addressed the specific problem of visual search of a target among a set

Paris-Sud XI, Université de

181

Memory for Where, but Not What, Is Used during Visual Search  

ERIC Educational Resources Information Center

Although the role of memory in visual search is debatable, most researchers agree with a limited-capacity model of memory in visual search. The authors demonstrate the role of memory by replicating previous findings showing that visual search is biased away from old items (previously examined items) and toward new items (nonexamined items).…

Beck, Melissa R.; Peterson, Matthew S.; Vomela, Miroslava

2006-01-01

182

Effects of Search Efficiency on Surround Suppression During Visual Selection in Frontal Eye Field  

E-print Network

Effects of Search Efficiency on Surround Suppression During Visual Selection in Frontal Eye Field. Effects of search efficiency on surround suppression during visual selection in frontal eye field. J the target for a saccade during efficient, pop-out visual search through suppression of the representation

Schall, Jeffrey D.

183

Interactive Visualization and Navigation of Web Search Results Revealing Community Structures and Bridges  

E-print Network

Interactive Visualization and Navigation of Web Search Results Revealing Community Structures information. In this paper we present an interactive visualization system for con- tent analysis of web search: Information Visualization, Web Search Results Index Terms: E.1 [DATA STRUCTURES ]: Graphs and networks--; H.2

Paris-Sud XI, Université de

184

Controlling Attention With Noise: The Cue-Combination Model of Visual Search  

E-print Network

Controlling Attention With Noise: The Cue-Combination Model of Visual Search David F. Baldwin of Cognitive Science University of Colorado at Boulder mozer@colorado.edu Abstract Visual search Guided Search to explain how attention can be directed to locations containing task-relevant visual

Mozer, Michael C.

185

Memory for rejected distractors in visual search? Todd S. Horowitz and Jeremy M. Wolfe  

E-print Network

Memory for rejected distractors in visual search? Todd S. Horowitz and Jeremy M. Wolfe Brigham & Women's Hospital and Harvard Medical School, Boston Theories of visual search have generally assumed that visual search is best understood as a series of successive judgements of the momentary probability

186

Adaptive but non-optimal visual search behavior with highlighted displays

E-print Network

Adaptive but non-optimal visual search behavior with highlighted displays q Action editor: Andrea performance in visual search tasks. But interface designers cannot always anticipate users' intended targets attend to highlighting less than what an algebraic visual search model of highlighted displays [Fisher, D

Byrne, Mike

187

Use of an Augmented-Vision Device for Visual Search by Patients with Tunnel Vision  

E-print Network

Use of an Augmented-Vision Device for Visual Search by Patients with Tunnel Vision Gang Luo and Eli images over natural vision on visual search performance of patients with tunnel vision. METHODS. Twelve subjects with tunnel vision searched for targets presented outside their visual fields (VFs) on a blank

Peli, Eli

188

Visualizing Search Results using SQWID D. Scott McCrickard & Colleen M. Kehoe  

E-print Network

Visualizing Search Results using SQWID D. Scott McCrickard & Colleen M. Kehoe Graphics an interactive visualization of the search results, allowing users to see the relevance of the results to different key terms. Keywords: WWW, search, visualization, query, interactive, SQWID 1. Introduction

McCrickard, Scott

189

Visual Search Demands Dictate Reliance upon Working Memory Storage

E-print Network

section: Behavioral/Systems/Cognitive Visual Search Demands Dictate Reliance upon Working Memory Storage abbreviated title: WM and visual search Roy Luria and Edward K. Vogel University of Oregon working memory, Visual search. Acknowledgment: This work was support by a NIH grant 3 R01 MH087214-02S1

Oregon, University of

190

Information-Limited Parallel Processing in Difficult Heterogeneous Covert Visual Search  

ERIC Educational Resources Information Center

Difficult visual search is often attributed to time-limited serial attention operations, although neural computations in the early visual system are parallel. Using probabilistic search models (Dosher, Han, & Lu, 2004) and a full time-course analysis of the dynamics of covert visual search, we distinguish unlimited capacity parallel versus serial…

Dosher, Barbara Anne; Han, Songmei; Lu, Zhong-Lin

2010-01-01

191

Towards Autonomous Object Reconstruction for Visual Search by the Humanoid Robot HRP-2  

E-print Network

Towards Autonomous Object Reconstruction for Visual Search by the Humanoid Robot HRP-2 O. Stasse1.larlus,frederic.jurie}@inrialpes.fr Abstract--This paper deals with the problem of object reconstruction for visual search by a humanoid of the visual search behavior has been thoroughly described in Saidi et al. [1]; it assumes a system

Paris-Sud XI, Université de

192

Why is visual search superior in autism spectrum disorder? Robert M. Joseph,1  

E-print Network

PAPER Why is visual search superior in autism spectrum disorder? Robert M. Joseph,1 Brandon Keehn,1 underlies the superior visual search skills exhibited by individuals with autism spectrum disorder (ASD). We augments their visual search abilities. Analyses of RT x set size functions showed no group differences

193

A Minimal Model for Predicting Visual Search in Human-Computer Interaction  

E-print Network

A Minimal Model for Predicting Visual Search in Human-Computer Interaction Tim Halverson, OR 97403-1202 USA {thalvers, hornof}@cs.uoregon.edu ABSTRACT Visual search is an important part of human-computer interaction. It is critical that we build theory about how people visually search displays in order to better

Hornof, Anthony

194

Searching for Pulsars Using Image Pattern Recognition  

NASA Astrophysics Data System (ADS)

In the modern era of big data, many fields of astronomy are generating huge volumes of data, the analysis of which can sometimes be the limiting factor in research. Fortunately, computer scientists have developed powerful data-mining techniques that can be applied to various fields. In this paper, we present a novel artificial intelligence (AI) program that identifies pulsars from recent surveys by using image pattern recognition with deep neural nets—the PICS (Pulsar Image-based Classification System) AI. The AI mimics human experts and distinguishes pulsars from noise and interference by looking for patterns from candidate plots. Different from other pulsar selection programs that search for expected patterns, the PICS AI is taught the salient features of different pulsars from a set of human-labeled candidates through machine learning. The training candidates are collected from the Pulsar Arecibo L-band Feed Array (PALFA) survey. The information from each pulsar candidate is synthesized in four diagnostic plots, which consist of image data with up to thousands of pixels. The AI takes these data from each candidate as its input and uses thousands of such candidates to train its ~9000 neurons. The deep neural networks in this AI system grant it superior ability to recognize various types of pulsars as well as their harmonic signals. The trained AI's performance has been validated with a large set of candidates from a different pulsar survey, the Green Bank North Celestial Cap survey. In this completely independent test, the PICS ranked 264 out of 277 pulsar-related candidates, including all 56 previously known pulsars and 208 of their harmonics, in the top 961 (1%) of 90,008 test candidates, missing only 13 harmonics. The first non-pulsar candidate appears at rank 187, following 45 pulsars and 141 harmonics. In other words, 100% of the pulsars were ranked in the top 1% of all candidates, while 80% were ranked higher than any noise or interference. The performance of this system can be improved over time as more training data are accumulated. This AI system has been integrated into the PALFA survey pipeline and has discovered six new pulsars to date.
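
The record describes PICS only at a high level, so the sketch below is a generic two-layer convolutional scorer for image-like candidate plots (PyTorch assumed available); the architecture, sizes and names are invented and are not the PICS network.

    # Toy convolutional classifier for image-like pulsar-candidate plots.
    # Layer sizes and names are invented; this is not the PICS architecture.
    import torch
    import torch.nn as nn

    class CandidatePlotClassifier(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.head = nn.Sequential(nn.Flatten(), nn.Linear(16 * 16 * 16, 1))

        def forward(self, x):                 # x: (batch, 1, 64, 64) candidate plot
            return self.head(self.features(x))

    model = CandidatePlotClassifier()
    scores = model(torch.randn(4, 1, 64, 64))     # four fake candidate plots
    print(torch.sigmoid(scores).squeeze())        # pulsar-likeness per candidate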

Zhu, W. W.; Berndsen, A.; Madsen, E. C.; Tan, M.; Stairs, I. H.; Brazier, A.; Lazarus, P.; Lynch, R.; Scholz, P.; Stovall, K.; Ransom, S. M.; Banaszak, S.; Biwer, C. M.; Cohen, S.; Dartez, L. P.; Flanigan, J.; Lunsford, G.; Martinez, J. G.; Mata, A.; Rohr, M.; Walker, A.; Allen, B.; Bhat, N. D. R.; Bogdanov, S.; Camilo, F.; Chatterjee, S.; Cordes, J. M.; Crawford, F.; Deneva, J. S.; Desvignes, G.; Ferdman, R. D.; Freire, P. C. C.; Hessels, J. W. T.; Jenet, F. A.; Kaplan, D. L.; Kaspi, V. M.; Knispel, B.; Lee, K. J.; van Leeuwen, J.; Lyne, A. G.; McLaughlin, M. A.; Siemens, X.; Spitler, L. G.; Venkataraman, A.

2014-02-01

195

Visualization of oxygen distribution patterns caused by coral and algae  

PubMed Central

Planar optodes were used to visualize oxygen distribution patterns associated with a coral reef associated green algae (Chaetomorpha sp.) and a hermatypic coral (Favia sp.) separately, as standalone organisms, and placed in close proximity mimicking coral-algal interactions. Oxygen patterns were assessed in light and dark conditions and under varying flow regimes. The images show discrete high oxygen concentration regions above the organisms during lighted periods and low oxygen in the dark. Size and orientation of these areas were dependent on flow regime. For corals and algae in close proximity the 2D optodes show areas of extremely low oxygen concentration at the interaction interfaces under both dark (18.4 ± 7.7 µmol O2 L-1) and daylight (97.9 ± 27.5 µmol O2 L-1) conditions. These images present the first two-dimensional visualization of oxygen gradients generated by benthic reef algae and corals under varying flow conditions and provide a 2D depiction of previously observed hypoxic zones at coral algae interfaces. This approach allows for visualization of locally confined, distinctive alterations of oxygen concentrations facilitated by benthic organisms and provides compelling evidence for hypoxic conditions at coral-algae interaction zones. PMID:23882443

Smith, Jennifer E.; Abieri, Maria L.; Hatay, Mark; Rohwer, Forest

2013-01-01

196

The nature of the visual environment induces implicit biases during language-mediated visual search.  

PubMed

Four eyetracking experiments examined whether semantic and visual-shape representations are routinely retrieved from printed word displays and used during language-mediated visual search. Participants listened to sentences containing target words that were similar semantically or in shape to concepts invoked by concurrently displayed printed words. In Experiment 1, the displays contained semantic and shape competitors of the targets along with two unrelated words. There were significant shifts in eye gaze as targets were heard toward semantic but not toward shape competitors. In Experiments 2-4, semantic competitors were replaced with unrelated words, semantically richer sentences were presented to encourage visual imagery, or participants rated the shape similarity of the stimuli before doing the eyetracking task. In all cases, there were no immediate shifts in eye gaze to shape competitors, even though, in response to the Experiment 1 spoken materials, participants looked to these competitors when they were presented as pictures (Huettig & McQueen, 2007). There was a late shape-competitor bias (more than 2,500 ms after target onset) in all experiments. These data show that shape information is not used in online search of printed word displays (whereas it is used with picture displays). The nature of the visual environment appears to induce implicit biases toward particular modes of processing during language-mediated visual search. PMID:21461784

Huettig, Falk; McQueen, James M

2011-08-01

197

Do the Contents of Visual Working Memory Automatically Influence Attentional Selection During Visual Search?  

Microsoft Academic Search

In many theories of cognition, researchers propose that working memory and perception operate interactively. For example, in previous studies researchers have suggested that sensory inputs matching the contents of working memory will have an automatic advantage in the competition for processing resources. The authors tested this hypothesis by requiring observers to perform a visual search task while concurrently maintaining object

Geoffrey F. Woodman; Steven J. Luck

2007-01-01

198

Neural Representations of Contextual Guidance in Visual Search of Real-World Scenes  

PubMed Central

Exploiting scene context and object– object co-occurrence is critical in guiding eye movements and facilitating visual search, yet the mediating neural mechanisms are unknown. We used functional magnetic resonance imaging while observers searched for target objects in scenes and used multivariate pattern analyses (MVPA) to show that the lateral occipital complex (LOC) can predict the coarse spatial location of observers’ expectations about the likely location of 213 different targets absent from the scenes. In addition, we found weaker but significant representations of context location in an area related to the orienting of attention (intraparietal sulcus, IPS) as well as a region related to scene processing (retrosplenial cortex, RSC). Importantly, the degree of agreement among 100 independent raters about the likely location to contain a target object in a scene correlated with LOC’s ability to predict the contextual location while weaker but significant effects were found in IPS, RSC, the human motion area, and early visual areas (V1, V3v). When contextual information was made irrelevant to observers’ behavioral task, the MVPA analysis of LOC and the other areas’ activity ceased to predict the location of context. Thus, our findings suggest that the likely locations of targets in scenes are represented in various visual areas with LOC playing a key role in contextual guidance during visual search of objects in real scenes. PMID:23637176

Preston, Tim J.; Guo, Fei; Das, Koel; Giesbrecht, Barry; Eckstein, Miguel P.

2014-01-01

199

Intertrial Temporal Contextual Cuing: Association across Successive Visual Search Trials Guides Spatial Attention  

ERIC Educational Resources Information Center

Contextual cuing refers to the facilitation of performance in visual search due to the repetition of the same displays. Whereas previous studies have focused on contextual cuing within single-search trials, this study tested whether 1 trial facilitates visual search of the next trial. Participants searched for a T among Ls. In the training phase,…

Ono, Fuminori; Jiang, Yuhong; Kawahara, Jun-ichiro

2005-01-01

200

Flow pattern visualization in a mimic anaerobic digester using CFD.  

PubMed

Three-dimensional steady-state computational fluid dynamics (CFD) simulations were performed in mimic anaerobic digesters to visualize their flow pattern and obtain hydrodynamic parameters. The mixing in the digester was provided by sparging gas at three different flow rates. The gas phase was simulated with air and the liquid phase with water. The CFD results were first evaluated using experimental data obtained by computer automated radioactive particle tracking (CARPT). The simulation results in terms of overall flow pattern, location of circulation cells and stagnant regions, trends of liquid velocity profiles, and volume of dead zones agree reasonably well with the experimental data. CFD simulations were also performed on different digester configurations. The effects of changing draft tube size, clearance, and shape of the tank bottoms were calculated to evaluate the effect of digester design on its flow pattern. Changing the draft tube clearance and height had no influence on the flow pattern or dead regions volume. However, increasing the draft tube diameter or incorporating a conical bottom design helped in reducing the volume of the dead zones as compared to a flat-bottom digester. The simulations showed that the gas flow rate sparged by a single point (0.5 cm diameter) sparger does not have an appreciable effect on the flow pattern of the digesters at the range of gas flow rates used. PMID:15685599

Vesvikar, Mehul S; Al-Dahhan, Muthanna

2005-03-20

201

Time Course of Target Recognition in Visual Search  

PubMed Central

Visual search is a ubiquitous task of great importance: it allows us to quickly find the objects that we are looking for. During active search for an object (target), eye movements are made to different parts of the scene. Fixation locations are chosen based on a combination of information about the target and the visual input. At the end of a successful search, the eyes typically fixate on the target. But does this imply that target identification occurs while looking at it? The duration of a typical fixation (~170 ms) and neuronal latencies of both the oculomotor system and the visual stream indicate that there might not be enough time to do so. Previous studies have suggested the following solution to this dilemma: the target is identified extrafoveally and this event will trigger a saccade towards the target location. However, this has not been experimentally verified. Here we test the hypothesis that subjects recognize the target before they look at it using a search display of oriented colored bars. Using a gaze-contingent real-time technique, we prematurely stopped search shortly after subjects fixated the target. Afterwards, we asked subjects to identify the target location. We find that subjects can identify the target location even when fixating on the target for less than 10 ms. Longer fixations on the target do not increase detection performance but increase confidence. In contrast, subjects cannot perform this task if they are not allowed to move their eyes. Thus, information about the target during conjunction search for colored oriented bars can, in some circumstances, be acquired at least one fixation ahead of reaching the target. The final fixation serves to increase confidence rather than performance, illustrating a distinct role of the final fixation for the subjective judgment of confidence rather than accuracy. PMID:20428512

Kotowicz, Andreas; Rutishauser, Ueli; Koch, Christof

2009-01-01

202

BATSE Gamma-Ray Burst Line Search: IV. Line Candidates from the Visual Search  

E-print Network

We evaluate the significance of the line candidates identified by a visual search of burst spectra from BATSE's Spectroscopy Detectors. None of the candidates satisfy our detection criteria: an F-test probability less than 10^-4 for a feature in one detector and consistency among the detectors which viewed the burst. Most of the candidates are not very significant, and are likely to be fluctuations. Because of the expectation of finding absorption lines, the search was biased towards absorption features. We do not have a quantitative measure of the completeness of the search which would enable a comparison with previous missions. Therefore a more objective computerized search has begun.

D. L. Band; S. Ryder; L. A. Ford; J. L. Matteson; D. M. Palmer; B. J. Teegarden; M. S. Briggs; W. S. Paciesas; G. N. Pendleton; R. D. Preece

1995-09-01

203

Visual search for category sets: Tradeoffs between exploration and memory  

PubMed Central

Limitations of working memory force a reliance on motor exploration to retrieve forgotten features of the visual array. A category search task was devised to study tradeoffs between exploration and memory in the face of significant cognitive and motor demands. The task required search through arrays of hidden, multi-featured objects to find three belonging to the same category. Location contents were revealed briefly by either a: (1) mouseclick, or (2) saccadic eye movement with or without delays between saccade offset and object appearance. As the complexity of the category rule increased, search favored exploration, with more visits and revisits needed to find the set. As motor costs increased (mouseclick search or oculomotor search with delays) search favored reliance on memory. Application of the model of J. Epelboim and P. Suppes (2001) to the revisits produced an estimate of immediate memory span (M) of about 4–6 objects. Variation in estimates of M across category rules suggested that search was also driven by strategies of transforming the category rule into concrete perceptual hypotheses. The results show that tradeoffs between memory and exploration in a cognitively demanding task are determined by continual and effective monitoring of perceptual load, cognitive demand, decision strategies and motor effort. PMID:21421747

Kibbe, Melissa M.; Kowler, Eileen

2012-01-01

204

Reading and Visual Search: A Developmental Study in Normal Children  

PubMed Central

Studies dealing with developmental aspects of binocular eye movement behaviour during reading are scarce. In this study we have explored binocular strategies during reading and during visual search tasks in a large population of normal young readers. Binocular eye movements were recorded using an infrared video-oculography system in sixty-nine children (aged 6 to 15) and in a group of 10 adults (aged 24 to 39). The main findings are (i) in both tasks the number of progressive saccades (to the right) and regressive saccades (to the left) decreases with age; (ii) the amplitude of progressive saccades increases with age in the reading task only; (iii) in both tasks, the duration of fixations as well as the total duration of the task decreases with age; (iv) in both tasks, the amplitude of disconjugacy recorded during and after the saccades decreases with age; (v) children are significantly more accurate in reading than in visual search after 10 years of age. Data reported here confirms and expands previous studies on children's reading. The new finding is that younger children show poorer coordination than adults, both while reading and while performing a visual search task. Both reading skills and binocular saccades coordination improve with age and children reach a similar level to adults after the age of 10. This finding is most likely related to the fact that learning mechanisms responsible for saccade yoking develop during childhood until adolescence. PMID:23894627

Seassau, Magali; Bucci, Maria-Pia

2013-01-01

205

A new hybrid optimization method for loading pattern search  

SciTech Connect

A new hybrid optimization method in reloading pattern search is presented in this paper, which mix genetic algorithm (GA) with tabu search (TS). The method combines global search of GA and local search of TS reasonably to enhance the search ability and computational efficiency. For verification and illustration of the advantage of this method, the proposed hybrid optimization method has been applied to the reactor reloading optimization calculation of Cartesian and hexagonal geometry core. The numerical results show that the hybrid method works faster and better than GA. (authors)
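
The abstract does not detail how the GA and tabu search are interleaved, so the following toy hybrid on a binary-string objective is only one plausible reading: a GA supplies global recombination and each offspring is polished by a short tabu-restricted bit-flip search. The objective, parameters and names are invented for illustration.

    # Toy GA + tabu-search hybrid on a binary-string objective (illustrative only).
    import random

    N = 20
    def fitness(x):                     # invented stand-in for a reload-pattern score
        return sum(x) - 3 * abs(sum(x[:N // 2]) - sum(x[N // 2:]))

    def tabu_refine(x, steps=15, tenure=5):
        x, best, tabu = x[:], x[:], []
        for _ in range(steps):
            moves = [i for i in range(N) if i not in tabu]
            i = max(moves, key=lambda i: fitness(x[:i] + [1 - x[i]] + x[i + 1:]))
            x[i] = 1 - x[i]
            tabu = (tabu + [i])[-tenure:]
            if fitness(x) > fitness(best):
                best = x[:]
        return best

    def hybrid_ga(pop_size=12, generations=30):
        pop = [[random.randint(0, 1) for _ in range(N)] for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)
            parents, children = pop[:pop_size // 2], []
            while len(parents) + len(children) < pop_size:
                a, b = random.sample(parents, 2)
                cut = random.randrange(1, N)
                child = a[:cut] + b[cut:]
                if random.random() < 0.2:               # occasional mutation
                    j = random.randrange(N)
                    child[j] = 1 - child[j]
                children.append(tabu_refine(child))     # tabu local search on each offspring
            pop = parents + children
        return max(pop, key=fitness)

    best = hybrid_ga()
    print(best, fitness(best))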

Tao, Wang [Shanghai Jiao Tong University, Shanghai 200030 (China); Zhongsheng, Xie [Xi'an Jiao Tong University, Xi'an 710049 (China)

2006-07-01

206

A particle swarm pattern search method for bound constrained ...  

E-print Network

Feb 11, 2006 ... Hart has also used evolutionary programming to design evolutionary pattern search methods ...... Machine and Human Science, pages 39–43. [12] D. E. Finkel. DIRECT ... Journal of Computational Chemistry, 15:627–632, ...

2006-02-11

207

Aberrant patterns of visual facial information usage in schizophrenia.  

PubMed

Deficits in facial emotion perception have been linked to poorer functional outcome in schizophrenia. However, the relationship between abnormal emotion perception and functional outcome remains poorly understood. To better understand the nature of facial emotion perception deficits in schizophrenia, we used the Bubbles Facial Emotion Perception Task to identify differences in usage of visual facial information in schizophrenia patients (n = 20) and controls (n = 20), when differentiating between angry and neutral facial expressions. As hypothesized, schizophrenia patients required more facial information than controls to accurately differentiate between angry and neutral facial expressions, and they relied on different facial features and spatial frequencies to differentiate these facial expressions. Specifically, schizophrenia patients underutilized the eye regions, overutilized the nose and mouth regions, and virtually ignored information presented at the lowest levels of spatial frequency. In addition, a post hoc one-tailed t test revealed a positive relationship of moderate strength between the degree of divergence from "normal" visual facial information usage in the eye region and lower overall social functioning. These findings provide direct support for aberrant patterns of visual facial information usage in schizophrenia in differentiating between socially salient emotional states. PMID:23713505

Clark, Cameron M; Gosselin, Frédéric; Goghari, Vina M

2013-05-01

208

Dynamic Analysis and Pattern Visualization of Forest Fires  

PubMed Central

This paper analyses forest fires from the perspective of dynamical systems. Forest fires exhibit complex correlations in size, space and time, revealing features often present in complex systems, such as the absence of a characteristic length-scale, or the emergence of long range correlations and persistent memory. This study addresses a public domain forest fires catalogue, containing information on events for Portugal, during the period from 1980 up to 2012. The data is analysed on an annual basis, modelling the occurrences as sequences of Dirac impulses with amplitude proportional to the burnt area. First, we consider mutual information to correlate annual patterns. We use visualization trees, generated by hierarchical clustering algorithms, in order to compare and to extract relationships among the data. Second, we adopt the Multidimensional Scaling (MDS) visualization tool. MDS generates maps where each object corresponds to a point. Objects that are perceived to be similar to each other are placed on the map forming clusters. The results are analysed in order to extract relationships among the data and to identify forest fire patterns. PMID:25137393
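
As a loose sketch of the pipeline described above (pairwise mutual information between annual series, a hierarchical-clustering visualization tree, then MDS), the code below uses invented yearly series; the binning, linkage method and MDS settings are assumptions (scipy and scikit-learn assumed available), not the paper's actual choices.

    # Pairwise mutual information between annual series, then hierarchical
    # clustering and 2-D MDS on the resulting dissimilarities (invented data).
    import numpy as np
    from scipy.cluster.hierarchy import linkage
    from sklearn.manifold import MDS

    rng = np.random.default_rng(1)
    n_years = 10
    series = rng.gamma(shape=2.0, scale=1.0, size=(n_years, 365))   # fake daily burnt areas

    def mutual_information(x, y, bins=8):
        pxy, _, _ = np.histogram2d(x, y, bins=bins)
        pxy = pxy / pxy.sum()
        px, py = pxy.sum(axis=1), pxy.sum(axis=0)
        nz = pxy > 0
        return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz])))

    mi = np.array([[mutual_information(series[i], series[j]) for j in range(n_years)]
                   for i in range(n_years)])
    dist = mi.max() - mi                      # high MI -> similar -> small distance
    dist = (dist + dist.T) / 2                # enforce exact symmetry
    np.fill_diagonal(dist, 0.0)

    tree = linkage(dist[np.triu_indices(n_years, k=1)], method="average")  # visualization tree
    coords = MDS(n_components=2, dissimilarity="precomputed", random_state=0).fit_transform(dist)
    print(coords.round(2))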

Lopes, António M.; Tenreiro Machado, J. A.

2014-01-01

209

Sleep patterns in Parkinson's disease patients with visual hallucinations.  

PubMed

Visual hallucinations (VHs) in Parkinson's disease (PD) can be a frequent and disturbing complication of the disease with 33% of PD patients undergoing long-term treatment experiencing VHs during the course of their illness. One line of evidence that is emerging as a possible risk factor in the occurrence of VHs is the sleep-wake cycle and sleep behavior in patients with PD. This study compared sleep patterns in a group of visually hallucinating Parkinson's patients with a group of nonhallucinating PD patients and an age-matched control group. Nocturnal sleep was assessed by actigraphy and diaries, while daytime sleepiness and function were assessed by a battery of self-rating sleep questionnaires. Compared with the control group both patient groups had more sleep-related problems and significantly altered sleep patterns, as measured by both actigraphy and sleep questionnaires. Patients who hallucinated however slept less than nonhallucinating patients and also had increased awakenings after sleep onset, reduced sleep efficiency, and increased daytime sleepiness. We propose that VHs in some PD patients may be a symptom of poor sleep and prolonged daytime sleepiness, suggesting that arousal may play a role in the genesis of the hallucination phenomenon. PMID:20615061

Barnes, Jim; Connelly, Vince; Wiggs, Luci; Boubert, Laura; Maravic, Ksenija

2010-08-01

210

Tools for visualizing landscape pattern for large geographic areas  

SciTech Connect

Landscape pattern can be modelled on a grid with polygons constructed from cells that share edges. Although this model only allows connections in four directions, programming is convenient because both coordinates and attributes take discrete integer values. A typical raster land-cover data set is a multimegabyte matrix of byte values derived by classification of images or gridding of maps. Each matrix may have thousands of raster polygons (patches), many of them islands inside other larger patches. These data sets have complex topology that can overwhelm vector geographic information systems. The goal is to develop tools to quantify change in the landscape structure in terms of the shape and spatial distribution of patches. Three milestones toward this goal are (1) creating polygon topology on a grid, (2) visualizing patches, and (3) analyzing shape and pattern. An efficient algorithm has been developed to locate patches, measure area and perimeter, and establish patch topology. A powerful visualization system with an extensible programming language is used to write procedures to display images and perform analysis.
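
The patch-finding milestone described here amounts to labeling 4-connected raster components and tallying cell counts and shared-edge perimeter; the following is a generic sketch of that idea under those assumptions, not the authors' implementation.

    # Label 4-connected raster patches and measure area and perimeter (generic sketch).
    from collections import deque

    def label_patches(grid):
        rows, cols = len(grid), len(grid[0])
        labels = [[0] * cols for _ in range(rows)]
        stats, next_label = {}, 0
        for r in range(rows):
            for c in range(cols):
                if labels[r][c]:
                    continue
                next_label += 1
                value, area, perimeter = grid[r][c], 0, 0
                labels[r][c] = next_label
                queue = deque([(r, c)])
                while queue:
                    y, x = queue.popleft()
                    area += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if not (0 <= ny < rows and 0 <= nx < cols) or grid[ny][nx] != value:
                            perimeter += 1      # edge bordering a different class or the map edge
                        elif not labels[ny][nx]:
                            labels[ny][nx] = next_label
                            queue.append((ny, nx))
                stats[next_label] = {"cover": value, "area": area, "perimeter": perimeter}
        return labels, stats

    demo = [[1, 1, 2],
            [1, 2, 2],
            [3, 3, 2]]
    print(label_patches(demo)[1])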

Timmins, S.P. [Analysas Corporation, Oak Ridge, TN (United States); Hunsaker, C.T. [Oak Ridge National Lab., TN (United States)

1993-10-01

211

Two forms of scene memory guide visual search: Memory for scene context and memory for the binding  

E-print Network

The role of scene memory in visual search was investigated in a preview-search task. Keywords: Visual search; Visual memory; Scene memory; Eye movements; Attention.

Hollingworth, Andrew

212

Patterns of Search: Analyzing and Modeling Web Query Refinement  

Microsoft Academic Search

We discuss the construction of probabilistic models centering on temporal patterns of query refinement. Our analyses are derived from a large corpus of Web search queries extracted from server logs recorded by a popular Internet search service. We frame the modeling task in terms of pursuing an understanding of probabilistic relationships among temporal patterns of activity, informational goals, and ...

Tessa Lau; Eric Horvitz

1998-01-01

213

Visualization of Orbits and Pattern Evocation for the Double Spherical Pendulum  

E-print Network

Pattern evocation and the visualization of orbits of the double spherical pendulum. Pattern evocation is a phenomenon where patterns ... Examples of this theory are demonstrated for the double spherical pendulum. A differential-algebraic model is created ...

Marsden, Jerrold

214

Visualization of Orbits and Pattern Evocation for the Double Spherical Pendulum  

E-print Network

Pattern evocation and the visualization of orbits of the double spherical pendulum. Pattern evocation ... or symmetry. Examples of this theory are demonstrated for the double spherical pendulum. A differential ...

Wendlandt, Jeff

215

How visual edge features influence cuttlefish camouflage patterning  

E-print Network

Previous studies have shown that cuttlefish body patterns are strongly influenced by visual edges in the substrate. The aim of the present study was to examine how cuttlefish body patterning is differentially ...

California at Irvine, University of

216

The Effect of Animated Banner Advertisements on a Visual Search Task  

E-print Network

... provides additional visual information in a screen with limited real estate, and can instruct or assist ... A visual search experiment was designed to measure both subjective impression of workload ...

Hornof, Anthony

217

Case Study: A Combined Visualization Approach for WWW-Search Results  

E-print Network

... with a local meta web search engine. The goal of Information Visualization (IV) is to support ... The idea of Information Visualization is to get insights ...

Reiterer, Harald

218

Visual search performance of patients with vision impairment: effect of JPEG image enhancement  

E-print Network

... a JPEG image enhancement option for patients with vision impairment resulting from loss of visual acuity and/or contrast ...

Peli, Eli

219

LOCAL DENSITY GUIDES VISUAL SEARCH: SPARSE GROUPS ARE FIRST AND FASTER  

E-print Network

... modeling to investigate the effect of local density on the visual search of structured layouts of words ... to process words within a consistent visual angle regardless of density, but that they were more likely ...

Hornof, Anthony

220

Visual motion induces a forward prediction of spatial pattern.  

PubMed

Cortical motion analysis continuously encodes image velocity but might also be used to predict future patterns of sensory input along the motion path. We asked whether this predictive aspect of motion is exploited by the human visual system. Targets can be more easily detected at the leading as compared to the trailing edge of motion [1], but this effect has been attributed to a nonspecific boost in contrast gain at the leading edge, linked to motion-induced shifts in spatial position [1-4]. Here we show that the detectability of a local sinusoidal target presented at the ends of a region containing motion is phase dependent at the leading edge, but not at the trailing edge. These two observations rule out a simple gain control mechanism that modulates contrast energy and passive filtering explanations, respectively. By manipulating the relative orientation of the moving pattern and target, we demonstrate that the resulting spatial variation in detection threshold along the edge closely resembles the superposition of sensory input and an internally generated predicted signal. These findings show that motion induces a forward prediction of spatial pattern that combines with the cortical representation of the future stimulus. PMID:21514158

Roach, Neil W; McGraw, Paul V; Johnston, Alan

2011-05-10

221

Visual search strategies and decision making in baseball batting.  

PubMed

The goal was to examine the differences in visual search strategies between expert and nonexpert baseball batters during the preparatory phase of a pitcher's pitching and accuracy and timing of swing judgments during the ball's trajectory. 14 members of a college team (Expert group), and graduate and college students (Nonexpert group), were asked to observe 10 pitches thrown by a pitcher and respond by pushing a button attached to a bat when they thought the bat should be swung to meet the ball (swing judgment). Their eye movements, accuracy, and the timing of the swing judgment were measured. The Expert group shifted their point of observation from the proximal part of the body such as the head, chest, or trunk of the pitcher to the pitching arm and the release point before the pitcher released a ball, while the gaze point of the Nonexpert group visually focused on the head and the face. The accuracy in swing judgments of the Expert group was significantly higher, and the timing of their swing judgments was significantly earlier. Expert baseball batters used visual search strategies to gaze at specific cues (the pitching arm of the pitcher) and were more accurate and relatively quicker at decision making than Nonexpert batters. PMID:19725330

Takeuchi, Takayuki; Inomata, Kimihiro

2009-06-01

222

Test of three visual search and detection models  

NASA Astrophysics Data System (ADS)

Advance knowledge of the time required by an observer to detect a target visually is of interest, e.g., in preparing flight scenarios, in modeling mission performance, in evaluating camouflage effectiveness, and in visual-scene generator calibration. A wide range of computational models has therefore been developed to predict human visual search and detection performance. This study tests the quality of the predictions of three of these models: ORACLE, Visdet, and a formula by Travnikova. The three models are used to predict the results of an experiment in which observers searched for military vehicles in complex rural scenes. The models predict either the mean time required to find the target, or the probability of finding the target after a given amount of time, from a few physical parameters describing the scene (the mean scene luminance, the angular dimensions of the field of view and the target, the intrinsic target contrast, etc.). None of the models reliably predicts observer performance for most of the scenes used in this study. ORACLE and Visdet both overestimate the detection probability for most situations. The formula by Travnikova does not apply to the scenes used here.
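
For context on how a mean search time and a time-dependent detection probability can be related, a common simplification in the search-modelling literature (not necessarily the exact form used by ORACLE, Visdet, or Travnikova's formula) treats successive glimpses as independent, giving an exponential cumulative detection probability P(t) = 1 - exp(-t/T), where T is the mean time to find the target. For example, with T = 10 s this gives P(10 s) ≈ 0.63 and P(30 s) ≈ 0.95.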

Toet, Alexander; Bijl, Piet; Valeton, J. Mathieu

2000-05-01

223

"Hot" Facilitation of "Cool" Processing: Emotional Distraction Can Enhance Priming of Visual Search  

ERIC Educational Resources Information Center

Emotional stimuli often capture attention and disrupt effortful cognitive processing. However, cognitive processes vary in the degree to which they require effort. We investigated the impact of emotional pictures on visual search and on automatic priming of search. Observers performed visual search after task-irrelevant neutral or emotionally…

Kristjansson, Arni; Oladottir, Berglind; Most, Steven B.

2013-01-01

224

Response Selection in Visual Search: The Influence of Response Compatibility of Nontargets  

ERIC Educational Resources Information Center

The authors used visual search tasks in which components of the classic flanker task (B. A. Eriksen & C. W. Eriksen, 1974) were introduced. In several experiments the authors obtained evidence of parallel search for a target among distractor elements. Therefore, 2-stage models of visual search predict no effect of the identity of those…

Starreveld, Peter A.; Theeuwes, Jan; Mortier, Karen

2004-01-01

225

Exploring the effects of group size and display configuration on visual search  

Microsoft Academic Search

Visual search is the subject of countless psychology studies in which people search for target items within a scene. The bulk of this literature focuses on the individual with the goal of understanding the human perceptual system. In life, visual search is performed not only by individuals, but also by groups - a team of doctors may study an x-ray

Clifton Forlines; Chia Shen; Daniel Wigdor; Ravin Balakrishnan

2006-01-01

226

Searching for Signs, Symbols, and Icons: Effects of Time of Day, Visual Complexity, and Grouping  

ERIC Educational Resources Information Center

Searching for icons, symbols, or signs is an integral part of tasks involving computer or radar displays, head-up displays in aircraft, or attending to road traffic signs. Icons therefore need to be designed to optimize search times, taking into account the factors likely to slow down visual search. Three factors likely to adversely affect visual

McDougall, Sine; Tyrer, Victoria; Folkard, Simon

2006-01-01

227

Age mediation of frontoparietal activation during visual feature search.  

PubMed

Activation of frontal and parietal brain regions is associated with attentional control during visual search. We used fMRI to characterize age-related differences in frontoparietal activation in a highly efficient feature search task, detection of a shape singleton. On half of the trials, a salient distractor (a color singleton) was present in the display. The hypothesis was that frontoparietal activation mediated the relation between age and attentional capture by the salient distractor. Participants were healthy, community-dwelling individuals, 21 younger adults (19-29 years of age) and 21 older adults (60-87 years of age). Top-down attention, in the form of target predictability, was associated with an improvement in search performance that was comparable for younger and older adults. The increase in search reaction time (RT) associated with the salient distractor (attentional capture), standardized to correct for generalized age-related slowing, was greater for older adults than for younger adults. On trials with a color singleton distractor, search RT increased as a function of increasing activation in frontal regions, for both age groups combined, suggesting increased task difficulty. Mediational analyses disconfirmed the hypothesized model, in which frontal activation mediated the age-related increase in attentional capture, but supported an alternative model in which age was a mediator of the relation between frontal activation and capture. PMID:25102420

Madden, David J; Parks, Emily L; Davis, Simon W; Diaz, Michele T; Potter, Guy G; Chou, Ying-hui; Chen, Nan-kuei; Cabeza, Roberto

2014-11-15

228

Orientation anisotropies in visual search revealed by noise.  

PubMed

The human visual system is remarkably adept at finding objects of interest in cluttered visual environments, a task termed visual search. Because the human eye is highly foveated, it accomplishes this by making many discrete fixations linked by rapid eye movements called saccades. In such naturalistic tasks, we know very little about how the brain selects saccadic targets (the fixation loci). In this paper, we use a novel technique akin to psychophysical reverse correlation and stimuli that emulate the natural visual environment to measure observers' ability to locate a low-contrast target of unknown orientation. We present three main discoveries. First, we provide strong evidence for saccadic selectivity for spatial frequencies close to the target's central frequency. Second, we demonstrate that observers have distinct, idiosyncratic biases to certain orientations in saccadic programming, although there were no priors imposed on the target's orientation. These orientation biases cover a subset of the near-cardinal (horizontal/vertical) and near-oblique orientations, with orientations near vertical being the most common across observers. Further, these idiosyncratic biases were stable across time. Third, within observers, very similar biases exist for foveal target detection accuracy. These results suggest that saccadic targeting is tuned for known stimulus dimensions (here, spatial frequency) and also has some preference or default tuning for uncertain stimulus dimensions (here, orientation). PMID:17997653

Tavassoli, Abtine; van der Linde, Ian; Bovik, Alan C; Cormack, Lawrence K

2007-01-01

229

Adaptation improves performance on a visual search task  

PubMed Central

Temporal context, or adaptation, profoundly affects visual perception. Despite the strength and prevalence of adaptation effects, their functional role in visual processing remains unclear. The effects of spatial context and their functional role are better understood: these effects highlight features that differ from their surroundings and determine stimulus salience. Similarities in the perceptual and physiological effects of spatial and temporal context raise the possibility that they serve similar functions. We therefore tested the possibility that adaptation can enhance stimulus salience. We measured the effects of prolonged (40 s) adaptation to a counterphase grating on performance in a search task in which targets were defined by an orientation offset relative to a background of distracters. We found that, for targets with small orientation offsets, adaptation reduced reaction times and decreased the number of saccades made to find targets. Our results provide evidence that adaptation may function to highlight features that differ from the temporal context in which they are embedded. PMID:23390320

Wissig, Stephanie C.; Patterson, Carlyn A.; Kohn, Adam

2013-01-01

230

Visual search strategies of experienced and nonexperienced swimming coaches.  

PubMed

The aim of this study was to apply an experimental protocol for obtaining information about the visual search strategies used by swimming coaches. 16 swimming coaches participated. The Experienced group (n=8) had 16.1 yr. (SD=8.2) of coaching experience and at least five years of experience in underwater vision. The Nonexperienced group in underwater vision (n=8) had 4.2 yr. (SD=4.0) of coaching experience. Participants were tested in a laboratory environment using a video-projected sample of the crawl stroke of an elite swimmer. This work discusses the main areas of the swimmer's body used by coaches to identify and analyse errors in technique from overhead and underwater perspectives. In front-underwater videos, body roll and mid-water were the locations of the display with the highest percentages of fixation time. In the side-underwater slow videos, the upper body was the location with the highest percentage of visual fixation time and was used to detect the low-elbow fault. Side-overhead takes were not the best perspectives for picking up information directly about performance of the arms; coaches attended to the head as a reference for their visual search. The observation and technical analysis of the hands and arms were facilitated by an underwater perspective. Visual fixation on the elbow served as a reference to identify errors in the upper body. The side-underwater perspective may be an adequate way to identify correct knee angles in leg kicking and the alignment of a swimmer's body and leg actions. PMID:17326515

Moreno, Francisco J; Saavedra, José M; Sabido, Rafael; Luis, Vicente; Reina, Raúl

2006-12-01

231

Visual Iconic Patterns of Instant Messaging: Steps Towards Understanding Visual Conversations  

NASA Astrophysics Data System (ADS)

An Instant Messaging (IM) conversation is a dynamic communication register made up of text, images, animation and sound played out on a screen, with potentially several parallel conversations and activities all within a physical environment. This article first examines how best to capture this unique gestalt using in situ recording techniques (video, screen capture, XML logs) which highlight the micro-phenomenal level of the exchange and the macro-social level of the interaction. Of particular interest are smileys, first as cultural artifacts in CMC in general, then as linguistic markers. A brief taxonomy of these markers is proposed in an attempt to clarify the frequency and patterns of their use. Then, focus is placed on their importance as perceptual cues which facilitate communication, while also serving as emotive and emphatic functional markers. We try to demonstrate that the use of smileys and animation is not arbitrary but rather an organized, structured interactional practice. Finally, we discuss how the study of visual markers in IM could inform the study of other visual conversation codes, such as sign languages, which also have co-produced physical behavior, suggesting the possibility of a visual phonology.

Bays, Hillary

232

Age-related changes in conjunctive visual search in children with and without ASD.  

PubMed

Visual-spatial strengths observed among people with autism spectrum disorder (ASD) may be associated with increased efficiency of selective attention mechanisms such as visual search. In a series of studies, researchers examined the visual search of targets that share features with distractors in a visual array and concluded that people with ASD showed enhanced performance on visual search tasks. However, methodological limitations, the small sample sizes, and the lack of developmental analysis have tempered the interpretations of these results. In this study, we specifically addressed age-related changes in visual search. We examined conjunctive visual search in groups of children with (n = 34) and without ASD (n = 35) at 7-9 years of age when visual search performance is beginning to improve, and later, at 10-12 years, when performance has improved. The results were consistent with previous developmental findings; 10- to 12-year-old children were significantly faster visual searchers than their 7- to 9-year-old counterparts. However, we found no evidence of enhanced search performance among the children with ASD at either the younger or older ages. More research is needed to understand the development of visual search in both children with and without ASD. PMID:24574200

Iarocci, Grace; Armstrong, Kimberly

2014-04-01

233

The Journal of Neuroscience, February 1994, 14(2): 554-567 Visual Search among Items of Different Salience: Removal of Visual  

E-print Network

Searches for the most salient or the least salient item in a display are different kinds of visual tasks ... As a result, the two types of visual search presented comparable perceptual difficulty ...

Koch, Christof

234

Effect of verbal instructions and image size on visual search strategies in basketball free throw shooting.  

PubMed

We assessed the effects on basketball free throw performance of two types of verbal directions with an external attentional focus. Novices (n = 16) were pre-tested on free throw performance and assigned to two groups of similar ability (n = 8 in each). Both groups received verbal instructions with an external focus on either movement dynamics (movement form) or movement effects (e.g. ball trajectory relative to basket). The participants also observed a skilled model performing the task on either a small or large screen monitor, to ascertain the effects of visual presentation mode on task performance. After observation of six videotaped trials, all participants were given a post-test. Visual search patterns were monitored during observation and cross-referenced with performance on the pre- and post-test. Group effects were noted for verbal instructions and image size on visual search strategies and free throw performance. The 'movement effects' group saw a significant improvement in outcome scores between the pre-test and post-test. These results supported evidence that this group spent more viewing time on information outside the body than the 'movement dynamics' group. Image size affected both groups equally with more fixations of shorter duration when viewing the small screen. The results support the benefits of instructions when observing a model with an external focus on movement effects, not dynamics. PMID:11999481

Al-Abood, Saleh A; Bennett, Simon J; Hernandez, Francisco Moreno; Ashford, Derek; Davids, Keith

2002-03-01

235

Pattern-Reversal Visual-Evoked Potentials in Patients with Hemineglect Syndrome  

Microsoft Academic Search

To investigate basic visual information processing in patients with hemineglect syndrome, pattern-reversal visual evoked potentials (VEPs) were recorded in 21 brain-injured patients (10 with neglect symptoms) and 6 healthy subjects. The stimulus was a checkerboard which varied in check size or temporal frequency, presented to the left or right visual field. VEPs recorded in neglect patients to stimuli presented ...

M. P. Viggiano; D. Spinelli; L. Mecacci

1995-01-01

236

How Temporal and Spatial Aspects of Presenting Visualizations Affect Learning about Locomotion Patterns  

ERIC Educational Resources Information Center

Two studies investigated the effectiveness of dynamic and static visualizations for a perceptual learning task (locomotion pattern classification). In Study 1, seventy-five students viewed either dynamic, static-sequential, or static-simultaneous visualizations. For tasks of intermediate difficulty, dynamic visualizations led to better…

Imhof, Birgit; Scheiter, Katharina; Edelmann, Jorg; Gerjets, Peter

2012-01-01

237

What Can 1 Billion Trials Tell Us About Visual Search?  

PubMed

Mobile technology (e.g., smartphones and tablets) has provided psychologists with a wonderful opportunity: through careful design and implementation, mobile applications can be used to crowd source data collection. By garnering massive amounts of data from a wide variety of individuals, it is possible to explore psychological questions that have, to date, been out of reach. Here we discuss 2 examples of how data from the mobile game Airport Scanner (Kedlin Co., http://www.airportscannergame.com) can be used to address questions about the nature of visual search that pose intractable problems for laboratory-based research. Airport Scanner is a successful mobile game with millions of unique users and billions of individual trials, which allows for examining nuanced visual search questions. The goals of the current Observation Report were to highlight the growing opportunity that mobile technology affords psychological research and to provide an example roadmap of how to successfully collect usable data. (PsycINFO Database Record (c) 2014 APA, all rights reserved). PMID:25485661

Mitroff, Stephen R; Biggs, Adam T; Adamo, Stephen H; Dowd, Emma Wu; Winkle, Jonathan; Clark, Kait

2014-12-01

238

Searching for pulsars using image pattern recognition  

E-print Network

In this paper, we present a novel artificial intelligence (AI) program that identifies pulsars from recent surveys using image pattern recognition with deep neural nets: the PICS (Pulsar Image-based Classification System) AI. The AI mimics human experts and distinguishes pulsars from noise and interference by looking for patterns in each candidate. The information from each pulsar candidate is synthesized in four diagnostic plots, which consist of up to thousands of pixels of image data. The AI takes these data from each candidate as its input and uses thousands of such candidates to train its ~9000 neurons. Unlike other pulsar selection programs, which use pre-designed patterns, the PICS AI teaches itself the salient features of different pulsars from a set of human-labeled candidates through machine learning. The deep neural networks in this AI system grant it superior ability in recognizing various types of pulsars as well as their harmonic signals. The trained AI's performance has been validated wi...
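
A very rough sketch of the general idea of an image-based candidate classifier follows. It is not PICS itself (which uses deep and convolutional networks over four diagnostic plots); it only illustrates training a small neural network on flattened candidate images, and every array below is a random placeholder.

    # Illustrative placeholder only: train a small neural network to separate
    # pulsar candidates from noise using flattened diagnostic-plot pixels.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(1)
    n_candidates, n_pixels = 2000, 64 * 64        # e.g. one 64x64 plot per candidate
    X = rng.random((n_candidates, n_pixels))      # stand-in for real candidate images
    y = rng.integers(0, 2, n_candidates)          # 1 = pulsar, 0 = noise/interference

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=0)
    clf = MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=50, random_state=0)
    clf.fit(X_train, y_train)
    print("held-out accuracy:", clf.score(X_test, y_test))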

Zhu, W W; Madsen, E C; Tan, M; Stairs, I H; Brazier, A; Lazarus, P; Lynch, R; Scholz, P; Stovall, K; Random, S M; Banaszak, S; Biwer, C M; Cohen, S; Dartez, L P; Flanigan, J; Lunsford, G; Matinez, J G; Mata, A; Rohr, M; Walker, A; Allen, B; Bhat, N D R; Bogdanov, S; Camilo, F; Chatterjee, S; Cordes, J M; Crawford, F; Deneva, J S; Desvignes, G; Ferdman, R D; Hessels, J W T; Jenet, F A; Kaplan, D; Kaspi, V M; Knispel, B; Lee, K J; van Leeuwen, J; Lyne, A G; McLaughlin, M A; Spitler, L G

2014-01-01

239

Task Specificity and the Influence of Memory on Visual Search: Comment on Vo and Wolfe (2012)  

ERIC Educational Resources Information Center

Recent results from Vo and Wolfe (2012b) suggest that the application of memory to visual search may be task specific: Previous experience searching for an object facilitated later search for that object, but object information acquired during a different task did not appear to transfer to search. The latter inference depended on evidence that a…

Hollingworth, Andrew

2012-01-01

240

Deciphering mobile search patterns: a study of Yahoo! mobile search queries  

Microsoft Academic Search

In this paper we study the characteristics of search queries submitted from mobile devices using various Yahoo! oneSearch applications during a two-month period in the second half of 2007, and report the query patterns derived from 20 million English sample queries submitted by users in the US, Canada, Europe, and Asia. We examine the query distribution and topical ...

Jeonghee Yi; Farzin Maghoul; Jan O. Pedersen

2008-01-01

241

Is a search template an ordinary working memory? Comparing electrophysiological markers of working memory maintenance for visual search and recognition.  

PubMed

Visual search requires the maintenance of a search template in visual working memory in order to guide attention towards the target. This raises the question whether a search template is essentially the same as a visual working memory representation used in tasks that do not require attentional guidance, or whether it is a qualitatively different representation. Two experiments tested this by comparing electrophysiological markers of visual working memory maintenance between simple recognition and search tasks. In both experiments, responses were less rapid and less accurate in the search task than in simple recognition. Nevertheless, the contralateral delay activity (CDA), an index of the quantity and quality of visual working memory representations, was equal across tasks. On the other hand, the late positive complex (LPC), which is sensitive to the effort invested in visual working memory maintenance, was greater for the search task than for the recognition task. Additionally, when the same target cue was repeated across trials (Experiment 2), the amplitude of the visual working memory markers (both CDA and LPC) decreased, demonstrating learning of the target at an equal rate for both tasks. Our results suggest that a search template is qualitatively the same as a representation used for simple recognition, but that greater effort is invested in its maintenance. PMID:24878275

Gunseli, Eren; Meeter, Martijn; Olivers, Christian N L

2014-07-01

242

A pyramidal neural network for visual pattern recognition.  

PubMed

In this paper, we propose a new neural architecture for classification of visual patterns that is motivated by the two concepts of image pyramids and local receptive fields. The new architecture, called pyramidal neural network (PyraNet), has a hierarchical structure with two types of processing layers: pyramidal layers and one-dimensional (1-D) layers. In the new network, nonlinear two-dimensional (2-D) neurons are trained to perform both image feature extraction and dimensionality reduction. We present and analyze five training methods for PyraNet [gradient descent (GD), gradient descent with momentum, resilient back-propagation (RPROP), Polak-Ribiere conjugate gradient (CG), and Levenberg-Marquardt (LM)] and two choices of error functions [mean-square error (MSE) and cross-entropy (CE)]. In this paper, we apply PyraNet to determine gender from a facial image, and compare its performance on the standard facial recognition technology (FERET) database with three classifiers: the convolutional neural network (CNN), the k-nearest neighbor (k-NN) classifier, and the support vector machine (SVM). PMID:17385623
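
To make the pyramidal-layer idea concrete, here is a minimal sketch in which each 2-D output neuron applies its own weights to a small receptive field of the input, so that a single layer both extracts features and reduces resolution. It assumes non-overlapping receptive fields and a tanh nonlinearity for simplicity, and it is not the published PyraNet implementation.

    # Minimal sketch of one pyramidal layer with non-overlapping r x r receptive
    # fields: feature extraction and down-sampling happen in the same step.
    import numpy as np

    def pyramidal_layer(image, weights, biases, r):
        h, w = image.shape
        out = np.empty((h // r, w // r))
        for i in range(h // r):
            for j in range(w // r):
                field = image[i * r:(i + 1) * r, j * r:(j + 1) * r]
                out[i, j] = np.tanh(np.sum(field * weights[i, j]) + biases[i, j])
        return out

    rng = np.random.default_rng(0)
    img = rng.random((32, 32))                      # stand-in for a face image
    W = rng.standard_normal((16, 16, 2, 2)) * 0.1   # one 2x2 weight patch per output neuron
    b = np.zeros((16, 16))
    print(pyramidal_layer(img, W, b, r=2).shape)    # -> (16, 16)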

Phung, Son Lam; Bouzerdoum, Abdesselam

2007-03-01

243

Mouse Visual Neocortex Supports Multiple Stereotyped Patterns of Microcircuit Activity  

PubMed Central

Spiking correlations between neocortical neurons provide insight into the underlying synaptic connectivity that defines cortical microcircuitry. Here, using two-photon calcium fluorescence imaging, we observed the simultaneous dynamics of hundreds of neurons in slices of mouse primary visual cortex (V1). Consistent with a balance of excitation and inhibition, V1 dynamics were characterized by a linear scaling between firing rate and circuit size. Using lagged firing correlations between neurons, we generated functional wiring diagrams to evaluate the topological features of V1 microcircuitry. We found that circuit connectivity exhibited both cyclic graph motifs, indicating recurrent wiring, and acyclic graph motifs, indicating feedforward wiring. After overlaying the functional wiring diagrams onto the imaged field of view, we found properties consistent with Rentian scaling: wiring diagrams were topologically efficient because they minimized wiring with a modular architecture. Within single imaged fields of view, V1 contained multiple discrete circuits that were overlapping and highly interdigitated but were still distinct from one another. The majority of neurons that were shared between circuits displayed peri-event spiking activity whose timing was specific to the active circuit, whereas spike times for a smaller percentage of neurons were invariant to circuit identity. These data provide evidence that V1 microcircuitry exhibits balanced dynamics, is efficiently arranged in anatomical space, and is capable of supporting a diversity of multineuron spike firing patterns from overlapping sets of neurons. PMID:24899701

Sadovsky, Alexander J.

2014-01-01

244

Visual search in noise: Revealing the influence of structural cues by gaze-contingent classification image analysis  

E-print Network

... gaze-contingent classification image analysis ... some aspect of the target in their local image features. Keywords: classification images, visual search.

Field, David

245

Toddlers with Autism Spectrum Disorder Are More Successful at Visual Search than Typically Developing Toddlers  

ERIC Educational Resources Information Center

Plaisted, O'Riordan and colleagues (Plaisted, O'Riordan & Baron-Cohen, 1998; O'Riordan, 2004) showed that school-age children and adults with Autism Spectrum Disorder (ASD) are faster at finding targets in certain types of visual search tasks than typical controls. Currently though, there is very little known about the visual search skills of very…

Kaldy, Zsuzsa; Kraper, Catherine; Carter, Alice S.; Blaser, Erik

2011-01-01

246

Is There a Limit to the Superiority of Individuals with ASD in Visual Search?  

ERIC Educational Resources Information Center

Superiority in visual search for individuals diagnosed with autism spectrum disorder (ASD) is a well-reported finding. We administered two visual search tasks to individuals with ASD and matched controls. One showed no difference between the groups, and one did show the expected superior performance for individuals with ASD. These results offer an…

Hessels, Roy S.; Hooge, Ignace T. C.; Snijders, Tineke M.; Kemner, Chantal

2014-01-01

247

Controlling the Focus of Spatial Attention During Visual Search: Effects of Advanced Aging and Alzheimer Disease  

Microsoft Academic Search

It was hypothesized that slowed visual search in healthy adult aging arises from reduced ability to adjust the size of the attentional focus. A novel, cued-visual search task manipulated the scale of spatial attention in a complex field in healthy elderly individuals and patients with dementia of the Alzheimer type (DAT). Precues indicated with varying validity the size and location

Pamela M. Greenwood; Raja Parasuraman; Gene E. Alexander

1997-01-01

248

Efficient Training Of Visual Search Via Attentional Highlighting Michael C. Mozer  

E-print Network

However, explicit instruction in complex visual domains is difficult. Efforts to design tutoring systems ...

Mozer, Michael C.

249

Visual Search in Typically Developing Toddlers and Toddlers with Fragile X or Williams Syndrome  

ERIC Educational Resources Information Center

Visual selective attention is the ability to attend to relevant visual information and ignore irrelevant stimuli. Little is known about its typical and atypical development in early childhood. Experiment 1 investigates typically developing toddlers' visual search for multiple targets on a touch-screen. Time to hit a target, distance between…

Scerif, Gaia; Cornish, Kim; Wilding, John; Driver, Jon; Karmiloff-Smith, Annette

2004-01-01

250

Case role filling as a side effect of visual search  

SciTech Connect

This paper addresses the problem of generating communicatively adequate extended responses in the absence of specific knowledge concerning the intentions of the questioner. The authors formulate and justify a heuristic for the selection of optional deep case slots not contained in the question as candidates for the additional information contained in an extended response. It is shown that, in a visually present domain of discourse, case role filling for the construction of an extended response can be regarded as a side effect of the visual search necessary to answer a question containing a locomotion verb. The paper describes the various representation constructions used in the German language dialog system HAM-ANS for dealing with the semantics of locomotion verbs and illustrates their use in generating extended responses. In particular, it outlines the structure of the geometrical scene description, the representation of events in a logic-oriented semantic representation language, the case-frame lexicon and the representation of the referential semantics based on the flavor system. The emphasis is on a detailed presentation of the application of object-oriented programming methods for coping with the semantics of locomotion verbs. The process of generating an extended response is illustrated by an extensively annotated trace. 13 references.

Marburger, H.; Wahlster, W.

1983-01-01

251

GEON Developments for Searching, Accessing, and Visualizing Distributed Data  

NASA Astrophysics Data System (ADS)

The NSF-funded GEON (Geosciences Network) Information Technology Research project is developing data sharing frameworks, a registry for distributed databases, concept-based search mechanisms, advanced visualization software, and grid-computing resources for earth science and education applications. The goal of this project is to enable new interdisciplinary research in the geosciences, while extending the access to data and complex modeling tools from the hands of a few researchers to a much broader set of scientific and educational users. To facilitate this, the GEON team of IT scientists, geoscientists, and educators and their collaborators are creating a capable Cyberinfrastructure that is based on grid/web services operating in a distributed environment. We are using a best practices approach that is designed to provide useful and usable capabilities and tools. With the realization of new large scale projects such as EarthScope that involve the collection, analysis, and modeling of vast quantities of diverse data, it is increasingly important to be able to effectively handle, model, and integrate a wide range of multi-dimensional, multi-parameter, and time dependent data in a timely fashion. GEON has been developing a process where the user can discover, access, retrieve and visualize data that is hosted either at GEON or at distributed servers. Whenever possible, GEON is using established protocols and formats for data and metadata exchange that are based on community efforts such as OPeNDAP, the Open GIS Consortium, Grid Computing, and digital libraries. This approach is essential to help overcome the challenges of dealing with heterogeneous distributed data and increases the possibility of data interoperability. We give an overview of resources that are now available to access and visualize a variety of geological and geophysical data, derived products and models including GPS data, GPS-derived velocity vectors and strain rates, earthquakes, three-dimensional seismic tomography, geodynamic models, geologic maps and remote sensing imagery.

Meertens, C.; Seber, D.; Baru, C.; Wright, M.

2005-12-01

252

Transformation of an uncertain video search pipeline to a sketch-based visual analytics loop.  

PubMed

Traditional sketch-based image or video search systems rely on machine learning concepts as their core technology. However, in many applications, machine learning alone is impractical since videos may not be semantically annotated sufficiently, there may be a lack of suitable training data, and the search requirements of the user may frequently change for different tasks. In this work, we develop a visual analytics system that overcomes the shortcomings of the traditional approach. We make use of a sketch-based interface to enable users to specify search requirements in a flexible manner without depending on semantic annotation. We employ active machine learning to train different analytical models for different types of search requirements. We use visualization to facilitate knowledge discovery at the different stages of visual analytics. This includes visualizing the parameter space of the trained model, visualizing the search space to support interactive browsing, visualizing candidate search results to support rapid interaction for active learning while minimizing the watching of videos, and visualizing aggregated information of the search results. We demonstrate the system for searching spatiotemporal attributes from sports video to identify key instances of team and player performance. PMID:24051777

Legg, Philip A; Chung, David H S; Parry, Matthew L; Bown, Rhodri; Jones, Mark W; Griffiths, Iwan W; Chen, Min

2013-12-01

253

The processing of coherent global form and motion patterns without visual awareness  

PubMed Central

In the present study we addressed whether the processing of global form and motion is dependent on visual awareness. Continuous flash suppression (CFS) was used to suppress from awareness global dot motion (GDM) and Glass pattern stimuli. We quantified the minimum time taken for both pattern types to break suppression, with the signal coherence of the pattern (0, 25, 50, and 100% signal) and the type of global structure (rotational and radial) as independent variables. For both form and motion patterns, increasing signal coherence decreased the time required to break suppression. This was the same for both rotational and radial global patterns. However, GDM patterns broke suppression faster than Glass patterns. In a supplementary experiment, we confirmed that this difference in break times is not because of the temporal nature of GDM patterns in attracting attention. In Experiment 2, we examined whether the processing of dynamic Glass patterns was similarly dependent on visual awareness. The processing of dynamic Glass patterns involves both motion and form systems, and we questioned whether the interaction of these two systems was dependent on visual awareness. The suppression of dynamic Glass patterns was also dependent on signal coherence, and the time course of suppression break resembled the detection of global motion and not global form. In Experiment 3 we ruled out the possibility that the faster suppression break times occurred because the visual system is more sensitive to highly coherent form and motion patterns. Here, contrast-changing GDM and Glass patterns were superimposed on the dynamic CFS mask, and the minimum time required for them to be detected was measured. We showed that there was no difference in detection times for patterns of 0 and 100% coherence. The advantage of highly coherent global motion and form patterns in breaking suppression indicates that the processing and interaction of global motion and form systems occur without visual awareness. PMID:24672494

Chung, Charles Y. L.; Khuu, Sieu K.

2014-01-01

254

Breaking Visual CAPTCHAs with Naive Pattern Recognition Algorithms  

Microsoft Academic Search

Visual CAPTCHAs have been widely used across the Internet to defend against undesirable or malicious bot programs. In this paper, we document how we have broken most such visual schemes provided at Captchaservice.org, a publicly available web service for CAPTCHA generation. These schemes were effectively resistant to attacks conducted using a high-quality Optical Character Recognition program, but were broken with

Jeff Yan; Ahmad Salah El Ahmad

2007-01-01

255

Point of Gaze Analysis Reveals Visual Search Strategies  

E-print Network

Seemingly complex tasks like visual search can be analyzed using a cognition-free, bottom-up framework. We sought to reveal strategies used by observers in visual search tasks using accurate eye tracking ...

Rajashekar, Umesh

256

How Visual Query Tools Can Support Users Searching the Internet (appears in ICIV'04, London, England, July 14-16, 2004)  

E-print Network

... to show why visual query tools may provide greater benefits to users who want to coordinate meta searches ... no significant impact on the effectiveness of the search results [6]. This suggests that visual query tools have ... While meta search engines exist that visually organize the retrieved documents [5, 9, 10, 13, 18] ...

Spoerri, Anselm

257

VisArchive: A Time and Relevance Based Visual Interface for Searching, Browsing, and Exploring Project Archives (with Timeline  

E-print Network

... users with better awareness of search results within project archives. VisArchive visualizes ...

Tory, Melanie

258

How fast can you change your mind? The speed of top-down guidance in visual search  

E-print Network

... target with each search (e.g. find the coffee cup, then the sugar). How quickly can the visual system ... by category level. Sometimes we search the visual ...

259

Explaining Eye Movements in the Visual Search of Varying Density Layouts  

E-print Network

... of visual search, and the synergistic relationship between cognitive modeling and eye tracking. The paper presents cognitive models of the perceptual, cognitive, and motor processing involved in the visual search ...

Hornof, Anthony

260

The Area Activation Model of Saccadic Selectivity in Visual Search  

E-print Network

... saccadic selectivity in visual search tasks. The model in its present state includes weights for target ... the statistical distribution of saccadic endpoints for any given visual search display. Besides providing ...

Pomplun, Marc

261

The hard-won benefits of familiarity in visual search: naturally familiar brand logos are found faster  

E-print Network

Familiar items are found faster than unfamiliar ones in visual search tasks. This effect has important ... items with moderate levels of exposure would show benefits in visual search, and if so, what kind ...

Koutstaal, Wilma

262

Strategies of the honeybee Apis mellifera during visual search for vertical targets presented at various heights: a role for spatial attention?  

PubMed Central

When honeybees are presented with a colour discrimination task, they tend to choose swiftly and accurately when objects are presented in the ventral part of their frontal visual field. In contrast, poor performance is observed when objects appear in the dorsal part. Here we investigate if this asymmetry is caused by fixed search patterns or if bees can use alternative search mechanisms such as spatial attention, which allows flexible focusing on different areas of the visual field. We asked individual honeybees to choose an orange rewarded target among blue distractors. Target and distractors were presented in the ventral visual field, the dorsal field or both. Bees presented with targets in the ventral visual field consistently had the highest search efficiency, with rapid decisions, high accuracy and direct flight paths. In contrast, search performance for dorsally located targets was inaccurate and slow at the beginning of the test phase, but bees increased their search performance significantly after a few learning trials: they found the target faster, made fewer errors and flew in a straight line towards the target. However, bees needed thrice as long to improve the search for a dorsally located target when the target’s position changed randomly between the ventral and the dorsal visual field. We propose that honeybees form expectations of the location of the target’s appearance and adapt their search strategy accordingly. Different possible mechanisms of this behavioural adaptation are discussed. PMID:25254109

Morawetz, Linde; Chittka, Lars; Spaethe, Johannes

2014-01-01

263

Prediction of shot success for basketball free throws: visual search strategy.  

PubMed

In ball games, players have to pay close attention to visual information in order to predict the movements of both the opponents and the ball. Previous studies have indicated that players primarily utilise cues concerning the ball and opponents' body motion. The information acquired must be effective for observing players to select the subsequent action. The present study evaluated the effects of changes in the video replay speed on the spatial visual search strategy and ability to predict free throw success. We compared eye movements made while observing a basketball free throw by novices and experienced basketball players. Correct response rates were close to chance (50%) at all video speeds for the novices. The correct response rate of experienced players was significantly above chance (and significantly above that of the novices) at the normal speed, but was not different from chance at both slow and fast speeds. Experienced players gazed more on the lower part of the player's body when viewing a normal speed video than the novices. The players likely detected critical visual information to predict shot success by properly moving their gaze according to the shooter's movements. This pattern did not change when the video speed was decreased, but changed when it was increased. These findings suggest that temporal information is important for predicting action outcomes and that such outcomes are sensitive to video speed. PMID:24319995

Uchida, Yusuke; Mizuguchi, Nobuaki; Honda, Masaaki; Kanosue, Kazuyuki

2014-01-01

264

EVALUATION OF A VISUALLY CATEGORIZED SEARCH ENGINE  

E-print Network

... search engines are becoming an indispensable tool for finding information on the Internet. ... the intensive workload requires that people use their time properly. Using search engines effectively turns ...

Paris-Sud XI, Université de

265

High or Low Target Prevalence Increases the Dual-Target Cost in Visual Search  

ERIC Educational Resources Information Center

Previous studies have demonstrated a dual-target cost in visual search. In the current study, the relationship between search for one and search for two targets was investigated to examine the effects of target prevalence and practice. Color-shape conjunction stimuli were used with response time, accuracy and signal detection measures. Performance…

Menneer, Tamaryn; Donnelly, Nick; Godwin, Hayward J.; Cave, Kyle R.

2010-01-01

266

Visual Search Is Postponed during the Attentional Blink until the System Is Suitably Reconfigured  

ERIC Educational Resources Information Center

J. S. Joseph, M. M. Chun, and K. Nakayama (1997) found that pop-out visual search was impaired as a function of intertarget lag in an attentional blink (AB) paradigm in which the 1st target was a letter and the 2nd target was a search display. In 4 experiments, the present authors tested the implication that search efficiency should be similarly…

Ghorashi, S. M. Shahab; Smilek, Daniel; Di Lollo, Vincent

2007-01-01

267

Computer vision enhances mobile eye-tracking to expose expert cognition in natural-scene visual-search tasks  

NASA Astrophysics Data System (ADS)

Mobile eye-tracking provides the fairly unique opportunity to record and elucidate cognition in action. In our research, we are searching for patterns in, and distinctions between, the visual-search performance of experts and novices in the geo-sciences. Traveling to regions resultant from various geological processes as part of an introductory field studies course in geology, we record the prima facie gaze patterns of experts and novices when they are asked to determine the modes of geological activity that have formed the scene-view presented to them. Recording eye video and scene video in natural settings generates complex imagery that requires advanced applications of computer vision research to generate registrations and mappings between the views of separate observers. By developing such mappings, we could then place many observers into a single mathematical space where we can spatio-temporally analyze inter- and intra-subject fixations, saccades, and head motions. While working towards perfecting these mappings, we developed an updated experiment setup that allowed us to statistically analyze intra-subject eye-movement events without the need for a common domain. Through such analyses we are finding statistical differences between novices and experts in these visual-search tasks. In the course of this research we have developed a unified, open-source, software framework for processing, visualization, and interaction of mobile eye-tracking and high-resolution panoramic imagery.

Keane, Tommy P.; Cahill, Nathan D.; Tarduno, John A.; Jacobs, Robert A.; Pelz, Jeff B.

2014-02-01

268

Exploration on Building of Visualization Platform to Innovate Business Operation Pattern of Supply Chain Finance  

NASA Astrophysics Data System (ADS)

Supply Chain Finance, as a new financing pattern, has been attracting broad attention from scholars at home and abroad since its emergence. This paper describes the author's understanding of supply chain finance, classifies its business patterns in China from different perspectives, analyzes the existing problems and deficiencies of these patterns, and finally puts forward the notion of building a visualization platform to innovate the business operation patterns and risk control modes of domestic supply chain finance.

He, Xiangjun; Tang, Lingyun

269

Disruptive Body Patterning of Cuttlefish (Sepia officinalis) Requires Visual Information Regarding  

E-print Network

Cuttlefish (Sepia officinalis Linnaeus, 1758) on mixed light and dark gravel show ... of natural substrates that cuttlefish cue on visually are largely unknown. Therefore, we aimed to identify ...

Hanlon, Roger T.

270

SELECTIVE MECHANISMS FOR COMPLEX VISUAL PATTERNS REVEALED BY ADAPTATION  

E-print Network

... to show interactions between early visual channels (Olzak and Thomas, 1991; Georgeson, 1992; Carandini et al., 1997a; Georgeson and Meese, 1997). None of these studies has demonstrated the existence ...

Nottingham, University of

271

Effect of pattern complexity on the visual span for Chinese and alphabet characters  

PubMed Central

The visual span for reading is the number of letters that can be recognized without moving the eyes and is hypothesized to impose a sensory limitation on reading speed. Factors affecting the size of the visual span have been studied using alphabet letters. There may be common constraints applying to recognition of other scripts. The aim of this study was to extend the concept of the visual span to Chinese characters and to examine the effect of the greater complexity of these characters. We measured visual spans for Chinese characters and alphabet letters in the central vision of bilingual subjects. Perimetric complexity was used as a metric to quantify the pattern complexity of binary character images. The visual span tests were conducted with four sets of stimuli differing in complexity—lowercase alphabet letters and three groups of Chinese characters. We found that the size of visual spans decreased with increasing complexity, ranging from 10.5 characters for alphabet letters to 4.5 characters for the most complex Chinese characters studied. A decomposition analysis revealed that crowding was the dominant factor limiting the size of the visual span, and the amount of crowding increased with complexity. Errors in the spatial arrangement of characters (mislocations) had a secondary effect. We conclude that pattern complexity has a major effect on the size of the visual span, mediated in large part by crowding. Measuring the visual span for Chinese characters is likely to have high relevance to understanding visual constraints on Chinese reading performance. PMID:24993020
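
Perimetric complexity, the metric used in this record, is commonly computed as the squared length of the ink's perimeter divided by the ink area (some papers additionally divide by 4*pi). A rough sketch of that computation for a binary glyph image, approximating the perimeter by counting exposed pixel edges, follows; it is only an illustration under those assumptions, not the authors' implementation.

    # Rough sketch: perimetric complexity of a binary character image as
    # (perimeter of the ink)**2 / ink area, with the perimeter approximated
    # by counting exposed pixel edges. Normalizations vary across papers.
    import numpy as np

    def perimetric_complexity(glyph):
        ink = glyph.astype(bool)
        area = int(ink.sum())
        shared = int((ink[1:, :] & ink[:-1, :]).sum()
                     + (ink[:, 1:] & ink[:, :-1]).sum())
        perimeter = 4 * area - 2 * shared        # edges not shared with other ink pixels
        return perimeter ** 2 / area

    # Sanity check: a solid 10 x 10 square has perimeter 40 and area 100 -> 16.
    print(perimetric_complexity(np.ones((10, 10))))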

Wang, Hui; He, Xuanzi; Legge, Gordon E.

2014-01-01

272

Visual Search and Line Bisection in Hemianopia: Computational Modelling of Cortical Compensatory Mechanisms and Comparison with Hemineglect  

PubMed Central

Hemianopia patients have lost vision from the contralateral hemifield, but make behavioural adjustments to compensate for this field loss. As a result, their visual performance and behaviour contrast with those of hemineglect patients who fail to attend to objects contralateral to their lesion. These conditions differ in their ocular fixations and perceptual judgments. During visual search, hemianopic patients make more fixations in contralesional space while hemineglect patients make fewer. During line bisection, hemianopic patients fixate the contralesional line segment more and make a small contralesional bisection error, while hemineglect patients make few contralesional fixations and a larger ipsilesional bisection error. Hence, there is an attentional failure for contralesional space in hemineglect but a compensatory adaptation to attend more to the blind side in hemianopia. A challenge for models of visual attentional processes is to show how compensation is achieved in hemianopia, and why such processes are hindered or inaccessible in hemineglect. We used a neurophysiology-derived computational model to examine possible cortical compensatory processes in simulated hemianopia from a V1 lesion and compared results with those obtained with the same processes under conditions of simulated hemineglect from a parietal lesion. A spatial compensatory bias to increase attention contralesionally replicated hemianopic scanning patterns during visual search but not during line bisection. To reproduce the latter required a second process, an extrastriate lateral connectivity facilitating form completion into the blind field: this allowed accurate placement of fixations on contralesional stimuli and reproduced fixation patterns and the contralesional bisection error of hemianopia. Neither of these two cortical compensatory processes was effective in ameliorating the ipsilesional bias in the hemineglect model. Our results replicate normal and pathological patterns of visual scanning, line bisection, and differences between hemianopia and hemineglect, and may explain why compensatory processes that counter the effects of hemianopia are ineffective in hemineglect. PMID:23390506

Lanyon, Linda J.; Barton, Jason J. S.

2013-01-01

273

Timing of speech and display affects the linguistic mediation of visual search.  

PubMed

Recent studies have shown that, instead of a dichotomy between parallel and serial search strategies, in many instances we see a combination of both strategies in use. Consequently, computational models and theoretical accounts of visual search processing have evolved from traditional serial-parallel descriptions to a continuum from 'efficient' to 'inefficient' search. One of the findings, consistent with this blurring of the serial-parallel distinction, is that concurrent spoken linguistic input influences the efficiency of visual search. In our first experiment we replicate those findings using a between-subjects design. Next, we utilize a localist attractor network to simulate the results from the first experiment, and then employ the network to make quantitative predictions about the influence of subtle timing differences in real-time language processing on visual search. These model predictions are then tested and confirmed in our second experiment. The results provide further evidence toward understanding linguistically mediated influences on real-time visual search processing and support an interactive processing account of visual search and language comprehension. PMID:25154286

Chiu, Eric M; Spivey, Michael J

2014-01-01

274

Camouflage by Edge Enhancement in Animal Coloration Patterns and Its Implications for Visual Mechanisms  

Microsoft Academic Search

Animal camouflage patterns may exploit, and thus give an insight into, visual processing mechanisms. In one common type of camouflage the borders of the coloured patterns are enhanced by high contrast lines. This type of camouflage is seen on many frogs and we use it as the basis for speculating about vision in a small, frog-eating snake. It is argued

D. Osorio; M. V. Srinivasan

1991-01-01

275

A modified mirror projection visual evoked potential stimulator for presenting patterns in different orientations.  

PubMed

Modifications to a standard mirror projection visual evoked potential stimulator are described to enable projection of patterns in varying orientations. The galvanometer-mirror assembly is mounted on an arm which can be rotated through 90 degrees. This enables patterns in any orientation to be deflected perpendicular to their axes. PMID:2424725

Taylor, P K; Wynn-Williams, G M

1986-07-01

276

Feature-Based Attention in the Frontal Eye Field and Area V4 during Visual Search  

E-print Network

When we search for a target in a crowded visual scene, we often use the distinguishing features of the target, such as color or shape, to guide our attention and eye movements. To investigate the neural mechanisms of ...

Zhou, Huihui

277

Eye movement guidance in familiar visual scenes : a role for scene specific location priors in search  

E-print Network

Ecologically relevant search typically requires making rapid and strategic eye movements in complex, cluttered environments. Attention allocation is known to be influenced by low level image features, visual scene context, ...

Hidalgo-Sotelo, Barbara

2010-01-01

278

What are the shapes of response time distributions in visual search?  

E-print Network

Many visual search experiments measure response time (RT) as their primary dependent variable. Analyses typically focus on mean (or median) RT. However, given enough data, the RT distribution can be a rich source of ...

Palmer, Evan M.

279

Effectiveness of search patterns for recovery of animal carcasses in relation to pocket gopher infestation control  

Microsoft Academic Search

We tested four search patterns to identify one or more that consistently resulted in the location of a high percentage of above ground carcasses. Searchers found only 25·4% of placed carcasses. The random search pattern exhibited the lowest search efficiency (i.e. percent carcass recovery), 2·6%. This differed significantly from the other three search patterns (EW transects; EW transects followed by

G. W. Witmer; M. J. Pipas; D. L. Campbell

1995-01-01

280

Visual search in scenes involves selective and non-selective pathways  

PubMed Central

How do we find objects in scenes? For decades, visual search models have been built on experiments in which observers search for targets, presented among distractor items, isolated and randomly arranged on blank backgrounds. Are these models relevant to search in continuous scenes? This paper argues that the mechanisms that govern artificial, laboratory search tasks do play a role in visual search in scenes. However, scene-based information is used to guide search in ways that had no place in earlier models. Search in scenes may be best explained by a dual-path model: A “selective” path in which candidate objects must be individually selected for recognition and a “non-selective” path in which information can be extracted from global / statistical information. PMID:21227734

Wolfe, Jeremy M; Vo, Melissa L-H; Evans, Karla K; Greene, Michelle R

2010-01-01

281

Alzheimer disease constricts the dynamic range of spatial attention in visual search  

Microsoft Academic Search

A cued visual search task was used to examine the dynamic range over which spatial attention affects target identification during visual search. Precues varied in validity (valid, invalid, or neutral) and in precision (cue size) of target localization. Participants were "young-old" (65-74 years) and "old-old" (75-85 years) elderly adults and individuals in the mild stage of dementia of the Alzheimer

Raja Parasuraman; Pamela M. Greenwood; Gene E. Alexander

282

The effects of task difficulty on visual search strategy in virtual 3D displays  

PubMed Central

Analyzing the factors that determine our choice of visual search strategy may shed light on visual behavior in everyday situations. Previous results suggest that increasing task difficulty leads to more systematic search paths. Here we analyze observers' eye movements in an “easy” conjunction search task and a “difficult” shape search task to study visual search strategies in stereoscopic search displays with virtual depth induced by binocular disparity. Standard eye-movement variables, such as fixation duration and initial saccade latency, as well as new measures proposed here, such as saccadic step size, relative saccadic selectivity, and x-y target distance, revealed systematic effects on search dynamics in the horizontal-vertical plane throughout the search process. We found that in the “easy” task, observers start with the processing of display items in the display center immediately after stimulus onset and subsequently move their gaze outwards, guided by extrafoveally perceived stimulus color. In contrast, the “difficult” task induced an initial gaze shift to the upper-left display corner, followed by a systematic left-right and top-down search process. The only consistent depth effect was a trend of initial saccades in the easy task with smallest displays to the items closest to the observer. The results demonstrate the utility of eye-movement analysis for understanding search strategies and provide a first step toward studying search strategies in actual 3D scenarios. PMID:23986539
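
Two of the eye-movement measures proposed above, saccadic step size and x-y target distance, can be derived directly from a sequence of fixation coordinates. The sketch below uses a hypothetical fixation list and target location purely for illustration; it is not the authors' analysis code.

```python
import numpy as np

def saccadic_step_sizes(fixations):
    """Euclidean distance between consecutive fixations (one value per saccade)."""
    f = np.asarray(fixations, dtype=float)
    return np.linalg.norm(np.diff(f, axis=0), axis=1)

def xy_target_distance(fixations, target):
    """Distance of each fixation from the target in the x-y plane."""
    f = np.asarray(fixations, dtype=float)
    return np.linalg.norm(f - np.asarray(target, dtype=float), axis=1)

# hypothetical fixation sequence (screen coordinates in pixels) and target location
fix = [(512, 384), (430, 350), (300, 410), (295, 405)]
print(saccadic_step_sizes(fix))              # step size of each saccade
print(xy_target_distance(fix, (290, 400)))   # how the scan path converges on the target
```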

Pomplun, Marc; Garaas, Tyler W.; Carrasco, Marisa

2013-01-01

283

Visual search disorders in acute and chronic homonymous hemianopia: lesion effects and adaptive strategies.  

PubMed

Patients with homonymous hemianopia due to occipital brain lesions show disorders of visual search. In everyday life this leads to difficulties in reading and spatial orientation. It is a matter of debate whether these disorders are due to the brain lesion or rather reflect compensatory eye movement strategies developing over time. For the first time, eye movements of acute hemianopic patients (n= 9) were recorded during the first days following stroke while they performed an exploratory visual-search task. Compared to age-matched control subjects their search duration was prolonged due to increased fixations and refixations, that is, repeated scanning of previously searched locations. Saccadic amplitudes were smaller in patients. Right hemianopic patients were more impaired than left hemianopic patients. The number of fixations and refixations did not differ significantly between both hemifields in the patients. Follow-up of one patient revealed changes of visual search over 18 months. By using more structured scanpaths with fewer saccades his search duration decreased. Furthermore, he developed a more efficient eye-movement strategy by making larger but less frequent saccades toward his blind side. In summary, visual-search behavior of acute hemianopic patients differs from healthy control subjects and from chronic hemianopic patients. We conclude that abnormal visual search in acute hemianopic patients is related to the brain lesion. We provide some evidence for adaptive eye-movement strategies developed over time. These adaptive strategies make the visual search more efficient and may help to compensate for the persisting visual-field loss. PMID:19645941

Machner, Björn; Sprenger, Andreas; Sander, Thurid; Heide, Wolfgang; Kimmig, Hubert; Helmchen, Christoph; Kömpf, Detlef

2009-05-01

284

The Role of Target-Distractor Relationships in Guiding Attention and the Eyes in Visual Search  

ERIC Educational Resources Information Center

Current models of visual search assume that visual attention can be guided by tuning attention toward specific feature values (e.g., particular size, color) or by inhibiting the features of the irrelevant nontargets. The present study demonstrates that attention and eye movements can also be guided by a relational specification of how the target…

Becker, Stefanie I.

2010-01-01

285

Hand Movement Deviations in a Visual Search Task with Cross Modal Cuing  

ERIC Educational Resources Information Center

The purpose of this study is to demonstrate the cross-modal effects of an auditory organization on a visual search task and to investigate the influence of the level of detail in instructions describing or hinting at the associations between auditory stimuli and the possible locations of a visual target. In addition to measuring the participants'…

Aslan, Asli; Aslan, Hurol

2007-01-01

286

The Effects of Presentation Method and Information Density on Visual Search Ability and Working Memory Load  

ERIC Educational Resources Information Center

This study investigates the effects of successive and simultaneous information presentation methods on learner's visual search ability and working memory load for different information densities. Since the processing of information in the brain depends on the capacity of visual short-term memory (VSTM), the limited information processing capacity…

Chang, Ting-Wen; Kinshuk; Chen, Nian-Shing; Yu, Pao-Ta

2012-01-01

287

A Saliency-Based Search Mechanism for Overt and Covert Shifts of Visual Attention  

E-print Network

Models of visual search, whether involving overt eye movements or covert shifts of attention, are based on orientation, intensity and color information, in a purely stimulus-driven manner. The model is applied … (including Drosophila [1]) appear to employ a serial computational strategy when inspecting complex visual

Koch, Christof

288

Detection of Emotional Faces: Salient Physical Features Guide Effective Visual Search  

ERIC Educational Resources Information Center

In this study, the authors investigated how salient visual features capture attention and facilitate detection of emotional facial expressions. In a visual search task, a target emotional face (happy, disgusted, fearful, angry, sad, or surprised) was presented in an array of neutral faces. Faster detection of happy and, to a lesser extent,…

Calvo, Manuel G.; Nummenmaa, Lauri

2008-01-01

289

Locally-adaptive and memetic evolutionary pattern search algorithms.  

PubMed

Recent convergence analyses of evolutionary pattern search algorithms (EPSAs) have shown that these methods have a weak stationary point convergence theory for a broad class of unconstrained and linearly constrained problems. This paper describes how the convergence theory for EPSAs can be adapted to allow each individual in a population to have its own mutation step length (similar to the design of evolutionary programing and evolution strategies algorithms). These are called locally-adaptive EPSAs (LA-EPSAs) since each individual's mutation step length is independently adapted in different local neighborhoods. The paper also describes a variety of standard formulations of evolutionary algorithms that can be used for LA-EPSAs. Further, it is shown how this convergence theory can be applied to memetic EPSAs, which use local search to refine points within each iteration. PMID:12804096
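
As a rough illustration of the locally-adaptive idea described above, where each individual carries and adapts its own mutation step length, here is a toy sketch for unconstrained minimization. The expansion/contraction rule, selection scheme, and parameters are assumptions made for the example; this is not the algorithm analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    """Toy cost function to minimize."""
    return float(np.sum(x ** 2))

def la_epsa_sketch(f, dim=5, pop_size=10, generations=300):
    # each individual carries its own mutation step length
    pop = [(rng.uniform(-5, 5, dim), 1.0) for _ in range(pop_size)]
    for _ in range(generations):
        pool = []
        for x, step in pop:
            # pattern-search-style mutation along a random coordinate direction
            d = np.zeros(dim)
            d[rng.integers(dim)] = rng.choice([-1.0, 1.0])
            child = x + step * d
            if f(child) < f(x):
                pool.append((child, step * 2.0))   # success: expand this individual's step
                pool.append((x, step))             # parent also survives
            else:
                pool.append((x, step * 0.5))       # failure: contract this individual's step
        pool.sort(key=lambda ind: f(ind[0]))       # simple truncation selection
        pop = pool[:pop_size]
    return min(pop, key=lambda ind: f(ind[0]))

best_x, best_step = la_epsa_sketch(sphere)
print(sphere(best_x), best_step)
```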

Hart, William E

2003-01-01

290

Visual cluster analysis and pattern recognition template and methods  

DOEpatents

A method of clustering using a novel template to define a region of influence is disclosed. Using neighboring approximation methods, computation times can be significantly reduced. The template and method are applicable to, and improve, pattern recognition techniques.

Osbourn, G.C.; Martinez, R.F.

1999-05-04

291

Hypothesis Support Mechanism for Mid-Level Visual Pattern Recognition  

NASA Technical Reports Server (NTRS)

A method of mid-level pattern recognition provides for a pose invariant Hough Transform by parametrizing pairs of points in a pattern with respect to at least two reference points, thereby providing a parameter table that is scale- or rotation-invariant. A corresponding inverse transform may be applied to test hypothesized matches in an image and a distance transform utilized to quantify the level of match.
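
One way to read the parametrization described above is that each pair of pattern points is encoded relative to two reference points using quantities (distance ratios and relative angles) that are unchanged by scaling and rotation. The sketch below is an illustrative guess at such an encoding, not the patented method.

```python
import math

def pair_signature(p1, p2, ref_a, ref_b):
    """Encode a point pair relative to the reference segment ref_a -> ref_b.

    Distances are expressed as ratios to |ref_b - ref_a| and angles relative to the
    reference direction, so the signature is unchanged under uniform scaling and
    rotation of the whole configuration.
    """
    def vec(a, b):
        return (b[0] - a[0], b[1] - a[1])

    base = vec(ref_a, ref_b)
    base_len = math.hypot(*base)
    base_ang = math.atan2(base[1], base[0])
    sig = []
    for p in (p1, p2):
        v = vec(ref_a, p)
        sig.append((round(math.hypot(*v) / base_len, 6),
                    round((math.atan2(v[1], v[0]) - base_ang) % (2 * math.pi), 6)))
    return tuple(sig)

# the same configuration rotated by 90 degrees yields the same signature
print(pair_signature((2, 1), (3, 4), (0, 0), (1, 0)))
print(pair_signature((-1, 2), (-4, 3), (0, 0), (0, 1)))
```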

Amador, Jose J (Inventor)

2007-01-01

292

Generalized pattern search algorithms with adaptive precision function evaluations  

SciTech Connect

In the literature on generalized pattern search algorithms, convergence to a stationary point of a once continuously differentiable cost function is established under the assumption that the cost function can be evaluated exactly. However, there is a large class of engineering problems where the numerical evaluation of the cost function involves the solution of systems of differential algebraic equations. Since the termination criteria of the numerical solvers often depend on the design parameters, computer code for solving these systems usually defines a numerical approximation to the cost function that is discontinuous with respect to the design parameters. Standard generalized pattern search algorithms have been applied heuristically to such problems, but no convergence properties have been stated. In this paper we extend a class of generalized pattern search algorithms to a form that uses adaptive precision approximations to the cost function. These numerical approximations need not define a continuous function. Our algorithms can be used for solving linearly constrained problems with cost functions that are at least locally Lipschitz continuous. Assuming that the cost function is smooth, we prove that our algorithms converge to a stationary point. Under the weaker assumption that the cost function is only locally Lipschitz continuous, we show that our algorithms converge to points at which the Clarke generalized directional derivatives are nonnegative in predefined directions. An important feature of our adaptive precision scheme is the use of coarse approximations in the early iterations, with the approximation precision controlled by a test. Such an approach leads to substantial time savings in minimizing computationally expensive functions.
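
The core idea, a poll-based pattern search whose cost-function evaluations become more precise as the mesh is refined, can be sketched briefly. The way the evaluation tolerance is tied to the mesh size, the acceptance margin, and the test function below are assumptions for illustration, not the algorithm or convergence conditions given in the paper.

```python
import numpy as np

def noisy_cost(x, tol, rng):
    """Stand-in for a cost computed by a numerical solver: the exact value plus
    an approximation error bounded by the requested tolerance `tol`."""
    exact = float(np.sum((x - 1.0) ** 2))
    return exact + rng.uniform(-tol, tol)

def pattern_search_adaptive_precision(x0, mesh=1.0, mesh_min=1e-4, max_iter=500, seed=0):
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        tol = 0.1 * mesh                      # coarse evaluations early, tighter later
        fx = noisy_cost(x, tol, rng)
        improved = False
        for i in range(len(x)):               # poll along +/- coordinate directions
            for sign in (1.0, -1.0):
                trial = x.copy()
                trial[i] += sign * mesh
                f_trial = noisy_cost(trial, tol, rng)
                if f_trial < fx - tol:        # require improvement beyond the noise level
                    x, fx, improved = trial, f_trial, True
                    break
            if improved:
                break
        if not improved:
            mesh *= 0.5                       # refine the mesh, which also tightens precision
            if mesh < mesh_min:
                break
    return x

print(pattern_search_adaptive_precision(np.zeros(3)))  # approaches the minimizer (1, 1, 1)
```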

Polak, Elijah; Wetter, Michael

2003-05-14

293

Performance of visual search tasks from various types of contour information.  

PubMed

A recently proposed visual aid for patients with a restricted visual field (tunnel vision) combines a see-through head-mounted display and a simultaneous minified contour view of the wide-field image of the environment. Such a widening of the effective visual field is helpful for tasks, such as visual search, mobility, and orientation. The sufficiency of image contours for performing everyday visual tasks is of major importance for this application, as well as for other applications, and for basic understanding of human vision. This research aims is to examine and compare the use of different types of automatically created contours, and contour representations, for practical everyday visual operations using commonly observed images. The visual operations include visual searching for items, such as cutlery, housewares, etc. Considering different recognition levels, identification of an object is distinguished from mere detection (when the object is not necessarily identified). Some nonconventional visual-based contour representations were developed for this purpose. Experiments were performed with normal-vision subjects by superposing contours of the wide field of the scene over a narrow field (see-through) background. From the results, it appears that about 85% success is obtained for searched object identification when the best contour versions are employed. Pilot experiments with video simulations are reported at the end of the paper. PMID:23456115

Itan, Liron; Yitzhaky, Yitzhak

2013-03-01

294

Visualization and analysis of 3D gene expression patterns in zebrafish using web services  

NASA Astrophysics Data System (ADS)

The analysis of gene expression patterns plays an important role in developmental biology and molecular genetics. Visualizing both quantitative and spatio-temporal aspects of gene expression patterns together with referenced anatomical structures of a model organism in 3D can help identify how a group of genes is expressed at a certain location at a particular developmental stage of an organism. In this paper, we present an approach that provides an online visualization of gene expression data in zebrafish (Danio rerio) within a 3D reconstruction model of zebrafish at different developmental stages. We developed web services that provide programmable access to the 3D reconstruction data and the spatio-temporal gene expression data maintained in our local repositories. To demonstrate this work, we developed a web application that uses these web services to retrieve data from our local information systems. The web application also retrieves relevant analyses of microarray gene expression data from an external community resource, the ArrayExpress Atlas. All the relevant gene expression pattern data are subsequently integrated with the reconstruction data of the zebrafish atlas using ontology-based mapping. The resulting visualization provides quantitative and spatial information on patterns of gene expression in a 3D graphical representation of the zebrafish atlas at a certain developmental stage. To deliver the visualization to the user, we developed a Java-based 3D viewer client that can be integrated in a web interface, allowing the user to visualize the integrated information over the Internet.
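
The integration described above, programmatic retrieval from local repositories and an external resource followed by ontology-based merging, can be illustrated schematically. The endpoint, parameter names, response format, and ontology mapping below are hypothetical placeholders, not the actual services.

```python
import requests  # standard HTTP client; the endpoint below is hypothetical

# hypothetical ontology mapping from anatomy terms in the expression records
# to structure identifiers in the 3D zebrafish atlas
ONTOLOGY_MAP = {"pectoral fin bud": "ZFA:0000143", "optic vesicle": "ZFA:0000045"}

def fetch_expression(gene, stage, base_url="http://example.org/zebrafish-api"):
    """Retrieve spatio-temporal expression records for one gene and stage."""
    resp = requests.get(f"{base_url}/expression",
                        params={"gene": gene, "stage": stage}, timeout=30)
    resp.raise_for_status()
    return resp.json()  # assumed to be a list of {"anatomy": ..., "level": ...}

def map_to_atlas(records):
    """Attach atlas structure IDs via the ontology so a 3D viewer could render them."""
    return [dict(r, atlas_id=ONTOLOGY_MAP.get(r["anatomy"])) for r in records]

# usage sketch (hypothetical gene and stage names):
# mapped = map_to_atlas(fetch_expression("shha", "prim-5"))
```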

Potikanond, D.; Verbeek, F. J.

2012-01-01

295

The Role of Top-down and Bottom-up Processes in Guiding Eye Movements during Visual Search  

E-print Network

The human object detection literature, also known as visual search, has long … was visited by spatially directed visual attention [1]. Importantly, the direction of attention to feature

Zelinsky, Greg

296

Cortical Dynamics of Contextually Cued Attentive Visual Learning and Search: Spatial and Object Evidence Accumulation  

ERIC Educational Resources Information Center

How do humans use target-predictive contextual information to facilitate visual search? How are consistently paired scenic objects and positions learned and used to more efficiently guide search in familiar scenes? For example, humans can learn that a certain combination of objects may define a context for a kitchen and trigger a more efficient…

Huang, Tsung-Ren; Grossberg, Stephen

2010-01-01

297

Toward Real-Time Visually Augmented Navigation for Autonomous Search and Inspection of Ship Hulls  

E-print Network

This paper reports on current research to automate the task of ship hull inspection and search … mapping framework and show how we are now applying that framework to the task of automated ship

Eustice, Ryan

298

Contextual Cueing in Multiconjunction Visual Search Is Dependent on Color- and Configuration-Based Intertrial Contingencies  

ERIC Educational Resources Information Center

Three experiments examined memory-based guidance of visual search using a modified version of the contextual-cueing paradigm (Jiang & Chun, 2001). The target, if present, was a conjunction of color and orientation, with target (and distractor) features randomly varying across trials (multiconjunction search). Under these conditions, reaction times…

Geyer, Thomas; Shi, Zhuanghua; Muller, Hermann J.

2010-01-01

299

Brief Report: Eye Movements during Visual Search Tasks Indicate Enhanced Stimulus Discriminability in Subjects with PDD  

ERIC Educational Resources Information Center

Subjects with PDD excel on certain visuo-spatial tasks, amongst which visual search tasks, and this has been attributed to enhanced perceptual discrimination. However, an alternative explanation is that subjects with PDD show a different, more effective search strategy. The present study aimed to test both hypotheses, by measuring eye movements…

Kemner, Chantal; van Ewijk, Lizet; van Engeland, Herman; Hooge, Ignace

2008-01-01

300

Eye fixation determined by the visual shape and semantic matches in language-mediated visual search   

E-print Network

When participants are simultaneously presented with a visual display and spoken input, eye fixation could be determined by a match between representations from the spoken input and the visual objects. Previous studies found that eye fixation on the semantic...

Shi, Lei

2007-08-24

301

Person, place, and past influence eye movements during visual search  

E-print Network

What is the role of an individual’s past experience in guiding gaze in familiar environments? Contemporary models of search guidance suggest high level scene context is a strong predictor of where observers search in ...

Hidalgo-Sotelo, Barbara Irene

302

Attributes of subtle cues for facilitating visual search in augmented reality.  

PubMed

Goal-oriented visual search is performed when a person intentionally seeks a target in the visual environment. In augmented reality (AR) environments, visual search can be facilitated by augmenting virtual cues in the person's field of view. Traditional use of explicit AR cues can potentially degrade visual search performance due to the creation of distortions in the scene. An alternative to explicit cueing, known as subtle cueing, has been proposed as a clutter-neutral method to enhance visual search in video-see-through AR. However, the effects of subtle cueing are still not well understood, and more research is required to determine the optimal methods of applying subtle cueing in AR. We performed two experiments to investigate the variables of scene clutter, subtle cue opacity, size, and shape on visual search performance. We introduce a novel method of experimentally manipulating the scene clutter variable in a natural scene while controlling for other variables. The findings provide supporting evidence for the subtlety of the cue, and show that the clutter conditions of the scene can be used both as a global classifier, as well as a local performance measure. PMID:24434221

Lu, Weiquan; Duh, Henry Been-Lirn; Feiner, Steven; Zhao, Qi

2014-03-01

303

Scanners and drillers: Characterizing expert visual search through volumetric images  

E-print Network

Techniques such as computed tomography (CT) generate 3-D volumes of image data. How do radiologists search through such images? We asked radiologists to search chest CTs for lung nodules that could indicate lung cancer. Eye tracking was used to create a 3-D representation of the eye movements through the image volume. Radiologists tended to follow

304

Faceted visualization of three dimensional neuroanatomy by combining ontology with faceted search.  

PubMed

In this work, we present a faceted-search-based approach for visualization of anatomy by combining a three-dimensional digital atlas with an anatomy ontology. Specifically, our approach provides a drill-down search interface that exposes the relevant pieces of information (obtained by searching the ontology) for a user query. Hence, the user can produce visualizations starting with minimally specified queries. Furthermore, by automatically translating the user queries into the controlled terminology, our approach eliminates the need for the user to use controlled terminology. We demonstrate the scalability of our approach using an abdominal atlas and the same ontology. We implemented our visualization tool on the open-source 3D Slicer software. We present results of our visualization approach by combining a modified Foundational Model of Anatomy (FMA) ontology with the Surgical Planning Laboratory (SPL) Brain 3D digital atlas, and geometric models specific to patients computed using the SPL brain tumor dataset. PMID:24006207
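
A toy sketch of the drill-down idea described above: free-text user terms are translated into controlled ontology terms, and the facets (children) of a term are exposed so the query can be narrowed step by step before loading the matching atlas models. The mini-ontology and synonym table are invented for illustration and are not the FMA or the SPL atlas.

```python
# invented mini-ontology: controlled term -> (parent term, associated 3D model file)
ONTOLOGY = {
    "brain":      (None,        None),
    "cerebellum": ("brain",     "cerebellum.vtk"),
    "brainstem":  ("brain",     "brainstem.vtk"),
    "pons":       ("brainstem", "pons.vtk"),
}
SYNONYMS = {"hindbrain stem": "brainstem"}  # free text -> controlled terminology

def to_controlled(term):
    term = term.strip().lower()
    return term if term in ONTOLOGY else SYNONYMS.get(term)

def facets(term):
    """Children of a controlled term, i.e. the next drill-down choices."""
    return [t for t, (parent, _) in ONTOLOGY.items() if parent == term]

def models_under(term):
    """All renderable models at or below a term (what the viewer would load)."""
    out, stack = [], [term]
    while stack:
        t = stack.pop()
        model = ONTOLOGY[t][1]
        if model:
            out.append(model)
        stack.extend(facets(t))
    return out

q = to_controlled("hindbrain stem")
print(facets(q), models_under(q))  # -> ['pons'] ['brainstem.vtk', 'pons.vtk']
```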

Veeraraghavan, Harini; Miller, James V

2014-04-01

305

Visualization Methods for Personal Photo Collections: Browsing and Searching in the PhotoFinder  

Microsoft Academic Search

Software tools for personal photo collection management are proliferating, but they usually have limited searching and browsing functions. We implemented the PhotoFinder prototype to enable non-technical users of personal photo collections to search and browse easily. PhotoFinder provides a set of visual Boolean query interfaces, coupled with dynamic query and query preview features. It gives users powerful search capabilities. Using

Hyunmo Kang; Ben Shneiderman

2000-01-01

306

The relative contribution of scene context and target features to visual search in scenes  

Microsoft Academic Search

Many experiments have shown that knowing a target's visual features improves search performance over knowing the target name. Other experiments have shown that scene context can facilitate object search in natural scenes. In this study, we investigated how scene context and target features affect search performance. We examined two possible sources of information from scene context—the scene's gist and the

Monica S. Castelhano; Chelsea Heaven

2010-01-01

307

Scan patterns predict sentence production in the cross-modal processing of visual scenes.  

PubMed

Most everyday tasks involve multiple modalities, which raises the question of how the processing of these modalities is coordinated by the cognitive system. In this paper, we focus on the coordination of visual attention and linguistic processing during speaking. Previous research has shown that objects in a visual scene are fixated before they are mentioned, leading us to hypothesize that the scan pattern of a participant can be used to predict what he or she will say. We test this hypothesis using a data set of cued scene descriptions of photo-realistic scenes. We demonstrate that similar scan patterns are correlated with similar sentences, within and between visual scenes; and that this correlation holds for three phases of the language production process (target identification, sentence planning, and speaking). We also present a simple algorithm that uses scan patterns to accurately predict associated sentences by utilizing similarity-based retrieval. PMID:22486717
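
The similarity-based retrieval mentioned in the final sentence can be sketched as follows: encode each scan pattern as a sequence of fixated region labels, score similarity with a normalized edit distance, and predict the sentence paired with the most similar stored pattern. The encoding, the edit-distance choice, and the toy corpus are illustrative assumptions, not the authors' algorithm.

```python
def edit_distance(a, b):
    """Levenshtein distance between two sequences of region labels."""
    m, n = len(a), len(b)
    d = [[i + j if i * j == 0 else 0 for j in range(n + 1)] for i in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1,
                          d[i - 1][j - 1] + (a[i - 1] != b[j - 1]))
    return d[m][n]

def similarity(a, b):
    return 1.0 - edit_distance(a, b) / max(len(a), len(b), 1)

def predict_sentence(scan, memory):
    """memory: list of (stored_scan, sentence); return the sentence of the nearest scan."""
    return max(memory, key=lambda item: similarity(scan, item[0]))[1]

# hypothetical corpus of scan patterns (sequences of fixated scene regions)
memory = [
    (["man", "dog", "leash", "man"], "The man is walking his dog."),
    (["woman", "bench", "book"],     "The woman is reading on a bench."),
]
print(predict_sentence(["man", "leash", "dog"], memory))
```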

Coco, Moreno I; Keller, Frank

2012-01-01

308

Effects of targets embedded within words in a visual search task  

PubMed Central

Visual search performance can be negatively affected when both targets and distracters share a dimension relevant to the task. This study examined if visual search performance would be influenced by distracters that affect a dimension irrelevant from the task. In Experiment 1 within the letter string of a letter search task, target letters were embedded within a word. Experiment 2 compared targets embedded in words to targets embedded in nonwords. Experiment 3 compared targets embedded in words to a condition in which a word was present in a letter string, but the target letter, although in the letter string, was not embedded within the word. The results showed that visual search performance was negatively affected when a target appeared within a high frequency word. These results suggest that the interaction and effectiveness of distracters is not merely dependent upon common features of the target and distracters, but can be affected by word frequency (a dimension not related to the task demands). PMID:24855497

Grabbe, Jeremy W.

2014-01-01

309

Structuring Meta-search Research by Design Patterns (Jürgen Dorn and Tabbasum Naz)  

E-print Network

… databases, but send the user's query simultaneously to other search engines, Web directories or to the deep Web, collect … best with others for searching and viewing. Users can assign tags to their favourite Web pages … search engines. We also introduce design patterns for common components of meta-search engines, e.g. query

310

Multimodal signals: enhancement and constraint of song motor patterns by visual display.  

PubMed

Many birds perform visual signals during their learned songs, but little is known about the interrelationship between visual and vocal displays. We show here that male brown-headed cowbirds (Molothrus ater) synchronize the most elaborate wing movements of their display with atypically long silent periods in their song, potentially avoiding adverse biomechanical effects on sound production. Furthermore, expiratory effort for song is significantly reduced when cowbirds perform their wing display. These results show a close integration between vocal and visual displays and suggest that constraints and synergistic interactions between the motor patterns of multimodal signals influence the evolution of birdsong. PMID:14739462

Cooper, Brenton G; Goller, Franz

2004-01-23

311

Neural control of visual search by frontal eye field: effects of unexpected target displacement on visual selection and saccade preparation.  

PubMed

The dynamics of visual selection and saccade preparation by the frontal eye field was investigated in macaque monkeys performing a search-step task combining the classic double-step saccade task with visual search. Reward was earned for producing a saccade to a color singleton. On random trials the target and one distractor swapped locations before the saccade and monkeys were rewarded for shifting gaze to the new singleton location. A race model accounts for the probabilities and latencies of saccades to the initial and final singleton locations and provides a measure of the duration of a covert compensation process-target-step reaction time. When the target stepped out of a movement field, noncompensated saccades to the original location were produced when movement-related activity grew rapidly to a threshold. Compensated saccades to the final location were produced when the growth of the original movement-related activity was interrupted within target-step reaction time and was replaced by activation of other neurons producing the compensated saccade. When the target stepped into a receptive field, visual neurons selected the new target location regardless of the monkeys' response. When the target stepped out of a receptive field most visual neurons maintained the representation of the original target location, but a minority of visual neurons showed reduced activity. Chronometric analyses of the neural responses to the target step revealed that the modulation of visually responsive neurons and movement-related neurons occurred early enough to shift attention and saccade preparation from the old to the new target location. These findings indicate that visual activity in the frontal eye field signals the location of targets for orienting, whereas movement-related activity instantiates saccade preparation. PMID:19261711

Murthy, Aditya; Ray, Supriya; Shorter, Stephanie M; Schall, Jeffrey D; Thompson, Kirk G

2009-05-01

312

Patterned-String Tasks: Relation between Fine Motor Skills and Visual-Spatial Abilities in Parrots  

PubMed Central

String-pulling and patterned-string tasks are often used to analyse perceptual and cognitive abilities in animals. In addition, the paradigm can be used to test the interrelation between visual-spatial and motor performance. Two Australian parrot species, the galah (Eolophus roseicapilla) and the cockatiel (Nymphicus hollandicus), forage on the ground, but only the galah uses its feet to manipulate food. I used a set of string pulling and patterned-string tasks to test whether usage of the feet during foraging is a prerequisite for solving the vertical string pulling problem. Indeed, the two species used techniques that clearly differed in the extent of beak-foot coordination but did not differ in terms of their success in solving the string pulling task. However, when the visual-spatial skills of the subjects were tested, the galahs outperformed the cockatiels. This supports the hypothesis that the fine motor skills needed for advanced beak-foot coordination may be interrelated with certain visual-spatial abilities needed for solving patterned-string tasks. This pattern was also found within each of the two species on the individual level: higher motor abilities positively correlated with performance in patterned-string tasks. This is the first evidence of an interrelation between visual-spatial and motor abilities in non-mammalian animals. PMID:24376885

Krasheninnikova, Anastasia

2013-01-01

313

A Visualization System for Space-Time and Multivariate Patterns (VIS-STAMP)  

PubMed Central

The research reported here integrates computational, visual, and cartographic methods to develop a geovisual analytic approach for exploring and understanding spatio-temporal and multivariate patterns. The developed methodology and tools can help analysts investigate complex patterns across multivariate, spatial, and temporal dimensions via clustering, sorting, and visualization. Specifically, the approach involves a self-organizing map, a parallel coordinate plot, several forms of reorderable matrices (including several ordering methods), a geographic small multiple display, and a 2-dimensional cartographic color design method. The coupling among these methods leverages their independent strengths and facilitates a visual exploration of patterns that are difficult to discover otherwise. The visualization system we developed supports overview of complex patterns and, through a variety of interactions, enables users to focus on specific patterns and examine detailed views. We demonstrate the system with an application to the IEEE InfoVis 2005 Contest data set, which contains time-varying, geographically referenced, and multivariate data for technology companies in the US. PMID:17073369
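
Of the coupled methods listed above, the self-organizing map is the most algorithmic; a minimal NumPy sketch of training a small SOM grid on multivariate records follows. The random data stands in for the contest data set, and the learning-rate and neighborhood schedules are common defaults rather than the system's settings.

```python
import numpy as np

def train_som(data, rows=5, cols=5, epochs=20, lr0=0.5, sigma0=2.0, seed=0):
    """Train a small self-organizing map; returns the grid of prototype vectors."""
    rng = np.random.default_rng(seed)
    n, dim = data.shape
    weights = rng.random((rows, cols, dim))
    grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1)
    total_steps, step = epochs * n, 0
    for _ in range(epochs):
        for x in data[rng.permutation(n)]:
            frac = step / total_steps
            lr = lr0 * (1.0 - frac)                 # decaying learning rate
            sigma = sigma0 * (1.0 - frac) + 0.5     # shrinking neighborhood radius
            dists = np.linalg.norm(weights - x, axis=2)
            bmu = np.unravel_index(np.argmin(dists), dists.shape)  # best-matching unit
            # Gaussian neighborhood pull toward the input
            g = np.exp(-np.sum((grid - np.array(bmu)) ** 2, axis=2) / (2 * sigma ** 2))
            weights += lr * g[..., None] * (x - weights)
            step += 1
    return weights

# stand-in multivariate records (e.g. one row per company, columns = attributes)
records = np.random.default_rng(1).random((200, 6))
som = train_som(records)
cluster_of_first = np.unravel_index(
    np.argmin(np.linalg.norm(som - records[0], axis=2)), som.shape[:2])
print(cluster_of_first)  # grid cell to which the first record maps
```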

Guo, Diansheng; Chen, Jin; MacEachren, Alan M.; Liao, Ke

2011-01-01

314

Visual and spatial long-term memory: differential pattern of impairments in Williams and Down syndromes.  

PubMed

The purpose of this study was to investigate visual-object and visual-spatial long-term memory (LTM) abilities in individuals with Williams syndrome (WS) and Down syndrome (DS). Four groups, each comprising 15 participants, were included: a WS group (10 males) with a mean chronological age (CA) of 18 years 5 months, SD 6 years 4 months, and mean mental age (MA) of 6 years 8 months, SD 1 year 5 months; a WS control group (eight males) comprised of typically developing children (CA mean 6y 7mo, SD 8mo); a DS group (10 males, CA mean 16y 5mo, SD 5y 10mo; MA mean 5y 4mo, SD 8mo); and a DS control group (seven males) formed by typically developing children (CA mean 5y 6mo, SD 7mo). In the WS and DS groups mental age and IQ were evaluated with the Form L-M of the Stanford-Binet Intelligence Scale. Results showed that individuals with WS showed decreased learning of visual-spatial material but substantially typical learning of visual-object patterns as compared to a group of mental-age-matched typically developing children. Individuals with DS showed the opposite profile, i.e. typical learning of visual-spatial sequences but impaired learning of visual-object patterns. These results, showing an interesting double dissociation between these two genetic syndromes in the learning of visual-object patterns as opposed to visual-spatial data, support the interpretation of learning disability as a heterogeneous condition, characterized by potentially very different qualitative profiles of cognitive impairment. PMID:15892372

Vicari, Stefano; Bellucci, Samantha; Carlesimo, Giovanni Augusto

2005-05-01

315

Visual Search is Guided to Categorically Defined Targets  

PubMed Central

To determine whether categorical search is guided we had subjects search for teddy bear targets either with a target preview (specific condition) or without (categorical condition). Distractors were random realistic objects. Although subjects searched longer and made more eye movements in the categorical condition, targets were fixated far sooner than was expected by chance. By varying target repetition we also determined that this categorical guidance was not due to guidance from specific previously viewed targets. We conclude that search is guided to categorically-defined targets, and that this guidance uses a categorical model composed of features common to the target class. PMID:19500615

Yang, Hyejin; Zelinsky, Gregory J.

2009-01-01

316

Pattern identification or 3D visualization? How best to learn topographic map comprehension  

NASA Astrophysics Data System (ADS)

Science, Technology, Engineering, and Mathematics (STEM) experts employ many representations that novices find hard to use because they require a critical STEM skill, interpreting two-dimensional (2D) diagrams that represent three-dimensional (3D) information. The current research focuses on learning to interpret topographic maps. Understanding topographic maps requires knowledge of how to interpret the conventions of contour lines, and skill in visualizing that information in 3D (e.g. shape of the terrain). Novices find both tasks difficult. The present study compared two interventions designed to facilitate understanding of topographic maps against minimal text-only instruction. The 3D Visualization group received instruction using 3D gestures and models to help visualize three topographic forms. The Pattern Identification group received instruction using pointing and tracing gestures to help identify the contour patterns associated with the three topographic forms. The Text-based Instruction group received only written instruction explaining topographic maps. All participants then completed a measure of topographic map use. The Pattern Identification group performed better on the map use measure than participants in the Text-based Instruction group, but no significant difference was found between the 3D Visualization group and the other two groups. These results suggest that learning to identify meaningful contour patterns is an effective strategy for learning how to comprehend topographic maps. Future research should address whether learning strategies for how to interpret the information represented on a diagram (e.g. identify patterns in the contour lines), before trying to visualize the information in 3D (e.g. visualize the 3D structure of the terrain), also facilitate students' comprehension of other similar types of diagrams.

Atit, Kinnari

317

Compensatory strategies following visual search training in patients with homonymous hemianopia: an eye movement study  

PubMed Central

A total of 29 patients with homonymous visual field defects without neglect practised visual search in 20 daily sessions, over a period of 4 weeks. Patients searched for a single randomly positioned target amongst distractors displayed for 3 s. After training patients demonstrated significantly shorter reaction times for search stimuli (Pambakian et al. in J Neurol Neurosurg Psychiatry 75:1443–1448, 2004). In this study, patients achieved improved search efficiency after training by altering their oculomotor behaviour in the following ways: (1) patients directed a higher proportion of fixations into the hemispace containing the target, (2) patients were quicker to saccade into the hemifield containing the target if the initial saccade had been made into the opposite hemifield, (3) patients made fewer transitions from one hemifield to another before locating the target, (4) patients made a larger initial saccade, although the direction of the initial saccade did not change as a result of training, (5) patients acquired a larger visual lobe in their blind hemifield after training. Patients also required fewer saccades to locate the target after training reflecting improved search efficiency. All these changes were confined to the training period and maintained at follow-up. Taken together these results suggest that visual training facilitates the development of specific compensatory eye movement strategies in patients with homonymous visual field defects. PMID:20556413

Pambakian, Alidz L. M.; Kennard, Christopher

2010-01-01

318

Compensatory strategies following visual search training in patients with homonymous hemianopia: an eye movement study.  

PubMed

A total of 29 patients with homonymous visual field defects without neglect practised visual search in 20 daily sessions, over a period of 4 weeks. Patients searched for a single randomly positioned target amongst distractors displayed for 3 s. After training patients demonstrated significantly shorter reaction times for search stimuli (Pambakian et al. in J Neurol Neurosurg Psychiatry 75:1443-1448, 2004). In this study, patients achieved improved search efficiency after training by altering their oculomotor behaviour in the following ways: (1) patients directed a higher proportion of fixations into the hemispace containing the target, (2) patients were quicker to saccade into the hemifield containing the target if the initial saccade had been made into the opposite hemifield, (3) patients made fewer transitions from one hemifield to another before locating the target, (4) patients made a larger initial saccade, although the direction of the initial saccade did not change as a result of training, (5) patients acquired a larger visual lobe in their blind hemifield after training. Patients also required fewer saccades to locate the target after training reflecting improved search efficiency. All these changes were confined to the training period and maintained at follow-up. Taken together these results suggest that visual training facilitates the development of specific compensatory eye movement strategies in patients with homonymous visual field defects. PMID:20556413

Mannan, Sabira K; Pambakian, Alidz L M; Kennard, Christopher

2010-11-01

319

Acute exercise and aerobic fitness influence selective attention during visual search  

PubMed Central

Successful goal-directed behavior relies on a human attention system that is flexible and able to adapt to different conditions of physiological stress. However, the effects of physical activity on multiple aspects of selective attention, and whether these effects are mediated by aerobic capacity, remain unclear. The aim of the present study was to investigate the effects of a prolonged bout of physical activity on visual search performance and perceptual distraction. Two groups of participants completed a hybrid visual search flanker/response competition task in an initial baseline session and then at 17-min intervals over a 2 h 16 min test period. Participants assigned to the exercise group engaged in steady-state aerobic exercise between completing blocks of the visual task, whereas participants assigned to the control group rested in between blocks. The key result was a correlation between individual differences in aerobic capacity and visual search performance, such that those individuals who were more fit performed the search task more quickly. Critically, this relationship only emerged in the exercise group after the physical activity had begun. The relationship was not present in either group at baseline and never emerged in the control group during the test period, suggesting that under these task demands, aerobic capacity may be an important determinant of visual search performance under physical stress. The results enhance current understanding about the relationship between exercise and cognition, and also inform current models of selective attention. PMID:25426094

Bullock, Tom; Giesbrecht, Barry

2014-01-01

320

Visualizing Meta Search Results: Evaluating the MetaCrystal toolset (ASIST 2006, November 3-8, 2006, Austin, TX, USA)  

E-print Network

The MetaCrystal toolset has been designed to enable users to visually compare the search results of multiple retrieval … reviews visual tools that can be used to visualize the results returned by multiple search

Spoerri, Anselm

321

Visual search for transparency and opacity. Journal of Vision (2005) 5, 257-274, http://journalofvision.org/5/3/9/  

E-print Network

Keywords: transparency, opacity, visual search, cue combination, visual attention, surface perception. … information that would otherwise be available to guide visual search. This class of findings led Nakayama

322

Setting up the target template in visual search. Journal of Vision (2005) 5, 81-92, http://journalofvision.org/5/1/8/  

E-print Network

… is essential in visual search. It biases visual attention to information that matches the target-defining criteria. Extensive research in the past has examined visual search when the target is defined by fixed

Jiang, Yuhong

323

Binocular saccade coordination in reading and visual search: a developmental study in typical reader and dyslexic children  

PubMed Central

Studies dealing with developmental aspects of binocular eye movement behavior during reading are scarce. In this study we have explored binocular strategies during reading and visual search tasks in a large population of dyslexic and typical readers. Binocular eye movements were recorded using a video-oculography system in 43 dyslexic children (aged 8–13) and in a group of 42 age-matched typical readers. The main findings are: (i) ocular motor characteristics of dyslexic children are impaired in comparison to those reported in typical children in the reading task; (ii) a developmental effect exists in reading in control children, whereas in dyslexic children the effect of development was observed only on fixation durations; and (iii) ocular motor behavior in the visual search tasks is similar for dyslexic children and for typical readers, except for the disconjugacy during and after the saccade: dyslexic children are impaired in comparison to typical children. The data reported here confirm and expand previous studies on children’s reading. Both reading skills and binocular saccade coordination improve with age in typical readers. The atypical eye movement patterns observed in dyslexic children suggest a deficiency in visual attentional processing as well as an impairment of the interaction between the ocular motor saccade and vergence systems. PMID:25400559

Seassau, Magali; Gérard, Christophe Loic; Bui-Quoc, Emmanuel; Bucci, Maria Pia

2014-01-01

324

STATIONARY PATTERN ADAPTATION AND THE EARLY COMPONENTS IN HUMAN VISUAL EVOKED POTENTIALS  

EPA Science Inventory

Pattern-onset visual evoked potentials were elicited from humans by sinusoidal gratings of 0.5., 1, 2 and 4 cpd (cycles/degree) following adaptation to a blank field or one of the gratings. The wave forms recorded after blank field adaptation showed an early positive component, P...

325

The Pattern of Ocular Dominance Columns in Macaque Visual Cortex Revealed by a Reduced Silver Stain  

E-print Network

… were seen in tangential sections stained with a reduced silver method for normal fibers … were fixed, sectioned tangentially and stained with the silver method. All the lesions (a total of 12) fell

Hubel, David

326

An Integrated Framework for Visualized and Exploratory Pattern Discovery in Mixed Data  

Microsoft Academic Search

Data mining uncovers hidden, previously unknown, and potentially useful information from large amounts of data. Compared to the traditional statistical and machine learning data analysis techniques, data mining emphasizes providing a convenient and complete environment for the data analysis. In this paper, we propose an integrated framework for visualized, exploratory data clustering, and pattern extraction from mixed data. We further

Chung-chian Hsu; Sheng-hsuan Wang

2006-01-01

327

Visualizing and Discovering Web Navigational Patterns (Jiyang Chen, Lisheng Sun, Osmar R. Zaiane, Randy Goebel)  

E-print Network

Web site structures are complex to analyze. Cross-referencing the web structure with navigational behaviour adds to the complexity of the analysis. However

Zaiane, Osmar R.

328

Patterns of Visual Attention to Faces and Objects in Autism Spectrum Disorder  

ERIC Educational Resources Information Center

This study used eye-tracking to examine visual attention to faces and objects in adolescents with autism spectrum disorder (ASD) and typical peers. Point of gaze was recorded during passive viewing of images of human faces, inverted human faces, monkey faces, three-dimensional curvilinear objects, and two-dimensional geometric patterns.…

McPartland, James C.; Webb, Sara Jane; Keehn, Brandon; Dawson, Geraldine

2011-01-01

329

Fault diagnosis of internal combustion engines using visual dot patterns of acoustic and vibration signals  

Microsoft Academic Search

An investigation of a fault diagnosis technique for internal combustion engines based on the visual dot pattern of acoustic and vibration signals is presented in this paper. Acoustic emissions and vibration signals are well known to be useful for monitoring the condition of rotating machinery. Most of the conventional methods for fault diagnosis using acoustic and vibration

Jian-Da Wu; Chao-Qin Chuang

2005-01-01

330

Nurses' Behaviors and Visual Scanning Patterns May Reduce Patient Identification Errors  

ERIC Educational Resources Information Center

Patient identification (ID) errors occurring during the medication administration process can be fatal. The aim of this study is to determine whether differences in nurses' behaviors and visual scanning patterns during the medication administration process influence their capacities to identify patient ID errors. Nurse participants (n = 20)…

Marquard, Jenna L.; Henneman, Philip L.; He, Ze; Jo, Junghee; Fisher, Donald L.; Henneman, Elizabeth A.

2011-01-01

331

Visualization of Flow Patterns in the Bonneville 2nd Powerhouse Forebay  

SciTech Connect

Three-dimensional (3D) computational fluid dynamics (CFD) models are increasingly being used to study forebay and tailrace flow systems associated with hydroelectric projects. This paper describes the fundamentals of creating effective 3D data visualizations from CFD model results using a case study from the Bonneville Dam. These visualizations enhance the utility of CFD models by helping the researcher and end user better understand the model results. To develop visualizations for the Bonneville Dam forebay model, we used specialized, but commonly available software and a standard high-end microprocessor workstation. With these tools we were able to compare flow patterns among several operational scenarios by producing a variety of contour, vector, stream-trace, and vortex-core plots. The differences in flow patterns we observed could impact efforts to divert downstream-migrating fish around powerhouse turbines.

Serkowski, John A.; Rakowski, Cynthia L.; Ebner, Laurie L.

2002-12-31

332

Patterns of visual attention to faces and objects in autism spectrum disorder  

PubMed Central

This study used eye-tracking to examine visual attention to faces and objects in adolescents with autism spectrum disorder (ASD) and typical peers. Point of gaze was recorded during passive viewing of images of human faces, inverted human faces, monkey faces, three-dimensional curvilinear objects, and two-dimensional geometric patterns. Individuals with ASD obtained lower scores on measures of face recognition and social-emotional functioning but exhibited similar patterns of visual attention. In individuals with ASD, face recognition performance was associated with social adaptive function. Results highlight heterogeneity in manifestation of social deficits in ASD and suggest that naturalistic assessments are important for quantifying atypicalities in visual attention. PMID:20499148

McPartland, James C.; Webb, Sara Jane; Keehn, Brandon; Dawson, Geraldine

2011-01-01

333

Visual motion modulates pattern sensitivity ahead, behind, and beside motion  

PubMed Central

Retinal motion can modulate visual sensitivity. For instance, low contrast drifting waveforms (targets) can be easier to detect when abutting the leading edges of movement in adjacent high contrast waveforms (inducers), rather than the trailing edges. This target-inducer interaction is contingent on the adjacent waveforms being consistent with one another – in-phase as opposed to out-of-phase. It has been suggested that this happens because there is a perceptually explicit predictive signal at leading edges of motion that summates with low contrast physical input – a ‘predictive summation’. Another possible explanation is a phase sensitive ‘spatial summation’, a summation of physical inputs spread across the retina (not predictive signals). This should be non-selective in terms of position – it should be evident at leading, adjacent, and at trailing edges of motion. To tease these possibilities apart, we examined target sensitivity at leading, adjacent, and trailing edges of motion. We also examined target sensitivity adjacent to flicker, and for a stimulus that is less susceptible to spatial summation, as it sums to grey across a small retinal expanse. We found evidence for spatial summation in all but the last condition. Finally, we examined sensitivity to an absence of signal at leading and trailing edges of motion, finding greater sensitivity at leading edges. These results are inconsistent with the existence of a perceptually explicit predictive signal in advance of drifting waveforms. Instead, we suggest that phase-contingent target-inducer modulations of sensitivity are explicable in terms of a directionally modulated spatial summation. PMID:24699250

Arnold, Derek H.; Marinovic, Welber; Whitney, David

2014-01-01

334

Sensitivity to object viewpoint and action instructions during search for targets in the lower visual field.  

PubMed

We contrasted visual search for targets presented in prototypical views and targets presented in nonprototypical views, when targets were defined by their names and when they were defined by the action that would normally be performed on them. The likelihood of the first fixation falling on the target was increased for prototypical-view targets falling in the lower visual field. When targets were defined by actions, the durations of fixations were reduced for targets in the lower field. The results are consistent with eye movements in search being affected by representations within the dorsal visual stream, where there is strong representation of the lower visual field. These representations are sensitive to the familiarity or the affordance offered by objects in prototypical views, and they are influenced by action-based templates for targets. PMID:18181790

Forti, Sara; Humphreys, Glyn W

2008-01-01

335

Analysis of microsaccades and pupil dilation reveals a common decisional origin during visual search.  

PubMed

During free-viewing visual search, observers often refixate the same locations several times before and after target detection is reported with a button press. We analyzed the rate of microsaccades in the sequence of refixations made during visual search and found two important components. One is related to the visual content of the region being fixated: fixations on targets generate more microsaccades, and more microsaccades are generated for targets that are more difficult to disambiguate. The other reflects non-visual decisional processes: fixations containing the button press generate more microsaccades than fixations on the same target without the button press. Pupil dilation during the same refixations reveals a similar modulation. We inferred that generic sympathetic arousal mechanisms are part of the articulated complex of perceptual processes governing fixational eye movements. PMID:24333280

Privitera, Claudio M; Carney, Thom; Klein, Stanley; Aguilar, Mario

2014-02-01
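
A note on the analysis above: microsaccades are conventionally detected with a velocity-threshold rule, and rates are then compared across fixation categories (for example, target fixations with versus without the button press). The sketch below is a generic, illustrative version of that detection step, not the authors' code; the sampling rate, threshold multiplier, and minimum-duration criterion are assumed values.

    # Illustrative sketch (not the authors' code): count microsaccades within
    # one fixation using a velocity-threshold rule in the spirit of
    # Engbert & Kliegl (2003). fs, lam, and min_samples are assumed values.
    import numpy as np

    def microsaccade_count(x, y, fs=500.0, lam=6.0, min_samples=3):
        """x, y: gaze position (deg) sampled at fs Hz during one fixation."""
        vx = np.gradient(x) * fs                     # horizontal velocity, deg/s
        vy = np.gradient(y) * fs                     # vertical velocity, deg/s
        # Robust (median-based) estimate of the velocity dispersion
        sx = np.sqrt(max(np.median(vx**2) - np.median(vx)**2, 1e-12))
        sy = np.sqrt(max(np.median(vy**2) - np.median(vy)**2, 1e-12))
        # A sample is "fast" if it falls outside an elliptic velocity threshold
        fast = (vx / (lam * sx))**2 + (vy / (lam * sy))**2 > 1.0
        count, run = 0, 0
        for f in fast:                               # count runs of fast samples
            run = run + 1 if f else 0
            if run == min_samples:                   # each event counted once
                count += 1
        return count

Per-fixation counts from such a detector can then be compared across refixation categories, which is the kind of rate comparison the record describes.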

336

Auditory, tactile, and multisensory cues facilitate search for dynamic visual stimuli  

Microsoft Academic Search

Presenting an auditory or tactile cue in temporal synchrony with a change in the color of a visual target can facilitate participants' visual search performance. In the present study, we compared the magnitude of unimodal auditory, vibrotactile, and bimodal (i.e., multisensory) cuing benefits when the nonvisual cues were presented in temporal synchrony with the changing of the target's color (Experiments

Mary Kim Ngo; Charles Spence

2010-01-01

337

Repetition Suppression and Multi-Voxel Pattern Similarity Differentially Track Implicit and Explicit Visual Memory  

PubMed Central

Repeated exposure to a visual stimulus is associated with corresponding reductions in neural activity, particularly within visual cortical areas. It has been argued that this phenomenon of repetition suppression is related to increases in processing fluency or implicit memory. However, repetition of a visual stimulus can also be considered in terms of the similarity of the pattern of neural activity elicited at each exposure—a measure that has recently been linked to explicit memory. Despite the popularity of each of these measures, direct comparisons between the two have been limited, and the extent to which they differentially (or similarly) relate to behavioral measures of memory has not been clearly established. In the present study, we compared repetition suppression and pattern similarity as predictors of both implicit and explicit memory. Using functional magnetic resonance imaging, we scanned 20 participants while they viewed and categorized repeated presentations of scenes. Repetition priming (facilitated categorization across repetitions) was used as a measure of implicit memory, and subsequent scene recognition was used as a measure of explicit memory. We found that repetition priming was predicted by repetition suppression in prefrontal, parietal, and occipitotemporal regions; however, repetition priming was not predicted by pattern similarity. In contrast, subsequent explicit memory was predicted by pattern similarity (across repetitions) in some of the same occipitotemporal regions that exhibited a relationship between priming and repetition suppression; however, explicit memory was not related to repetition suppression. This striking double dissociation indicates that repetition suppression and pattern similarity differentially track implicit and explicit learning. PMID:24027275

Chun, Marvin M.; Kuhl, Brice A.

2013-01-01
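
The two measures contrasted in the record above are commonly computed as (a) the drop in mean activity from the first to the repeated presentation (repetition suppression) and (b) the correlation of voxel-wise activity patterns across the two presentations (pattern similarity). The sketch below illustrates those generic definitions; the variable names are hypothetical and this is not the study's pipeline.

    # Illustrative sketch (not the study's pipeline): two repetition measures
    # for one region of interest. pattern1 and pattern2 are hypothetical
    # voxel-wise activity estimates for the first and repeated presentation
    # of the same scene.
    import numpy as np

    def repetition_suppression(pattern1, pattern2):
        # Drop in mean activity on repetition (positive = suppression)
        return float(np.mean(pattern1) - np.mean(pattern2))

    def pattern_similarity(pattern1, pattern2):
        # Pearson correlation of voxel patterns across repetitions
        return float(np.corrcoef(pattern1, pattern2)[0, 1])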

338

Quantifying the performance limits of human saccadic targeting during visual search  

NASA Technical Reports Server (NTRS)

Previous studies of saccadic targeting have examined how visually guided saccades to unambiguous targets are programmed and executed. These studies have found different degrees of guidance for saccades depending on the task and task difficulty. In this study, we use ideal-observer analysis to estimate the visual information used for the first saccade during a search for a target disk in noise. We quantitatively compare the performance of the first saccadic decision to that of the ideal observer (i.e., the absolute efficiency of the first saccade) and to that of the associated final perceptual decision at the end of the search (i.e., the relative efficiency of the first saccade). Our results show, first, that at all levels of salience tested, the first saccade is based on visual information from the stimulus display, and its highest absolute efficiency is approximately 20%. Second, the efficiency of the first saccade is lower than that of the final perceptual decision after active search (with eye movements) and has a minimum relative efficiency of 19% at the lowest level of saliency investigated. Third, we found that requiring observers to maintain central fixation (no saccades allowed) decreased the absolute efficiency of their perceptual decision by up to a factor of two, but that the magnitude of this effect depended on target salience. Our results demonstrate that ideal-observer analysis can be extended to measure the visual information mediating saccadic target-selection decisions during visual search, which enables direct comparison of saccadic and perceptual efficiencies.

Eckstein, M. P.; Beutter, B. R.; Stone, L. S.

2001-01-01
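
The "absolute" and "relative" efficiencies reported above are instances of statistical efficiency, conventionally defined as the squared ratio of one observer's sensitivity (d') to a reference observer's sensitivity. The sketch below shows that textbook definition with made-up d' values; it is illustrative only and not the authors' computation.

    # Illustrative sketch: statistical efficiency as the squared ratio of
    # sensitivities (d'). The d' values below are made-up numbers, not data
    # from the study.
    def efficiency(d_prime_observed, d_prime_reference):
        return (d_prime_observed / d_prime_reference) ** 2

    d_saccade, d_percept, d_ideal = 1.0, 1.4, 2.2    # hypothetical sensitivities
    absolute_eff = efficiency(d_saccade, d_ideal)    # first saccade vs. ideal observer
    relative_eff = efficiency(d_saccade, d_percept)  # first saccade vs. final percept
    print(f"absolute: {absolute_eff:.2f}, relative: {relative_eff:.2f}")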

339

Patent semantics: analysis, search and visualization of large text corpora  

E-print Network

Patent Semantics is a system for processing text documents by extracting features capturing their semantic content, and searching, clustering, and relating them by those same features. It is set apart from existing methodologies ...

Lucas, Christopher G

2004-01-01

340

Target grouping in visual search for multiple digits.  

PubMed

In four experiments in which participants searched for multiple target digits we hypothesized that search should be fastest when the targets are arranged closely together on the number line without any intervening distractor digits, i.e., the targets form a contiguous and coherent group. In Experiment 1 search performance was better for targets defined by numerical magnitude than parity (i.e., evenness); this result supports our hypothesis but could also be due to the linear separability of targets from distractors or the numerical distance between them. Experiment 2 controlled for target-distractor linear separability and numerical distance, yielding faster search when targets were surrounded by distractors on the number line than when they surrounded distractors. This result is consistent with target contiguity and coherence but also with grouping by similarity of target shapes. Experiment 3 controlled for all three alternative explanations (linear separability, numerical distance, and shape similarity) and search performance was better for contiguous targets than separated targets. In Experiment 4 search performance was better for a coherent target group than one with intervening distractors. Of the possibilities we considered, only the hypothesis based on the contiguity and coherence of the target group on the number line can account for the results from all four experiments. PMID:25156757

Sobel, Kenith V; Puri, Amrita M; Hogan, Jared

2015-01-01

341

Eye movements, visual search and scene memory, in an immersive virtual environment.  

PubMed

Visual memory has been demonstrated to play a role in both visual search and attentional prioritization in natural scenes. However, it has been studied predominantly in experimental paradigms using multiple two-dimensional images, whereas natural experience entails prolonged immersion in a limited number of three-dimensional environments. The goal of the present experiment was to recreate circumstances comparable to natural visual experience in order to evaluate the role of scene memory in guiding eye movements in a natural environment. Subjects performed a continuous visual-search task within an immersive virtual-reality environment over three days. We found that, similar to two-dimensional contexts, viewers rapidly learn the location of objects in the environment over time, and use spatial memory to guide search. Incidental fixations did not provide obvious benefit to subsequent search, suggesting that semantic contextual cues may often be just as efficient, or that many incidentally fixated items are not held in memory in the absence of a specific task. On the third day of the experience in the environment, previous search items changed in color. These items were fixated upon with increased probability relative to control objects, suggesting that memory-guided prioritization (or Surprise) may be a robust mechanism for attracting gaze to novel features of natural environments, in addition to task factors and simple spatial saliency. PMID:24759905

Kit, Dmitry; Katz, Leor; Sullivan, Brian; Snyder, Kat; Ballard, Dana; Hayhoe, Mary

2014-01-01

342

Brief Report: Eye Movements During Visual Search Tasks Indicate Enhanced Stimulus Discriminability in Subjects with PDD  

PubMed Central

Subjects with PDD excel on certain visuo-spatial tasks, amongst which visual search tasks, and this has been attributed to enhanced perceptual discrimination. However, an alternative explanation is that subjects with PDD show a different, more effective search strategy. The present study aimed to test both hypotheses, by measuring eye movements during visual search tasks in high functioning adult men with PDD and a control group. Subjects with PDD were significantly faster than controls in these tasks, replicating earlier findings in children. Eye movement data showed that subjects with PDD made fewer eye movements than controls. No evidence was found for a different search strategy between the groups. The data indicate an enhanced ability to discriminate between stimulus elements in PDD. PMID:17610058

van Ewijk, Lizet; van Engeland, Herman; Hooge, Ignace

2007-01-01

343

Competition in visual working memory for control of search  

Microsoft Academic Search

Recent perspectives on selective attention posit a central role for visual working memory (VWM) in the top-down control of attention. According to the biased-competition model (Desimone & Duncan, 1995), active maintenance of an object in VWM gives matching (Downing, 2000) or related (Moores, Laiti, & Chelazzi, 2003) objects in the environment a competitive advantage over other objects in gaining access

Paul Downing; Chris Dodds

2004-01-01

344

Effects of laser glare on visual search performance  

Microsoft Academic Search

In the future, aviation aircrews will likely operate in an environment that is saturated with electromagnetic energy emitted from a variety of sources. Lasers serving many applications, such as rangefinding and guidance, will be included in this environment. Eye damage from laser sources is possible, but laser irradiation below levels necessary to produce eye damage may still degrade visually guided

John A. D'Andrea; James C. Knepton; Michael D. Reddix

1992-01-01

345

The Importance of the Eye Area in Face Identification Abilities and Visual Search Strategies in Persons with Asperger Syndrome  

ERIC Educational Resources Information Center

Partly claimed to explain social difficulties observed in people with Asperger syndrome, face identification and visual search strategies become important. Previous research findings are, however, disparate. In order to explore face identification abilities and visual search strategies, with special focus on the importance of the eye area, 24…

Falkmer, Marita; Larsson, Matilda; Bjallmark, Anna; Falkmer, Torbjorn

2010-01-01

346

Is Shininess A Basic Feature In Visual Search?  

E-print Network

... guide the deployment of attention in visual search. Do monocular cues to shininess serve as guiding ... Method: Observers (Os) were told to respond quickly and accurately. They were also instructed to minimize

Birnkrant, Randall S.; Wolfe, Jeremy M.; Kunar, Melina A.; Sng, Matthew

347

The importance of the eye area in face identification abilities and visual search strategies in persons with Asperger syndrome  

Microsoft Academic Search

Partly claimed to explain social difficulties observed in people with Asperger syndrome, face identification and visual search strategies become important. Previous research findings are, however, disparate. In order to explore face identification abilities and visual search strategies, with special focus on the importance of the eye area, 24 adults with Asperger syndrome and matched controls viewed puzzle pieced photos of

Marita Falkmer; Matilda Larsson; Anna Bjällmark; Torbjörn Falkmer

2010-01-01

348

Visual Servoing: A technology in search of an application  

SciTech Connect

Considerable research has been performed on Robotic Visual Servoing (RVS) over the past decade. Using real-time visual feedback, researchers have demonstrated that robotic systems can pick up moving parts, insert bolts, apply sealant, and guide vehicles. With the rapid improvements being made in computing and image processing hardware, one would expect that every robot manufacturer would have an RVS option by the end of the 1990s. So why aren't the Fanucs, ABBs, Adepts, and Motomans of the world investing heavily in RVS? I would suggest four reasons: cost, complexity, reliability, and lack of demand. Solutions to the first three are approaching the point where RVS could be commercially available; however, the lack of demand is keeping RVS from becoming a reality in the near future. A new set of applications is needed to focus near-term RVS development. These must be applications which currently do not have solutions. Once developed and working in one application area, the technology is more likely to quickly spread to other areas. DOE has several applications that are looking for technological solutions, such as agile weapons production, weapons disassembly, decontamination and dismantlement of nuclear facilities, and hazardous waste remediation. This paper will examine a few of these areas and suggest directions for application-driven visual servoing research.

Feddema, J.T.

1994-05-01

349

INTRODUCTION Human visual search is an important aspect of  

E-print Network

such as reconnaissance, tracking, information retrieval, aircraft inspection, medical image screening, industrial ... the respective goals are in conflict (e.g., safety and productivity). Moreover, search performance has been ... The model also has the capability of supporting assessment. That is, it can be used to assess

Duchowski, Andrew T.

350

Visualizing Document Classification: A Search Aid for the Digital Library.  

ERIC Educational Resources Information Center

Discusses access to digital libraries on the World Wide Web via Web browsers and describes the design of a language-independent document classification system to help users of the Florida Center for Library Automation analyze search query results. Highlights include similarity scores, clustering, graphical representation of document similarity,…

Lieu, Yew-Huey; Dantzig, Paul; Sachs, Martin; Corey, James T.; Hinnebusch, Mark T.; Damashek, Marc; Cohen, Jonathan

2000-01-01

351

Mapping the Color Space of Saccadic Selectivity in Visual Search  

ERIC Educational Resources Information Center

Color coding is used to guide attention in computer displays for such critical tasks as baggage screening or air traffic control. It has been shown that a display object attracts more attention if its color is more similar to the color for which one is searching. However, what does "similar" precisely mean? Can we predict the amount of attention…

Xu, Yun; Higgins, Emily C.; Xiao, Mei; Pomplun, Marc

2007-01-01

352

Inter-trial priming does not affect attentional priority in asymmetric visual search  

PubMed Central

Visual search is considerably speeded when the target's characteristics remain constant across successive selections. Here, we investigated whether such inter-trial priming increases the target's attentional priority, by examining whether target repetition reduces search efficiency during serial search. As the study of inter-trial priming requires the target and distractors to exchange roles unpredictably, it has mostly been confined to singleton searches, which typically yield efficient search. We therefore resorted to two singleton searches known to yield relatively inefficient performance, that is, searches in which the target does not pop out. Participants searched for a veridical angry face among neutral ones or vice versa, either upright or inverted (Experiment 1), or for a Q among Os or vice versa (Experiment 2). In both experiments, we found substantial inter-trial priming that did not improve search efficiency. In addition, inter-trial priming was asymmetric and occurred only when the more salient target repeated. We conclude that inter-trial priming does not modulate attentional priority allocation and that it occurs in asymmetric search only when the target is characterized by an additional feature that is consciously perceived. PMID:25221536

Amunts, Liana; Yashar, Amit; Lamy, Dominique

2014-01-01
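
In this literature, "search efficiency" is typically indexed by the slope of response time against set size (ms per item), with near-flat slopes read as efficient, pop-out search. The sketch below shows that generic computation on hypothetical data; it is not the authors' analysis.

    # Illustrative sketch (not the authors' analysis): the search slope as the
    # linear fit of response time against set size. Data are hypothetical.
    import numpy as np

    set_sizes = np.array([4, 8, 12, 16])                # items per display
    mean_rt_ms = np.array([620.0, 700.0, 790.0, 865.0]) # hypothetical mean RTs

    slope, intercept = np.polyfit(set_sizes, mean_rt_ms, 1)
    print(f"search slope: {slope:.1f} ms/item")         # near-flat slope ~ efficient search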

353

Frontal eye field activity enhances object identification during covert visual search.  

PubMed

We investigated the link between neuronal activity in the frontal eye field (FEF) and the enhancement of visual processing associated with covert spatial attention in the absence of eye movements. We correlated activity recorded in the FEF of monkeys manually reporting the identity of a visual search target to performance accuracy and reaction time. Monkeys were cued to the most probable target location with a cue array containing a popout color singleton. Neurons exhibited spatially selective responses for the popout cue stimulus and for the target of the search array. The magnitude of activity related to the location of the cue prior to the presentation of the search array was correlated with trends in behavioral performance across valid, invalid, and neutral cue trial conditions. However, the speed and accuracy of the behavioral report on individual trials were predicted by the magnitude of spatial selectivity related to the target to be identified, not for the spatial cue. A minimum level of selectivity was necessary for target detection and a higher level for target identification. Muscimol inactivation of FEF produced spatially selective perceptual deficits in the covert search task that were correlated with the effectiveness of the inactivation and were strongest on invalid cue trials that require an endogenous attention shift. These results demonstrate a strong functional link between FEF activity and covert spatial attention and suggest that spatial signals from FEF directly influence visual processing during the time that a stimulus to be identified is being processed by the visual system. PMID:19828723

Monosov, Ilya E; Thompson, Kirk G

2009-12-01

354

How You Move Is What You See: Action Planning Biases Selection in Visual Search  

ERIC Educational Resources Information Center

Three experiments investigated the impact of planning and preparing a manual grasping or pointing movement on feature detection in a visual search task. The authors hypothesized that action planning may prime perceptual dimensions that provide information for the open parameters of that action. Indeed, preparing for grasping facilitated detection…

Wykowska, Agnieszka; Schubo, Anna; Hommel, Bernhard

2009-01-01

355

Visual Search and Emotion: How Children with Autism Spectrum Disorders Scan Emotional Scenes  

ERIC Educational Resources Information Center

This study assessed visual search abilities, tested through the flicker task, in children diagnosed with autism spectrum disorders (ASDs). Twenty-two children diagnosed with ASD and 22 matched typically developing (TD) children were told to detect changes in objects of central interest or objects of marginal interest (MI) embedded in either…

Maccari, Lisa; Pasini, Augusto; Caroli, Emanuela; Rosa, Caterina; Marotta, Andrea; Martella, Diana; Fuentes, Luis J.; Casagrande, Maria

2014-01-01

356

Target Location Probability Effects in Visual Search: An Effect of Sequential Dependencies  

ERIC Educational Resources Information Center

Target location probability was manipulated in a visual search task. When the target was twice as likely to appear on 1 side of the display as the other, manual button-press response times were faster (Experiment 1A) and first saccades were more frequently directed (Experiment 1B) to the more probable locations. When the target appeared with equal…

Walthew, Carol; Gilchrist, Iain D.

2006-01-01

357

Towards Low Bit Rate Mobile Visual Search with Multiple-Channel Coding  

E-print Network

... potential for emerging mobile visual search and augmented reality applications, such as location recognition. ... Handheld mobile devices, such as smart camera phones, have great ... becomes even more crucial when we move towards streaming augmented reality applications. Indeed, this time

Rui, Yong

358

Visual Search Asymmetries within Color-Coded and Intensity-Coded Displays  

ERIC Educational Resources Information Center

Color and intensity coding provide perceptual cues to segregate categories of objects within a visual display, allowing operators to search more efficiently for needed information. Even within a perceptually distinct subset of display elements, however, it may often be useful to prioritize items representing urgent or task-critical information.…

Yamani, Yusuke; McCarley, Jason S.

2010-01-01

359

Low Target Prevalence Is a Stubborn Source of Errors in Visual Search Tasks  

E-print Network

Low Target Prevalence Is a Stubborn Source of Errors in Visual Search Tasks. Jeremy M. Wolfe, Todd S. Horowitz, Michael J. Van Wert, Naomi M. Kenner (Brigham and Women's Hospital and Harvard Medical School). ... with target prevalence, the frequency with which targets are presented across trials. Miss error rates

360

Eye Movement and Visual Search: Are There Elementary Abnormalities in Autism?  

ERIC Educational Resources Information Center

Although atypical eye gaze is commonly observed in autism, little is known about underlying oculomotor abnormalities. Our review of visual search and oculomotor systems in the healthy brain suggests that relevant networks may be partially impaired in autism, given regional abnormalities known from neuroimaging. However, direct oculomotor evidence…

Brenner, Laurie A.; Turner, Katherine C.; Muller, Ralph-Axel

2007-01-01

361

"Self pop-out": agency enhances self-recognition in visual search.  

PubMed

In real-life situations, we are often required to recognize our own movements among movements originating from other people. In social situations, these movements are often correlated (for example, when dancing or walking with others) adding considerable difficulty to self-recognition. Studies from visual search have shown that visual attention can selectively highlight specific features to make them more salient. Here, we used a novel visual search task employing virtual reality and motion tracking to test whether visual attention can use efferent information to enhance self-recognition of one's movements among four or six moving avatars. Active movements compared to passive movements allowed faster recognition of the avatar moving like the subject. Critically, search slopes were flat for the active condition but increased for passive movements, suggesting efficient search for active movements. In a second experiment, we tested the effects of using the participants' own movements temporally delayed as distractors in a self-recognition discrimination task. We replicated the results of the first experiment with more rapid self-recognition during active trials. Importantly, temporally delayed distractors increased reaction times despite being more perceptually different than the spatial distractors. The findings demonstrate the importance of agency in self-recognition and self-other discrimination from movement in social settings. PMID:23665753

Salomon, R; Lim, M; Kannape, O; Llobera, J; Blanke, O

2013-07-01

362

LearnIT: Enhanced Search and Visualization of IT Projects  

E-print Network

LearnIT is a project-centric view over a database of completed or ongoing projects, allowing a project manager to find ... Maher Rahmouni, Marianne Hickey, and Claudio Bartolini; HP Labs, Long Down Avenue, Bristol, BS34 8QZ, UK; HP Labs, 1501 Page Mill Rd

Boyer, Edmond

363

Low Target Prevalence Is a Stubborn Source of Errors in Visual Search Tasks  

ERIC Educational Resources Information Center

In visual search tasks, observers look for targets in displays containing distractors. Likelihood that targets will be missed varies with target prevalence, the frequency with which targets are presented across trials. Miss error rates are much higher at low target prevalence (1%-2%) than at high prevalence (50%). Unfortunately, low prevalence is…

Wolfe, Jeremy M.; Horowitz, Todd S.; Van Wert, Michael J.; Kenner, Naomi M.; Place, Skyler S.; Kibbi, Nour

2007-01-01

364

What Are the Shapes of Response Time Distributions in Visual Search?  

ERIC Educational Resources Information Center

Many visual search experiments measure response time (RT) as their primary dependent variable. Analyses typically focus on mean (or median) RT. However, given enough data, the RT distribution can be a rich source of information. For this paper, we collected about 500 trials per cell per observer for both target-present and target-absent displays…

Palmer, Evan M.; Horowitz, Todd S.; Torralba, Antonio; Wolfe, Jeremy M.

2011-01-01

365

Epistemic Beliefs, Online Search Strategies, and Behavioral Patterns While Exploring Socioscientific Issues  

NASA Astrophysics Data System (ADS)

Online information searching tasks are usually implemented in a technology-enhanced science curriculum or merged in an inquiry-based science curriculum. The purpose of this study was to examine the role students' different levels of scientific epistemic beliefs (SEBs) play in their online information searching strategies and behaviors. Based on the measurement of an SEB survey, 42 undergraduate and graduate students in Taiwan were recruited from a pool of 240 students and were divided into sophisticated and naïve SEB groups. The students' self-perceived online searching strategies were evaluated by the Online Information Searching Strategies Inventory, and their search behaviors were recorded by screen-capture videos. A sequential analysis was further used to analyze the students' searching behavioral patterns. The results showed that those students with more sophisticated SEBs tended to employ more advanced online searching strategies and to demonstrate a more metacognitive searching pattern.

Hsu, Chung-Yuan; Tsai, Meng-Jung; Hou, Huei-Tse; Tsai, Chin-Chung

2014-06-01
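
The sequential analysis mentioned above typically begins by tabulating lag-1 transitions between coded behaviors (which behavior tends to follow which). The sketch below shows only that first tabulation step on a hypothetical coding scheme; the study's actual behavior codes and the subsequent significance testing (e.g., adjusted residuals) are not reproduced here.

    # Illustrative sketch: lag-1 transition counts over a coded sequence of
    # online search behaviors. The behavior codes and the sequence are
    # hypothetical, not the study's coding scheme.
    from collections import Counter

    sequence = ["keyword", "scan_results", "open_page", "evaluate",
                "keyword", "scan_results", "evaluate", "open_page"]

    transitions = Counter(zip(sequence, sequence[1:]))   # (behavior, next behavior)
    for (a, b), n in sorted(transitions.items()):
        print(f"{a:>12} -> {b:<12} {n}")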

366

Comparison of visualized turbine endwall secondary flows and measured heat transfer patterns  

NASA Technical Reports Server (NTRS)

Various flow visualization techniques were used to define the secondary flows near the endwall in a large heat transfer data. A comparison of the visualized flow patterns and the measured Stanton number distribution was made for cases where the inlet Reynolds number and exit Mach number were matched. Flows were visualized by using neutrally buoyant helium-filled soap bubbles, by using smoke from oil soaked cigars, and by a few techniques using permanent marker pen ink dots and synthetic wintergreen oil. Details of the horseshoe vortex and secondary flows can be directly compared with heat transfer distribution. Near the cascade entrance there is an obvious correlation between the two sets of data, but well into the passage the effect of secondary flow is not as obvious. Previously announced in STAR as N83-14435

Gaugler, R. E.; Russell, L. M.

1984-01-01

367

Comparison of visualized turbine endwall secondary flows and measured heat transfer patterns  

NASA Technical Reports Server (NTRS)

Various flow visualization techniques were used to define the secondary flows near the endwall in a large heat transfer data. A comparison of the visualized flow patterns and the measured Stanton number distribution was made for cases where the inlet Reynolds number and exit Mach number were matched. Flows were visualized by using neutrally buoyant helium-filled soap bubbles, by using smoke from oil soaked cigars, and by a few techniques using permanent marker pen ink dots and synthetic wintergreen oil. Details of the horseshoe vortex and secondary flows can be directly compared with heat transfer distribution. Near the cascade entrance there is an obvious correlation between the two sets of data, but well into the passage the effect of secondary flow is not as obvious.

Gaugler, R. E.; Russell, L. M.

1983-01-01

368

Display format and highlight validity effects on search performance using complex visual displays  

NASA Technical Reports Server (NTRS)

Display format and highlight validity were shown to affect visual display search performance; however, these studies were conducted on small, artificial displays of alphanumeric stimuli. A study manipulating these variables was conducted using realistic, complex Space Shuttle information displays. A 2x2x3 within-subjects analysis of variance found that search times were faster for items in reformatted displays than for current displays. The significant format by highlight validity interaction showed that there was little difference in response time to both current and reformatted displays when the highlight validity was applied; however, under the non or invalid highlight conditions, search times were faster with reformatted displays. Benefits of highlighting and reformatting displays to enhance search and the necessity to consider highlight validity and format characteristics in tandem for predicting search performance are discussed.

Donner, Kimberly A.; Mckay, Tim; O'Brien, Kevin M.; Rudisill, Marianne

1991-01-01

369

Giant honeybees (Apis dorsata) mob wasps away from the nest by directed visual patterns.  

PubMed

The open nesting behaviour of giant honeybees (Apis dorsata) accounts for the evolution of a series of defence strategies to protect the colonies from predation. In particular, the concerted action of shimmering behaviour is known to effectively confuse and repel predators. In shimmering, bees on the nest surface flip their abdomens in a highly coordinated manner to generate Mexican wave-like patterns. The paper documents a further-going capacity of this kind of collective defence: the visual patterns of shimmering waves align regarding their directional characteristics with the projected flight manoeuvres of the wasps when preying in front of the bees' nest. The honeybees take here advantage of a threefold asymmetry intrinsic to the prey-predator interaction: (a) the visual patterns of shimmering turn faster than the wasps on their flight path, (b) they "follow" the wasps more persistently (up to 100 ms) than the wasps "follow" the shimmering patterns (up to 40 ms) and (c) the shimmering patterns align with the wasps' flight in all directions at the same strength, whereas the wasps have some preference for horizontal correspondence. The findings give evidence that shimmering honeybees utilize directional alignment to enforce their repelling power against preying wasps. This phenomenon can be identified as predator driving which is generally associated with mobbing behaviour (particularly known in selfish herds of vertebrate species), which is, until now, not reported in insects. PMID:25169944

Kastberger, Gerald; Weihmann, Frank; Zierler, Martina; Hötzl, Thomas

2014-11-01

370

Giant honeybees ( Apis dorsata) mob wasps away from the nest by directed visual patterns  

NASA Astrophysics Data System (ADS)

The open nesting behaviour of giant honeybees ( Apis dorsata) accounts for the evolution of a series of defence strategies to protect the colonies from predation. In particular, the concerted action of shimmering behaviour is known to effectively confuse and repel predators. In shimmering, bees on the nest surface flip their abdomens in a highly coordinated manner to generate Mexican wave-like patterns. The paper documents a further-going capacity of this kind of collective defence: the visual patterns of shimmering waves align regarding their directional characteristics with the projected flight manoeuvres of the wasps when preying in front of the bees' nest. The honeybees take here advantage of a threefold asymmetry intrinsic to the prey-predator interaction: (a) the visual patterns of shimmering turn faster than the wasps on their flight path, (b) they "follow" the wasps more persistently (up to 100 ms) than the wasps "follow" the shimmering patterns (up to 40 ms) and (c) the shimmering patterns align with the wasps' flight in all directions at the same strength, whereas the wasps have some preference for horizontal correspondence. The findings give evidence that shimmering honeybees utilize directional alignment to enforce their repelling power against preying wasps. This phenomenon can be identified as predator driving which is generally associated with mobbing behaviour (particularly known in selfish herds of vertebrate species), which is, until now, not reported in insects.

Kastberger, Gerald; Weihmann, Frank; Zierler, Martina; Hötzl, Thomas

2014-11-01

371

Giant honeybees (Apis dorsata) mob wasps away from the nest by directed visual patterns  

NASA Astrophysics Data System (ADS)

The open nesting behaviour of giant honeybees (Apis dorsata) accounts for the evolution of a series of defence strategies to protect the colonies from predation. In particular, the concerted action of shimmering behaviour is known to effectively confuse and repel predators. In shimmering, bees on the nest surface flip their abdomens in a highly coordinated manner to generate Mexican wave-like patterns. The paper documents a further-going capacity of this kind of collective defence: the visual patterns of shimmering waves align regarding their directional characteristics with the projected flight manoeuvres of the wasps when preying in front of the bees' nest. The honeybees take here advantage of a threefold asymmetry intrinsic to the prey-predator interaction: (a) the visual patterns of shimmering turn faster than the wasps on their flight path, (b) they "follow" the wasps more persistently (up to 100 ms) than the wasps "follow" the shimmering patterns (up to 40 ms) and (c) the shimmering patterns align with the wasps' flight in all directions at the same strength, whereas the wasps have some preference for horizontal correspondence. The findings give evidence that shimmering honeybees utilize directional alignment to enforce their repelling power against preying wasps. This phenomenon can be identified as predator driving which is generally associated with mobbing behaviour (particularly known in selfish herds of vertebrate species), which is, until now, not reported in insects.

Kastberger, Gerald; Weihmann, Frank; Zierler, Martina; Hötzl, Thomas

2014-08-01

372

Multi-voxel patterns of visual category representation during episodic encoding are predictive of subsequent memory  

PubMed Central

Successful encoding of episodic memories is thought to depend on contributions from prefrontal and temporal lobe structures. Neural processes that contribute to successful encoding have been extensively explored through univariate analyses of neuroimaging data that compare mean activity levels elicited during the encoding of events that are subsequently remembered vs. those subsequently forgotten. Here, we applied pattern classification to fMRI data to assess the degree to which distributed patterns of activity within prefrontal and temporal lobe structures elicited during the encoding of word-image pairs were diagnostic of the visual category (Face or Scene) of the encoded image. We then assessed whether representation of category information was predictive of subsequent memory. Classification analyses indicated that temporal lobe structures contained information robustly diagnostic of visual category. Information in prefrontal cortex was less diagnostic of visual category, but was nonetheless associated with highly reliable classifier-based evidence for category representation. Critically, trials associated with greater classifier-based estimates of category representation in temporal and prefrontal regions were associated with a higher probability of subsequent remembering. Finally, consideration of trial-by-trial variance in classifier-based measures of category representation revealed positive correlations between prefrontal and temporal lobe representations, with the strength of these correlations varying as a function of the category of image being encoded. Together, these results indicate that multi-voxel representations of encoded information can provide unique insights into how visual experiences are transformed into episodic memories. PMID:21925190

Kuhl, Brice A.; Rissman, Jesse; Wagner, Anthony D.

2012-01-01
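
Pattern classification of the kind described above is commonly implemented as a cross-validated linear classifier over voxel patterns. The sketch below shows that generic form on synthetic data; it is not the study's pipeline, and the trial counts, voxel counts, and classifier choice are assumptions.

    # Illustrative sketch (not the study's pipeline): cross-validated decoding
    # of visual category from voxel patterns. The data are random stand-ins;
    # trial/voxel counts and the classifier are assumptions.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(120, 500))      # 120 trials x 500 voxels (synthetic)
    y = rng.integers(0, 2, size=120)     # 0 = face, 1 = scene (synthetic labels)

    clf = LogisticRegression(max_iter=1000)
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"mean decoding accuracy: {scores.mean():.2f}")
    # Trial-wise classifier evidence (e.g., predict_proba on held-out trials)
    # could then be related to subsequent memory, as described above.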

373

Comparison of visually evoked local field potentials in isolated turtle brain: Patterned versus blank stimulation  

Microsoft Academic Search

Isolated turtle brain/eye preparation has recently been used as a bloodless animal model for detecting the magnetic resonance imaging (MRI) signal changes produced by visually evoked neuronal currents. The present work aims to determine whether checkerboard-patterned or full field flash (blank) stimulation should be used in order to achieve stronger neuronal responses in turtle brain/eye preparation. The knowledge gained in

Qingfei Luo; Huo Lu; Hanbing Lu; Yihong Yang; Jia-Hong Gao

2010-01-01

374

Does focused endogenous attention prevent attentional capture in pop-out visual search?  

PubMed Central

To investigate whether salient visual singletons capture attention when they appear outside the current endogenous attentional focus, we measured the N2pc component as a marker of attentional capture in a visual search task where target or nontarget singletons were presented at locations previously cued as task-relevant, or in the uncued irrelevant hemifield. In two experiments, targets were either defined by colour, or by a combination of colour and shape. The N2pc was elicited both for attended singletons and for singletons on the uncued side, demonstrating that focused endogenous attention cannot prevent attentional capture by salient unattended visual events. However, N2pc amplitudes were larger for attended and unattended singletons that shared features with the current target, suggesting that top-down task sets modulate the capacity of visual singletons to capture attention both within and outside the current attentional focus. PMID:19473304

Seiss, Ellen; Kiss, Monika; Eimer, Martin

2009-01-01

375

Adaptive but Non-Optimal Visual Search Behavior in Highlighted Displays  

E-print Network

Franklin P. Tamborello, II, and Michael D. Byrne. ... or highlighting, can speed performance in a visual search task. But designers of interfaces cannot always easily ... model of visual search in highlighted displays predicts. Users' sensitivity to highlighting

Byrne, Mike

376

Case study of visualizing global user download patterns using Google Earth and NASA World Wind  

SciTech Connect

Geo-visualization is significantly changing the way we view spatial data and discover information. On the one hand, a large number of spatial data are generated every day. On the other hand, these data are not well utilized due to the lack of free and easily used data-visualization tools. This becomes even worse when most of the spatial data remains in the form of plain text such as log files. This paper describes a way of visualizing massive plain-text spatial data at no cost by utilizing Google Earth and NASA World Wind. We illustrate our methods by visualizing over 170,000 global download requests for satellite images maintained by the Earth Resources Observation and Science (EROS) Center of U.S. Geological Survey (USGS). Our visualization results identify the most popular satellite images around the world and discover the global user download patterns. The benefits of this research are: 1. assisting in improving the satellite image downloading services provided by USGS, and 2. providing a proxy for analyzing the hot spot areas of research. Most importantly, our methods demonstrate an easy way to geovisualize massive textual spatial data, which is highly applicable to mining spatially referenced data and information on a wide variety of research domains (e.g., hydrology, agriculture, atmospheric science, natural hazard, and global climate change).

Zong, Ziliang; Job, Joshua; Zhang, Xuesong; Nijim, Mais; Qin, Xiao

2012-10-09
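
The record above describes converting plain-text, spatially referenced log records into a form that Google Earth or NASA World Wind can display. A minimal version of that conversion is a KML file of placemarks, sketched below; the log fields, aggregation, and file name are assumptions, and this is not the paper's pipeline.

    # Illustrative sketch (not the paper's pipeline): write aggregated
    # (lat, lon, count) records as a minimal KML file that Google Earth or
    # NASA World Wind can open. The records and file name are hypothetical.
    records = [
        (44.0, -103.5, 1520),            # (latitude, longitude, download count)
        (-15.8, -47.9, 310),
    ]

    placemarks = "\n".join(
        "  <Placemark>\n"
        f"    <name>{count} downloads</name>\n"
        f"    <Point><coordinates>{lon},{lat},0</coordinates></Point>\n"
        "  </Placemark>"
        for lat, lon, count in records
    )

    kml = (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<kml xmlns="http://www.opengis.net/kml/2.2">\n'
        f"<Document>\n{placemarks}\n</Document>\n</kml>\n"
    )

    with open("downloads.kml", "w", encoding="utf-8") as f:
        f.write(kml)

Note that KML expects coordinates in longitude, latitude, altitude order, which is why the tuple fields are swapped when written.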

377

The Visual Hemifield Asymmetry in the Spatial Blink during Singleton Search and Feature Search  

ERIC Educational Resources Information Center

The present study examined a visual field asymmetry in the contingent capture of attention that was previously observed by Du and Abrams (2010). In our first experiment, color singleton distractors that matched the color of a to-be-detected target produced a stronger capture of attention when they appeared in the left visual hemifield than in the…

Burnham, Bryan R.; Rozell, Cassandra A.; Kasper, Alex; Bianco, Nicole E.; Delliturri, Antony

2011-01-01

378

Incidental Learning Speeds Visual Search by Lowering Response Thresholds, Not by Improving Efficiency: Evidence from Eye Movements  

ERIC Educational Resources Information Center

When observers search for a target object, they incidentally learn the identities and locations of "background" objects in the same display. This learning can facilitate search performance, eliciting faster reaction times for repeated displays. Despite these findings, visual search has been successfully modeled using architectures that maintain no…

Hout, Michael C.; Goldinger, Stephen D.

2012-01-01

379

No attentional capture for simple visual search: evidence for a dual-route account.  

PubMed

An enduring question in visual attention research is whether unattended objects are subject to perceptual processing. The traditional view suggests that, whereas focal attention is required for the processing of complex features or for individuating objects, it is not required for detecting basic features. However, other models suggest that detecting basic features may be no different from object identification and also require focal attention. In the present study, we approach this problem by measuring the effect of attentional capture in simple and compound visual search tasks. To make sure measurements did not reflect strategic components of the tasks, we measured accuracy with brief displays. Results show that attentional capture influenced only compound but not basic feature searches, suggestive of a distinction between attentional requirements of the 2 tasks. We discuss our findings, together with recent results of top-down word cue effects and dimension-specific intertrial effects, in terms of the dual-route account for visual search, which suggests that the task that is being completed determines whether search is based on attentive or preattentive mechanisms. PMID:25181370

Chan, Louis K H; Hayward, William G

2014-12-01

380

Visual Ability and Searching Behavior of Adult Laricobius nigrinus, a Hemlock Woolly Adelgid Predator  

PubMed Central

Very little is known about the searching behavior and sensory cues that Laricobius spp. (Coleoptera: Derodontidae) predators use to locate suitable habitats and prey, which limits our ability to collect and monitor them for classical biological control of adelgids (Hemiptera: Adelgidae). The aim of this study was to examine the visual ability and the searching behavior of newly emerged L. nigrinus Fender, a host-specific predator of the hemlock woolly adelgid, Adelges tsugae Annand (Hemiptera: Phylloxeroidea: Adelgidae). In a laboratory bioassay, individual adults attempting to locate an uninfested eastern hemlock seedling under either light or dark conditions were observed in an arena. In another bioassay, individual adults searching for prey on hemlock seedlings (infested or uninfested) were continuously video-recorded. Beetles located and began climbing the seedling stem in light significantly more than in dark, indicating that vision is an important sensory modality. Our primary finding was that searching behavior of L. nigrinus, as in most species, was related to food abundance. Beetles did not fly in the presence of high A. tsugae densities and flew when A. tsugae was absent, which agrees with observed aggregations of beetles on heavily infested trees in the field. At close range of prey, slow crawling and frequent turning suggest the use of non-visual cues such as olfaction and contact chemoreception. Based on the beetles' visual ability to locate tree stems and their climbing behavior, a bole trap may be an effective collection and monitoring tool. PMID:22220637

Mausel, D.L.; Salom, S.M.; Kok, L.T.

2011-01-01

381

Differential Roles of the Fan-Shaped Body and the Ellipsoid Body in "Drosophila" Visual Pattern Memory  

ERIC Educational Resources Information Center

The central complex is a prominent structure in the "Drosophila" brain. Visual learning experiments in the flight simulator, with flies with genetically altered brains, revealed that two groups of horizontal neurons in one of its substructures, the fan-shaped body, were required for "Drosophila" visual pattern memory. However, little is known…

Pan, Yufeng; Zhou, Yanqiong; Guo, Chao; Gong, Haiyun; Gong, Zhefeng; Liu, Li

2009-01-01

382

Implications of sustained and transient channels for theories of visual pattern masking, saccadic suppression, and information processing  

Microsoft Academic Search

Reviews the visual masking literature in the context of known neurophysiological and psychophysical properties of the visual system's spatiotemporal response. The literature indicates that 3 consistent and typical pattern masking effects––(a) Type B forward or paracontrast, (b) Type B backward or metacontrast, and (c) Type A forward and backward––can be explained in terms of 3 simple sensory processes. It is

Bruno G. Breitmeyer; Leo Ganz

1976-01-01

383

Why Do We Move Our Eyes while Trying to Remember? The Relationship between Non-Visual Gaze Patterns and Memory  

ERIC Educational Resources Information Center

Non-visual gaze patterns (NVGPs) involve saccades and fixations that spontaneously occur in cognitive activities that are not ostensibly visual. While reasons for their appearance remain obscure, convergent empirical evidence suggests that NVGPs change according to processing requirements of tasks. We examined NVGPs in tasks with long-term memory…

Micic, Dragana; Ehrlichman, Howard; Chen, Rebecca

2010-01-01

384

Adding a Visualization Feature to Web Search Engines: It’s Time  

SciTech Connect

Since the first world wide web (WWW) search engine quietly entered our lives in 1994, the “information need” behind web searching has rapidly grown into a multi-billion dollar business that dominates the internet landscape, drives e-commerce traffic, propels global economy, and affects the lives of the whole human race. Today’s search engines are faster, smarter, and more powerful than those released just a few years ago. With the vast investment pouring into research and development by leading web technology providers and the intense emotion behind corporate slogans such as “win the web” or “take back the web,” I can’t help but ask why are we still using the very same “text-only” interface that was used 13 years ago to browse our search engine results pages (SERPs)? Why has the SERP interface technology lagged so far behind in the web evolution when the corresponding search technology has advanced so rapidly? In this article I explore some current SERP interface issues, suggest a simple but practical visual-based interface design approach, and argue why a visual approach can be a strong candidate for tomorrow’s SERP interface.

Wong, Pak C.

2008-11-11

385

Functional properties of sub-bands of oscillatory brain waves to pattern visual stimulation in man.  

PubMed

The scalp recorded transient visual evoked potential (VEP) represents the massed activity of a large number of neurons of the human visual cortex. Animal studies show that intracerebrally-recorded high frequency electrical activity represents binding between neurons participating in a cooperative response. We evaluated the relationship between scalp recorded high frequency activity and transient VEPs elicited by a repetitive (grating) pattern. Stimuli were 1 and 4 cycles/degree sinusoidal gratings, presented in an on/off mode. Following conventional averaging, the discrete wavelet transform (DWT) was applied. Multi-resolution decomposition was used to divide the responses into 6 orthogonal frequency bands. The results show that high frequency oscillatory activity in the beta and gamma frequency range is closely related in time to the N70 peak of the simultaneous VEP. Power in both bands is modulated by spatial frequency. Beta range response to hemifield stimulation recorded over a chain of electrodes over the occipital area lateralizes in the same manner as N70, while gamma range activity is insensitive to lateralization and is more closely linked to foveal stimulation. This dissociation between beta and gamma range activity suggests that different bands of high frequency oscillatory activity in humans, linked to visual stimulation, may represent different aspects of visual processing. PMID:10680560

Tzelepi, A; Bezerianos, T; Bodis-Wollner, I

2000-02-01
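
The multi-resolution decomposition described above splits an averaged evoked potential into sub-bands whose frequency ranges are set by the sampling rate and decomposition depth. The sketch below illustrates this with PyWavelets on a toy signal; the db4 wavelet, the five-level depth (giving six sub-bands), and the 512 Hz sampling rate are assumptions, not the study's exact choices.

    # Illustrative sketch (not the study's exact analysis): five-level wavelet
    # decomposition of a toy evoked potential into six sub-bands using
    # PyWavelets. Band edges below follow from the assumed 512 Hz rate.
    import numpy as np
    import pywt

    fs = 512.0
    t = np.arange(0, 1.0, 1.0 / fs)                  # 512 samples
    vep = np.sin(2 * np.pi * 10 * t) + 0.3 * np.sin(2 * np.pi * 40 * t)

    coeffs = pywt.wavedec(vep, "db4", level=5)       # [A5, D5, D4, D3, D2, D1]
    labels = ["A5 (~0-8 Hz)", "D5 (~8-16 Hz)", "D4 (~16-32 Hz, beta range)",
              "D3 (~32-64 Hz, gamma range)", "D2 (~64-128 Hz)", "D1 (~128-256 Hz)"]
    for name, c in zip(labels, coeffs):
        print(f"{name:28s} power {np.sum(c**2):8.2f}")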

386

Comparison of visually evoked local field potentials in isolated turtle brain: patterned versus blank stimulation.  

PubMed

Isolated turtle brain/eye preparation has recently been used as a bloodless animal model for detecting the magnetic resonance imaging (MRI) signal changes produced by visually evoked neuronal currents. The present work aims to determine whether checkerboard-patterned or full field flash (blank) stimulation should be used in order to achieve stronger neuronal responses in turtle brain/eye preparation. The knowledge gained in this study is essential for optimizing the visual stimulation methods in functional neuroimaging studies using turtle brain/eye preparation. In this study, visually evoked local field potentials (LFPs) were measured and compared in turtle visual cortex and optic tectum elicited by checkerboard and full field flash stimuli with three different inter-stimulus intervals (ISIs=5, 10, and 16s). It was found that the behavior of neuronal adaptation in the cortical and tectal LFP signals for checkerboard stimulation was comparable to flash stimulation. In addition, there was no significant difference in the LFP peak amplitudes (ISI=16s) between these two stimuli. These results indicate that the intensity of neuronal responses to checkerboard is comparable to flash stimulation. These two stimulation methods should be equivalent in functional neuroimaging studies using turtle brain/eye preparation. PMID:20034520

Luo, Qingfei; Lu, Huo; Lu, Hanbing; Yang, Yihong; Gao, Jia-Hong

2010-03-15

387

The evaluation of display symbology - A chronometric study of visual search. [on cathode ray tubes  

NASA Technical Reports Server (NTRS)

Three single-target visual search tasks were used to evaluate a set of CRT symbols for a helicopter traffic display. The search tasks were representative of the kinds of information extraction required in practice, and reaction time was used to measure the efficiency with which symbols could be located and identified. The results show that familiar numeric symbols were responded to more quickly than graphic symbols. The addition of modifier symbols such as a nearby flashing dot or surrounding square had a greater disruptive effect on the graphic symbols than the alphanumeric characters. The results suggest that a symbol set is like a list that must be learned. Factors that affect the time to respond to items in a list, such as familiarity and visual discriminability, and the division of list items into categories, also affect the time to identify symbols.

Remington, R.; Williams, D.

1984-01-01

388

On the selection and evaluation of visual display symbology Factors influencing search and identification times  

NASA Technical Reports Server (NTRS)

Three single-target visual search tasks were used to evaluate a set of cathode-ray tube (CRT) symbols for a helicopter situation display. The search tasks were representative of the information extraction required in practice, and reaction time was used to measure the efficiency with which symbols could be located and identified. Familiar numeric symbols were responded to more quickly than graphic symbols. The addition of modifier symbols, such as a nearby flashing dot or surrounding square, had a greater disruptive effect on the graphic symbols than did the numeric characters. The results suggest that a symbol set is, in some respects, like a list that must be learned. Factors that affect the time to identify items in a memory task, such as familiarity and visual discriminability, also affect the time to identify symbols. This analogy has broad implications for the design of symbol sets. An attempt was made to model information access with this class of display.

Remington, Roger; Williams, Douglas

1986-01-01

389

Visual Scanning Patterns and Executive Function in Relation to Facial Emotion Recognition in Aging  

PubMed Central

Objective The ability to perceive facial emotion varies with age. Relative to younger adults (YA), older adults (OA) are less accurate at identifying fear, anger, and sadness, and more accurate at identifying disgust. Because different emotions are conveyed by different parts of the face, changes in visual scanning patterns may account for age-related variability. We investigated the relation between scanning patterns and recognition of facial emotions. Additionally, as frontal-lobe changes with age may affect scanning patterns and emotion recognition, we examined correlations between scanning parameters and performance on executive function tests. Methods We recorded eye movements from 16 OA (mean age 68.9) and 16 YA (mean age 19.2) while they categorized facial expressions and non-face control images (landscapes), and administered standard tests of executive function. Results OA were less accurate than YA at identifying fear (p<.05, r=.44) and more accurate at identifying disgust (p<.05, r=.39). OA fixated less than YA on the top half of the face for disgust, fearful, happy, neutral, and sad faces (p's<.05, r's≥.38), whereas there was no group difference for landscapes. For OA, executive function was correlated with recognition of sad expressions and with scanning patterns for fearful, sad, and surprised expressions. Conclusion We report significant age-related differences in visual scanning that are specific to faces. The observed relation between scanning patterns and executive function supports the hypothesis that frontal-lobe changes with age may underlie some changes in emotion recognition. PMID:22616800

Circelli, Karishma S.; Clark, Uraina S.; Cronin-Golomb, Alice

2012-01-01

390

"Curing"the prevalence effect in visual search Brigham and Women's Hospital  

E-print Network

"Curing"the prevalence effect in visual search Brigham and Women's Hospital 1 V E R I TAS Harvard Medical School 2 1 1,2 1,2 Michael J.Van Wert Todd S.Horowitz Jeremy M.Wolfe The Prevalence Effect In many rates are 2-3 times higher at low (1-2%) target prevalence than at high (50%) prevalence (Wolfe et

391

The Accuracy of Saccadic and Perceptual Decisions in Visual Search  

NASA Technical Reports Server (NTRS)

Saccadic eye movements during search for a target embedded in noise are suboptimally guided by information about target location. Our goal is to compare the spatial information used to guide the saccades with that used for the perceptual decision. Three observers were asked to determine the location of a bright disk (diameter = 21 min) in white noise (signal-to-noise ratio = 4.2) from among 10 possible locations evenly spaced at 5.9 deg eccentricity. In the first of four conditions, observers used natural eye movements. In the three remaining conditions, observers fixated a central cross at all times. The fixation conditions consisted of three different presentation times (100, 200, 300 msec), each followed by a mask. Eye-position data were collected, with a resolution of approximately 0.2 deg. In the natural viewing condition, we measured the accuracy with respect to the target and the latency of the first saccade. In the fixation conditions, we discarded trials in which observers broke fixation. Perceptual performance was computed for all conditions. Averaged across observers, the first saccade was correct (closest to the target location) for 56 +/- (SD) % of trials (chance = 10 %) and occurred after a latency of 313 +/- 56 msec. Perceptual performance averaged 53 +/- 4, 63 +/- 4, 65 +/- 2 % correct at 100, 200, and 300 msec, respectively. For the signal-to-noise ratio used, at the time of initiation of the first saccade, there is little difference between the amount of information about target location available to the perceptual and saccadic systems.

Eckstein, Miguel P.; Stone, Leland S.; Beutter, B. B.; Stone, Leland S. (Technical Monitor)

1997-01-01
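
A minimal Python sketch of the scoring rule described above: a first saccade counts as correct when it lands closest to the actual target among the 10 candidate locations, so chance is 10%. The coordinates and landing point below are hypothetical, not data from the study.

```python
import numpy as np

def first_saccade_correct(endpoint, locations, target_index):
    """Return True if the saccade endpoint lies closest to the target
    among all candidate locations (the scoring rule described above)."""
    dists = np.linalg.norm(locations - endpoint, axis=1)
    return int(np.argmin(dists)) == target_index

# Hypothetical layout: 10 locations evenly spaced at 5.9 deg eccentricity.
angles = np.linspace(0, 2 * np.pi, 10, endpoint=False)
locations = 5.9 * np.column_stack([np.cos(angles), np.sin(angles)])

endpoint = np.array([5.0, 1.5])   # measured first-saccade landing point (deg)
print(first_saccade_correct(endpoint, locations, target_index=0))
# Chance level is 1/10 = 10%, as stated in the record.
```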

392

Adaptation in the Visual Cortex: Influence of Membrane Trajectory and Neuronal Firing Pattern on Slow Afterpotentials  

PubMed Central

The input/output relationship in primary visual cortex neurons is influenced by the history of the preceding activity. To understand the impact that membrane potential trajectory and firing pattern has on the activation of slow conductances in cortical neurons we compared the afterpotentials that followed responses to different stimuli evoking similar numbers of action potentials. In particular, we compared afterpotentials following the intracellular injection of either square or sinusoidal currents lasting 20 seconds. Both stimuli were intracellular surrogates of different neuronal responses to prolonged visual stimulation. Recordings from 99 neurons in slices of visual cortex revealed that for stimuli evoking an equivalent number of spikes, sinusoidal current injection activated a slow afterhyperpolarization of significantly larger amplitude (8.5±3.3 mV) and duration (33±17 s) than that evoked by a square pulse (6.4±3.7 mV, 28±17 s; p<0.05). Spike frequency adaptation had a faster time course and was larger during plateau (square pulse) than during intermittent (sinusoidal) depolarizations. Similar results were obtained in 17 neurons intracellularly recorded from the visual cortex in vivo. The differences in the afterpotentials evoked with both protocols were abolished by removing calcium from the extracellular medium or by application of the L-type calcium channel blocker nifedipine, suggesting that the activation of a calcium-dependent current is at the base of this afterpotential difference. These findings suggest that not only the spikes, but the membrane potential values and firing patterns evoked by a particular stimulation protocol determine the responses to any subsequent incoming input in a time window that spans for tens of seconds to even minutes. PMID:25380063

Descalzo, Vanessa F.; Gallego, Roberto; Sanchez-Vives, Maria V.

2014-01-01

393

Adaptation in the visual cortex: influence of membrane trajectory and neuronal firing pattern on slow afterpotentials.  

PubMed

The input/output relationship in primary visual cortex neurons is influenced by the history of the preceding activity. To understand the impact that membrane potential trajectory and firing pattern has on the activation of slow conductances in cortical neurons we compared the afterpotentials that followed responses to different stimuli evoking similar numbers of action potentials. In particular, we compared afterpotentials following the intracellular injection of either square or sinusoidal currents lasting 20 seconds. Both stimuli were intracellular surrogates of different neuronal responses to prolonged visual stimulation. Recordings from 99 neurons in slices of visual cortex revealed that for stimuli evoking an equivalent number of spikes, sinusoidal current injection activated a slow afterhyperpolarization of significantly larger amplitude (8.5 ± 3.3 mV) and duration (33 ± 17 s) than that evoked by a square pulse (6.4 ± 3.7 mV, 28 ± 17 s; p<0.05). Spike frequency adaptation had a faster time course and was larger during plateau (square pulse) than during intermittent (sinusoidal) depolarizations. Similar results were obtained in 17 neurons intracellularly recorded from the visual cortex in vivo. The differences in the afterpotentials evoked with both protocols were abolished by removing calcium from the extracellular medium or by application of the L-type calcium channel blocker nifedipine, suggesting that the activation of a calcium-dependent current is at the base of this afterpotential difference. These findings suggest that not only the spikes, but the membrane potential values and firing patterns evoked by a particular stimulation protocol determine the responses to any subsequent incoming input in a time window that spans for tens of seconds to even minutes. PMID:25380063

Descalzo, Vanessa F; Gallego, Roberto; Sanchez-Vives, Maria V

2014-01-01

394

Spatial ranking strategy and enhanced peripheral vision discrimination optimize performance and efficiency of visual sequential search.  

PubMed

Visual sequential search might use a peripheral spatial ranking of the scene to put the next target of the sequence in the correct order. This strategy, indeed, might enhance the discriminative capacity of the human peripheral vision and spare neural resources associated with foveation. However, it is not known how exactly the peripheral vision sustains sequential search and whether the sparing of neural resources has a cost in terms of performance. To elucidate these issues, we compared strategy and performance during an alpha-numeric sequential task where peripheral vision was modulated in three different conditions: normal, blurred, or obscured. If spatial ranking is applied to increase the peripheral discrimination, its use as a strategy in visual sequencing should differ according to the degree of discriminative information that can be obtained from the periphery. Moreover, if this strategy spares neural resources without impairing the performance, its use should be associated with better performance. We found that spatial ranking was applied when peripheral vision was fully available, reducing the number and time of explorative fixations. When the periphery was obscured, explorative fixations were numerous and sparse; when the periphery was blurred, explorative fixations were longer and often located close to the items. Performance was significantly improved by this strategy. Our results demonstrated that spatial ranking is an efficient strategy adopted by the brain in visual sequencing to highlight peripheral detection and discrimination; it reduces the neural cost by avoiding unnecessary foveations, and promotes sequential search by facilitating the onset of a new saccade. PMID:24893753

Veneri, Giacomo; Pretegiani, Elena; Fargnoli, Francesco; Rosini, Francesca; Vinciguerra, Claudia; Federighi, Pamela; Federico, Antonio; Rufa, Alessandra

2014-09-01

395

Selective learning of spatial configuration and object identity in visual search.  

PubMed

To conduct an efficient visual search, visual attention must be guided to a target appropriately. Previous studies have suggested that attention can be quickly guided to a target when the spatial configurations of search objects or the object identities have been repeated. This phenomenon is termed contextual cuing. In this study, we investigated the effect of learning spatial configurations, object identities, and a combination of both configurations and identities on visual search. The results indicated that participants could learn the contexts of spatial configurations, but not of object identities, even when both configurations and identities were completely correlated (Experiment 1). On the other hand, when only object identities were repeated, an effect of identity learning could be observed (Experiment 2). Furthermore, an additive effect of configuration learning and identity learning was observed when, in some trials, each context was the relevant cue for predicting the target (Experiment 3). Participants could learn only the context that was associated with target location (Experiment 4). These findings indicate that when multiple contexts are redundant, contextual learning occurs selectively, depending on the predictability of the target location. PMID:15129750

Endo, Nobutaka; Takeda, Yuji

2004-02-01

396

Neuronal activity in superior colliculus signals both stimulus identity and saccade goals during visual conjunction search.  

PubMed

Although we know that the process of saccade target selection is reflected in the activity of sensory-motor neurons within saccade executive centers, the description of this process at the neural level has yet to fully account for all selection outcomes. The current study sought to determine how neuronal activity in the intermediate layers of the superior colliculus (SC) determines correct saccade target selection by examining the activity of visuomovement neurons during both correct and error trials of monkeys performing a relatively difficult visual conjunction search task. We found that a stimulus presented in a neuron's response field, but not foveated, was associated with greater activity if it was the search target instead of a distractor, indicating that SC neurons could represent stimulus identity. Nevertheless, activity was greater when a saccade was made to a stimulus than when it was not, further implicating these neurons in selecting the saccade goal. Together with the related observation that, when the target fell in their response fields, SC neurons discharged significantly more if the monkey correctly selected it instead of a distractor, these results suggest that visual stimuli are selected when these neurons reach a critical activation level. Our findings show that the outcome of all visual search trials, regardless of the stimulus being selected, is predicted by SC neuronal activity. PMID:18217855

Shen, Kelly; Paré, Martin

2007-01-01

397

Visual search in ecological and non-ecological displays: evidence for a non-monotonic effect of complexity on performance.  

PubMed

Considerable research has been carried out on visual search, with single or multiple targets. However, most studies have used artificial stimuli with low ecological validity. In addition, little is known about the effects of target complexity and expertise in visual search. Here, we investigate visual search in three conditions of complexity (detecting a king, detecting a check, and detecting a checkmate) with chess players of two levels of expertise (novices and club players). Results show that the influence of target complexity depends on level of structure of the visual display. Different functional relationships were found between artificial (random chess positions) and ecologically valid (game positions) stimuli: With artificial, but not with ecologically valid stimuli, a "pop out" effect was present when a target was visually more complex than distractors but could be captured by a memory chunk. This suggests that caution should be exercised when generalising from experiments using artificial stimuli with low ecological validity to real-life stimuli. PMID:23320084

Chassy, Philippe; Gobet, Fernand

2013-01-01

398

Visual Search in Ecological and Non-Ecological Displays: Evidence for a Non-Monotonic Effect of Complexity on Performance  

PubMed Central

Considerable research has been carried out on visual search, with single or multiple targets. However, most studies have used artificial stimuli with low ecological validity. In addition, little is known about the effects of target complexity and expertise in visual search. Here, we investigate visual search in three conditions of complexity (detecting a king, detecting a check, and detecting a checkmate) with chess players of two levels of expertise (novices and club players). Results show that the influence of target complexity depends on level of structure of the visual display. Different functional relationships were found between artificial (random chess positions) and ecologically valid (game positions) stimuli: With artificial, but not with ecologically valid stimuli, a “pop out” effect was present when a target was visually more complex than distractors but could be captured by a memory chunk. This suggests that caution should be exercised when generalising from experiments using artificial stimuli with low ecological validity to real-life stimuli. PMID:23320084

Chassy, Philippe; Gobet, Fernand

2013-01-01

399

Modeling the Effect of Selection History on Pop-Out Visual Search  

PubMed Central

While attentional effects in visual selection tasks have traditionally been assigned “top-down” or “bottom-up” origins, more recently it has been proposed that there are three major factors affecting visual selection: (1) physical salience, (2) current goals and (3) selection history. Here, we look further into selection history by investigating Priming of Pop-out (POP) and the Distractor Preview Effect (DPE), two inter-trial effects that demonstrate the influence of recent history on visual search performance. Using the Ratcliff diffusion model, we model observed saccadic selections from an oddball search experiment that included a mix of both POP and DPE conditions. We find that the Ratcliff diffusion model can effectively model the manner in which selection history affects current attentional control in visual inter-trial effects. The model evidence shows that bias regarding the current trial's most likely target color is the most critical parameter underlying the effect of selection history. Our results are consistent with the view that the 3-item color-oddball task used for POP and DPE experiments is best understood as an attentional decision making task. PMID:24595032

Tseng, Yuan-Chi; Glaser, Joshua I.; Caddigan, Eamon; Lleras, Alejandro

2014-01-01
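
A minimal sketch of the Ratcliff-style diffusion process referenced above, assuming made-up parameter values: evidence drifts noisily toward one of two bounds, and a starting-point bias (the parameter the authors identify as most critical for selection-history effects) shifts both choices and latencies. This is a generic simulation, not the authors' model-fitting code.

```python
import numpy as np

def simulate_ddm(drift, bias, boundary=1.0, noise=1.0, dt=0.001, max_t=2.0, rng=None):
    """Simulate one diffusion trial; returns (choice, reaction_time).
    bias in (0, 1) sets the starting point between the two boundaries."""
    if rng is None:
        rng = np.random.default_rng()
    x = bias * boundary            # starting point (0.5 = unbiased)
    t = 0.0
    while 0.0 < x < boundary and t < max_t:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return (1 if x >= boundary else 0), t

rng = np.random.default_rng(0)
# Hypothetical bias toward the previously selected target color (0.6 vs 0.5).
trials = [simulate_ddm(drift=0.8, bias=0.6, rng=rng) for _ in range(2000)]
choices, rts = zip(*trials)
print("P(upper bound) =", np.mean(choices), " mean RT =", round(np.mean(rts), 3), "s")
```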

400

Serial, Covert, Shifts of Attention during Visual Search are Reflected by the Frontal Eye Fields and Correlated with Population Oscillations  

E-print Network

Attention regulates the flood of sensory information into a manageable stream, and so understanding how attention is controlled is central to understanding cognition. Competing theories suggest visual search involves serial ...

Buschman, Timothy J.

401

The Development of Visual Search in Infants and Very Young Children.  

ERIC Educational Resources Information Center

Trained 1- to 3-year-olds to touch a video screen displaying a unique target and appearing among varying numbers of distracters; correct responses triggered a sound and four animated objects on the screen. Found that children's reaction time patterns resembled those from adults in corresponding search tasks, suggesting that basic perceptual…

Gerhardstein, Peter; Rovee-Collier, Carolyn

2002-01-01

402

Selective pattern enhancement processing for digital mammography, algorithms, and the visual evaluation  

NASA Astrophysics Data System (ADS)

In order to enhance the micro calcifications selectively without enhancing noises, PEM (Pattern Enhancement Processing for Mammography) has been developed by utilizing not only the frequency information but also the structural information of the specified objects. PEM processing uses two structural characteristics, i.e. steep edge structure and low-density isolated-point structure. The visual evaluation of PEM processing was done using two different resolution CR mammography images. The enhanced image by PEM processing was compared with the image without enhancement, and the conventional unsharp-mask processed image. In the PEM processed image, an increase of noises due to enhancement was suppressed as compared with that in the conventional unsharp-mask processed image. The evaluation using CDMAM phantom showed that PEM processing improved the detection performance of a minute circular pattern. By combining PEM processing with the low and medium frequency enhancement processing, both mammary glands and micro calcifications are clearly enhanced.

Yamada, Masahiko; Shimura, Kazuo; Nagata, Takefumi

2003-05-01

403

Searching for patterns in remote sensing image databases using neural networks  

NASA Technical Reports Server (NTRS)

We have investigated a method, based on a successful neural network multispectral image classification system, of searching for single patterns in remote sensing databases. While defining the pattern to search for and the feature to be used for that search (spectral, spatial, temporal, etc.) is challenging, a more difficult task is selecting competing patterns to train against the desired pattern. Schemes for competing pattern selection, including random selection and human interpreted selection, are discussed in the context of an example detection of dense urban areas in Landsat Thematic Mapper imagery. When applying the search to multiple images, a simple normalization method can alleviate the problem of inconsistent image calibration. Another potential problem, that of highly compressed data, was found to have a minimal effect on the ability to detect the desired pattern. The neural network algorithm has been implemented using the PVM (Parallel Virtual Machine) library and nearly-optimal speedups have been obtained that help alleviate the long process of searching through imagery.

Paola, Justin D.; Schowengerdt, Robert A.

1995-01-01

404

The dynamics of attentional sampling during visual search revealed by Fourier analysis of periodic noise interference.  

PubMed

What are the temporal dynamics of perceptual sampling during visual search tasks, and how do they differ between a difficult (or inefficient) and an easy (or efficient) task? Does attention focus intermittently on the stimuli, or are the stimuli processed continuously over time? We addressed these questions by way of a new paradigm using periodic fluctuations of stimulus information during a difficult (color-orientation conjunction) and an easy (+ among Ls) search task. On each stimulus, we applied a dynamic visual noise that oscillated at a given frequency (2-20 Hz, 2-Hz steps) and phase (four cardinal phase angles) for 500 ms. We estimated the dynamics of attentional sampling by computing an inverse Fourier transform on subjects' d-primes. In both tasks, the sampling function presented a significant peak at 2 Hz; we showed that this peak could be explained by nonperiodic search strategies such as increased sensitivity to stimulus onset and offset. Specifically in the difficult task, however, a second, higher-frequency peak was observed at 9 to 10 Hz, with a similar phase for all subjects; this isolated frequency component necessarily entails oscillatory attentional dynamics. In a second experiment, we presented difficult search arrays with dynamic noise that was modulated by the previously obtained grand-average attention sampling function or by its converse function (in both cases omitting the 2 Hz component to focus on genuine oscillatory dynamics). We verified that performance was higher in the latter than in the former case, even for subjects who had not participated in the first experiment. This study supports the idea of a periodic sampling of attention during a difficult search task. Although further experiments will be needed to extend these findings to other search tasks, the present report validates the usefulness of this novel paradigm for measuring the temporal dynamics of attention. PMID:24525262

Dugué, Laura; Vanrullen, Rufin

2014-01-01
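
A sketch of the analysis described above, under the assumption that one d' value is available per noise frequency (2-20 Hz) and phase (four cardinal angles): the first Fourier component across phases estimates how strongly each frequency modulated performance, and summing the recovered components gives a time-domain sampling profile. The d' values here are fabricated placeholders, not data from the study.

```python
import numpy as np

freqs = np.arange(2, 21, 2)                       # 2-20 Hz in 2-Hz steps
phases = np.array([0, 90, 180, 270]) * np.pi / 180

# Hypothetical d' values: one row per noise frequency, one column per phase.
rng = np.random.default_rng(2)
dprime = 1.0 + 0.1 * rng.standard_normal((len(freqs), len(phases)))

# First Fourier component across the four phases at each frequency: how strongly
# (amplitude) and when (phase) that noise frequency modulated performance.
spectrum = np.array([np.sum(row * np.exp(-1j * phases)) / len(phases) for row in dprime])

# Reconstruct a time-domain sampling profile over the 500-ms stimulus window
# by summing the recovered sinusoidal components (a discrete inverse transform).
t = np.linspace(0, 0.5, 250)
profile = sum(2 * np.abs(a) * np.cos(2 * np.pi * f * t + np.angle(a))
              for f, a in zip(freqs, spectrum))
print(profile.shape)
```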

405

Gaze and visual search strategies of children with Asperger syndrome/high functioning autism viewing a magic trick.  

PubMed

Objective: To examine visual search patterns and strategies used by children with and without Asperger syndrome/high functioning autism (AS/HFA) while watching a magic trick. Limited responsivity to gaze cues is hypothesised to contribute to social deficits in children with AS/HFA. Methods: Twenty-one children with AS/HFA and 31 matched peers viewed a video of a gaze-cued magic trick twice. Between the viewings, they were informed about how the trick was performed. Participants' eye movements were recorded using a head-mounted eye-tracker. Results: Children with AS/HFA looked less frequently and had shorter fixations on the magician's direct and averted gazes during both viewings, and looked more frequently at objects that were not gaze-cued and at areas outside the magician's face. After being informed of how the trick was conducted, both groups made fewer fixations on gaze-cued objects and direct gaze. Conclusions: Information may enhance effective visual strategies in children with and without AS/HFA. PMID:24866104

Joosten, Annette; Girdler, Sonya; Albrecht, Matthew A; Horlin, Chiara; Falkmer, Marita; Leung, Denise; Ordqvist, Anna; Fleischer, Håkan; Falkmer, Torbjörn

2014-05-27

406

Visual search in noise: revealing the influence of structural cues by gaze-contingent classification image analysis.  

PubMed

Visual search experiments have usually involved the detection of a salient target in the presence of distracters against a blank background. In such high signal-to-noise scenarios, observers have been shown to use visual cues such as color, size, and shape of the target to program their saccades during visual search. The degree to which these features affect search performance is usually measured using reaction times and detection accuracy. We asked whether human observers are able to use target features to succeed in visual search tasks in stimuli with very low signal-to-noise ratios. Using the classification image analysis technique, we investigated whether observers used structural cues to direct their fixations as they searched for simple geometric targets embedded at very low signal-to-noise ratios in noise stimuli that had the spectral characteristics of natural images. By analyzing properties of the noise stimulus at observers' fixations, we were able to reveal idiosyncratic, target-dependent features used by observers in our visual search task. We demonstrate that even in very noisy displays, observers do not search randomly, but in many cases they deploy their fixations to regions in the stimulus that resemble some aspect of the target in their local image features. PMID:16889476

Rajashekar, Umesh; Bovik, Alan C; Cormack, Lawrence K

2006-01-01
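
A rough sketch of the gaze-contingent classification-image idea above: average the noise content of the stimulus in a window around each fixation, so that any target-like structure observers tend to fixate emerges from the average. The noise image, fixation list and patch size are placeholders; the study itself used noise with natural-image spectra and fixations from an eye tracker.

```python
import numpy as np

def fixation_classification_image(noise_image, fixations, half_width=32):
    """Average noise patches centered on fixations; structure that attracts
    gaze more than chance emerges from the average (a classification image)."""
    acc = np.zeros((2 * half_width, 2 * half_width))
    n = 0
    for (row, col) in fixations:
        patch = noise_image[row - half_width:row + half_width,
                            col - half_width:col + half_width]
        if patch.shape == acc.shape:      # skip fixations too close to the border
            acc += patch
            n += 1
    return acc / max(n, 1)

# Hypothetical data: white noise stands in for the naturalistic noise used above.
rng = np.random.default_rng(1)
noise = rng.standard_normal((512, 512))
fixations = [(256, 256), (300, 200), (128, 400)]   # (row, col) from an eye tracker
ci = fixation_classification_image(noise, fixations)
print(ci.shape)
```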

407

The effect of flower-like and non-flower-like visual properties on choice of unrewarding patterns by bumblebees  

NASA Astrophysics Data System (ADS)

How do distinct visual stimuli help bumblebees discover flowers before they have experienced any reward outside of their nest? Two visual floral properties, type of pattern (concentric vs radial) and its position on unrewarding artificial flowers (central vs peripheral on corolla), were manipulated in two experiments. Both visual properties showed significant effects on floral choice. When pitted against each other, pattern was more important than position. Experiment 1 shows a significant effect of concentric pattern position, and experiment 2 shows a significant preference towards radial patterns regardless of their position. These results show that the presence of markings at the center of a flower is not as important as the presence of markings that will direct bees there.

Orbán, Levente L.; Plowright, Catherine M. S.

2013-07-01

408

Searching for Truth: Internet Search Patterns as a Method of Investigating Online Responses to a Russian Illicit Drug Policy Debate  

PubMed Central

Background This is a methodological study investigating the online responses to a national debate over an important health and social problem in Russia. Russia is the largest Internet market in Europe, exceeding Germany in the absolute number of users. However, Russia is unusual in that the main search provider is not Google, but Yandex. Objective This study had two main objectives. First, to validate Yandex search patterns against those provided by Google, and second, to test this method's adequacy for investigating online interest in a 2010 national debate over Russian illicit drug policy. We hoped to learn what search patterns and specific search terms could reveal about the relative importance and geographic distribution of interest in this debate. Methods A national drug debate, centering on the anti-drug campaigner Egor Bychkov, was one of the main Russian domestic news events of 2010. Public interest in this episode was accompanied by increased Internet search. First, we measured the search patterns for 13 search terms related to the Bychkov episode and concurrent domestic events by extracting data from Google Insights for Search (GIFS) and Yandex WordStat (YaW). We conducted Spearman Rank Correlation of GIFS and YaW search data series. Second, we coded all 420 primary posts from Bychkov's personal blog between March 2010 and March 2012 to identify the main themes. Third, we compared GIFS and Yandex policies concerning the public release of search volume data. Finally, we established the relationship between salient drug issues and the Bychkov episode. Results We found a consistent pattern of strong to moderate positive correlations between Google and Yandex for the terms "Egor Bychkov" (r_s = 0.88, P < .001), “Bychkov” (r_s = .78, P < .001) and “Khimki” (r_s = 0.92, P < .001). Peak search volumes for the Bychkov episode were comparable to other prominent domestic political events during 2010. Monthly search counts were 146,689 for “Bychkov” and 48,084 for “Egor Bychkov”, compared to 53,403 for “Khimki” in Yandex. We found Google potentially provides timely search results, whereas Yandex provides more accurate geographic localization. The correlation was moderate to strong between search terms representing the Bychkov episode and terms representing salient drug issues in Yandex–“illicit drug treatment” (r_s = .90, P < .001), "illicit drugs" (r_s = .76, P < .001), and "drug addiction" (r_s = .74, P < .001). Google correlations were weaker or absent–"illicit drug treatment" (r_s = .12, P = .58), “illicit drugs” (r_s = -0.29, P = .17), and "drug addiction" (r_s = .68, P < .001). Conclusions This study contributes to the methodological literature on the analysis of search patterns for public health. This paper investigated the relationship between Google and Yandex, and contributed to the broader methods literature by highlighting both the potential and limitations of these two search providers. We believe that Yandex Wordstat is a potentially valuable, and underused data source for researchers working on Russian-related illicit drug policy and other public health problems. The Russian Federation, with its large, geographically dispersed, and politically engaged online population presents unique opportunities for studying the evolving influence of the Internet on politics and policy, using low cost methods resilient against potential increases in censorship. PMID:23238600

Gillespie, James A; Quinn, Casey

2012-01-01
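
The validation step above, Spearman rank correlation between aligned Google and Yandex search-volume series for the same term, is straightforward to reproduce. A small sketch with invented monthly counts follows; real input would come from Google Insights for Search / Yandex Wordstat exports.

```python
from scipy.stats import spearmanr

# Hypothetical monthly search volumes for one term from the two providers.
google = [120, 340, 980, 450, 300, 280, 260, 900, 310, 290, 270, 265]
yandex = [100, 300, 950, 400, 310, 270, 250, 870, 300, 280, 275, 260]

rho, p = spearmanr(google, yandex)   # rank correlation is robust to scale differences
print(f"Spearman r_s = {rho:.2f}, p = {p:.3g}")
```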

409

Spatial Attention Evokes Similar Activation Patterns for Visual and Auditory Stimuli  

PubMed Central

Neuroimaging studies suggest that a fronto-parietal network is activated when we expect visual information to appear at a specific spatial location. Here we examined whether a similar network is involved for auditory stimuli. We used sparse fMRI to infer brain activation while participants performed analogous visual and auditory tasks. On some trials, participants were asked to discriminate the elevation of a peripheral target. On other trials, participants made a nonspatial judgment. We contrasted trials where the participants expected a peripheral spatial target to those where they were cued to expect a central target. Crucially, our statistical analyses were based on trials where stimuli were anticipated but not presented, allowing us to directly infer perceptual orienting independent of perceptual processing. This is the first neuroimaging study to use an orthogonal-cuing paradigm (with cues predicting azimuth and responses involving elevation discrimination). This aspect of our paradigm is important, as behavioral cueing effects in audition are classically only observed when participants are asked to make spatial judgments. We observed similar fronto-parietal activation for both vision and audition. In a second experiment that controlled for stimulus properties and task difficulty, participants made spatial and temporal discriminations about musical instruments. We found that the pattern of brain activation for spatial selection of auditory stimuli was remarkably similar to what we found in our first experiment. Collectively, these results suggest that the neural mechanisms supporting spatial attention are largely similar across both visual and auditory modalities. PMID:19400684

Smith, David V.; Davis, Ben; Niu, Kathy; Healy, Eric W.; Bonilha, Leonardo; Fridriksson, Julius; Morgan, Paul S.; Rorden, Chris

2010-01-01

410

Structural connectivity patterns associated with the putative visual word form area and children's reading ability.  

PubMed

With the advent of neuroimaging techniques, especially functional MRI (fMRI), studies have mapped brain regions that are associated with good and poor reading, most centrally a region within the left occipito-temporal/fusiform region (L-OT/F) often referred to as the visual word form area (VWFA). Despite an abundance of fMRI studies of the putative VWFA, research about its structural connectivity has just started. Provided that the putative VWFA may be connected to distributed regions in the brain, it remains unclear how this network is engaged in constituting a well-tuned reading circuitry in the brain. Here we used diffusion MRI to study the structural connectivity patterns of the putative VWFA and surrounding areas within the L-OT/F in children with typically developing (TD) reading ability and with word recognition deficits (WRD; sometimes referred to as dyslexia). We found that L-OT/F connectivity varied along a posterior-anterior gradient, with specific structural connectivity patterns related to reading ability in the ROIs centered upon the putative VWFA. Findings suggest that the architecture of the putative VWFA connectivity is fundamentally different between TD and WRD, with TD showing greater connectivity to linguistic regions than WRD, and WRD showing greater connectivity to visual and parahippocampal regions than TD. Findings thus reveal clear structural abnormalities underlying the functional abnormalities in the putative VWFA in WRD. PMID:25152466

Fan, Qiuyun; Anderson, Adam W; Davis, Nicole; Cutting, Laurie E

2014-10-24

411

Pattern-Dependent Response Modulations in Motion-Sensitive Visual Interneurons—A Model Study  

PubMed Central

Even if a stimulus pattern moves at a constant velocity across the receptive field of motion-sensitive neurons, such as lobula plate tangential cells (LPTCs) of flies, the response amplitude modulates over time. The amplitude of these response modulations is related to local pattern properties of the moving retinal image. On the one hand, pattern-dependent response modulations have previously been interpreted as 'pattern-noise', because they deteriorate the neuron's ability to provide unambiguous velocity information. On the other hand, these modulations might also provide the system with valuable information about the textural properties of the environment. We analyzed the influence of the size and shape of receptive fields by simulations of four versions of LPTC models consisting of arrays of elementary motion detectors of the correlation type (EMDs). These models have previously been suggested to account for many aspects of LPTC response properties. Pattern-dependent response modulations decrease with an increasing number of EMDs included in the receptive field of the LPTC models, since spatial changes within the visual field are smoothed out by the summation of spatially displaced EMD responses. This effect depends on the shape of the receptive field, being the more pronounced - for a given total size - the more elongated the receptive field is along the direction of motion. Large elongated receptive fields improve the quality of velocity signals. However, if motion signals need to be localized the velocity coding is only poor but the signal provides – potentially useful – local pattern information. These modelling results suggest that motion vision by correlation type movement detectors is subject to uncertainty: you cannot obtain both an unambiguous and a localized velocity signal from the output of a single cell. Hence, the size and shape of receptive fields of motion sensitive neurons should be matched to their potential computational task. PMID:21760894

Meyer, Hanno Gerd; Lindemann, Jens Peter; Egelhaaf, Martin

2011-01-01

412

Facilitated Visual Search at Low Color Contrast (Kenneth Knoblauch, V. Mazoyer, F. Koenig, F. Vital-Durand)  

E-print Network

Facilitated Visual Search at Low Color Contrast. Kenneth Knoblauch, V. Mazoyer, F. Koenig, F. Vital-Durand (contact: knoblauc@vision.univ-st-etienne.fr). Abstract fragment: the influence of color contrast on visual search in observers with low vision; in a typical experiment, the reaction time

Paris-Sud XI, Université de

413

Non-Searching for Jobs: Patterns and Payoffs to Non-Searching Across the Work Career  

Microsoft Academic Search

While conventional wisdom suggests that getting jobs is more about “who you know” than “what you know”, the empirical evidence on job searching shows that people who rely on their personal contacts when searching for a job generally do not receive any benefits over people who use more formal job search methods. Since the most advantaged social groups are

Steve McDonald

2004-01-01

414

The impact of clinical indications on visual search behaviour in skeletal radiographs  

NASA Astrophysics Data System (ADS)

The hazards associated with ionizing radiation have been documented in the literature and therefore justifying the need for X-ray examinations has come to the forefront of the radiation safety debate in recent years [1]. International legislation states that the referrer is responsible for the provision of sufficient clinical information to enable the justification of the medical exposure. Clinical indications are a set of systematically developed statements to assist in accurate diagnosis and appropriate patient management [2]. In this study, the impact of clinical indications upon fracture detection for musculoskeletal radiographs is analyzed. A group of radiographers (n=6) interpreted musculoskeletal radiology cases (n=33) with and without clinical indications. Radiographic images were selected to represent common trauma presentations of extremities and pelvis. Detection of the fracture was measured using ROC methodology. An eyetracking device was employed to record radiographers' search behavior by analysing distinct fixation points and search patterns, resulting in a greater level of insight and understanding into the influence of clinical indications on observers' interpretation of radiographs. The influence of clinical information on fracture detection and search patterns was assessed. Findings of this study demonstrate that the inclusion of clinical indications results in impressionable search behavior. Differences in eye tracking parameters were also noted. This study also attempts to uncover fundamental observer search strategies and behavior with and without clinical indications, thus providing a greater understanding and insight into the image interpretation process. Results of this study suggest that availability of adequate clinical data should be emphasized for interpreting trauma radiographs.

Rutledge, A.; McEntee, M. F.; Rainford, L.; O'Grady, M.; McCarthy, K.; Butler, M. L.

2011-03-01
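
Fracture detection above was scored with ROC methodology. One simple way to obtain an ROC area from confidence ratings is the rank-based (Mann-Whitney) estimate sketched below; the ratings are invented, and the study may well have used dedicated ROC-fitting software instead.

```python
import numpy as np

def auc_from_ratings(ratings_abnormal, ratings_normal):
    """Nonparametric ROC area: probability that an abnormal case receives a
    higher confidence rating than a normal one (ties count as 0.5)."""
    a = np.asarray(ratings_abnormal)[:, None]
    n = np.asarray(ratings_normal)[None, :]
    return float(np.mean((a > n) + 0.5 * (a == n)))

# Hypothetical 1-5 confidence ratings with and without clinical indications.
with_indications = auc_from_ratings([5, 4, 4, 3, 5], [1, 2, 2, 3, 1])
without_indications = auc_from_ratings([4, 3, 4, 2, 5], [2, 2, 3, 3, 1])
print(with_indications, without_indications)
```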

415

Gender Differences in Patterns of Searching the Web  

ERIC Educational Resources Information Center

There has been a national call for increased use of computers and technology in schools. Currently, however, little is known about how students use and learn from these technologies. This study explores how eighth-grade students use the Web to search for, browse, and find information in response to a specific prompt (how mosquitoes find their…

Roy, Marguerite; Chi, Michelene T. H.

2003-01-01

416

Interactions of visual odometry and landmark guidance during food search in honeybees.  

PubMed

How do honeybees use visual odometry and goal-defining landmarks to guide food search? In one experiment, bees were trained to forage in an optic-flow-rich tunnel with a landmark positioned directly above the feeder. Subsequent food-search tests indicated that bees searched much more accurately when both odometric and landmark cues were available than when only odometry was available. When the two cue sources were set in conflict, by shifting the position of the landmark in the tunnel during test, bees overwhelmingly used landmark cues rather than odometry. In another experiment, odometric cues were removed by training and testing in axially striped tunnels. The data show that bees did not weight landmarks as highly as when odometric cues were available, tending to search in the vicinity of the landmark for shorter periods. A third experiment, in which bees were trained with odometry but without a landmark, showed that a novel landmark placed anywhere in the tunnel during testing prevented bees from searching beyond the landmark location. Two further experiments, involving training bees to relatively longer distances with a goal-defining landmark, produced similar results to the initial experiment. One caveat was that, with the removal of the familiar landmark, bees tended to overshoot the training location, relative to the case where bees were trained without a landmark. Taken together, the results suggest that bees assign appropriate significance to odometric and landmark cues in a more flexible and dynamic way than previously envisaged. PMID:16244171

Vladusich, Tony; Hemmi, Jan M; Srinivasan, Mandyam V; Zeil, Jochen

2005-11-01

417

Beam angle optimization for intensity-modulated radiation therapy using a guided pattern search method  

NASA Astrophysics Data System (ADS)

Generally, the inverse planning of radiation therapy consists mainly of the fluence optimization. The beam angle optimization (BAO) in intensity-modulated radiation therapy (IMRT) consists of selecting appropriate radiation incidence directions and may influence the quality of the IMRT plans, both to enhance better organ sparing and to improve tumor coverage. However, in clinical practice, most of the time, beam directions continue to be manually selected by the treatment planner without objective and rigorous criteria. The goal of this paper is to introduce a novel approach that uses beam’s-eye-view dose ray tracing metrics within a pattern search method framework in the optimization of the highly non-convex BAO problem. Pattern search methods are derivative-free optimization methods that require a few function evaluations to progress and converge and have the ability to better avoid local entrapment. The pattern search method framework is composed of a search step and a poll step at each iteration. The poll step performs a local search in a mesh neighborhood and ensures the convergence to a local minimizer or stationary point. The search step provides the flexibility for a global search since it allows searches away from the neighborhood of the current iterate. Beam’s-eye-view dose metrics assign a score to each radiation beam direction and can be used within the pattern search framework furnishing a priori knowledge of the problem so that directions with larger dosimetric scores are tested first. A set of clinical cases of head-and-neck tumors treated at the Portuguese Institute of Oncology of Coimbra is used to discuss the potential of this approach in the optimization of the BAO problem.

Rocha, Humberto; Dias, Joana M.; Ferreira, Brígida C.; Lopes, Maria C.

2013-05-01
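
A minimal sketch of the poll step of a generalized pattern search, the derivative-free framework described above: probe the objective at mesh neighbours of the current point, accept any improvement, and halve the mesh when nothing improves. The toy objective over two beam angles is an assumption for illustration only; the beam's-eye-view dose scoring the authors use in the search step is not reproduced here.

```python
import numpy as np

def pattern_search(objective, x0, step=8.0, min_step=0.5, max_iter=200):
    """Poll-step-only pattern search: probe +/- step along each coordinate
    (e.g. beam angles in degrees), accept improvements, halve the mesh
    when no neighbour improves, stop when the mesh is fine enough."""
    x = np.asarray(x0, dtype=float)
    fx = objective(x)
    for _ in range(max_iter):
        improved = False
        for i in range(len(x)):
            for sign in (+1.0, -1.0):
                cand = x.copy()
                cand[i] += sign * step
                fc = objective(cand)
                if fc < fx:
                    x, fx, improved = cand, fc, True
        if not improved:
            step /= 2.0
            if step < min_step:
                break
    return x, fx

# Toy stand-in for a non-convex plan-quality objective over two beam angles.
toy = lambda a: np.sin(np.radians(a[0])) + 0.5 * np.cos(np.radians(2 * a[1]))
print(pattern_search(toy, x0=[0.0, 90.0]))
```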

418

Effect of a Concurrent Auditory Task on Visual Search Performance in a Driving-Related Image-Flicker Task  

Microsoft Academic Search

The effect of a concurrent auditory task on visual search was investigated using an image-flicker technique. Participants were undergraduate university students with normal or corrected-to-normal vision who searched for changes in images of driving scenes that involved either driving-related (e.g., traffic light) or driving-unrelated (e.g., mailbox) scene elements. The results indicated that response times were significantly slower if the search

Christian M. Richard; Richard D. Wright; Cheryl Ee; Steven L. Prime; Yujiro Shimizu; John Vavrik

2002-01-01

419

Parietal substrates for dimensional effects in visual search: evidence from lesion-symptom mapping  

PubMed Central

In visual search, the detection of pop-out targets is facilitated when the target-defining dimension remains the same compared with when it changes across trials. We tested the brain regions necessary for these dimensional carry-over effects using a voxel-based morphometry study with brain-lesioned patients. Participants had to search for targets defined by either their colour (red or blue) or orientation (right- or left-tilted), and the target dimension either stayed the same or changed on consecutive trials. Twenty-five patients were categorized according to whether they showed an effect of dimensional change on search or not. The two groups did not differ with regard to their performance on several working memory tasks, and the dimensional carry-over effects were not correlated with working memory performance. With spatial, sustained attention and working memory deficits as well as lesion volume controlled, damage within the right inferior parietal lobule (the angular and supramarginal gyri) extending into the intraparietal sulcus was associated with an absence of dimensional carry-over (P < 0.001, cluster-level corrected for multiple comparisons). The data suggest that these regions of parietal cortex are necessary to implement attention shifting in the context of visual dimensional change. PMID:23404335

Humphreys, Glyn W.; Chechlacz, Magdalena

2013-01-01

420

Pattern drilling exploration: Optimum pattern types and hole spacings when searching for elliptical shaped targets  

USGS Publications Warehouse

In this study the selection of the optimum type of drilling pattern to be used when exploring for elliptical shaped targets is examined. The rhombic pattern is optimal when the targets are known to have a preferred orientation. Situations can also be found where a rectangular pattern is as efficient as the rhombic pattern. A triangular or square drilling pattern should be used when the orientations of the targets are unknown. The way in which the optimum hole spacing varies as a function of (1) the cost of drilling, (2) the value of the targets, (3) the shape of the targets, (4) the target occurrence probabilities was determined for several examples. Bayes' rule was used to show how target occurrence probabilities can be revised within a multistage pattern drilling scheme. © 1979 Plenum Publishing Corporation.

Drew, L.J.

1979-01-01
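
The Bayes'-rule revision mentioned above, updating the probability that a target is present in a cell after successive pattern holes have missed, can be written out for a single cell. The prior and the per-hole hit probability below are hypothetical numbers, not values from the study.

```python
def revise_target_probability(prior, p_hit_given_present, n_dry_holes):
    """Posterior probability that an elliptical target is present in a cell
    after n_dry_holes pattern holes have all missed it (Bayes' rule)."""
    p_all_miss_if_present = (1.0 - p_hit_given_present) ** n_dry_holes
    numerator = prior * p_all_miss_if_present
    return numerator / (numerator + (1.0 - prior))

# Hypothetical: 20% prior, each hole has a 30% chance of hitting a target if present.
for n in range(4):
    print(n, "dry holes ->", round(revise_target_probability(0.20, 0.30, n), 3))
```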

421

Paying Attention: Being a Naturalist and Searching for Patterns.  

ERIC Educational Resources Information Center

Discusses the importance of recognizing patterns in nature to help understand the interactions of living and non-living things. Cautions the student not to lose sight of the details when studying the big picture. Encourages development of the ability to identify local species. Suggests two activities to strengthen observation skills and to help in…

Weisberg, Saul

1996-01-01

422

A probabilistic model for analysing the effect of performance levels on visual behaviour patterns of young sailors in simulated navigation.  

PubMed

The visual behaviour is a determining factor in sailing due to the influence of the environmental conditions. The aim of this research was to determine the visual behaviour pattern in sailors with different practice time in one star race, applying a probabilistic model based on Markov chains. The sample of this study consisted of 20 sailors, distributed in two groups, top ranking (n = 10) and bottom ranking (n = 10), all of whom competed in the Optimist Class. An automated system of measurement, which integrates the VSail-Trainer® sail simulator and the Eye Tracking System(TM), was used. The variables under consideration were the sequence of fixations and the fixation recurrence time performed on each location by the sailors. The event consisted of one simulated regatta start, with stable conditions of wind, competitor and sea. Results show that top ranking sailors perform a low recurrence time on relevant locations and higher on irrelevant locations, while bottom ranking sailors make a low recurrence time in most of the locations. The visual pattern performed by bottom ranking sailors is focused around two visual pivots, which does not happen in the top ranking sailors' pattern. In conclusion, the Markov chain analysis made it possible to characterize and compare the visual behaviour patterns of the top and bottom ranking sailors. PMID:25296294

Manzanares, Aarón; Menayo, Ruperto; Segado, Francisco; Salmerón, Diego; Cano, Juan Antonio

2015-04-01
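
A first-order Markov-chain description of a fixation sequence, as in the record above, reduces to estimating a row-normalized transition matrix between areas of interest. A short sketch with made-up AOI labels and fixation data follows; the recurrence-time measures used in the study are not computed here.

```python
import numpy as np

def transition_matrix(sequence, n_states):
    """Row-normalized first-order transition probabilities between AOIs."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(sequence[:-1], sequence[1:]):
        counts[a, b] += 1
    rows = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, rows, out=np.zeros_like(counts), where=rows > 0)

# Hypothetical AOIs: 0 = sail, 1 = bow/start line, 2 = competitor, 3 = wind indicator.
fixation_sequence = [0, 1, 1, 3, 0, 1, 2, 1, 3, 0, 0, 1]
print(transition_matrix(fixation_sequence, n_states=4).round(2))
```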

423

An efficient technique for revealing visual search strategies with classification images.  

PubMed

We propose a novel variant of the classification image paradigm that allows us to rapidly reveal strategies used by observers in visual search tasks. We make use of eye tracking, 1/f noise, and a grid-like stimulus ensemble and also introduce a new classification taxonomy that distinguishes between foveal and peripheral processes. We tested our method for 3 human observers and two simple shapes used as search targets. The classification images obtained show the efficacy of the proposed method by revealing the features used by the observers in as few as 200 trials. Using two control experiments, we evaluated the use of naturalistic 1/f noise with classification images, in comparison with the more commonly used white noise, and compared the performance of our technique with that of an earlier approach without a stimulus grid. PMID:17515220

Tavassoli, Abtine; van der Linde, Ian; Bovik, Alan C; Cormack, Lawrence K

2007-01-01

424

On the origin of event-related potentials indexing covert attentional selection during visual search.  

PubMed

Despite nearly a century of electrophysiological studies recording extracranially from humans and intracranially from monkeys, the neural generators of nearly all human event-related potentials (ERPs) have not been definitively localized. We recorded an attention-related ERP component, known as the N2pc, simultaneously with intracranial spikes and local field potentials (LFPs) in macaques to test the hypothesis that an attentional-control structure, the frontal eye field (FEF), contributed to the generation of the macaque homologue of the N2pc (m-N2pc). While macaques performed a difficult visual search task, the search target was selected earliest by spikes from single FEF neurons, later by FEF LFPs, and latest by the m-N2pc. This neurochronometric comparison provides an empirical bridge connecting macaque and human experiments and a step toward localizing the neural generator of this important attention-related ERP component. PMID:19675287

Cohen, Jeremiah Y; Heitz, Richard P; Schall, Jeffrey D; Woodman, Geoffrey F

2009-10-01

425

Low target prevalence is a stubborn source of errors in visual search tasks  

PubMed Central

In visual search tasks, observers look for targets in displays containing distractors. Likelihood that targets will be missed varies with target prevalence, the frequency with which targets are presented across trials. Miss error rates are much higher at low target prevalence (1–2%) than at high prevalence (50%). Unfortunately, low prevalence is characteristic of important search tasks like airport security and medical screening where miss errors are dangerous. A series of experiments show this prevalence effect is very robust. In signal detection terms, the prevalence effect can be explained as a criterion shift and not a change in sensitivity. Several efforts to induce observers to adopt a better criterion fail. However, a regime of brief retraining periods with high prevalence and full feedback allows observers to hold a good criterion during periods of low prevalence with no feedback. PMID:17999575

Wolfe, Jeremy M.; Horowitz, Todd S.; Van Wert, Michael J.; Kenner, Naomi M.; Place, Skyler S.; Kibbi, Nour

2009-01-01
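
The signal-detection account above, prevalence shifting the decision criterion rather than sensitivity, corresponds to computing d' and criterion c from hit and false-alarm rates at each prevalence level. A sketch with invented rates:

```python
from scipy.stats import norm

def sdt_params(hit_rate, fa_rate):
    """Sensitivity d' and criterion c from hit and false-alarm rates."""
    z_h, z_f = norm.ppf(hit_rate), norm.ppf(fa_rate)
    return z_h - z_f, -0.5 * (z_h + z_f)

# Hypothetical rates: similar d' at both prevalence levels, but a more
# conservative criterion (hence more misses) at low prevalence.
print("high prevalence:", sdt_params(hit_rate=0.90, fa_rate=0.10))
print("low prevalence: ", sdt_params(hit_rate=0.70, fa_rate=0.02))
```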

426

Two-Phase Pattern Search-based Learning Method for Multi-layer Neural Network  

NASA Astrophysics Data System (ADS)

A new multi-layer artificial neural network learning algorithm based on the pattern search method is proposed. The learning model has two phases: a pattern search phase and a local minimum-escaping phase. In the pattern search phase, the method iteratively performs a local search, minimizing the error measure directly along a set of descent directions and finding the nearest minimum efficiently. When the network gets stuck in a local minimum, the local minimum-escaping phase attempts to fill up the valley by modifying temperature parameters in the ascent direction of the error measure. The two phases are repeated until the network escapes local minima. The learning model is designed to provide a very simple and effective means of searching for minima of the objective function directly, without any knowledge of its derivatives. We test this algorithm on benchmark problems such as exclusive-or (XOR), parity, Arabic numeral recognition, and function approximation, as well as a real-world classification task. For all problems, the systems are shown to be trained efficiently by our method. As a simple direct search method, it can easily be applied in hardware implementations.

Wang, Xugang; Tang, Zheng; Tamura, Hiroki; Ishii, Masahiro

427

WHIDE—a web tool for visual data mining colocation patterns in multivariate bioimages  

PubMed Central

Motivation: Bioimaging techniques rapidly develop toward higher resolution and dimension. The increase in dimension is achieved by different techniques such as multitag fluorescence imaging, Matrix Assisted Laser Desorption / Ionization (MALDI) imaging or Raman imaging, which record for each pixel an N-dimensional intensity array, representing local abundances of molecules, residues or interaction patterns. The analysis of such multivariate bioimages (MBIs) calls for new approaches to support users in the analysis of both feature domains: space (i.e. sample morphology) and molecular colocation or interaction. In this article, we present our approach WHIDE (Web-based Hyperbolic Image Data Explorer) that combines principles from computational learning, dimension reduction and visualization in a free web application. Results: We applied WHIDE to a set of MBIs recorded using the multitag fluorescence imaging Toponome Imaging System. The MBIs show fields of view in tissue sections from a colon cancer study, and we compare tissue from normal/healthy colon with tissue classified as tumor. Our results show that WHIDE efficiently reduces the complexity of the data by mapping each of the pixels to a cluster, referred to as a Molecular Co-Expression Phenotype, and provides a structural basis for a sophisticated multimodal visualization, which combines topology preserving pseudocoloring with information visualization. The wide range of WHIDE's applicability is demonstrated with examples from toponome imaging, high content screens and MALDI imaging (shown in the Supplementary Material). Availability and implementation: The WHIDE tool can be accessed via the BioIMAX website http://ani.cebitec.uni-bielefeld.de/BioIMAX/; Login: whidetestuser; Password: whidetest. Supplementary information: Supplementary data are available at Bioinformatics online. Contact: tim.nattkemper@uni-bielefeld.de PMID:22390938

Kölling, Jan; Langenkämper, Daniel; Abouna, Sylvie; Khan, Michael; Nattkemper, Tim W.

2012-01-01

428

Serial, covert shifts of attention during visual search are reflected by the frontal eye fields and correlated with population oscillations.  

PubMed

Attention regulates the flood of sensory information into a manageable stream, and so understanding how attention is controlled is central to understanding cognition. Competing theories suggest visual search involves serial and/or parallel allocation of attention, but there is little direct, neural evidence for either mechanism. Two monkeys were trained to covertly search an array for a target stimulus under visual search (endogenous) and pop-out (exogenous) conditions. Here, we present neural evidence in the frontal eye fields (FEF) for serial, covert shifts of attention during search but not pop-out. Furthermore, attention shifts reflected in FEF spiking activity were correlated with 18-34 Hz oscillations in the local field potential, suggesting a "clocking" signal. This provides direct neural evidence that primates can spontaneously adopt a serial search strategy and that these serial covert shifts of attention are directed by the FEF. It also suggests that neuron population oscillations may regulate the timing of cognitive processing. PMID:19679077

Buschman, Timothy J; Miller, Earl K

2009-08-13

429

iPixel: a visual content-based and semantic search engine for retrieving digitized mammograms by using collective intelligence.  

PubMed

Nowadays, traditional search engines such as Google, Yahoo and Bing facilitate the retrieval of information in the format of images, but the results are not always useful for the users. This is mainly due to two problems: (1) the semantic keywords are not taken into consideration and (2) it is not always possible to establish a query using the image features. This issue has been covered in different domains in order to develop content-based image retrieval (CBIR) systems. The expert community has focussed their attention on the healthcare domain, where a lot of visual information for medical analysis is available. This paper provides a solution called iPixel Visual Search Engine, which involves semantics and content issues in order to search for digitized mammograms. iPixel offers the possibility of retrieving mammogram features using collective intelligence and implementing a CBIR algorithm. Our proposal compares not only features with similar semantic meaning, but also visual features. In this sense, the comparisons are made in different ways: by the number of regions per image, by maximum and minimum size of regions per image and by average intensity level of each region. iPixel Visual Search Engine supports the medical community in differential diagnoses related to the diseases of the breast. The iPixel Visual Search Engine has been validated by experts in the healthcare domain, such as radiologists, in addition to experts in digital image analysis. PMID:22656866

Alor-Hernández, Giner; Pérez-Gallardo, Yuliana; Posada-Gómez, Rubén; Cortes-Robles, Guillermo; Rodríguez-González, Alejandro; Aguilar-Laserre, Alberto A

2012-09-01

430

Incidental learning speeds visual search by lowering response thresholds, not by improving efficiency: Evidence from eye movements  

PubMed Central

When observers search for a target object, they incidentally learn the identities and locations of “background” objects in the same display. This learning can facilitate search performance, eliciting faster reaction times for repeated displays (Hout & Goldinger, 2010). Despite these findings, visual search has been successfully modeled using architectures that maintain no history of attentional deployments; they are amnesic (e.g., Guided Search Theory; Wolfe, 2007). In the current study, we asked two questions: 1) under what conditions does such incidental learning occur? And 2) what does viewing behavior reveal about the efficiency of attentional deployments over time? In two experiments, we tracked eye movements during repeated visual search, and we tested incidental memory for repeated non-target objects. Across conditions, the consistency of search sets and spatial layouts were manipulated to assess their respective contributions to learning. Using viewing behavior, we contrasted three potential accounts for faster searching with experience. The results indicate that learning does not result in faster object identification or greater search efficiency. Instead, familiar search arrays appear to allow faster resolution of search decisions, whether targets are present or absent. PMID:21574743

Hout, Michael C.; Goldinger, Stephen D.

2011-01-01

431

Structator: fast index-based search for RNA sequence-structure patterns  

PubMed Central

Background The secondary structure of RNA molecules is intimately related to their function and often more conserved than the sequence. Hence, the important task of searching databases for RNAs requires to match sequence-structure patterns. Unfortunately, current tools for this task have, in the best case, a running time that is only linear in the size of sequence databases. Furthermore, established index data structures for fast sequence matching, like suffix trees or arrays, cannot benefit from the complementarity constraints introduced by the secondary structure of RNAs. Results We present a novel method and readily applicable software for time efficient matching of RNA sequence-structure patterns in sequence databases. Our approach is based on affix arrays, a recently introduced index data structure, preprocessed from the target database. Affix arrays support bidirectional pattern search, which is required for efficiently handling the structural constraints of the pattern. Structural patterns like stem-loops can be matched inside out, such that the loop region is matched first and then the pairing bases on the boundaries are matched consecutively. This allows to exploit base pairing information for search space reduction and leads to an expected running time that is sublinear in the size of the sequence database. The incorporation of a new chaining approach in the search of RNA sequence-structure patterns enables the description of molecules folding into complex secondary structures with multiple ordered patterns. The chaining approach removes spurious matches from the set of intermediate results, in particular of patterns with little specificity. In benchmark experiments on the Rfam database, our method runs up to two orders of magnitude faster than previous methods. Conclusions The presented method's sublinear expected running time makes it well suited for RNA sequence-structure pattern matching in large sequence databases. RNA molecules containing several stem-loop substructures can be described by multiple sequence-structure patterns and their matches are efficiently handled by a novel chaining method. Beyond our algorithmic contributions, we provide with Structator a complete and robust open-source software solution for index-based search of RNA sequence-structure patterns. The Structator software is available at http://www.zbh.uni-hamburg.de/Structator. PMID:21619640

2011-01-01

432

Orientation-selective adaptation to first- and second-order patterns in human visual cortex  

PubMed Central

Second-order textures – patterns that cannot be detected by mechanisms sensitive only to luminance changes – are ubiquitous in visual scenes, but the neuronal mechanisms mediating perception of such stimuli are not well understood. We used an adaptation protocol to measure neural activity in the human brain selective for the orientation of second-order textures. FMRI responses were measured in three subjects to presentations of first- and second-order probe gratings after adapting to a high-contrast first- or second-order grating that was either parallel or orthogonal to the probe gratings. First-order (LM) stimuli were generated by modulating the stimulus luminance. Second-order stimuli were generated by modulating the contrast (CM) or orientation (OM) of a first-order carrier. We used four combinations of adapter and probe stimuli: LM:LM, CM:CM, OM:OM, and LM:OM. The fourth condition tested for cross-modal adaptation with first-order adapter and second-order probe stimuli. Attention was diverted from the stimulus by a demanding task at fixation. Both first- and second-order stimuli elicited orientation-selective adaptation in multiple cortical visual areas, including V1, V2, V3, V3A/B, a newly identified visual area anterior to dorsal V3 which we have termed LO1, hV4, and VO1. For first-order stimuli (condition LM:LM), the adaptation was no larger in extrastriate areas than in V1, implying that the orientation-selective first-order (luminance) adaptation originated in V1. For second-order stimuli (conditions CM:CM and OM:OM), the magnitude of adaptation, relative to the absolute response magnitude, was significantly larger in VO1 (and for condition CM:CM, also in V3A/B and LO1) than in V1, suggesting that second-order stimulus orientation was extracted by additional processing after V1. There was little difference in the amplitude of adaptation between the second-order conditions. No consistent effect of adaptation was found in the cross-modal condition LM:OM, in agreement with psychophysical evidence for weak interactions between first- and second-order stimuli and computational models of separate mechanisms for first- and second-order visual processing. PMID:16221748

Larsson, Jonas; Landy, Michael S.; Heeger, David J.

2006-01-01

433

Memory models of visual search: searching in-the-head vs. in-the-world?

E-print Network

Memory models of visual search: searching in-the-head vs. in-the-world? Hansjörg Neth, Rensselaer Polytechnic Institute. Abstract: Visual search takes place whenever we are looking for something…

Gray, Wayne

434

Accuracy of Using Visual Identification of White Sharks to Estimate Residency Patterns  

PubMed Central

Determining the residency of an aquatic species is important but challenging, and it remains unclear which sampling methodology is best. Photo-identification has been used extensively to estimate patterns of animals' residency and is arguably the most common approach, but it may not be the most effective approach in marine environments. To examine this, in 2005, we deployed acoustic transmitters on 22 white sharks (Carcharodon carcharias) in Mossel Bay, South Africa to quantify the probability of detecting these tagged sharks by photo-identification and different deployment strategies of acoustic telemetry equipment. Using the data collected by the different sampling approaches (detections from an acoustic listening station deployed under a chumming vessel versus those from visual sightings and photo-identification), we quantified the methodologies' probability of detection and determined if the sampling approaches, also including an acoustic telemetry array, produce comparable results for patterns of residency. Photo-identification had the lowest probability of detection and underestimated residency. The underestimation is driven by various factors, primarily that acoustic telemetry monitors a large area, which reduces the occurrence of false negatives. Therefore, we propose that researchers need to use acoustic telemetry and also continue to develop new sampling approaches, as photo-identification techniques are inadequate to determine residency. Using the methods presented in this paper will allow researchers to further refine sampling approaches that enable them to collect more accurate data that will result in better research and more informed management efforts and policy decisions. PMID:22514662

Delaney, David G.; Johnson, Ryan; Bester, Marthán N.; Gennari, Enrico

2012-01-01

435

Green fluorescent protein/beta-galactosidase double reporters for visualizing Drosophila gene expression patterns.  

PubMed

We characterized 120 novel yeast GAL4-targeted enhancer trap lines in Drosophila using upstream activating sequence (UAS) reporter plasmids incorporating newly constructed fusions of Aequorea victoria green fluorescent protein (GFP) and Escherichia coli beta-galactosidase genes. Direct comparisons of GFP epifluorescence and beta-galactosidase staining revealed that both proteins function comparably to their unconjugated counterparts within a wide variety of Drosophila tissues. Generally, both reporters accumulated in similar patterns within individual lines, but in some tissues, e.g., brain, GFP staining was more reliable than that of beta-galactosidase, whereas in other tissues, most notably testes and ovaries, the converse was true. In cases of weak enhancers, we occasionally could detect beta-galactosidase staining in the absence of discernible GFP fluorescence. This shortcoming of GFP can, in most cases, be alleviated by using the more efficient S65T GFP derivative. The GFP/beta-gal reporter fusion protein facilitated monitoring several aspects of protein accumulation. In particular, the ability to visualize GFP fluorescence enhances recognition of global static and dynamic patterns in live animals, whereas beta-galactosidase histochemistry affords sensitive high resolution protein localization. We present a catalog of GAL4-expressing strains that will be useful for investigating several aspects of Drosophila melanogaster cell and developmental biology. PMID:9254908

Timmons, L; Becker, J; Barthmaier, P; Fyrberg, C; Shearn, A; Fyrberg, E

1997-01-01

436

Visualization on flow patterns during condensation of R410A in a vertical rectangular channel  

NASA Astrophysics Data System (ADS)

Visualization experiments on HFC R410A condensation in a vertical rectangular channel (14.34 mm hydraulic diameter, 160 mm length) were conducted. The flow patterns and heat transfer coefficients of condensation in the inlet region are presented in this paper. Better heat transfer performance can be obtained in the inlet region, and flow regime transition in other regions of the channel was also observed. Condensation experiments were carried out at different mass fluxes (from 1.6 kg/h to 5.2 kg/h) and at a saturation temperature of 28°C. It was found that the flow patterns were mainly dominated by gravity at low mass fluxes. The effects of interfacial shear stress on condensate fluctuation are significant for film condensation at higher mass flux in vertical flow, and consequently, the condensation heat transfer coefficient increases with the mass flux under the experimental conditions. The drop formation and growth process of condensation were also observed at considerably low refrigerant vapor flow rates.

Xu, Wenyun; Jia, Li

2014-06-01

437

Gene Expression Browser: large-scale and cross-experiment microarray data integration, management, search & visualization  

PubMed Central

Background In the last decade, a large amount of microarray gene expression data has been accumulated in public repositories. Integrating and analyzing high-throughput gene expression data have become key activities for exploring gene functions, gene networks and biological pathways. Effectively utilizing these invaluable microarray data remains challenging due to a lack of powerful tools to integrate large-scale gene-expression information across diverse experiments and to search and visualize a large number of gene-expression data points. Results Gene Expression Browser is a microarray data integration, management and processing system with web-based search and visualization functions. An innovative method has been developed to define a treatment over a control for every microarray experiment to standardize and make microarray data from different experiments homogeneous. In the browser, data are pre-processed offline and the resulting data points are visualized online with a 2-layer dynamic web display. Users can view all treatments over control that affect the expression of a selected gene via Gene View, and view all genes that change in a selected treatment over control via treatment over control View. Users can also check the changes of expression profiles of a set of either the treatments over control or genes via Slide View. In addition, the relationships between genes and treatments over control are computed according to gene expression ratio and are shown as co-responsive genes and co-regulation treatments over control. Conclusion Gene Expression Browser is composed of a set of software tools, including a data extraction tool, a microarray data-management system, a data-annotation tool, a microarray data-processing pipeline, and a data search & visualization tool. The browser is deployed as a free public web service (http://www.ExpressionBrowser.com) that integrates 301 ATH1 gene microarray experiments from public data repositories (viz. the Gene Expression Omnibus repository at the National Center for Biotechnology Information and Nottingham Arabidopsis Stock Center). The set of Gene Expression Browser software tools can be easily applied to the large-scale expression data generated by other platforms and in other species. PMID:20727159
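As a rough illustration of the "treatment over control" representation and the co-responsive gene idea described above, the following sketch reduces each experiment to a per-gene log ratio and ranks genes by correlation of their ratio profiles. The function names, the log2 choice, and the correlation measure are assumptions for illustration, not the Gene Expression Browser pipeline.

```python
import numpy as np

def treatment_over_control(treat, control, eps=1e-9):
    """Per-gene log2 expression ratios for one experiment (treatment vs. control)."""
    return np.log2((treat + eps) / (control + eps))

def co_responsive(ratio_matrix, gene_idx, top_k=5):
    """ratio_matrix: genes x experiments array of log2 ratios.
    Return indices of the top_k genes whose ratio profiles correlate most
    strongly with the profile of gene_idx."""
    target = ratio_matrix[gene_idx]
    corr = np.array([np.corrcoef(target, row)[0, 1] for row in ratio_matrix])
    corr = np.nan_to_num(corr, nan=-1.0)   # constant profiles: treat as uncorrelated
    corr[gene_idx] = -np.inf               # exclude the query gene itself
    return np.argsort(corr)[::-1][:top_k]
```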

2010-01-01

438

Autism spectrum disorder, but not amygdala lesions, impairs social attention in visual search  

PubMed Central

People with autism spectrum disorders (ASD) have pervasive impairments in social interactions, a diagnostic component that may have its roots in atypical social motivation and attention. One of the brain structures implicated in the social abnormalities seen in ASD is the amygdala. To further characterize the impairment of people with ASD in social attention, and to explore the possible role of the amygdala, we employed a series of visual search tasks with both social (faces and people with different postures, emotions, ages, and genders) and non-social stimuli (e.g., electronics, food, and utensils). We first conducted trial-wise analyses of fixation properties and elucidated visual search mechanisms. We found that an attentional mechanism of initial orientation could explain the detection advantage of non-social targets. We then zoomed into fixation-wise analyses. We defined target-relevant effects as the difference in the percentage of fixations that fell on target-congruent vs. target-incongruent items in the array. In Experiment 1, we tested 8 high-functioning adults with ASD, 3 adults with focal bilateral amygdala lesions, and 19 controls. Controls rapidly oriented to target-congruent items and showed a strong and sustained preference for fixating them. Strikingly, people with ASD oriented significantly less and more slowly to target-congruent items, an attentional deficit especially with social targets. By contrast, patients with amygdala lesions performed indistinguishably from controls. In Experiment 2, we recruited a different sample of 13 people with ASD and 8 healthy controls, and tested them on the same search arrays but with all array items equalized for low-level saliency. The results replicated those of Experiment 1. In Experiment 3, we recruited 13 people with ASD, 8 healthy controls, 3 amygdala lesion patients and another group of 11 controls and tested them on a simpler array. Here our group effect for ASD strongly diminished and all four subject groups showed similar target-relevant effects. These findings argue for an attentional deficit in ASD that is disproportionate for social stimuli, cannot be explained by low-level visual properties of the stimuli, and is more severe with high-load top-down task demands. Furthermore, this deficit appears to be independent of the amygdala, and not evident from general social bias independent of the target-directed search. PMID:25218953

Wang, Shuo; Xu, Juan; Jiang, Ming; Zhao, Qi; Hurlemann, Rene; Adolphs, Ralph

2015-01-01

439

Urinary oxytocin positively correlates with performance in facial visual search in unmarried males, without specific reaction to infant face  

PubMed Central

The neuropeptide oxytocin plays a central role in prosocial and parental behavior in non-human mammals as well as humans. It has been suggested that oxytocin may affect visual processing of infant faces and emotional reaction to infants. Healthy male volunteers (N = 13) were tested for their ability to detect infant or adult faces among adult or infant faces (facial visual search task). Urine samples were collected from all participants before the study to measure the concentration of oxytocin. Urinary oxytocin positively correlated with performance in the facial visual search task. However, task performance and its correlation with oxytocin concentration did not differ between infant faces and adult faces. Our data suggests that endogenous oxytocin is related to facial visual cognition, but does not promote infant-specific responses in unmarried men who are not fathers. PMID:25120420

Saito, Atsuko; Hamada, Hiroki; Kikusui, Takefumi; Mogi, Kazutaka; Nagasawa, Miho; Mitsui, Shohei; Higuchi, Takashi; Hasegawa, Toshikazu; Hiraki, Kazuo

2014-01-01

440

A Search Patterns Switching Algorithm for Block Motion Estimation

E-print Network

In this paper, an adaptive search patterns switching algorithm for block motion estimation is presented; its performance depends on the accuracy of its motion content classification. Pattern-based methods, such as gradient descent search and diamond search, can perform much better than coarse-to-fine search algorithms…
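For readers unfamiliar with the fixed search patterns mentioned in this record, the following is a minimal diamond search sketch. It is not the paper's switching algorithm, which classifies motion content and then selects among patterns; the block size, the SAD cost, and all function names are assumptions.

```python
import numpy as np

# Large and small diamond search patterns (offsets relative to the centre).
LDSP = [(0, 0), (0, -2), (0, 2), (-2, 0), (2, 0), (-1, -1), (1, -1), (-1, 1), (1, 1)]
SDSP = [(0, 0), (0, -1), (0, 1), (-1, 0), (1, 0)]

def sad(ref, cur, bx, by, dx, dy, bs):
    """Sum of absolute differences between the current block at (bx, by)
    and the reference block displaced by (dx, dy)."""
    h, w = ref.shape
    x, y = bx + dx, by + dy
    if x < 0 or y < 0 or x + bs > w or y + bs > h:
        return np.inf                       # candidate falls outside the frame
    cur_blk = cur[by:by + bs, bx:bx + bs].astype(int)
    ref_blk = ref[y:y + bs, x:x + bs].astype(int)
    return np.abs(cur_blk - ref_blk).sum()

def diamond_search(ref, cur, bx, by, bs=16):
    """Return the motion vector (dx, dy) that minimizes SAD for one block."""
    cx, cy = 0, 0
    best = sad(ref, cur, bx, by, 0, 0, bs)
    while True:
        # evaluate the large diamond around the current centre
        costs = [(sad(ref, cur, bx, by, cx + dx, cy + dy, bs), dx, dy) for dx, dy in LDSP]
        cost, dx, dy = min(costs)
        if (dx, dy) == (0, 0) or cost >= best:
            break                           # centre is still best: stop moving
        best, cx, cy = cost, cx + dx, cy + dy
    # one final refinement step with the small diamond
    costs = [(sad(ref, cur, bx, by, cx + dx, cy + dy, bs), dx, dy) for dx, dy in SDSP]
    cost, dx, dy = min(costs)
    if cost < best:
        cx, cy = cx + dx, cy + dy
    return cx, cy
```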

Po, Lai-Man

441

Visual Search Strategies of Soccer Players Executing a Power vs. Placement Penalty Kick  

PubMed Central

Introduction When taking a soccer penalty kick, there are two distinct kicking techniques that can be adopted; a ‘power’ penalty or a ‘placement’ penalty. The current study investigated how the type of penalty kick being taken affected the kicker’s visual search strategy and where the ball hit the goal (end ball location). Method Wearing a portable eye tracker, 12 university footballers executed 2 power and placement penalty kicks, indoors, both with and without the presence of a goalkeeper. Video cameras were used to determine initial ball velocity and end ball location. Results When taking the power penalty, the football was kicked significantly harder and more centrally in the goal compared to the placement penalty. During the power penalty, players fixated longer on the football and more often at the goalkeeper (and by implication the middle of the goal), whereas in the placement penalty they fixated longer on the goal, specifically its edges. Findings remained consistent irrespective of goalkeeper presence. Discussion/conclusion Findings indicate differences in visual search strategy and end ball location as a function of type of penalty kick. When taking the placement penalty, players fixated and kicked the football to the edges of the goal in an attempt to direct the ball to an area that the goalkeeper would have difficulty reaching and saving. Fixating significantly longer on the football when taking the power compared to the placement penalty indicates a greater importance of obtaining visual information from the football. This can be attributed to ensuring accurate foot-to-ball contact and subsequent generation of ball velocity. Aligning gaze and kicking the football centrally in the goal when executing the power compared to the placement penalty may have been a strategy to reduce the risk of kicking wide of the goal altogether. PMID:25517405

Timmis, Matthew A.; Turner, Kieran; van Paridon, Kjell N.

2014-01-01

442

A Performance Analysis of Evolutionary Pattern Search with Generalized Mutation Steps  

E-print Network

This paper revisits an earlier performance analysis of evolutionary pattern search algorithms (EPSAs) and extends it to a more general model of mutation. We evaluate experimentally how the choice of the set of mutation offsets affects optimization performance…

Sadeh, Norman M.

443

Neurophysiological correlates of relatively enhanced local visual search in autistic adolescents  

PubMed Central

Previous studies found normal or even superior performance of autistic patients on visuospatial tasks requiring local search, like the Embedded Figures Task (EFT). A well-known interpretation of this is “weak central coherence”, i.e. autistic patients may show a reduced general ability to process information in its context and may therefore have a tendency to favour local over global aspects of information processing. An alternative view is that the local processing advantage in the EFT may result from a relative amplification of early perceptual processes which boosts processing of local stimulus properties but does not affect processing of global context. This study used functional magnetic resonance imaging (fMRI) in 12 autistic adolescents (9 Asperger and 3 high-functioning autistic patients) and 12 matched controls to help distinguish, on neurophysiological grounds, between these two accounts of EFT performance in autistic patients. Behaviourally, we found autistic individuals to be unimpaired during the EFT while they were significantly worse at performing a closely matched control task with minimal local search requirements. The fMRI results showed that activations specific for the local search aspects of the EFT were left-lateralised in parietal and premotor areas for the control group (as previously demonstrated for adults), whereas for the patients these activations were found in right primary visual cortex and bilateral extrastriate areas. These results suggest that enhanced local processing in early visual areas, as opposed to impaired processing of global context, is characteristic for performance of the EFT by autistic patients. PMID:17240169

Manjaly, Zina M.; Bruning, Nicole; Neufang, Susanne; Stephan, Klaas E.; Brieber, Sarah; Marshall, John C.; Kamp-Becker, Inge; Remschmidt, Helmut; Herpertz-Dahlmann, Beate; Konrad, Kerstin; Fink, Gereon R.

2007-01-01

444

Social and Non-Social Visual Attention Patterns and Associative Learning in Infants at Risk for Autism  

ERIC Educational Resources Information Center

Background: Social inattention is common in children with autism whereas associative learning capabilities are considered a relative strength. Identifying early precursors of impairment associated with autism could lead to earlier identification of this disorder. The present study compared social and non-social visual attention patterns as well as…

Bhat, A. N.; Galloway, J. C.; Landa, R. J.

2010-01-01

445

Twin-scale Vernier Micro-Pattern for Visual Measurement of 1D in-plane Absolute Displacements

E-print Network

The twin-scale pattern combines two scales with different periods in order to encode the period order within the phase difference observed between the two scales. Measurement performance is bounded by the precision of the optics and imager, by substrate vibrations, and by the quantum nature of light…

Paris-Sud XI, Université de

446

Age and distraction are determinants of performance on a novel visual search task in aged Beagle dogs.  

PubMed

Aging has been shown to disrupt performance on tasks that require intact visual search and discrimination abilities in human studies. The goal of the present study was to determine if canines show age-related decline in their ability to perform a novel simultaneous visual search task. Three groups of canines were included: a young group (N = 10; 3 to 4.5 years), an old group (N = 10; 8 to 9.5 years), and a senior group (N = 8; 11 to 15.3 years). Subjects were first tested for their ability to learn a simple two-choice discrimination task, followed by the visual search task. Attentional demands in the task were manipulated by varying the number of distracter items; dogs received an equal number of trials with either zero, one, two, or three distracters. Performance on the two-choice discrimination task varied with age, with senior canines making significantly more errors than the young. Performance accuracy on the visual search task also varied with age; senior animals were significantly impaired compared to both the young and old, and old canines were intermediate in performance between young and senior. Accuracy decreased significantly with added distracters in all age groups. These results suggest that aging impairs the ability of canines to discriminate between task-relevant and -irrelevant stimuli. This is likely to be derived from impairments in cognitive domains such as visual memory and learning and selective attention. PMID:21336566

Snigdha, Shikha; Christie, Lori-Ann; De Rivera, Christina; Araujo, Joseph A; Milgram, Norton W; Cotman, Carl W

2012-02-01

447

In primate visual area V2, histochemical staining for cytochrome oxidase (CO) reveals a tripartite pattern of densely labeled thick and  

E-print Network

In primate visual area V2, histochemical staining for cytochrome oxidase (CO) reveals a tripartite pattern of stripes. Here, we studied the overall pattern of CO stripes in V2 of the macaque monkey, using … parallel to the anterior border of V2. These differences imply an asymmetry in how the visual field maps…

Van Essen, David

448

Search of Possible Triggered Seismicity Patterns of Northern Tien Shan  

NASA Astrophysics Data System (ADS)

Statistical analysis of the Northern Tien Shan seismicity was performed considering the possible triggering impacts of natural and man-made mechanical and electromagnetic factors on seismic activity. Strong distant earthquakes, lunar-solar tides, and magnetic storms are considered as natural triggering factors. The man-made factors include underground nuclear explosions (UNE) and electromagnetic impacts provided by high-power magnetohydrodynamic pulsed (MHD) generators. The representative local earthquake catalog of the region under study (41°-46° N, 74°-82° E) includes 15577 events of M>1.67 from 1975 to 2000. Within this time period 330 UNE and 109 firing runs of MHD generators, which are considered as the possible man-made earthquake triggering factors, were performed within or adjacent to the analyzed region. Various statistical methods (cross-correlation, spectral analysis, RTL-analysis, etc.) were employed. For the given problem statement and initial data, no statistically significant patterns of triggered seismicity of the Northern Tien Shan due to impacts of UNE and MHD generators were found. Common periods of seismicity variation for the time series of distant strong earthquakes and local seismic events were identified. There is a significant number of common periods (7, 9, 14, 28, 186, and 16384 days) in the variation of the z-component of the earth tide and the release of seismic energy, which may point to an influence of the earth tides on the local seismicity.

Novikov, V.; Vorontsova, E.

2007-12-01

449

Iso-orientation domains in cat visual cortex are arranged in pinwheel-like patterns.  

PubMed

The mammalian cortex is organized in a columnar fashion: neurons lying below each other from the pia to the white matter usually share many functional properties. Across the cortical surface, cells with similar response properties are also clustered together, forming elongated bands or patches. Some response properties, such as orientation preference in the visual cortex, change gradually across the cortical surface forming 'orientation maps'. To determine the precise layout of iso-orientation domains, knowledge of responses not only to one but to many stimulus orientations is essential. Therefore, the exact depiction of orientation maps has been hampered by technical difficulties and remained controversial for almost thirty years. Here we use in vivo optical imaging based on intrinsic signals to gather information on the responses of a piece of cortex to gratings in many different orientations. This complete set of responses then provides detailed information on the structure of the orientation map in a large patch of cortex from area 18 of the cat. We find that cortical regions that respond best to one orientation form highly ordered patches rather than elongated bands. These iso-orientation patches are organized around 'orientation centres', producing pinwheel-like patterns in which the orientation preference of cells is changing continuously across the cortex. We have also analysed our data for fast changes in orientation preference and find that these 'fractures' are limited to the orientation centres. The pinwheels and orientation centres are such a prominent organizational feature that it should be important to understand their development as well as their function in the processing of visual information. PMID:1896085

Bonhoeffer, T; Grinvald, A

1991-10-01

450

Visualization.  

ERIC Educational Resources Information Center

Discusses the nature and role of visualization in the study of mathematics. Also includes a list of seven broad methods (often referred to as the design spectrum) by which individuals solve problems. (JN)

Sharma, Mahesh C.

1985-01-01

451

Separability of abstract-category and specific-exemplar visual object subsystems: Evidence from fMRI pattern analysis.  

PubMed

Previous research indicates that dissociable neural subsystems underlie abstract-category (AC) recognition and priming of objects (e.g., cat, piano) and specific-exemplar (SE) recognition and priming of objects (e.g., a calico cat, a different calico cat, a grand piano, etc.). However, the degree of separability between these subsystems is not known, despite the importance of this issue for assessing relevant theories. Visual object representations are widely distributed in visual cortex, thus a multivariate pattern analysis (MVPA) approach to analyzing functional magnetic resonance imaging (fMRI) data may be critical for assessing the separability of different kinds of visual object processing. Here we examined the neural representations of visual object categories and visual object exemplars using multi-voxel pattern analyses of brain activity elicited in visual object processing areas during a repetition-priming task. In the encoding phase, participants viewed visual objects and the printed names of other objects. In the subsequent test phase, participants identified objects that were either same-exemplar primed, different-exemplar primed, word-primed, or unprimed. In visual object processing areas, classifiers were trained to distinguish same-exemplar primed objects from word-primed objects. Then, the abilities of these classifiers to discriminate different-exemplar primed objects and word-primed objects (reflecting AC priming) and to discriminate same-exemplar primed objects and different-exemplar primed objects (reflecting SE priming) was assessed. Results indicated that (a) repetition priming in occipital-temporal regions is organized asymmetrically, such that AC priming is more prevalent in the left hemisphere and SE priming is more prevalent in the right hemisphere, and (b) AC and SE subsystems are weakly modular, not strongly modular or unified. PMID:25528436

McMenamin, Brenton W; Deason, Rebecca G; Steele, Vaughn R; Koutstaal, Wilma; Marsolek, Chad J

2015-02-01

452

Flexible Feature-Based Inhibition in Visual Search Mediates Magnified Impairments of Selection: Evidence from Carry-Over Effects under Dynamic Preview-Search Conditions  

ERIC Educational Resources Information Center

Evidence for inhibitory processes in visual search comes from studies using preview conditions, where responses to new targets are delayed if they carry a featural attribute belonging to the old distractor items that are currently being ignored--the negative carry-over effect (Braithwaite, Humphreys, & Hodsoll, 2003). We examined whether…

Andrews, Lucy S.; Watson, Derrick G.; Humphreys, Glyn W.; Braithwaite, Jason J.

2011-01-01

453

Pattern recognition-assisted infrared library searching of automotive clear coats.  

PubMed

Pattern recognition techniques have been developed to search the infrared (IR) spectral libraries of the paint data query (PDQ) database to differentiate between similar but nonidentical IR clear coat paint spectra. The library search system consists of two separate but interrelated components: search prefilters to reduce the size of the IR library to a specific assembly plant or plants corresponding to the unknown paint sample and a cross-correlation searching algorithm to identify IR spectra most similar to the unknown in the subset of spectra identified by the prefilters. To develop search prefilters with the necessary degree of accuracy, IR spectra from the PDQ database were preprocessed using wavelets to enhance subtle but significant features in the data. Wavelet coefficients characteristic of the assembly plant of the vehicle were identified using a genetic algorithm for pattern recognition and feature selection. A search algorithm was then used to cross-correlate the unknown with each IR spectrum in the subset of library spectra identified by the search prefilters. Each cross-correlated IR spectrum was simultaneously compared to an autocorrelated IR spectrum of the unknown using several spectral windows that span different regions of the cross-correlated and autocorrelated data from the midpoint. The top five hits identified in each search window are compiled, and a histogram is computed that summarizes the frequency of occurrence for each selected library sample. The five library samples with the highest frequency of occurrence are selected as potential hits. Even in challenging trials where the clear coat paint samples evaluated were all the same make (e.g., General Motors) within a limited production year range, the model of the automobile from which the unknown paint sample was obtained could be identified from its IR spectrum. PMID:25506887
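A schematic rendering of the window-voting step described above might look like the sketch below. The similarity measure and function names are assumptions for illustration, and the PDQ search prefilters (wavelet features selected by a genetic algorithm for pattern recognition) are not shown.

```python
import numpy as np

def window_score(unknown, library_spectrum, window):
    """Compare the cross-correlation (unknown vs. library) with the
    autocorrelation of the unknown inside one spectral window."""
    lo, hi = window
    cross = np.correlate(unknown[lo:hi], library_spectrum[lo:hi], mode="full")
    auto = np.correlate(unknown[lo:hi], unknown[lo:hi], mode="full")
    return float(np.dot(cross, auto) / (np.linalg.norm(cross) * np.linalg.norm(auto)))

def search(unknown, library, windows, hits_per_window=5, final_hits=5):
    """Tally the top library entries per window; return the most frequent hits."""
    votes = np.zeros(len(library), dtype=int)
    for window in windows:
        scores = [window_score(unknown, spec, window) for spec in library]
        for idx in np.argsort(scores)[::-1][:hits_per_window]:
            votes[idx] += 1
    return np.argsort(votes)[::-1][:final_hits]   # library indices of potential hits
```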

Fasasi, Ayuba; Mirjankar, Nikhil; Stoian, Razvan-Ionut; White, Collin; Allen, Matthew; Sandercock, Mark P; Lavine, Barry K

2015-01-01

454

Effect of sevoflurane concentration on visual evoked potentials with pattern stimulation in dogs.  

PubMed

The purpose of this study was to investigate the effects of sevoflurane concentration on canine visual evoked potentials with pattern stimulation (P-VEPs). Six clinically normal laboratory beagle dogs were used. The minimum alveolar concentration (MAC) of sevoflurane was determined in all subjects by the tail-clamp method. The refractive power of the right eyes of all subjects was corrected to -2 diopters after skiascopy. For P-VEP recording, the recording and reference electrodes were positioned at the inion and nasion, respectively, and the earth electrode was positioned on the inner surface. To objectively assess the degree of CNS suppression, the bispectral index (BIS) value was used. The stimulus pattern size and distance for VEP recording were constant, 50.3 arc-min and 50 cm, respectively. P-VEPs and BIS values were recorded under sevoflurane-in-oxygen inhalational anesthesia at 0.5, 1.0, 1.5, 2.0, 2.5 and 2.75 sevoflurane MAC. For analysis of P-VEP, the P100 implicit time and N75-P100 amplitude were estimated. P-VEPs were detected at 0.5 to 1.5 MAC in all dogs, and disappeared at 2.0 MAC in four dogs and at 2.5 and 2.75 MAC in one dog each. The BIS value decreased with increasing sevoflurane MAC, and burst suppression began to appear from 1.5 MAC. There was no significant change in P100 implicit time or N75-P100 amplitude at any concentration of sevoflurane. At concentrations around 1.5 MAC, which are used routinely to immobilize dogs, sevoflurane showed no effect on P-VEP. PMID:25373729

Ito, Yosuke; Maehara, Seiya; Itoh, Yoshiki; Hayashi, Miri; Kubo, Akira; Itami, Takaharu; Ishizuka, Tomohito; Tamura, Jun; Yamashita, Kazuto

2014-11-01

455

Probability cueing influences miss rate and decision criterion in visual searches  

PubMed Central

In visual search tasks, the ratio of target-present to target-absent trials has an important effect on miss rates. The low prevalence effect indicates that we are more likely to miss a target when it occurs rarely rather than frequently. In this study, we examined whether probability cueing modulates the miss rate and the observer's criterion. The results indicated that probability cueing affects miss rates, the average observer's criterion, and reaction time for target-absent trials. These results clearly demonstrate that probability cueing modulates two parameters (i.e., the decision criterion and the quitting threshold) and produces a low prevalence effect. Taken together, the current study and previous studies suggest that the miss rate is not just affected by global prevalence; it is also affected by probability cueing. PMID:25469223
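For readers unfamiliar with the decision-criterion terminology used above, a standard signal-detection computation of sensitivity (d') and criterion (c) from hit and false-alarm rates is sketched below. The example rates are invented and this is not the paper's analysis code.

```python
from scipy.stats import norm

def dprime_and_criterion(hit_rate, fa_rate):
    """Classical equal-variance signal-detection estimates from one condition."""
    z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    d_prime = z_hit - z_fa
    criterion = -0.5 * (z_hit + z_fa)      # positive c = conservative (more misses)
    return d_prime, criterion

print(dprime_and_criterion(0.90, 0.10))    # hypothetical high-prevalence-like data
print(dprime_and_criterion(0.60, 0.02))    # hypothetical miss-prone, conservative data
```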

Ishibashi, Kazuya; Kita, Shinichi

2014-01-01

456

Visual Search Strategies of Tag Clouds - Results from an Eyetracking Study  

NASA Astrophysics Data System (ADS)

Tag clouds have become a frequently used interaction technique on the web in the past couple of years. Research has shown the influence of variables such as tag size and location on the perception of tag clouds. However, several questions remain unclear. First, little is known about how tag clouds are perceived visually and which search strategies users apply when looking for tags in a tag cloud. Second, for several variables, especially tag location, prior work comes to conflicting results. Third, several approaches to present tag clouds with the tags semantically clustered have been proposed recently. However, it remains unclear which effects these new approaches have on the perception of tag clouds. In this paper we report the results of an extensive study on the perception of tag clouds, using eye tracking technology, that allows us to answer these questions.

Schrammel, Johann; Deutsch, Stephanie; Tscheligi, Manfred

457

HSI-Find: A Visualization and Search Service for Terascale Spectral Image Catalogs  

NASA Astrophysics Data System (ADS)

Imaging spectrometers are remote sensing instruments commonly deployed on aircraft and spacecraft. They provide surface reflectance in hundreds of wavelength channels, creating data cubes known as hyperspectral images. They provide rich compositional information, making them powerful tools for planetary and terrestrial science. These data products can be challenging to interpret because they contain datapoints numbering in the thousands (Dawn VIR) or millions (AVIRIS-C). Cross-image studies or exploratory searches involving more than one scene are rare; data volumes are often tens of GB per image and typical consumer-grade computers cannot store more than a handful of images in RAM. Visualizing the information in a single scene is challenging since the human eye can only distinguish three color channels out of the hundreds available. To date, analysis has been performed mostly on single images using purpose-built software tools that require extensive training and commercial licenses. The HSIFind software suite provides a scalable distributed solution to the problem of visualizing and searching large catalogs of spectral image data. It consists of a RESTful web service that communicates with a javascript-based browser client. The software provides basic visualization through an intuitive visual interface, allowing users with minimal training to explore the images or view selected spectra. Users can accumulate a library of spectra from one or more images and use these to search for similar materials. The result appears as an intensity map showing the extent of a spectral feature in a scene. Continuum removal can isolate diagnostic absorption features. The server-side mapping algorithm uses an efficient matched filter algorithm that can process a megapixel image cube in just a few seconds. This enables real-time interaction, leading to a new way of interacting with the data: the user can launch a search with a single mouse click and see the resulting map in seconds. This allows the user to quickly explore each image, ascertain the main units of surface material, localize outliers, and develop an understanding of the various materials' spectral characteristics. The HSIFind software suite is currently in beta testing at the Planetary Science Institute and a process is underway to release it under an open source license to the broader community. We believe it will benefit instrument operations during remote planetary exploration, where tactical mission decisions demand rapid analysis of each new dataset. The approach also holds potential for public spectral catalogs, where its shallow learning curve and portability can make these datasets accessible to a much wider range of researchers. Acknowledgements: The HSIFind project acknowledges the NASA Advanced MultiMission Operating System (AMMOS) and the Multimission Ground Support Services (MGSS). E. Palmer is with the Planetary Science Institute, Tucson, AZ. Other authors are with the Jet Propulsion Laboratory, Pasadena, CA. This work was carried out at the Jet Propulsion Laboratory, California Institute of Technology under a contract with the National Aeronautics and Space Administration. Copyright 2013, California Institute of Technology.
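The abstract describes the server-side mapping algorithm only as an efficient matched filter. The textbook matched-filter sketch below, with assumed function names and regularization, conveys the idea of scoring each pixel spectrum against a background-whitened target spectrum to produce the kind of intensity map described.

```python
import numpy as np

def matched_filter(cube, target):
    """cube: (rows, cols, bands) reflectance array; target: (bands,) spectrum.
    Return a (rows, cols) detection map of target-likeness per pixel."""
    rows, cols, bands = cube.shape
    pixels = cube.reshape(-1, bands)
    mu = pixels.mean(axis=0)                                     # background mean
    cov = np.cov(pixels, rowvar=False) + 1e-6 * np.eye(bands)    # regularized covariance
    cov_inv = np.linalg.inv(cov)
    d = target - mu
    w = cov_inv @ d / (d @ cov_inv @ d)                          # matched-filter weights
    scores = (pixels - mu) @ w
    return scores.reshape(rows, cols)
```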

Thompson, D. R.; Smith, A. T.; Castano, R.; Palmer, E. E.; Xing, Z.

2013-12-01

458

Patterns of ongoing activity and the functional architecture of the primary visual cortex.  

PubMed

Ongoing spontaneous activity in the cerebral cortex exhibits complex spatiotemporal patterns in the absence of sensory stimuli. To elucidate the nature of this ongoing activity, we present a theoretical treatment of two contrasting scenarios of cortical dynamics: (1) fluctuations about a single background state and (2) wandering among multiple "attractor" states, which encode a single or several stimulus features. Studying simplified network rate models of the primary visual cortex (V1), we show that the single state scenario is characterized by fast and high-dimensional Gaussian-like fluctuations, whereas in the multiple state scenario the fluctuations are slow, low dimensional, and highly non-Gaussian. Studying a more realistic model that incorporates correlations in the feed-forward input, spatially restricted cortical interactions, and an experimentally derived layout of pinwheels, we show that recent optical-imaging data of ongoing activity in V1 are consistent with the presence of either a single background state or multiple attractor states encoding many features. PMID:15134644

Goldberg, Joshua A; Rokni, Uri; Sompolinsky, Haim

2004-05-13

459

Learning about Locomotion Patterns from Visualizations: Effects of Presentation Format and Realism  

ERIC Educational Resources Information Center

The rapid development of computer graphics technology has made possible an easy integration of dynamic visualizations into computer-based learning environments. This study examines the relative effectiveness of dynamic visualizations, compared either to sequentially or simultaneously presented static visualizations. Moreover, the degree of realism…

Imhof, Birgit; Scheiter, Katharina; Gerjets, Peter

2011-01-01

460

Specificity of task constraints and effects of visual demonstrations and verbal instructions in directing learners' search during skill acquisition.  

PubMed

In the present study, the efficacy of visual demonstrations and verbal instructions as instructional constraints on the acquisition of movement coordination was investigated. Fifteen participants performed an aiming task on 100 acquisition and 20 retention trials, under 1 of 3 conditions: a modeling group (MG), a verbally directed group (VDG), and a control group (CG). The MG observed a model intermittently throughout acquisition, whereas the VDG was verbally instructed to use the model's movement pattern. Participants in the CG received neither form of instruction. Kinematic analysis revealed that compared with verbal instructions or no instructions, visual demonstrations significantly improved participants' approximation of the model's coordination pattern. No differences were found in movement outcomes. Coordination data supported the visual perception perspective on observational learning, whereas outcome data suggested that the modeling effect is mainly a function of task constraints, that is, the novelty of a movement pattern. PMID:11495834

Al-Abood, S A; Davids, K F; Bennett, S J

2001-09-01

461

The effects of action video game experience on the time course of inhibition of return and the efficiency of visual search  

Microsoft Academic Search

The ability to efficiently search the visual environment is a critical function of the visual system, and recent research has shown that experience playing action video games can influence visual selective attention. The present research examined the similarities and differences between video game players (VGPs) and non-video game players (NVGPs) in terms of the ability to inhibit attention from returning

Alan D. Castel; Jay Pratt; Emily Drummond

2005-01-01

462

Sleep and rest facilitate implicit memory in a visual search task

E-print Network

…sleep, specifically rapid eye movement (REM) sleep (Karni, Tanne, Rub…). Keywords: contextual cueing; sleep; naps; implicit memory.

Mednick, Sara C.

463

The contribution of coping-related variables and heart rate variability to visual search performance under pressure.  

PubMed

Visual search performance under pressure is explored within the predictions of the neurovisceral integration model. The experimental aims of this study were: 1) to investigate the contribution of coping-related variables to baseline, task, and reactivity (task-baseline) high-frequency heart rate variability (HF-HRV), and 2) to investigate the contribution of coping-related variables and HF-HRV to visual search performance under pressure. Participants (n=96) completed self-report measures of coping-related variables (emotional intelligence, coping style, perceived stress intensity, perceived control of stress, coping effectiveness, challenge and threat, and attention strategy) and HF-HRV was measured during a visual search task under pressure. The data show that baseline HF-HRV was predicted by a trait coping-related variable, task HF-HRV was predicted by a combination of trait and state coping-related variables, and reactivity HF-HRV was predicted by a state coping-related variable. Visual search performance was predicted by coping-related variables but not by HF-HRV. PMID:25481358

Laborde, Sylvain; Lautenbach, Franziska; Allen, Mark S

2015-02-01

464

Age-Related Occipito-Temporal Hypoactivation during Visual Search: Relationships between mN2pc Sources and Performance  

ERIC Educational Resources Information Center

Recently, an event-related potential (ERP) study (Lorenzo-Lopez et al., 2008) provided evidence that normal aging significantly delays and attenuates the electrophysiological correlate of the allocation of visuospatial attention (N2pc component) during a feature-detection visual search task. To further explore the effects of normal aging on the…

Lorenzo-Lopez, L.; Gutierrez, R.; Moratti, S.; Maestu, F.; Cadaveira, F.; Amenedo, E.

2011-01-01

465

Practice Makes Improvement: How Adults with Autism Out-Perform Others in a Naturalistic Visual Search Task  

ERIC Educational Resources Information Center

People with autism spectrum disorder (ASD) often exhibit superior performance in visual search compared to others. However, most studies demonstrating this advantage have employed simple, uncluttered images with fully visible targets. We compare the performance of high-functioning adults with ASD and matched controls on a naturalistic luggage…

Gonzalez, Cleotilde; Martin, Jolie M.; Minshew, Nancy J.; Behrmann, Marlene

2013-01-01

466