Science.gov

Sample records for visual search patterns

  1. Effect of mammographic breast density on radiologists' visual search pattern

    NASA Astrophysics Data System (ADS)

    Al Mousa, Dana S.; Brennan, Patrick C.; Ryan, Elaine A.; Lee, Warwick B.; Pietrzyk, Mariusz W.; Reed, Warren M.; Alakhras, Maram M.; Li, Yanpeng; Mello-Thoms, Claudia

    2014-03-01

    This study investigates the impact of breast density on radiologists' visual search pattern. A set of 74 single-view mammographic images containing malignancies was examined by 7 radiologists. Eye position was recorded and visual search parameters such as total time examining a case, time to hit the lesion, dwell time and number of hits per area were collected. Fixations were calculated in 3 areas of interest: background breast parenchyma, dense areas of parenchyma and lesion. Significant increases in dwell time and number of hits in dense areas of parenchyma were noted for high- compared to low-mammographic-density images when the lesion overlaid the fibroglandular tissue (p<0.01). When the lesion was outside the fibroglandular tissue, significant increases in dwell time and number of hits in dense areas of parenchyma in high- compared to low-mammographic-density images were also observed (p<0.01). No significant differences were found in total time examining a case, time to first fixate the lesion, or dwell time and number of hits in background breast parenchyma and lesion areas. In conclusion, our data suggest that dense areas of breast parenchyma attract radiologists' visual attention. Lesions overlaying the fibroglandular tissue were detected faster; therefore lesion location, whether overlaying or outside the fibroglandular tissue, appeared to have an impact on radiologists' visual search pattern.
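
    As a concrete illustration of the dwell-time and hits-per-area measures used above, the sketch below aggregates fixations into named areas of interest. The fixation fields and AOI rectangles are hypothetical, not taken from the study.

```python
# Illustrative sketch (not the authors' code): per-AOI dwell time and hit counts
# from a list of fixations, where each AOI is a named rectangle in screen pixels.

from dataclasses import dataclass

@dataclass
class Fixation:
    x: float          # gaze x position (pixels)
    y: float          # gaze y position (pixels)
    duration: float   # fixation duration (ms)

def summarize_aois(fixations, aois):
    """aois: dict mapping AOI name -> (xmin, ymin, xmax, ymax)."""
    stats = {name: {"dwell_ms": 0.0, "hits": 0} for name in aois}
    for f in fixations:
        for name, (x0, y0, x1, y1) in aois.items():
            if x0 <= f.x <= x1 and y0 <= f.y <= y1:
                stats[name]["dwell_ms"] += f.duration
                stats[name]["hits"] += 1
    return stats

# Hypothetical regions for one image: lesion, dense parenchyma, background.
aois = {"lesion": (400, 300, 460, 360),
        "dense_parenchyma": (200, 100, 600, 500),
        "background": (0, 0, 1000, 800)}
fixes = [Fixation(420, 320, 310.0), Fixation(250, 150, 180.0)]
print(summarize_aois(fixes, aois))
```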

  2. Statistical patterns of visual search for hidden objects

    PubMed Central

    Credidio, Heitor F.; Teixeira, Elisângela N.; Reis, Saulo D. S.; Moreira, André A.; Andrade Jr, José S.

    2012-01-01

    The movement of the eyes has been the subject of intensive research as a way to elucidate inner mechanisms of cognitive processes. A cognitive task that is rather frequent in our daily life is the visual search for hidden objects. Here we investigate through eye-tracking experiments the statistical properties associated with the search of target images embedded in a landscape of distractors. Specifically, our results show that the twofold process of eye movement, composed of sequences of fixations (small steps) intercalated by saccades (longer jumps), displays characteristic statistical signatures. While the saccadic jumps follow a log-normal distribution of distances, which is typical of multiplicative processes, the lengths of the smaller steps in the fixation trajectories are consistent with a power-law distribution. Moreover, the present analysis reveals a clear transition from a directional serial search to an isotropic random movement as the difficulty level of the search task is increased. PMID:23226829
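
    The two distributional claims above can be checked with standard fitting tools: a log-normal fit for saccade amplitudes and a maximum-likelihood (Hill) estimate of a power-law exponent for fixational step lengths. The data below are synthetic stand-ins and the choice of xmin is an assumption, purely to show the mechanics.

```python
# Sketch: fit a log-normal to saccade amplitudes and a power law to fixational
# step lengths, mirroring the statistical signatures described in the abstract.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
saccade_amp = rng.lognormal(mean=1.0, sigma=0.6, size=2000)   # stand-in data
fix_steps = (rng.pareto(a=1.8, size=5000) + 1.0) * 0.01       # stand-in data

# Log-normal fit to saccadic jump distances.
shape, loc, scale = stats.lognorm.fit(saccade_amp, floc=0)
print("log-normal sigma =", shape, "median =", scale)

# Hill (maximum-likelihood) estimate of the power-law exponent for steps >= xmin.
xmin = 0.01
tail = fix_steps[fix_steps >= xmin]
alpha = 1.0 + tail.size / np.sum(np.log(tail / xmin))
print("power-law exponent alpha ~", alpha)
```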

  3. Priming cases disturb visual search patterns in screening mammography

    NASA Astrophysics Data System (ADS)

    Lewis, Sarah J.; Reed, Warren M.; Tan, Alvin N. K.; Brennan, Patrick C.; Lee, Warwick; Mello-Thoms, Claudia

    2015-03-01

    Rationale and Objectives: To investigate the effect of inserting obvious cancers into a screening set of mammograms on the visual search of radiologists. Previous research presents conflicting evidence as to the impact of priming in scenarios where prevalence is naturally low, such as in screening mammography. Materials and Methods: An observer performance and eye position analysis study was performed. Four expert breast radiologists were asked to interpret two sets of 40 screening mammograms. The Control Set contained 36 normal and 4 malignant cases (located at case # 9, 14, 25 and 37). The Primed Set contained the same 34 normal and 4 malignant cases (in the same location) plus 2 "primer" malignant cases replacing 2 normal cases (located at positions #20 and 34). Primer cases were defined as lower difficulty cases containing salient malignant features inserted before cases of greater difficulty. Results: The Wilcoxon Signed Rank Test indicated no significant differences in sensitivity or specificity between the two sets (P > 0.05). The fixation count in the malignant cases (#25, 37) in the Primed Set after viewing the primer cases (#20, 34) decreased significantly (Z = -2.330, P = 0.020). False-negative errors were mostly due to sampling in the Primed Set (75%), in contrast to the Control Set (25%). Conclusion: The overall performance of radiologists is not affected by the inclusion of obvious cancer cases. However, changes in visual search behavior, as measured by eye-position recording, suggest visual disturbance by the inclusion of priming cases in screening mammography.

  4. Collaboration during visual search.

    PubMed

    Malcolmson, Kelly A; Reynolds, Michael G; Smilek, Daniel

    2007-08-01

    Two experiments examine how collaboration influences visual search performance. Working with a partner or on their own, participants reported whether a target was present or absent in briefly presented search displays. We compared the search performance of individuals working together (collaborative pairs) with the pooled responses of the individuals working alone (nominal pairs). Collaborative pairs were less likely than nominal pairs to correctly detect a target and they were less likely to make false alarms. Signal detection analyses revealed that collaborative pairs were more sensitive to the presence of the target and had a more conservative response bias than the nominal pairs. This pattern was observed even when the presence of another individual was matched across pairs. The results are discussed in the context of task-sharing, social loafing and current theories of visual search. PMID:17972737
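
    The signal detection analysis mentioned above can be reproduced in miniature with the standard equal-variance Gaussian formulas for sensitivity (d') and criterion (c). The hit and false-alarm rates below are invented for illustration, not taken from the experiments.

```python
# Sketch of the signal-detection measures discussed above: sensitivity (d') and
# response criterion (c) computed from hit and false-alarm rates.

from scipy.stats import norm

def dprime_and_criterion(hit_rate, fa_rate):
    z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    d_prime = z_hit - z_fa
    criterion = -0.5 * (z_hit + z_fa)   # positive c = conservative response bias
    return d_prime, criterion

# Hypothetical rates: collaborative pairs miss more targets but false-alarm less.
print(dprime_and_criterion(hit_rate=0.80, fa_rate=0.05))   # collaborative pairs
print(dprime_and_criterion(hit_rate=0.85, fa_rate=0.15))   # nominal pairs
```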

  5. Understanding visual search patterns of dermatologists assessing pigmented skin lesions before and after online training.

    PubMed

    Krupinski, Elizabeth A; Chao, Joseph; Hofmann-Wellenhof, Rainer; Morrison, Lynne; Curiel-Lewandrowski, Clara

    2014-12-01

    The goal of this investigation was to explore the feasibility of characterizing the visual search characteristics of dermatologists evaluating images corresponding to single pigmented skin lesions (PSLs) (close-ups and dermoscopy) as a venue to improve training programs for dermoscopy. Two Board-certified dermatologists and two dermatology residents participated in a phased study. In phase 1, they viewed a series of 20 PSL cases ranging from benign nevi to melanoma. The close-up and dermoscopy images of the PSL were evaluated sequentially and rated individually as benign or malignant, while eye position was recorded. Subsequently, the participating subjects completed an online dermoscopy training module that included a pre- and post-test assessing their dermoscopy skills (phase 2). Three months later, the subjects repeated their assessment on the 20 PSLs presented during phase 1 of the study. Significant differences in viewing time and eye-position parameters were observed as a function of level of expertise. Dermatologists overall had more efficient search than residents, generating fewer fixations with shorter dwells. Fixations and dwells associated with decisions changing from benign to malignant or vice versa from photo to dermatoscopic viewing were longer than any other decision, indicating increased visual processing for those decisions. These differences in visual search may have implications for developing tools to teach dermatologists and residents about how to better utilize dermoscopy in clinical practice. PMID:24939005

  6. Visual search: a retrospective.

    PubMed

    Eckstein, Miguel P

    2011-01-01

    Visual search, a vital task for humans and animals, has also become a common and important tool for studying many topics central to active vision and cognition ranging from spatial vision, attention, and oculomotor control to memory, decision making, and rewards. While visual search often seems effortless to humans, trying to recreate human visual search abilities in machines has represented an incredible challenge for computer scientists and engineers. What are the brain computations that ensure successful search? This review article draws on efforts from various subfields and discusses the mechanisms and strategies the brain uses to optimize visual search: the psychophysical evidence, their neural correlates, and if unknown, possible loci of the neural computations. Mechanisms and strategies include use of knowledge about the target, distractor, background statistical properties, location probabilities, contextual cues, scene context, rewards, target prevalence, and also the role of saliency, center-surround organization of search templates, and eye movement plans. I provide overviews of classic and contemporary theories of covert attention and eye movements during search explaining their differences and similarities. To allow the reader to anchor some of the laboratory findings to real-world tasks, the article includes interviews with three expert searchers: a radiologist, a fisherman, and a satellite image analyst. PMID:22209816

  7. Introspection during visual search.

    PubMed

    Reyes, Gabriel; Sackur, Jérôme

    2014-10-01

    Recent advances in the field of metacognition have shown that human participants are introspectively aware of many different cognitive states, such as confidence in a decision. Here we set out to expand the range of experimental introspection by asking whether participants could access, through pure mental monitoring, the nature of the cognitive processes that underlie two visual search tasks: an effortless "pop-out" search, and a difficult, effortful, conjunction search. To this aim, in addition to traditional first order performance measures, we instructed participants to give, on a trial-by-trial basis, an estimate of the number of items scanned before a decision was reached. By controlling response times and eye movements, we assessed the contribution of self-observation of behavior in these subjective estimates. Results showed that introspection is a flexible mechanism and that pure mental monitoring of cognitive processes is possible in elementary tasks. PMID:25286130

  8. Interrupted Visual Searches Reveal Volatile Search Memory

    ERIC Educational Resources Information Center

    Shen, Y. Jeremy; Jiang, Yuhong V.

    2006-01-01

    This study investigated memory from interrupted visual searches. Participants conducted a change detection search task on polygons overlaid on scenes. Search was interrupted by various disruptions, including unfilled delay, passive viewing of other scenes, and additional search on new displays. Results showed that performance was unaffected by…

  9. Search for correlations between genotypes and electrophysiological patterns in migraine: the MTHFR C677T polymorphism and visual evoked potentials.

    PubMed

    Magis, D; Allena, M; Coppola, G; Di Clemente, L; Gérard, P; Schoenen, J

    2007-10-01

    Interictally, migraineurs have on average a reduction in habituation of pattern-reversal visual evoked potentials (PR-VEP) and in mitochondrial energy reserve. 5,10-Methylenetetrahydrofolate reductase (MTHFR) is involved in folate metabolism and its C677T polymorphism may be more prevalent in migraine. The aim of this study was to search in migraineurs for a correlation between the MTHFR C677T polymorphism and the PR-VEP profile. PR-VEP were recorded in 52 genotyped migraine patients: 40 female, 24 without (MoA), 28 with aura (MA). Among them 21 had a normal genotype (CC), 18 were heterozygous (CT) and 13 homozygous (TT) for the MTHFR C677T polymorphism. Mean PR-VEP N1-P1 amplitude was significantly lower in CT compared with CC, and tended to be lower in TT with increasing age. The habituation deficit was significantly greater in CC compared with TT subjects. The correlation between the cortical preactivation level, as reflected by the VEP amplitude in the first block of averages, and habituation was stronger in CC than in CT or TT. The MTHFR C677T polymorphism could thus have an ambiguous role in migraine. On one hand, the better VEP habituation which is associated with its homozygosity, and possibly mediated by homocysteine derivatives increasing serotoninergic transmission, may protect the brain against overstimulation. On the other hand, MTHFR C677T homozygosity is linked to a reduction of grand average VEP amplitude with illness duration, which has been attributed to brain damage. PMID:17711493

  10. Supporting Web Search with Visualization

    NASA Astrophysics Data System (ADS)

    Hoeber, Orland; Yang, Xue Dong

    One of the fundamental goals of Web-based support systems is to promote and support human activities on the Web. The focus of this Chapter is on the specific activities associated with Web search, with special emphasis given to the use of visualization to enhance the cognitive abilities of Web searchers. An overview of information retrieval basics, along with a focus on Web search and the behaviour of Web searchers is provided. Information visualization is introduced as a means for supporting users as they perform their primary Web search tasks. Given the challenge of visualizing the primarily textual information present in Web search, a taxonomy of the information that is available to support these tasks is given. The specific challenges of representing search information are discussed, and a survey of the current state-of-the-art in visual Web search is introduced. This Chapter concludes with our vision for the future of Web search.

  11. Learning in repeated visual search

    PubMed Central

    Hout, Michael C.; Goldinger, Stephen D.

    2014-01-01

    Visual search (e.g., finding a specific object in an array of other objects) is performed most effectively when people are able to ignore distracting nontargets. In repeated search, however, incidental learning of object identities may facilitate performance. In three experiments, with over 1,100 participants, we examined the extent to which search could be facilitated by object memory and by memory for spatial layouts. Participants searched for new targets (real-world, nameable objects) embedded among repeated distractors. To make the task more challenging, some participants performed search for multiple targets, increasing demands on visual working memory (WM). Following search, memory for search distractors was assessed using a surprise two-alternative forced choice recognition memory test with semantically matched foils. Search performance was facilitated by distractor object learning and by spatial memory; it was most robust when object identity was consistently tied to spatial locations and weakest (or absent) when object identities were inconsistent across trials. Incidental memory for distractors was better among participants who searched under high WM load, relative to low WM load. These results were observed when visual search included exhaustive-search trials (Experiment 1) or when all trials were self-terminating (Experiment 2). In Experiment 3, stimulus exposure was equated across WM load groups by presenting objects in a single-object stream; recognition accuracy was similar to that in Experiments 1 and 2. Together, the results suggest that people incidentally generate memory for nontarget objects encountered during search and that such memory can facilitate search performance. PMID:20601709

  12. Evolutionary pattern search algorithms

    SciTech Connect

    Hart, W.E.

    1995-09-19

    This paper defines a class of evolutionary algorithms called evolutionary pattern search algorithms (EPSAs) and analyzes their convergence properties. This class of algorithms is closely related to evolutionary programming, evolution strategies and real-coded genetic algorithms. EPSAs are self-adapting systems that modify the step size of the mutation operator in response to the success of previous optimization steps. The rule used to adapt the step size can be used to provide a stationary point convergence theory for EPSAs on any continuous function. This convergence theory is based on an extension of the convergence theory for generalized pattern search methods. An experimental analysis of the performance of EPSAs demonstrates that these algorithms can perform a level of global search that is comparable to that of canonical EAs. We also describe a stopping rule for EPSAs, which reliably terminated near stationary points in our experiments. This is the first stopping rule for any class of EAs that can terminate at a given distance from stationary points.
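
    A much-simplified sketch of the core idea, mutation along pattern directions with success-based step-size adaptation and the step size doubling as a stopping measure, is given below. It is a toy illustration under these assumptions, not Hart's published EPSA.

```python
# Toy (1+1)-style search that mutates along coordinate pattern directions and
# adapts the mutation step size on success/failure; it stops when the step
# falls below a tolerance, used here as a rough stationarity measure.

import numpy as np

def pattern_evolution(f, x0, step=1.0, expand=2.0, contract=0.5, tol=1e-6, max_iter=10000):
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    rng = np.random.default_rng(0)
    directions = np.vstack([np.eye(x.size), -np.eye(x.size)])   # compass pattern
    for _ in range(max_iter):
        d = directions[rng.integers(len(directions))]           # random pattern direction
        trial = x + step * d
        ft = f(trial)
        if ft < fx:
            x, fx = trial, ft
            step *= expand                                       # reward success
        else:
            step *= contract                                     # penalize failure
        if step < tol:
            break
    return x, fx, step

sphere = lambda v: float(np.sum(np.asarray(v) ** 2))
print(pattern_evolution(sphere, [3.0, -2.0]))
```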

  13. Visual Search of Mooney Faces.

    PubMed

    Goold, Jessica E; Meng, Ming

    2016-01-01

    Faces spontaneously capture attention. However, which special attributes of a face underlie this effect is unclear. To address this question, we investigate how gist information, specific visual properties and differing amounts of experience with faces affect the time required to detect a face. Three visual search experiments were conducted investigating the rapidness of human observers to detect Mooney face images. Mooney images are two-toned, ambiguous images. They were used in order to have stimuli that maintain gist information but limit low-level image properties. Results from the experiments show: (1) Although upright Mooney faces were searched inefficiently, they were detected more rapidly than inverted Mooney face targets, demonstrating the important role of gist information in guiding attention toward a face. (2) Several specific Mooney face identities were searched efficiently while others were not, suggesting the involvement of specific visual properties in face detection. (3) By providing participants with unambiguous gray-scale versions of the Mooney face targets prior to the visual search task, the targets were detected significantly more efficiently, suggesting that prior experience with Mooney faces improves the ability to extract gist information for rapid face detection. However, a week of training with Mooney face categorization did not lead to even more efficient visual search of Mooney face targets. In summary, these results reveal that specific local image properties cannot account for how faces capture attention. On the other hand, gist information alone cannot account for how faces capture attention either. Prior experience facilitates the effect of gist on visual search of faces; making faces a special object category for guiding attention. PMID:26903941

  14. Visual Search of Mooney Faces

    PubMed Central

    Goold, Jessica E.; Meng, Ming

    2016-01-01

    Faces spontaneously capture attention. However, which special attributes of a face underlie this effect is unclear. To address this question, we investigate how gist information, specific visual properties and differing amounts of experience with faces affect the time required to detect a face. Three visual search experiments were conducted investigating the rapidness of human observers to detect Mooney face images. Mooney images are two-toned, ambiguous images. They were used in order to have stimuli that maintain gist information but limit low-level image properties. Results from the experiments show: (1) Although upright Mooney faces were searched inefficiently, they were detected more rapidly than inverted Mooney face targets, demonstrating the important role of gist information in guiding attention toward a face. (2) Several specific Mooney face identities were searched efficiently while others were not, suggesting the involvement of specific visual properties in face detection. (3) By providing participants with unambiguous gray-scale versions of the Mooney face targets prior to the visual search task, the targets were detected significantly more efficiently, suggesting that prior experience with Mooney faces improves the ability to extract gist information for rapid face detection. However, a week of training with Mooney face categorization did not lead to even more efficient visual search of Mooney face targets. In summary, these results reveal that specific local image properties cannot account for how faces capture attention. On the other hand, gist information alone cannot account for how faces capture attention either. Prior experience facilitates the effect of gist on visual search of faces; making faces a special object category for guiding attention. PMID:26903941

  15. Visual search in virtual environments

    NASA Astrophysics Data System (ADS)

    Stark, Lawrence W.; Ezumi, Koji; Nguyen, Tho; Paul, R.; Tharp, Gregory K.; Yamashita, H. I.

    1992-08-01

    A key task in virtual environments is visual search. To obtain quantitative measures of human performance and documentation of visual search strategies, we have used three experimental arrangements--eye, head, and mouse control of viewing windows--by exploiting various combinations of helmet-mounted-displays, graphics workstations, and eye movement tracking facilities. We contrast two different categories of viewing strategies: one, for 2D pictures with large numbers of targets and clutter scattered randomly; the other for quasi-natural 3D scenes with targets and non-targets placed in realistic, sensible positions. Different searching behaviors emerge from these contrasting search conditions, reflecting different visual and perceptual modes. A regular 'searchpattern' is a systematic, repetitive, idiosyncratic sequence of movements carrying the eye to cover the entire 2D scene. Irregular 'searchpatterns' take advantage of wide windows and the wide human visual lobe; here, hierarchical detection and recognition is performed with the appropriate capabilities of the 'two visual systems'. The 'searchpath', also efficient, repetitive and idiosyncratic, provides only a small set of fixations to check continually the smaller number of targets in the naturalistic 3D scene; likely, searchpaths are driven by top-down spatial models. If the viewed object is known and able to be named, then a hypothesized, top-down cognitive model drives active looking in the 'scanpath' mode, again continually checking important subfeatures of the object. Spatial models for searchpaths may be primitive predecessors, in the evolutionary history of animals, of cognitive models for scanpaths.

  16. Characteristic sounds facilitate visual search.

    PubMed

    Iordanescu, Lucica; Guzman-Martinez, Emmanuel; Grabowecky, Marcia; Suzuki, Satoru

    2008-06-01

    In a natural environment, objects that we look for often make characteristic sounds. A hiding cat may meow, or the keys in the cluttered drawer may jingle when moved. Using a visual search paradigm, we demonstrated that characteristic sounds facilitated visual localization of objects, even when the sounds carried no location information. For example, finding a cat was faster when participants heard a meow sound. In contrast, sounds had no effect when participants searched for names rather than pictures of objects. For example, hearing "meow" did not facilitate localization of the word cat. These results suggest that characteristic sounds cross-modally enhance visual (rather than conceptual) processing of the corresponding objects. Our behavioral demonstration of object-based cross-modal enhancement complements the extensive literature on space-based cross-modal interactions. When looking for your keys next time, you might want to play jingling sounds. PMID:18567253

  17. Development of a Computerized Visual Search Test

    ERIC Educational Resources Information Center

    Reid, Denise; Babani, Harsha; Jon, Eugenia

    2009-01-01

    Visual attention and visual search are the features of visual perception, essential for attending and scanning one's environment while engaging in daily occupations. This study describes the development of a novel web-based test of visual search. The development information including the format of the test will be described. The test was designed…

  18. Development of a Computerized Visual Search Test

    ERIC Educational Resources Information Center

    Reid, Denise; Babani, Harsha; Jon, Eugenia

    2009-01-01

    Visual attention and visual search are the features of visual perception, essential for attending and scanning one's environment while engaging in daily occupations. This study describes the development of a novel web-based test of visual search. The development information including the format of the test will be described. The test was designed…

  19. Designing a Visual Interface for Online Searching.

    ERIC Educational Resources Information Center

    Lin, Xia

    1999-01-01

    "MedLine Search Assistant" is a new interface for MEDLINE searching that improves both search precision and recall by helping the user convert a free text search to a controlled vocabulary-based search in a visual environment. Features of the interface are described, followed by details of the conceptual design and the physical design of the…

  20. The development of organized visual search

    PubMed Central

    Woods, Adam J.; Goksun, Tilbe; Chatterjee, Anjan; Zelonis, Sarah; Mehta, Anika; Smith, Sabrina E.

    2013-01-01

    Visual search plays an important role in guiding behavior. Children have more difficulty performing conjunction search tasks than adults. The present research evaluates whether developmental differences in children's ability to organize serial visual search (i.e., search organization skills) contribute to performance limitations in a typical conjunction search task. We evaluated 134 children between the ages of 2 and 17 on separate tasks measuring search for targets defined by a conjunction of features or by distinct features. Our results demonstrated that children organize their visual search better as they get older. As children's skills at organizing visual search improve they become more accurate at locating targets with conjunction of features amongst distractors, but not for targets with distinct features. Developmental limitations in children's abilities to organize their visual search of the environment are an important component of poor conjunction search in young children. In addition, our findings provide preliminary evidence that, like other visuospatial tasks, exposure to reading may influence children's spatial orientation to the visual environment when performing a visual search. PMID:23584560

  1. Aurally and visually guided visual search in a virtual environment.

    PubMed

    Flanagan, P; McAnally, K I; Martin, R L; Meehan, J W; Oldfield, S R

    1998-09-01

    We investigated the time participants took to perform a visual search task for targets outside the visual field of view using a helmet-mounted display. We also measured the effectiveness of visual and auditory cues to target location. The auditory stimuli used to cue location were noise bursts previously recorded from the ear canals of the participants and were either presented briefly at the beginning of a trial or continually updated to compensate for head movements. The visual cue was a dynamic arrow that indicated the direction and angular distance from the instantaneous head position to the target. Both visual and auditory spatial cues reduced search time dramatically, compared with unaided search. The updating audio cue was more effective than the transient audio cue and was as effective as the visual cue in reducing search time. These data show that both spatial auditory and visual cues can markedly improve visual search performance. Potential applications for this research include highly visual environments, such as aviation, where there is risk of overloading the visual modality with information. PMID:9849104

  2. Collinearity Impairs Local Element Visual Search

    ERIC Educational Resources Information Center

    Jingling, Li; Tseng, Chia-Huei

    2013-01-01

    In visual searches, stimuli following the law of good continuity attract attention to the global structure and receive attentional priority. Also, targets that have unique features are of high feature contrast and capture attention in visual search. We report on a salient global structure combined with a high orientation contrast to the…

  3. Visual Search Across the Life Span

    ERIC Educational Resources Information Center

    Hommel, Bernhard; Li, Karen Z. H.; Li, Shu-Chen

    2004-01-01

    Gains and losses in visual search were studied across the life span in a representative sample of 298 individuals from 6 to 89 years of age. Participants searched for single-feature and conjunction targets of high or low eccentricity. Search was substantially slowed early and late in life, age gradients were more pronounced in conjunction than in…

  4. Searching social networks for subgraph patterns

    NASA Astrophysics Data System (ADS)

    Ogaard, Kirk; Kase, Sue; Roy, Heather; Nagi, Rakesh; Sambhoos, Kedar; Sudit, Moises

    2013-06-01

    Software tools for Social Network Analysis (SNA) are being developed which support various types of analysis of social networks extracted from social media websites (e.g., Twitter). Once extracted and stored in a database, such social networks are amenable to analysis by SNA software. This data analysis often involves searching for occurrences of various subgraph patterns (i.e., graphical representations of entities and relationships). The authors have developed the Graph Matching Toolkit (GMT) which provides an intuitive Graphical User Interface (GUI) for a heuristic graph matching algorithm called the Truncated Search Tree (TruST) algorithm. GMT is a visual interface for graph matching algorithms processing large social networks. GMT enables an analyst to draw a subgraph pattern by using a mouse to select categories and labels for nodes and links from drop-down menus. GMT then executes the TruST algorithm to find the top five occurrences of the subgraph pattern within the social network stored in the database. GMT was tested using a simulated counter-insurgency dataset consisting of cellular phone communications within a populated area of operations in Iraq. The results indicated that GMT (when executing the TruST graph matching algorithm) is a time-efficient approach to searching large social networks. GMT's visual interface to a graph matching algorithm enables intelligence analysts to quickly analyze and summarize the large amounts of data necessary to produce actionable intelligence.
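
    The general idea of finding occurrences of a labeled subgraph pattern in a larger network can be sketched with networkx's subgraph isomorphism matcher; this is not the TruST algorithm described above, and the node categories below are made up.

```python
# Sketch: find all occurrences of a small labeled pattern (two people sharing
# one phone) inside a larger labeled social network using subgraph isomorphism.

import networkx as nx
from networkx.algorithms import isomorphism

social = nx.Graph()
social.add_nodes_from([(1, {"category": "person"}), (2, {"category": "person"}),
                       (3, {"category": "phone"}),  (4, {"category": "person"}),
                       (5, {"category": "phone"})])
social.add_edges_from([(1, 3), (2, 3), (4, 5), (2, 5)])

pattern = nx.Graph()
pattern.add_nodes_from([("a", {"category": "person"}), ("b", {"category": "person"}),
                        ("c", {"category": "phone"})])
pattern.add_edges_from([("a", "c"), ("b", "c")])   # two people linked to one phone

matcher = isomorphism.GraphMatcher(
    social, pattern, node_match=isomorphism.categorical_node_match("category", None))
for mapping in matcher.subgraph_isomorphisms_iter():
    print(mapping)    # maps social-network nodes onto pattern nodes
```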

  5. Visual pattern degradation based image quality assessment

    NASA Astrophysics Data System (ADS)

    Wu, Jinjian; Li, Leida; Shi, Guangming; Lin, Weisi; Wan, Wenfei

    2015-08-01

    In this paper, we introduce a visual pattern degradation based full-reference (FR) image quality assessment (IQA) method. Research on visual recognition indicates that the human visual system (HVS) is highly adaptive to extract visual structures for scene understanding. Existing structure degradation based IQA methods mainly take local luminance contrast to represent structure, and measure quality as degradation on luminance contrast. In this paper, we suggest that structure includes not only luminance contrast but also orientation information. Therefore, we analyze the orientation characteristic for structure description. Inspired by the orientation selectivity mechanism in the primary visual cortex, we introduce a novel visual pattern to represent the structure of a local region. Then, the quality is measured as the degradations on both luminance contrast and visual pattern. Experimental results on five benchmark databases demonstrate that the proposed visual pattern can effectively represent visual structure and the proposed IQA method performs better than the existing IQA metrics.

  6. Temporal Stability of Visual Search-Driven Biometrics

    SciTech Connect

    Yoon, Hong-Jun; Carmichael, Tandy; Tourassi, Georgia

    2015-01-01

    Previously, we have shown the potential of using an individual's visual search pattern as a possible biometric. That study focused on viewing images displaying dot-patterns with different spatial relationships to determine which pattern can be more effective in establishing the identity of an individual. In this follow-up study we investigated the temporal stability of this biometric. We performed an experiment with 16 individuals asked to search for a predetermined feature of a random-dot pattern as we tracked their eye movements. Each participant completed four testing sessions consisting of two dot patterns repeated twice. One dot pattern displayed concentric circles shifted to the left or right side of the screen overlaid with visual noise, and participants were asked which side the circles were centered on. The second dot-pattern displayed a number of circles (between 0 and 4) scattered on the screen overlaid with visual noise, and participants were asked how many circles they could identify. Each session contained 5 untracked tutorial questions and 50 tracked test questions (200 total tracked questions per participant). To create each participant's "fingerprint", we constructed a Hidden Markov Model (HMM) from the gaze data representing the underlying visual search and cognitive process. The accuracy of the derived HMM models was evaluated using cross-validation for various time-dependent train-test conditions. Subject identification accuracy ranged from 17.6% to 41.8% for all conditions, which is significantly higher than random guessing (1/16 = 6.25%). The results suggest that visual search pattern is a promising, fairly stable personalized fingerprint of perceptual organization.

  7. Temporal stability of visual search-driven biometrics

    NASA Astrophysics Data System (ADS)

    Yoon, Hong-Jun; Carmichael, Tandy R.; Tourassi, Georgia

    2015-03-01

    Previously, we have shown the potential of using an individual's visual search pattern as a possible biometric. That study focused on viewing images displaying dot-patterns with different spatial relationships to determine which pattern can be more effective in establishing the identity of an individual. In this follow-up study we investigated the temporal stability of this biometric. We performed an experiment with 16 individuals asked to search for a predetermined feature of a random-dot pattern as we tracked their eye movements. Each participant completed four testing sessions consisting of two dot patterns repeated twice. One dot pattern displayed concentric circles shifted to the left or right side of the screen overlaid with visual noise, and participants were asked which side the circles were centered on. The second dot-pattern displayed a number of circles (between 0 and 4) scattered on the screen overlaid with visual noise, and participants were asked how many circles they could identify. Each session contained 5 untracked tutorial questions and 50 tracked test questions (200 total tracked questions per participant). To create each participant's "fingerprint", we constructed a Hidden Markov Model (HMM) from the gaze data representing the underlying visual search and cognitive process. The accuracy of the derived HMM models was evaluated using cross-validation for various time-dependent train-test conditions. Subject identification accuracy ranged from 17.6% to 41.8% for all conditions, which is significantly higher than random guessing (1/16 = 6.25%). The results suggest that visual search pattern is a promising, temporally stable personalized fingerprint of perceptual organization.
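
    An HMM-based identification pipeline of the kind described in the two records above can be sketched as follows, assuming the hmmlearn package and made-up enrollment data; it is not the authors' code or parameterization.

```python
# Sketch: fit one Gaussian HMM per participant on gaze samples (x, y), then
# identify an unseen scanpath by picking the model with the highest likelihood.

import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(1)

def fit_gaze_hmm(gaze_xy, n_states=3):
    model = hmm.GaussianHMM(n_components=n_states, covariance_type="full",
                            n_iter=100, random_state=0)
    model.fit(gaze_xy)                      # gaze_xy: (n_samples, 2) array
    return model

# Hypothetical enrollment data: one gaze recording per participant.
enrolled = {pid: fit_gaze_hmm(rng.normal(loc=100 * pid, scale=20, size=(500, 2)))
            for pid in (1, 2, 3)}

# Identification: score an unseen recording under every enrolled model.
probe = rng.normal(loc=200, scale=20, size=(300, 2))      # resembles participant 2
scores = {pid: m.score(probe) for pid, m in enrolled.items()}
print(max(scores, key=scores.get), scores)
```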

  8. Visual search engine for product images

    NASA Astrophysics Data System (ADS)

    Lin, Xiaofan; Gokturk, Burak; Sumengen, Baris; Vu, Diem

    2008-01-01

    Nowadays there are many product comparison web sites. But most of them only use text information. This paper introduces a novel visual search engine for product images, which provides a brand-new way of visually locating products through Content-based Image Retrieval (CBIR) technology. We discuss the unique technical challenges, solutions, and experimental results in the design and implementation of this system.

  9. Perceptual Encoding Efficiency in Visual Search

    ERIC Educational Resources Information Center

    Rauschenberger, Robert; Yantis, Steven

    2006-01-01

    The authors present 10 experiments that challenge some central assumptions of the dominant theories of visual search. Their results reveal that the complexity (or redundancy) of nontarget items is a crucial but overlooked determinant of search efficiency. The authors offer a new theoretical outline that emphasizes the importance of nontarget…

  10. Automatization and training in visual search.

    PubMed

    Czerwinski, M; Lightfoot, N; Shiffrin, R M

    1992-01-01

    In several search tasks, the amount of practice on particular combinations of targets and distractors was equated in varied-mapping (VM) and consistent-mapping (CM) conditions. The results indicate the importance of distinguishing between memory and visual search tasks, and implicate a number of factors that play important roles in visual search and its learning. Visual search was studied in Experiment 1. VM and CM performance were almost equal, and slope reductions occurred during practice for both, suggesting the learning of efficient attentive search based on features, and no important role for automatic attention attraction. However, positive transfer effects occurred when previous CM targets were re-paired with previous CM distractors, even though these targets and distractors had not been trained together. Also, the introduction of a demanding simultaneous task produced advantages of CM over VM. These latter two results demonstrated the operation of automatic attention attraction. Visual search was further studied in Experiment 2, using novel characters for which feature overlap and similarity were controlled. The design and many of the findings paralleled Experiment 1. In addition, enormous search improvement was seen over 35 sessions of training, suggesting the operation of perceptual unitization for the novel characters. Experiment 3 showed a large, persistent advantage for CM over VM performance in memory search, even when practice on particular combinations of targets and distractors was equated in the two training conditions. A multifactor theory of automatization and attention is put forth to account for these findings and others in the literature. PMID:1621883

  11. The Search for Optimal Visual Stimuli

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Ellis, Stephen R. (Technical Monitor)

    1997-01-01

    In 1983, Watson, Barlow and Robson published a brief report in which they explored the relative visibility of targets that varied in size, shape, spatial frequency, speed, and duration (referred to subsequently here as WBR). A novel aspect of that paper was that visibility was quantified in terms of threshold contrast energy, rather than contrast. As they noted, this provides a more direct measure of the efficiency with which various patterns are detected, and may be more edifying as to the underlying detection machinery. For example, under certain simple assumptions, the waveform of the most efficiently detected signal is an estimate of the receptive field of the visual system's most efficient detector. Thus one goal of their experiment was to search for the stimulus that the 'eye sees best'. Parenthetically, the search for optimal stimuli may be seen as the most general and sophisticated variant of the traditional 'subthreshold summation' experiment, in which one measures the effect upon visibility of small probes combined with a base stimulus.

  12. Pattern Search Algorithms for Bound Constrained Minimization

    NASA Technical Reports Server (NTRS)

    Lewis, Robert Michael; Torczon, Virginia

    1996-01-01

    We present a convergence theory for pattern search methods for solving bound constrained nonlinear programs. The analysis relies on the abstract structure of pattern search methods and an understanding of how the pattern interacts with the bound constraints. This analysis makes it possible to develop pattern search methods for bound constrained problems while only slightly restricting the flexibility present in pattern search methods for unconstrained problems. We prove global convergence despite the fact that pattern search methods do not have explicit information concerning the gradient and its projection onto the feasible region and consequently are unable to enforce explicitly a notion of sufficient feasible decrease.
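
    The flavor of such methods can be conveyed with a toy compass search that polls coordinate directions, clips trial points to the bound constraints, and halves the mesh after an unsuccessful sweep. This is a simplified sketch for intuition only, not the algorithm class analyzed in the paper.

```python
# Toy compass-style pattern search with simple bound handling: infeasible trial
# points are clipped to the box, and the mesh size is halved when no poll
# direction improves the objective. No gradients are used.

import numpy as np

def bound_pattern_search(f, x0, lower, upper, step=1.0, tol=1e-8, max_iter=10000):
    x = np.clip(np.asarray(x0, dtype=float), lower, upper)
    fx = f(x)
    n = x.size
    for _ in range(max_iter):
        improved = False
        for d in np.vstack([np.eye(n), -np.eye(n)]):            # compass directions
            trial = np.clip(x + step * d, lower, upper)          # respect the bounds
            ft = f(trial)
            if ft < fx:
                x, fx, improved = trial, ft, True
                break
        if not improved:
            step *= 0.5                                          # refine the mesh
            if step < tol:
                break
    return x, fx

rosen = lambda v: (1 - v[0])**2 + 100 * (v[1] - v[0]**2)**2
print(bound_pattern_search(rosen, x0=[-1.5, 2.0], lower=[-2, -2], upper=[2, 2]))
```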

  13. Visual search for faces with emotional expressions.

    PubMed

    Frischen, Alexandra; Eastwood, John D; Smilek, Daniel

    2008-09-01

    The goal of this review is to critically examine contradictory findings in the study of visual search for emotionally expressive faces. Several key issues are addressed: Can emotional faces be processed preattentively and guide attention? What properties of these faces influence search efficiency? Is search moderated by the emotional state of the observer? The authors argue that the evidence is consistent with claims that (a) preattentive search processes are sensitive to and influenced by facial expressions of emotion, (b) attention guidance is influenced by a dynamic interplay of emotional and perceptual factors, and (c) visual search for emotional faces is influenced by the emotional state of the observer to some extent. The authors also argue that the way in which contextual factors interact to determine search performance needs to be explored further to draw sound conclusions about the precise influence of emotional expressions on search efficiency. Methodological considerations (e.g., set size, distractor background, task set) and ecological limitations of the visual search task are discussed. Finally, specific recommendations are made for future research directions. PMID:18729567

  14. Driving forces in free visual search: An ethology.

    PubMed

    MacInnes, W Joseph; Hunt, Amelia R; Hilchey, Matthew D; Klein, Raymond M

    2014-02-01

    Visual search typically involves sequences of eye movements under the constraints of a specific scene and specific goals. Visual search has been used as an experimental paradigm to study the interplay of scene salience and top-down goals, as well as various aspects of vision, attention, and memory, usually by introducing a secondary task or by controlling and manipulating the search environment. An ethology is a study of an animal in its natural environment, and here we examine the fixation patterns of the human animal searching a series of challenging illustrated scenes that are well-known in popular culture. The search was free of secondary tasks, probes, and other distractions. Our goal was to describe saccadic behavior, including patterns of fixation duration, saccade amplitude, and angular direction. In particular, we employed both new and established techniques for identifying top-down strategies, any influences of bottom-up image salience, and the midlevel attentional effects of saccadic momentum and inhibition of return. The visual search dynamics that we observed and quantified demonstrate that saccades are not independently generated and incorporate distinct influences from strategy, salience, and attention. Sequential dependencies consistent with inhibition of return also emerged from our analyses. PMID:24385137

  15. Visual search under scotopic lighting conditions.

    PubMed

    Paulun, Vivian C; Schütz, Alexander C; Michel, Melchi M; Geisler, Wilson S; Gegenfurtner, Karl R

    2015-08-01

    When we search for visual targets in a cluttered background we systematically move our eyes around to bring different regions of the scene into foveal view. We explored how visual search behavior changes when the fovea is not functional, as is the case in scotopic vision. Scotopic contrast sensitivity is significantly lower overall, with a functional scotoma in the fovea. We found that in scotopic search, for a medium- and a low-spatial-frequency target, individuals made longer lasting fixations that were not broadly distributed across the entire search display but tended to peak in the upper center, especially for the medium-frequency target. The distributions of fixation locations are qualitatively similar to those of an ideal searcher that has human scotopic detectability across the visual field, and interestingly, these predicted distributions are different from those predicted by an ideal searcher with human photopic detectability. We conclude that although there are some qualitative differences between human and ideal search behavior, humans make principled adjustments in their search behavior as ambient light level decreases. PMID:25988753

  16. Online Search Patterns: NLM CATLINE Database.

    ERIC Educational Resources Information Center

    Tolle, John E.; Hah, Sehchang

    1985-01-01

    Presents analysis of online search patterns within user searching sessions of National Library of Medicine ELHILL system and examines user search patterns on the CATLINE database. Data previously analyzed on MEDLINE database for same period is used to compare the performance parameters of different databases within the same information system.…

  17. Visual Templates in Pattern Generalization Activity

    ERIC Educational Resources Information Center

    Rivera, F. D.

    2010-01-01

    In this research article, I present evidence of the existence of visual templates in pattern generalization activity. Such templates initially emerged from a 3-week design-driven classroom teaching experiment on pattern generalization involving linear figural patterns and were assessed for existence in a clinical interview that was conducted four…

  18. Effects of Peripheral Visual Field Loss on Eye Movements During Visual Search

    PubMed Central

    Wiecek, Emily; Pasquale, Louis R.; Fiser, Jozsef; Dakin, Steven; Bex, Peter J.

    2012-01-01

    Natural vision involves sequential eye movements that bring the fovea to locations selected by peripheral vision. How peripheral visual field loss (PVFL) affects this process is not well understood. We examine how the location and extent of PVFL affects eye movement behavior in a naturalistic visual search task. Ten patients with PVFL and 13 normally sighted subjects with full visual fields (FVF) completed 30 visual searches monocularly. Subjects located a 4° × 4° target, pseudo-randomly selected within a 26° × 11° natural image. Eye positions were recorded at 50 Hz. Search duration, fixation duration, saccade size, and number of saccades per trial were not significantly different between PVFL and FVF groups (p > 0.1). A χ2 test showed that the distributions of saccade directions for PVFL and FVF subjects were significantly different in 8 out of 10 cases (p < 0.01). Humphrey Visual Field pattern deviations for each subject were compared with the spatial distribution of eye movement directions. There were no significant correlations between saccade directional bias and visual field sensitivity across the 10 patients. Visual search performance was not significantly affected by PVFL. An analysis of eye movement directions revealed that patients with PVFL show a biased directional distribution that was not directly related to the locus of vision loss, challenging feed-forward models of eye movement control. Consequently, many patients do not optimally compensate for visual field loss during visual search. PMID:23162511
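
    A comparison of saccade-direction distributions like the one above can be run as a χ² contingency test on binned direction counts. The counts below are invented and the eight 45° sectors are an assumed binning, purely to show the mechanics.

```python
# Sketch: compare the saccade-direction distribution of one PVFL subject against
# a pooled FVF reference using a chi-square contingency test on 45-degree bins.

import numpy as np
from scipy.stats import chi2_contingency

pvfl_counts = np.array([40, 22, 15, 10, 12, 18, 30, 53])   # saccades per sector
fvf_counts  = np.array([30, 28, 25, 22, 24, 26, 27, 28])   # pooled reference

table = np.vstack([pvfl_counts, fvf_counts])                # 2 x 8 contingency table
stat, p, dof, expected = chi2_contingency(table)
print(f"chi2({dof}) = {stat:.1f}, p = {p:.4f}")
```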

  19. Parallel and serial processes in visual search.

    PubMed

    Thornton, Thomas L; Gilden, David L

    2007-01-01

    A long-standing issue in the study of how people acquire visual information centers around the scheduling and deployment of attentional resources: Is the process serial, or is it parallel? A substantial empirical effort has been dedicated to resolving this issue (e.g., J. M. Wolfe, 1998a, 1998b). However, the results remain largely inconclusive because the methodologies that have historically been used cannot make the necessary distinctions (J. Palmer, 1995; J. T. Townsend, 1972, 1974, 1990). In this article, the authors develop a rigorous procedure for deciding the scheduling problem in visual search by making improvements in both search methodology and data interpretation. The search method, originally used by A. H. C. van der Heijden (1975), generalizes the traditional single-target methodology by permitting multiple targets. Reaction times and error rates from 29 representative search studies were analyzed using Monte Carlo simulation. Parallel and serial models of attention were defined by coupling the appropriate sequential sampling algorithms to realistic constraints on decision making. The authors found that although most searches are conducted by a parallel limited-capacity process, there is a distinguishable search class that is serial. PMID:17227182

  20. Pattern Search Methods for Linearly Constrained Minimization

    NASA Technical Reports Server (NTRS)

    Lewis, Robert Michael; Torczon, Virginia

    1998-01-01

    We extend pattern search methods to linearly constrained minimization. We develop a general class of feasible point pattern search algorithms and prove global convergence to a Karush-Kuhn-Tucker point. As in the case of unconstrained minimization, pattern search methods for linearly constrained problems accomplish this without explicit recourse to the gradient or the directional derivative. Key to the analysis of the algorithms is the way in which the local search patterns conform to the geometry of the boundary of the feasible region.

  1. On the Local Convergence of Pattern Search

    NASA Technical Reports Server (NTRS)

    Dolan, Elizabeth D.; Lewis, Robert Michael; Torczon, Virginia; Bushnell, Dennis M. (Technical Monitor)

    2000-01-01

    We examine the local convergence properties of pattern search methods, complementing the previously established global convergence properties for this class of algorithms. We show that the step-length control parameter which appears in the definition of pattern search algorithms provides a reliable asymptotic measure of first-order stationarity. This gives an analytical justification for a traditional stopping criterion for pattern search methods. Using this measure of first-order stationarity, we analyze the behavior of pattern search in the neighborhood of an isolated local minimizer. We show that a recognizable subsequence converges r-linearly to the minimizer.

  2. Investigation of Neural Strategies of Visual Search

    NASA Technical Reports Server (NTRS)

    Krauzlis, Richard J.

    2003-01-01

    The goal of this project was to measure how neurons in the superior colliculus (SC) change their activity during a visual search task. Specifically, we proposed to measure how the activity of these neurons was altered by the discriminability of visual targets and to test how these changes might predict the changes in the subjects' performance. The primary rationale for this study was that understanding how the information encoded by these neurons constrains overall search performance would foster the development of better models of human performance. Work performed during the period supported by this grant has achieved these aims. First, we have recorded from neurons in the superior colliculus (SC) during a visual search task in which the difficulty of the task and the performance of the subject were systematically varied. The results from these single-neuron physiology experiments show that prior to eye movement onset, the difference in activity across the ensemble of neurons reaches a fixed threshold value, reflecting the operation of a winner-take-all mechanism. Second, we have developed a model of eye movement decisions based on the principle of winner-take-all. The model incorporates the idea that the overt saccade choice reflects only one of the multiple saccades prepared during visual discrimination, consistent with our physiological data. The value of the model is that, unlike previous models, it is able to account for both the latency and the percent correct of saccade choices.

  3. Persistence in eye movement during visual search

    NASA Astrophysics Data System (ADS)

    Amor, Tatiana A.; Reis, Saulo D. S.; Campos, Daniel; Herrmann, Hans J.; Andrade, José S.

    2016-02-01

    As with any cognitive task, visual search involves a number of underlying processes that cannot be directly observed and measured. In this way, the movement of the eyes certainly represents the most explicit and closest connection we can get to the inner mechanisms governing this cognitive activity. Here we show that the process of eye movement during visual search, consisting of sequences of fixations intercalated by saccades, exhibits distinctive persistent behaviors. Initially, by focusing on saccadic directions and intersaccadic angles, we disclose that the probability distributions of these measures show a clear preference of participants towards a reading-like mechanism (geometrical persistence), whose features and potential advantages for searching/foraging are discussed. We then perform a Multifractal Detrended Fluctuation Analysis (MF-DFA) over the time series of jump magnitudes in the eye trajectory and find that it exhibits a typical multifractal behavior arising from the sequential combination of saccades and fixations. By inspecting the time series composed of only fixational movements, our results reveal instead a monofractal behavior with a Hurst exponent above 1/2, which indicates the presence of long-range power-law positive correlations (statistical persistence). We expect that our methodological approach can be adopted as a way to understand persistence and strategy-planning during visual search.

  4. Persistence in eye movement during visual search

    PubMed Central

    Amor, Tatiana A.; Reis, Saulo D. S.; Campos, Daniel; Herrmann, Hans J.; Andrade, José S.

    2016-01-01

    As with any cognitive task, visual search involves a number of underlying processes that cannot be directly observed and measured. In this way, the movement of the eyes certainly represents the most explicit and closest connection we can get to the inner mechanisms governing this cognitive activity. Here we show that the process of eye movement during visual search, consisting of sequences of fixations intercalated by saccades, exhibits distinctive persistent behaviors. Initially, by focusing on saccadic directions and intersaccadic angles, we disclose that the probability distributions of these measures show a clear preference of participants towards a reading-like mechanism (geometrical persistence), whose features and potential advantages for searching/foraging are discussed. We then perform a Multifractal Detrended Fluctuation Analysis (MF-DFA) over the time series of jump magnitudes in the eye trajectory and find that it exhibits a typical multifractal behavior arising from the sequential combination of saccades and fixations. By inspecting the time series composed of only fixational movements, our results reveal instead a monofractal behavior with a Hurst exponent above 1/2, which indicates the presence of long-range power-law positive correlations (statistical persistence). We expect that our methodological approach can be adopted as a way to understand persistence and strategy-planning during visual search. PMID:26864680

  5. Persistence in eye movement during visual search.

    PubMed

    Amor, Tatiana A; Reis, Saulo D S; Campos, Daniel; Herrmann, Hans J; Andrade, José S

    2016-01-01

    As with any cognitive task, visual search involves a number of underlying processes that cannot be directly observed and measured. In this way, the movement of the eyes certainly represents the most explicit and closest connection we can get to the inner mechanisms governing this cognitive activity. Here we show that the process of eye movement during visual search, consisting of sequences of fixations intercalated by saccades, exhibits distinctive persistent behaviors. Initially, by focusing on saccadic directions and intersaccadic angles, we disclose that the probability distributions of these measures show a clear preference of participants towards a reading-like mechanism (geometrical persistence), whose features and potential advantages for searching/foraging are discussed. We then perform a Multifractal Detrended Fluctuation Analysis (MF-DFA) over the time series of jump magnitudes in the eye trajectory and find that it exhibits a typical multifractal behavior arising from the sequential combination of saccades and fixations. By inspecting the time series composed of only fixational movements, our results reveal instead a monofractal behavior with a Hurst exponent above 1/2, which indicates the presence of long-range power-law positive correlations (statistical persistence). We expect that our methodological approach can be adopted as a way to understand persistence and strategy-planning during visual search. PMID:26864680
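
    The monofractal special case of the analysis described in the three records above, ordinary detrended fluctuation analysis, can be sketched briefly. The series below is synthetic white noise (expected exponent near 0.5), standing in for the eye-movement jump magnitudes.

```python
# Sketch of ordinary (monofractal) DFA: integrate the series, detrend it in
# windows of several scales, and read the scaling exponent off a log-log fit.

import numpy as np

def dfa_exponent(x, scales):
    y = np.cumsum(x - np.mean(x))                     # integrated profile
    flucts = []
    for s in scales:
        n_seg = len(y) // s
        segs = y[: n_seg * s].reshape(n_seg, s)
        t = np.arange(s)
        rms = []
        for seg in segs:
            coef = np.polyfit(t, seg, 1)              # local linear detrending
            rms.append(np.sqrt(np.mean((seg - np.polyval(coef, t)) ** 2)))
        flucts.append(np.mean(rms))
    # Slope of log F(s) versus log s gives the scaling (Hurst-like) exponent.
    return np.polyfit(np.log(scales), np.log(flucts), 1)[0]

rng = np.random.default_rng(2)
series = rng.normal(size=4000)                        # uncorrelated toy series
print("alpha ~", dfa_exponent(series, scales=[16, 32, 64, 128, 256]))
```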

  6. Personalized online information search and visualization

    PubMed Central

    Chen, Dongquan; Orthner, Helmuth F; Sell, Susan M

    2005-01-01

    Background The rapid growth of online publications such as Medline and other sources raises the question of how to retrieve relevant information efficiently. It is important for a bench scientist, for example, to monitor related publications constantly, and for a clinician to access patient records anywhere and anytime. Although time-consuming, this kind of searching procedure is usually similar and simple: typically it involves a search engine and a visualization interface, and different words or combinations of terms reflect different research topics. The objective of this study was to automate this tedious procedure by recording those words/terms and online sources in a database and using that information for automated search and retrieval, with the retrieved information made available anytime and anywhere through a secure web server. Results We developed a database that stores search terms, journals, and related parameters, and implemented software for automatically searching medical-subject-heading-indexed sources such as Medline as well as other online sources. The returned information was stored locally, as is, on a server and made visible through a Web-based interface. The search was performed daily or on another schedule, and users could log on to the website at any time without typing any search words. The system also has the potential to retrieve from literature that is not indexed by medical subject headings, or from a privileged information source such as a clinical information system. Issues such as security, presentation, and visualization of the retrieved information were addressed, and wireless access was tested as one of the presentation options. A user survey showed that the personalized online searches saved time and increased relevancy; handheld devices could also be used to access the stored information, but less satisfactorily. Conclusion The Web-searching software or a similar system has the potential to be an efficient tool for both bench scientists and clinicians for their daily information needs. PMID:15766382
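
    A minimal sketch of this kind of scheduled, stored-term retrieval, written against the public NCBI E-utilities esearch service; the saved-search table, query strings, and schedule below are hypothetical, and the original system's database, security, and visualization layers are not reproduced.

        import json
        import urllib.parse
        import urllib.request

        # Hypothetical saved-search table: each profile keeps query strings that
        # are re-run on a schedule (e.g. daily via cron); the new PubMed IDs are
        # stored for later display in a web interface.
        SAVED_SEARCHES = {
            "breast_density": 'mammographic breast density AND "visual search"',
            "eye_tracking":   "radiologist eye tracking mammography",
        }

        EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

        def run_saved_search(term, days_back=1, retmax=50):
            """Query PubMed through the E-utilities esearch service and return
            the matching PubMed IDs published in the last `days_back` days."""
            params = urllib.parse.urlencode({
                "db": "pubmed",
                "term": term,
                "reldate": days_back,   # restrict to recent records
                "datetype": "pdat",
                "retmax": retmax,
                "retmode": "json",
            })
            with urllib.request.urlopen(f"{EUTILS}?{params}", timeout=30) as resp:
                payload = json.load(resp)
            return payload["esearchresult"]["idlist"]

        if __name__ == "__main__":
            for name, term in SAVED_SEARCHES.items():
                ids = run_saved_search(term)
                print(f"{name}: {len(ids)} new PMIDs", ids[:5])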

  7. Parallel Mechanisms for Visual Search in Zebrafish

    PubMed Central

    Proulx, Michael J.; Parker, Matthew O.; Tahir, Yasser; Brennan, Caroline H.

    2014-01-01

    Parallel visual search mechanisms have been reported previously only in mammals and birds, and not in animals lacking an expanded telencephalon such as bees. Here we report the first evidence for parallel visual search in fish using a choice task where the fish had to find a target amongst an increasing number of distractors. Following two-choice discrimination training, zebrafish were presented with the original stimulus within an increasing array of distractor stimuli. We found that zebrafish exhibit no significant change in accuracy and approach latency as the number of distractors increased, providing evidence of parallel processing. This evidence challenges theories of vertebrate neural architecture and the importance of an expanded telencephalon for the evolution of executive function. PMID:25353168

  8. Innate visual learning through spontaneous activity patterns.

    PubMed

    Albert, Mark V; Schnabel, Adam; Field, David J

    2008-01-01

    Patterns of spontaneous activity in the developing retina, LGN, and cortex are necessary for the proper development of visual cortex. With these patterns intact, the primary visual cortices of many newborn animals develop properties similar to those of the adult cortex but without the training benefit of visual experience. Previous models have demonstrated how V1 responses can be initialized through mechanisms specific to development and prior to visual experience, such as using axonal guidance cues or relying on simple, pairwise correlations on spontaneous activity with additional developmental constraints. We argue that these spontaneous patterns may be better understood as part of an "innate learning" strategy, which learns similarly on activity both before and during visual experience. With an abstraction of spontaneous activity models, we show how the visual system may be able to bootstrap an efficient code for its natural environment prior to external visual experience, and we continue the same refinement strategy upon natural experience. The patterns are generated through simple, local interactions and contain the same relevant statistical properties of retinal waves and hypothesized waves in the LGN and V1. An efficient encoding of these patterns resembles a sparse coding of natural images by producing neurons with localized, oriented, bandpass structure-the same code found in early visual cortical cells. We address the relevance of higher-order statistical properties of spontaneous activity, how this relates to a system that may adapt similarly on activity prior to and during natural experience, and how these concepts ultimately relate to an efficient coding of our natural world. PMID:18670593
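
    The "innate learning" idea, namely running the same efficient-coding procedure on spontaneous-activity-like patterns before any natural input arrives, can be sketched with an off-the-shelf sparse dictionary learner; the blob-based stand-in for retinal waves and every parameter value below are assumptions for illustration, not the authors' generative model.

        import numpy as np
        from scipy.ndimage import gaussian_filter
        from sklearn.decomposition import MiniBatchDictionaryLearning

        rng = np.random.default_rng(1)

        def spontaneous_frame(size=64, blobs=6, sigma=4.0):
            """Crude stand-in for a retinal-wave-like activity frame: a few
            smooth blobs of correlated activity on a quiet background."""
            frame = np.zeros((size, size))
            ys, xs = rng.integers(0, size, blobs), rng.integers(0, size, blobs)
            frame[ys, xs] = 1.0
            return gaussian_filter(frame, sigma)

        def sample_patches(n_patches=5000, patch=8, size=64):
            patches = np.empty((n_patches, patch * patch))
            for i in range(n_patches):
                frame = spontaneous_frame(size)
                r, c = rng.integers(0, size - patch, 2)
                p = frame[r:r + patch, c:c + patch].ravel()
                patches[i] = p - p.mean()      # remove local mean activity
            return patches

        # Learn a sparse code on "spontaneous activity" patches; with natural-image
        # patches the same call would continue refining the same dictionary.
        X = sample_patches()
        coder = MiniBatchDictionaryLearning(n_components=49, alpha=1.0,
                                            batch_size=256, random_state=0)
        coder.fit(X)
        print("learned dictionary shape:", coder.components_.shape)   # (49, 64)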

  9. Guided Text Search Using Adaptive Visual Analytics

    SciTech Connect

    Steed, Chad A; Symons, Christopher T; Senter, James K; DeNap, Frank A

    2012-10-01

    This research demonstrates the promise of augmenting interactive visualizations with semi-supervised machine learning techniques to improve the discovery of significant associations and insights in the search and analysis of textual information. More specifically, we have developed a system called Gryffin that hosts a unique collection of techniques that facilitate individualized investigative search pertaining to an ever-changing set of analytical questions over an indexed collection of open-source documents related to critical national infrastructure. The Gryffin client hosts dynamic displays of the search results via focus+context record listings, temporal timelines, term-frequency views, and multiple coordinate views. Furthermore, as the analyst interacts with the display, the interactions are recorded and used to label the search records. These labeled records are then used to drive semi-supervised machine learning algorithms that re-rank the unlabeled search records such that potentially relevant records are moved to the top of the record listing. Gryffin is described in the context of the daily tasks encountered at the US Department of Homeland Security's Fusion Center, with whom we are collaborating in its development. The resulting system is capable of addressing the analysts' information overload that can be directly attributed to the deluge of information that must be addressed in the search and investigative analysis of textual information.
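
    The core loop described above, turning analyst interactions into sparse labels and letting a semi-supervised learner re-rank the remaining records, can be sketched with scikit-learn; this is not the Gryffin implementation, and the toy documents and interaction labels below are invented.

        import numpy as np
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.semi_supervised import LabelSpreading

        # Toy document collection standing in for an indexed open-source corpus.
        docs = [
            "pipeline valve failure reported at water utility control facility",
            "new art exhibit opens downtown this weekend",
            "scada intrusion attempt detected on utility control network",
            "city council debates downtown parking regulations",
            "phishing campaign targets utility control room operators",
            "local bakery wins regional baking award",
        ]

        # Interaction-derived labels: 1 = analyst dwelled on / flagged the record,
        # 0 = skipped, -1 = not yet inspected (unlabeled).
        labels = np.array([1, 0, 1, -1, -1, -1])

        X = TfidfVectorizer().fit_transform(docs).toarray()

        # Propagate the sparse interaction labels to the unlabeled records and
        # use the resulting relevance probability to re-rank the listing.
        model = LabelSpreading(kernel="rbf", gamma=1.0).fit(X, labels)
        relevance = model.predict_proba(X)[:, list(model.classes_).index(1)]

        for score, doc in sorted(zip(relevance, docs), reverse=True):
            print(f"{score:.2f}  {doc}")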

  10. A Visual Search Tool for Early Elementary Science Students.

    ERIC Educational Resources Information Center

    Revelle, Glenda; Druin, Allison; Platner, Michele; Bederson, Ben; Hourcade, Juan Pablo; Sherman, Lisa

    2002-01-01

    Reports on the development of a visual search interface called "SearchKids" to support children ages 5-10 years in their efforts to find animals in a hierarchical information structure. Investigates whether children can construct search queries to conduct complex searches if sufficiently supported both visually and conceptually. (Contains 27…

  11. Race Guides Attention in Visual Search

    PubMed Central

    Otten, Marte

    2016-01-01

    It is known that faces are rapidly and even unconsciously categorized into social groups (black vs. white, male vs. female). Here, I test whether preferences for specific social groups guide attention, using a visual search paradigm. In Experiment 1 participants searched displays of neutral faces for an angry or frightened target face. Black target faces were detected more efficiently than white targets, indicating that black faces attracted more attention. Experiment 2 showed that attention differences between black and white faces were correlated with individual differences in automatic race preference. In Experiment 3, using happy target faces, the attentional preference for black over white faces was eliminated. Taken together, these results suggest that automatic preferences for social groups guide attention to individuals from negatively valenced groups, when people are searching for a negative emotion such as anger or fear. PMID:26900957

  12. Fractal analysis of radiologists' visual scanning pattern in screening mammography

    NASA Astrophysics Data System (ADS)

    Alamudun, Folami T.; Yoon, Hong-Jun; Hudson, Kathy; Morin-Ducote, Garnetta; Tourassi, Georgia

    2015-03-01

    Several researchers have investigated radiologists' visual scanning patterns with respect to features such as total time examining a case, time to initially hit true lesions, number of hits, etc. The purpose of this study was to examine the complexity of the radiologists' visual scanning pattern when viewing 4-view mammographic cases, as they typically do in clinical practice. Gaze data were collected from 10 readers (3 breast imaging experts and 7 radiology residents) while reviewing 100 screening mammograms (24 normal, 26 benign, 50 malignant). The radiologists' scanpaths across the 4 mammographic views were mapped to a single 2-D image plane. Then, fractal analysis was applied on the composite 4-view scanpaths. For each case, the complexity of each radiologist's scanpath was measured using fractal dimension estimated with the box counting method. The association between the fractal dimension of the radiologists' visual scanpath, case pathology, case density, and radiologist experience was evaluated using fixed effects ANOVA. ANOVA showed that the complexity of the radiologists' visual search pattern in screening mammography is dependent on case specific attributes (breast parenchyma density and case pathology) as well as on reader attributes, namely experience level. Visual scanning patterns are significantly different for benign and malignant cases than for normal cases. There is also substantial inter-observer variability which cannot be explained only by experience level.
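
    A minimal sketch of the box-counting estimate used here to quantify scanpath complexity: rasterise the gaze samples at several grid resolutions and fit the log-log slope of occupied-box counts; the random-walk scanpath below is a stand-in for real gaze data.

        import numpy as np

        def box_counting_dimension(points, sizes=(2, 4, 8, 16, 32, 64)):
            """Estimate the fractal (box-counting) dimension of a 2-D scanpath:
            normalise the samples to the unit square, count occupied boxes at
            each grid resolution, and take the slope of log(occupied boxes)
            versus log(grid resolution)."""
            pts = np.asarray(points, dtype=float)
            pts = (pts - pts.min(axis=0)) / np.ptp(pts, axis=0)
            counts = []
            for n_boxes in sizes:
                idx = np.clip((pts * n_boxes).astype(int), 0, n_boxes - 1)
                counts.append(len(np.unique(idx, axis=0)))
            slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
            return slope

        # Example: a random-walk scanpath standing in for one reader's gaze
        # positions mapped onto the composite 4-view image plane.
        rng = np.random.default_rng(2)
        scanpath = np.cumsum(rng.standard_normal((2000, 2)), axis=0)
        print(f"box-counting dimension: {box_counting_dimension(scanpath):.2f}")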

  13. Adding a visualization feature to web search engines: it's time.

    PubMed

    Wong, Pak Chung

    2008-01-01

    It's widely recognized that all Web search engines today are almost identical in presentation layout and behavior. In fact, the same presentation approach has been applied to depicting search engine results pages (SERPs) since the first Web search engine launched in 1993. In this Visualization Viewpoints article, I propose to add a visualization feature to Web search engines and suggest that the new addition can improve search engines' performance and capabilities, which in turn lead to better Web search technology. PMID:19004680

  14. LoyalTracker: Visualizing Loyalty Dynamics in Search Engines.

    PubMed

    Shi, Conglei; Wu, Yingcai; Liu, Shixia; Zhou, Hong; Qu, Huamin

    2014-12-01

    The huge amount of user log data collected by search engine providers creates new opportunities to understand user loyalty and defection behavior at an unprecedented scale. However, this also poses a great challenge to analyze the behavior and glean insights into the complex, large data. In this paper, we introduce LoyalTracker, a visual analytics system to track user loyalty and switching behavior towards multiple search engines from the vast amount of user log data. We propose a new interactive visualization technique (flow view) based on a flow metaphor, which conveys a proper visual summary of the dynamics of user loyalty of thousands of users over time. Two other visualization techniques, a density map and a word cloud, are integrated to enable analysts to gain further insights into the patterns identified by the flow view. Case studies and the interview with domain experts are conducted to demonstrate the usefulness of our technique in understanding user loyalty and switching behavior in search engines. PMID:26356887

  15. Adaptation and visual search in mammographic images.

    PubMed

    Kompaniez-Dunigan, Elysse; Abbey, Craig K; Boone, John M; Webster, Michael A

    2015-05-01

    Radiologists face the visually challenging task of detecting suspicious features within the complex and noisy backgrounds characteristic of medical images. We used a search task to examine whether the salience of target features in x-ray mammograms could be enhanced by prior adaptation to the spatial structure of the images. The observers were not radiologists, and thus had no diagnostic training with the images. The stimuli were randomly selected sections from normal mammograms previously classified with BIRADS Density scores of "fatty" versus "dense," corresponding to differences in the relative quantities of fat versus fibroglandular tissue. These categories reflect conspicuous differences in visual texture, with dense tissue being more likely to obscure lesion detection. The targets were simulated masses corresponding to bright Gaussian spots, superimposed by adding the luminance to the background. A single target was randomly added to each image, with contrast varied over five levels so that they varied from difficult to easy to detect. Reaction times were measured for detecting the target location, before or after adapting to a gray field or to random sequences of a different set of dense or fatty images. Observers were faster at detecting the targets in either dense or fatty images after adapting to the specific background type (dense or fatty) that they were searching within. Thus, the adaptation led to a facilitation of search performance that was selective for the background texture. Our results are consistent with the hypothesis that adaptation allows observers to more effectively suppress the specific structure of the background, thereby heightening visual salience and search efficiency. PMID:25720760

  16. Reader error, object recognition, and visual search

    NASA Astrophysics Data System (ADS)

    Kundel, Harold L.

    2004-05-01

    Small abnormalities such as hairline fractures, lung nodules and breast tumors are missed by competent radiologists with sufficient frequency to make them a matter of concern to the medical community; not only because they lead to litigation but also because they delay patient care. It is very easy to attribute misses to incompetence or inattention. To do so may be placing an unjustified stigma on the radiologists involved and may allow other radiologists to continue a false optimism that it can never happen to them. This review presents some of the fundamentals of visual system function that are relevant to understanding the search for and the recognition of small targets embedded in complicated but meaningful backgrounds like chests and mammograms. It presents a model for visual search that postulates a pre-attentive global analysis of the retinal image followed by foveal checking fixations and eventually discovery scanning. The model will be used to differentiate errors of search, recognition and decision making. The implications for computer aided diagnosis and for functional workstation design are discussed.

  17. A consistent but non-coincident visual pattern facilitates the learning of spatial relations among locations.

    PubMed

    Katz, Scott S; Brown, Michael F; Sturz, Bradley R

    2014-02-01

    Human participants searched in a dynamic three-dimensional computer-generated virtual-environment open-field search task for four hidden goal locations arranged in a diamond configuration located in a 5 × 5 matrix of raised bins. Participants were randomly assigned to one of two groups: visual pattern or visual random. All participants experienced 30 trials in which four goal locations maintained the same spatial relations to each other (i.e., a diamond pattern), but this diamond pattern moved to random locations within the 5 × 5 matrix from trial to trial. For participants in the visual pattern group, four locations were marked in a distinct color and arranged in a diamond pattern that moved to a random location independent of the hidden spatial pattern from trial to trial throughout the experimental session. For participants in the visual random group, four random locations were marked with a distinct color and moved to random locations independent from the hidden spatial pattern from trial to trial throughout the experimental session. As a result, the visual cues for the visual pattern group were consistent but not coincident with the hidden spatial pattern, whereas the visual cues for the visual random group were neither consistent nor coincident with the hidden spatial pattern. Results indicated that participants in both groups learned the spatial configuration of goal locations and that the presence of consistent but noncoincident visual cues facilitated the learning of spatial relations among locations. PMID:23843178

  18. Immediate structured visual search for medical images.

    PubMed

    Simonyan, Karen; Zisserman, Andrew; Criminisi, Antonio

    2011-01-01

    The objective of this work is a scalable, real-time visual search engine for medical images. In contrast to existing systems that retrieve images that are globally similar to a query image, we enable the user to select a query Region Of Interest (ROI) and automatically detect the corresponding regions within all returned images. This allows the returned images to be ranked on the content of the ROI, rather than the entire image. Our contribution is two-fold: (i) immediate retrieval - the data is appropriately pre-processed so that the search engine returns results in real-time for any query image and ROI; (ii) structured output - returning ROIs with a choice of ranking functions. The retrieval performance is assessed on a number of annotated queries for images from the IRMA X-ray dataset and compared to a baseline. PMID:22003711

  19. Visual search performance by paranoid and chronic undifferentiated schizophrenics.

    PubMed

    Portnoff, L A; Yesavage, J A; Acker, M B

    1981-10-01

    Disturbances in attention are among the most frequent cognitive abnormalities in schizophrenia. Recent research has suggested that some schizophrenics have difficulty with visual tracking, which is suggestive of attentional deficits. To investigate differential visual-search performance by schizophrenics, 15 chronic undifferentiated and 15 paranoid schizophrenics were compared with 15 normals on two tests measuring visual search in a systematic and an unsystematic stimulus mode. Chronic schizophrenics showed difficulty with both kinds of visual-search tasks. In contrast, paranoids had only a deficit in the systematic visual-search task. Their ability for visual search in an unsystematized stimulus array was equivalent to that of normals. Although replication and cross-validation is needed to confirm these findings, it appears that the two tests of visual search may provide a useful ancillary method for differential diagnosis between these two types of schizophrenia. PMID:7312527

  20. Transition between different search patterns in human online search behavior

    NASA Astrophysics Data System (ADS)

    Wang, Xiangwen; Pleimling, Michel

    2015-03-01

    We investigate the human online search behavior by analyzing data sets from different search engines. Based on the comparison of the results from several click-through data-sets collected in different years, we observe a transition of the search pattern from a Lévy-flight-like behavior to a Brownian-motion-type behavior as the search engine algorithms improve. This result is consistent with findings in animal foraging processes. A more detailed analysis shows that the human search patterns are more complex than simple Lévy flights or Brownian motions. Notable differences between the behaviors of different individuals can be observed in many quantities. This work is in part supported by the US National Science Foundation through Grant DMR-1205309.
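
    One common way to separate Lévy-flight-like from Brownian-like step records (not necessarily the analysis used in this study) is to fit power-law and exponential models to the step-length tail by maximum likelihood and compare them, for example with AIC; the sketch below does this on synthetic steps.

        import numpy as np

        def loglik_powerlaw(x, xmin):
            """MLE fit of a power-law tail p(x) ~ x^-alpha for x >= xmin."""
            alpha = 1.0 + len(x) / np.sum(np.log(x / xmin))
            ll = np.sum(np.log((alpha - 1) / xmin) - alpha * np.log(x / xmin))
            return alpha, ll

        def loglik_exponential(x, xmin):
            """MLE fit of a shifted exponential tail, the Brownian-like alternative."""
            lam = 1.0 / np.mean(x - xmin)
            ll = np.sum(np.log(lam) - lam * (x - xmin))
            return lam, ll

        def compare_step_models(steps, xmin=1.0):
            x = np.asarray(steps, dtype=float)
            x = x[x >= xmin]
            alpha, ll_pl = loglik_powerlaw(x, xmin)
            lam, ll_exp = loglik_exponential(x, xmin)
            aic_pl, aic_exp = 2 - 2 * ll_pl, 2 - 2 * ll_exp   # one parameter each
            verdict = "heavy-tailed (Levy-like)" if aic_pl < aic_exp else "Brownian-like"
            return alpha, lam, verdict

        # Synthetic check: Pareto steps should come out heavy-tailed,
        # exponential steps Brownian-like.
        rng = np.random.default_rng(3)
        for label, sample in [("pareto steps", rng.pareto(1.5, 5000) + 1.0),
                              ("exponential steps", rng.exponential(2.0, 5000) + 1.0)]:
            alpha, lam, verdict = compare_step_models(sample)
            print(f"{label}: {verdict} (fitted power-law exponent {alpha:.2f})")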

  1. Visual search from lab to clinic and back

    NASA Astrophysics Data System (ADS)

    Wolfe, Jeremy M.

    2014-03-01

    Many of the tasks of medical image perception can be understood as demanding visual search tasks (especially if you happen to be a visual search researcher). Basic research on visual search can tell us quite a lot about how medical image search tasks proceed because even experts have to use the human "search engine" with all its limitations. Humans can only deploy attention to one or a very few items at any one time. Human search is "guided" search. Humans deploy their attention to likely target objects on the basis of the basic visual features of object and on the basis of an understanding of the scene containing those objects. This guidance operates in medical images as well as in the mundane scenes of everyday life. The paper reviews some of the dialogue between medical image perception by experts and visual search as studied in the laboratory.

  2. Long-term visual search: Examining trial-by-trial learning over extended visual search experiences.

    PubMed

    Ericson, Justin; Biggs, Adam; Winkle, Jonathan; Gancayco, Christina; Mitroff, Stephen

    2015-01-01

    Airport security personnel search for a large number of prohibited items that vary in size, shape, color, category-membership, and more. This highly varied search set creates challenges for search accuracy, including how searchers are trained in identifying a myriad of potential targets. This challenge has both practical and theoretical implications (i.e., determining how best to obtain high accuracy, and how large memory sets interact with visual search performance, respectively). Recent research on "hybrid visual and memory search" (e.g., Wolfe, 2012) has begun to address such issues, but many questions remain. The current study addressed a difficult problem for traditional laboratory-based research: how does trial-by-trial learning develop over time for a large number of target types? This issue, which we call "long-term visual search," is key for understanding how reoccurring information is retained in memory so that it can aid future searches. Through the use of "big data" from the mobile application Airport Scanner (Kedlin Co.), it is possible to address such previously intractable questions. Airport Scanner is a game where players serve as airport security officers looking for prohibited items in simulated bags. The game has over 7 million downloads and provides a powerful tool for psychological research (Mitroff et al., 2014 JEP:HPP). Trial-by-trial learning for multiple different targets was addressed by analyzing data from 50,000 participants. Distinct learning curves for each specific target revealed that accuracy rises asymptotically across trials without deteriorating to initially low starting levels. Additionally, an investigation into the number of to-be-searched-for target items indicated that performance accuracy remained high even as the memorized set size increased. The results suggest that items stored in memory generate their own item-specific template that is reinforced from repeated exposures. These findings offer insight into how novices develop into experts at target detection over the course of training. Meeting abstract presented at VSS 2015. PMID:26326796
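
    A sketch of the kind of per-target learning-curve fit described above: accuracy over repeated encounters with one target type is modelled as a saturating exponential and fitted with scipy; the trial counts, accuracies, and parameter values are synthetic, not Airport Scanner data.

        import numpy as np
        from scipy.optimize import curve_fit

        def learning_curve(trial, asymptote, gain, rate):
            """Saturating-exponential accuracy curve: rises from (asymptote - gain)
            toward `asymptote` at learning rate `rate` over repeated encounters."""
            return asymptote - gain * np.exp(-rate * trial)

        # Synthetic per-target data: proportion correct on the n-th encounter with
        # one specific prohibited item (e.g. pooled over many players).
        trials = np.arange(1, 31)
        rng = np.random.default_rng(4)
        observed = learning_curve(trials, 0.92, 0.35, 0.15) + rng.normal(0, 0.02, trials.size)

        params, _ = curve_fit(learning_curve, trials, observed,
                              p0=(0.9, 0.3, 0.1), bounds=([0, 0, 0], [1, 1, 5]))
        asymptote, gain, rate = params
        print(f"asymptotic accuracy ~ {asymptote:.2f}, "
              f"initial accuracy ~ {asymptote - gain:.2f}, rate {rate:.2f}")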

  3. Visual abstraction of complex motion patterns

    NASA Astrophysics Data System (ADS)

    Janetzko, Halldór; Jäckle, Dominik; Deussen, Oliver; Keim, Daniel A.

    2013-12-01

    Today's tracking devices allow high spatial and temporal resolutions and due to their decreasing size also an ever increasing number of application scenarios. However, understanding motion over time is quite difficult as soon as the resulting trajectories are getting complex. Simply plotting the data may obscure important patterns since trajectories over long time periods often include many revisits of the same place which creates a high degree of over-plotting. Furthermore, important details are often hidden due to a combination of large-scale transitions with local and small-scale movement patterns. We present a visualization and abstraction technique for such complex motion data. By analyzing the motion patterns and displaying them with visual abstraction techniques a synergy of aggregation and simplification is reached. The capabilities of the method are shown in real-world applications for tracked animals and discussed with experts from biology. Our proposed abstraction techniques reduce visual clutter and help analysts to understand the movement patterns that are hidden in raw spatiotemporal data.

  4. Activation of phonological competitors in visual search.

    PubMed

    Görges, Frauke; Oppermann, Frank; Jescheniak, Jörg D; Schriefers, Herbert

    2013-06-01

    Recently, Meyer, Belke, Telling and Humphreys (2007) reported that competitor objects with homophonous names (e.g., boy) interfere with identifying a target object (e.g., buoy) in a visual search task, suggesting that an object name's phonology becomes automatically activated even in situations in which participants do not have the intention to speak. The present study explored the generality of this finding by testing a different phonological relation (rhyming object names, e.g., cat-hat) and by varying details of the experimental procedure. Experiment 1 followed the procedure by Meyer et al. Participants were familiarized with target and competitor objects and their names at the beginning of the experiment and the picture of the target object was presented prior to the search display on each trial. In Experiment 2, the picture of the target object presented prior to the search display was replaced by its name. In Experiment 3, participants were not familiarized with target and competitor objects and their names at the beginning of the experiment. A small interference effect from phonologically related competitors was obtained in Experiments 1 and 2 but not in Experiment 3, suggesting that the way the relevant objects are introduced to participants affects the chances of observing an effect from phonologically related competitors. Implications for the information flow in the conceptual-lexical system are discussed. PMID:23584102

  5. Recognition of Facially Expressed Emotions and Visual Search Strategies in Adults with Asperger Syndrome

    ERIC Educational Resources Information Center

    Falkmer, Marita; Bjallmark, Anna; Larsson, Matilda; Falkmer, Torbjorn

    2011-01-01

    Can the disadvantages persons with Asperger syndrome frequently experience with reading facially expressed emotions be attributed to a different visual perception, affecting their scanning patterns? Visual search strategies, particularly regarding the importance of information from the eye area, and the ability to recognise facially expressed…

  6. Searching for consensus patterns on a hypercube

    SciTech Connect

    Guan, X.; Mann, R.C.; Mural, R.; Uberbacher, E.

    1991-01-01

    In DNA sequence analysis, consensus patterns (those that are not precisely conserved in location or with the same sequence of letters) are frequently sought among a number of sequences to find important biological features. Sequential algorithms for finding consensus patterns are time-consuming due to the nature of the inexact occurrences of the patterns. Here we show that consensus pattern search can be done efficiently on the hypercube. We describe our implementation of two algorithms to find consensus patterns on an Intel iPSC/860 Hypercube and various techniques to speed up the computation. 4 refs., 1 fig., 2 tabs.
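
    A serial, brute-force sketch of consensus pattern search: enumerate candidate l-mers and keep those that occur, within a mismatch budget, in enough of the input sequences. The hypercube implementation in the paper distributes this candidate space across processors, which the sketch below (with made-up sequences) does not attempt.

        from itertools import product

        def hamming(a, b):
            return sum(x != y for x, y in zip(a, b))

        def consensus_patterns(sequences, length=4, max_mismatch=1, min_support=None):
            """Return candidate consensus patterns of the given length that occur
            (within `max_mismatch` substitutions) in at least `min_support` of the
            input DNA sequences.  Brute-force serial version; the cited work
            distributes the candidate space across hypercube nodes."""
            if min_support is None:
                min_support = len(sequences)
            hits = []
            for candidate in map("".join, product("ACGT", repeat=length)):
                support = sum(
                    any(hamming(candidate, seq[i:i + length]) <= max_mismatch
                        for i in range(len(seq) - length + 1))
                    for seq in sequences)
                if support >= min_support:
                    hits.append((candidate, support))
            return sorted(hits, key=lambda h: -h[1])

        seqs = ["TTACGTAGGA", "CCTACGATAG", "GGTACCTTAA", "ATTACGTTGC"]
        for pattern, support in consensus_patterns(seqs, length=5, max_mismatch=1)[:5]:
            print(pattern, support)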

  7. Signatures of chaos in animal search patterns.

    PubMed

    Reynolds, Andy M; Bartumeus, Frederic; Kölzsch, Andrea; van de Koppel, Johan

    2016-01-01

    One key objective of the emerging discipline of movement ecology is to link animal movement patterns to underlying biological processes, including those operating at the neurobiological level. Nonetheless, little is known about the physiological basis of animal movement patterns, and the underlying search behaviour. Here we demonstrate the hallmarks of chaotic dynamics in the movement patterns of mud snails (Hydrobia ulvae) moving in controlled experimental conditions, observed in the temporal dynamics of turning behaviour. Chaotic temporal dynamics are known to occur in pacemaker neurons in molluscs, but there have been no studies reporting on whether chaotic properties are manifest in the movement patterns of molluscs. Our results suggest that complex search patterns, like the Lévy walks made by mud snails, can have their mechanistic origins in chaotic neuronal processes. This possibility calls for new research on the coupling between neurobiology and motor properties. PMID:27019951

  9. Visual search behaviour during laparoscopic cadaveric procedures

    NASA Astrophysics Data System (ADS)

    Dong, Leng; Chen, Yan; Gale, Alastair G.; Rees, Benjamin; Maxwell-Armstrong, Charles

    2014-03-01

    Laparoscopic surgery provides a very complex example of medical image interpretation. The task entails: visually examining a display that portrays the laparoscopic procedure from a varying viewpoint; eye-hand coordination; complex 3D interpretation of the 2D display imagery; efficient and safe usage of appropriate surgical tools, as well as other factors. Training in laparoscopic surgery typically entails practice using surgical simulators. Another approach is to use cadavers. Viewing previously recorded laparoscopic operations is also a viable additional approach and to examine this a study was undertaken to determine what differences exist between where surgeons look during actual operations and where they look when simply viewing the same pre-recorded operations. It was hypothesised that there would be differences related to the different experimental conditions; however the relative nature of such differences was unknown. The visual search behaviour of two experienced surgeons was recorded as they performed three types of laparoscopic operations on a cadaver. The operations were also digitally recorded. Subsequently they viewed the recording of their operations, again whilst their eye movements were monitored. Differences were found in various eye movement parameters when the two surgeons performed the operations and where they looked when they simply watched the recordings of the operations. It is argued that this reflects the different perceptual motor skills pertinent to the different situations. The relevance of this for surgical training is explored.

  10. Investigating attention in complex visual search.

    PubMed

    Kovach, Christopher K; Adolphs, Ralph

    2015-11-01

    How we attend to and search for objects in the real world is influenced by a host of low-level and higher-level factors whose interactions are poorly understood. The vast majority of studies approach this issue by experimentally controlling one or two factors in isolation, often under conditions with limited ecological validity. We present a comprehensive regression framework, together with a matlab-implemented toolbox, which allows concurrent factors influencing saccade targeting to be more clearly distinguished. Based on the idea of gaze selection as a point process, the framework allows each putative factor to be modeled as a covariate in a generalized linear model, and its significance to be evaluated with model-based hypothesis testing. We apply this framework to visual search for faces as an example and demonstrate its power in detecting effects of eccentricity, inversion, task congruency, emotional expression, and serial fixation order on the targeting of gaze. Among other things, we find evidence for multiple goal-related and goal-independent processes that operate with distinct visuotopy and time course. PMID:25499190
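
    For illustration, the regression idea can be reduced to a logistic GLM in which fixated candidate locations are positive cases and covariates such as eccentricity, saliency, and task congruency are predictors; the sketch below uses simulated covariates and is far simpler than the point-process framework and MATLAB toolbox the authors describe.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(5)

        # Synthetic covariates for candidate gaze targets: eccentricity from the
        # current fixation, a low-level saliency score, and task congruency (0/1).
        n = 4000
        eccentricity = rng.uniform(0, 15, n)     # degrees of visual angle
        saliency     = rng.uniform(0, 1, n)
        congruent    = rng.integers(0, 2, n)

        # Simulated selection: nearer, more salient, task-congruent items are more
        # likely to be fixated (the "ground truth" the GLM should recover).
        logit = 1.5 * saliency + 0.8 * congruent - 0.25 * eccentricity
        fixated = rng.random(n) < 1 / (1 + np.exp(-logit))

        X = np.column_stack([eccentricity, saliency, congruent])
        glm = LogisticRegression().fit(X, fixated)

        for name, beta in zip(["eccentricity", "saliency", "congruency"], glm.coef_[0]):
            print(f"{name:>12}: {beta:+.2f}")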

  11. The effect of visual entrainment on target detection in visual search.

    PubMed

    Pastuszak, Aleksandra; Hanslmayr, Simon; Shapiro, Kimron

    2015-01-01

    A growing body of research has associated brain oscillations with the cognitive process of selective attention, as well as visual perception. Modulation of alpha frequency band (8-14Hz) has been related to changes in perception and attention with an inverse correlation between alpha amplitude and perceptual ability. To date the association between alpha and target detection has been shown in numerous studies but using only near-threshold stimuli. Here we attempt to study the extent to which visual entrainment at alpha and non-alpha frequencies will affect attention in a more complex, higher level target detection task. In the current experiment subjects took part in a visual search task where they were instructed to find a target among a set of distractors. The visual search display was preceded by a stimulus which either flickered in alpha (10Hz), non-alpha (random flicker sequence with an average of 10 Hz) frequency or was a static control. Given that alpha entrainment has been shown to enhance endogenous alpha levels, we predicted alpha entrainment to increase reaction time (RT) for target detection. On the other hand, as non-alpha stimulation should prevent alpha synchronisation, we predicted shorter RTs in this condition. The results reveal that non-alpha flicker stimulation resulted in significantly faster target detection in the visual search task compared to the control condition. Alpha entrainment gave rise to marginally slower RTs than non-alpha, while at the same time marginally faster than the control. The pattern of results suggests that entrainment allows for quicker responses than in the static control condition. These results are in line with research showing that inhibition of alpha is associated with better attentional and perceptual performance. Meeting abstract presented at VSS 2015. PMID:26326937

  12. Visual search and eye movements in novel and familiar contexts

    NASA Astrophysics Data System (ADS)

    McDermott, Kyle; Mulligan, Jeffrey B.; Bebis, George; Webster, Michael A.

    2006-02-01

    Adapting to the visual characteristics of a specific environment may facilitate detecting novel stimuli within that environment. We monitored eye movements while subjects searched for a color target on familiar or unfamiliar color backgrounds, in order to test for these performance changes and to explore whether they reflect changes in salience from adaptation vs. changes in search strategies or perceptual learning. The target was an ellipse of variable color presented at a random location on a dense background of ellipses. In one condition, the colors of the background varied along either the LvsM or SvsLM cardinal axes. Observers adapted by viewing a rapid succession of backgrounds drawn from one color axis, and then searched for a target on a background from the same or different color axis. Searches were monitored with a Cambridge Research Systems Video Eyetracker. Targets were located more quickly on the background axis that observers were pre-exposed to, confirming that this exposure can improve search efficiency for stimuli that differ from the background. However, eye movement patterns (e.g. fixation durations and saccade magnitudes) did not clearly differ across the two backgrounds, suggesting that how the novel and familiar backgrounds were sampled remained similar. In a second condition, we compared search on a nonselective color background drawn from a circle of hues at fixed contrast. Prior exposure to this background did not facilitate search compared to an achromatic adapting field, suggesting that subjects were not simply learning the specific colors defining the background distributions. Instead, results for both conditions are consistent with a selective adaptation effect that enhances the salience of novel stimuli by partially discounting the background.

  13. Eye Movements Reveal How Task Difficulty Moulds Visual Search

    ERIC Educational Resources Information Center

    Young, Angela H.; Hulleman, Johan

    2013-01-01

    In two experiments we investigated the relationship between eye movements and performance in visual search tasks of varying difficulty. Experiment 1 provided evidence that a single process is used for search among static and moving items. Moreover, we estimated the functional visual field (FVF) from the gaze coordinates and found that its size…

  14. Global Statistical Learning in a Visual Search Task

    ERIC Educational Resources Information Center

    Jones, John L.; Kaschak, Michael P.

    2012-01-01

    Locating a target in a visual search task is facilitated when the target location is repeated on successive trials. Global statistical properties also influence visual search, but have often been confounded with local regularities (i.e., target location repetition). In two experiments, target locations were not repeated for four successive trials,…

  15. Spatial Constraints on Learning in Visual Search: Modeling Contextual Cuing

    ERIC Educational Resources Information Center

    Brady, Timothy F.; Chun, Marvin M.

    2007-01-01

    Predictive visual context facilitates visual search, a benefit termed contextual cuing (M. M. Chun & Y. Jiang, 1998). In the original task, search arrays were repeated across blocks such that the spatial configuration (context) of all of the distractors in a display predicted an embedded target location. The authors modeled existing results using…

  16. Spatiotemporal Segregation in Visual Search: Evidence from Parietal Lesions

    ERIC Educational Resources Information Center

    Olivers, Christian N. L.; Humphreys, Glyn W.

    2004-01-01

    The mechanisms underlying segmentation and selection of visual stimuli over time were investigated in patients with posterior parietal damage. In a modified visual search task, a preview of old objects preceded search of a new set for a target while the old items remained. In Experiment 1, control participants ignored old and prioritized new…

  18. Words, Shape, Visual Search and Visual Working Memory in 3-Year-Old Children

    ERIC Educational Resources Information Center

    Vales, Catarina; Smith, Linda B.

    2015-01-01

    Do words cue children's visual attention, and if so, what are the relevant mechanisms? Across four experiments, 3-year-old children (N=163) were tested in visual search tasks in which targets were cued with only a visual preview versus a visual preview and a spoken name. The experiments were designed to determine whether labels facilitated

  20. Vocal Dynamic Visual Pattern for voice characterization

    NASA Astrophysics Data System (ADS)

    Dajer, M. E.; Andrade, F. A. S.; Montagnoli, A. N.; Pereira, J. C.; Tsuji, D. H.

    2011-12-01

    Voice assessment requires simple and painless exams. Modern technologies provide the necessary resources for voice signal processing. Techniques based on nonlinear dynamics seem to assess the complexity of voice more accurately than other methods. Vocal dynamic visual pattern (VDVP) is based on nonlinear methods and provides qualitative and quantitative information. Here we characterize healthy and Reinke's edema voices by means of perturbation measures and VDVP analysis. VDVP and jitter show different results for the two groups, while amplitude perturbation shows no difference. We suggest that VDVP analysis improves and complements the evaluation methods available for clinicians.
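
    The abstract does not spell out how the VDVP is constructed, but visual patterns of this kind are commonly built from a time-delay (phase-space) embedding of the voice signal; the sketch below embeds a synthetic sustained-vowel-like waveform and should be read as a generic illustration rather than the authors' method.

        import numpy as np
        import matplotlib.pyplot as plt

        def delay_embed(signal, delay, dims=2):
            """Time-delay embedding of a 1-D signal: rows are
            [x(t), x(t + delay), ..., x(t + (dims - 1) * delay)]."""
            n = len(signal) - (dims - 1) * delay
            return np.column_stack([signal[i * delay:i * delay + n] for i in range(dims)])

        # Synthetic sustained-vowel-like waveform: a 150 Hz fundamental with one
        # harmonic, slow jitter, and noise (a stand-in for a recorded voice).
        fs, dur, f0 = 16_000, 0.5, 150.0
        t = np.arange(int(fs * dur)) / fs
        rng = np.random.default_rng(6)
        jitter = 1.0 + 0.01 * rng.standard_normal(t.size).cumsum() / np.sqrt(t.size)
        voice = (np.sin(2 * np.pi * f0 * jitter * t)
                 + 0.4 * np.sin(2 * np.pi * 2 * f0 * jitter * t)
                 + 0.05 * rng.standard_normal(t.size))

        delay = int(fs / f0 / 4)      # roughly a quarter of a glottal cycle
        embedded = delay_embed(voice, delay)

        plt.plot(embedded[:, 0], embedded[:, 1], lw=0.3)
        plt.xlabel("x(t)"); plt.ylabel(f"x(t + {delay} samples)")
        plt.title("Phase-portrait style visual pattern of the voice signal")
        plt.show()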

  1. Visual Search Deficits Are Independent of Magnocellular Deficits in Dyslexia

    ERIC Educational Resources Information Center

    Wright, Craig M.; Conlon, Elizabeth G.; Dyck, Murray

    2012-01-01

    The aim of this study was to investigate the theory that visual magnocellular deficits seen in groups with dyslexia are linked to reading via the mechanisms of visual attention. Visual attention was measured with a serial search task and magnocellular function with a coherent motion task. A large group of children with dyslexia (n = 70) had slower…

  2. Reward and Attentional Control in Visual Search

    PubMed Central

    Anderson, Brian A.; Wampler, Emma K.; Laurent, Patryk A.

    2015-01-01

    It has long been known that the control of attention in visual search depends both on voluntary, top-down deployment according to context-specific goals, and on involuntary, stimulus-driven capture based on the physical conspicuity of perceptual objects. Recent evidence suggests that pairing target stimuli with reward can modulate the voluntary deployment of attention, but there is little evidence that reward modulates the involuntary deployment of attention to task-irrelevant distractors. We report several experiments that investigate the role of reward learning on attentional control. Each experiment involved a training phase and a test phase. In the training phase, different colors were associated with different amounts of monetary reward. In the test phase, color was not task-relevant and participants searched for a shape singleton; in most experiments no reward was delivered in the test phase. We first show that attentional capture by physically salient distractors is magnified by a previous association with reward. In subsequent experiments we demonstrate that physically inconspicuous stimuli previously associated with reward capture attention persistently during extinction—even several days after training. Furthermore, vulnerability to attentional capture by high-value stimuli is negatively correlated across individuals with working memory capacity and positively correlated with trait impulsivity. An analysis of intertrial effects reveals that value-driven attentional capture is spatially specific. Finally, when reward is delivered at test contingent on the task-relevant shape feature, recent reward history modulates value-driven attentional capture by the irrelevant color feature. The influence of learned value on attention may provide a useful model of clinical syndromes characterized by similar failures of cognitive control, including addiction, attention-deficit/hyperactivity disorder, and obesity. PMID:23437631

  3. System reconfiguration, not resource depletion, determines the efficiency of visual search.

    PubMed

    Di Lollo, Vincent; Smilek, Daniel; Kawahara, Jun-Ichiro; Ghorashi, S M Shahab

    2005-08-01

    We examined two theories of visual search: resource depletion, grounded in a static, built-in brain architecture, with attention seen as a limited depletable resource, and system reconfiguration, in which the visual system is dynamically reconfigured from moment to moment so as to optimize performance on the task at hand. In a dual-task paradigm, a search display was preceded by a visual discrimination task and was followed, after a stimulus onset asynchrony (SOA) governed by a staircase procedure, by a pattern mask. Search efficiency, as indexed by the slope of the function relating critical SOA to number of distractors, was impaired under dual-task conditions for tasks that were performed efficiently (shallow search slope) when done singly, but not for tasks performed inefficiently (steep slope) when done singly. These results are consistent with system reconfiguration, but not with resource depletion, models and point to a dynamic, rather than a static, architecture of the visual system. PMID:16396015

  4. Online multiple kernel similarity learning for visual search.

    PubMed

    Xia, Hao; Hoi, Steven C H; Jin, Rong; Zhao, Peilin

    2014-03-01

    Recent years have witnessed a number of studies on distance metric learning to improve visual similarity search in content-based image retrieval (CBIR). Despite their successes, most existing methods on distance metric learning are limited in two aspects. First, they usually assume the target proximity function follows the family of Mahalanobis distances, which limits their capacity of measuring similarity of complex patterns in real applications. Second, they often cannot effectively handle the similarity measure of multimodal data that may originate from multiple resources. To overcome these limitations, this paper investigates an online kernel similarity learning framework for learning kernel-based proximity functions which goes beyond the conventional linear distance metric learning approaches. Based on the framework, we propose a novel online multiple kernel similarity (OMKS) learning method which learns a flexible nonlinear proximity function with multiple kernels to improve visual similarity search in CBIR. We evaluate the proposed technique for CBIR on a variety of image data sets in which encouraging results show that OMKS outperforms the state-of-the-art techniques significantly. PMID:24457509

  5. Online Multiple Kernel Similarity Learning for Visual Search.

    PubMed

    Xia, Hao; Hoi, Steven C H; Jin, Rong; Zhao, Peilin

    2013-08-13

    Recent years have witnessed a number of studies on distance metric learning to improve visual similarity search in Content-Based Image Retrieval (CBIR). Despite their popularity and success, most existing methods on distance metric learning are limited in two aspects. First, they typically assume the target proximity function follows the family of Mahalanobis distances, which limits their capacity of measuring similarity of complex patterns in real applications. Second, they often cannot effectively handle the similarity measure of multi-modal data that may originate from multiple resources. To overcome these limitations, this paper investigates an online kernel ranking framework for learning kernel-based proximity functions, which goes beyond the conventional linear distance metric learning approaches. Based on the framework, we propose a novel Online Multiple Kernel Ranking (OMKR) method, which learns a flexible nonlinear proximity function with multiple kernels to improve visual similarity search in CBIR. We evaluate the proposed technique for CBIR on a variety of image data sets, in which encouraging results show that OMKR outperforms the state-of-the-art techniques significantly. PMID:23959603

  6. Visual search in a forced-choice paradigm

    NASA Technical Reports Server (NTRS)

    Holmgren, J. E.

    1974-01-01

    The processing of visual information was investigated in the context of two visual search tasks. The first was a forced-choice task in which one of two alternative letters appeared in a visual display of from one to five letters. The second task included trials on which neither of the two alternatives was present in the display. Search rates were estimated from the slopes of best linear fits to response latencies plotted as a function of the number of items in the visual display. These rates were found to be much slower than those estimated in yes-no search tasks. This result was interpreted as indicating that the processes underlying visual search in yes-no and forced-choice tasks are not the same.
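
    The search-rate estimate described above is simply the slope of the best linear fit of response latency against display size; a small sketch with invented latencies:

        import numpy as np

        def search_rate(set_sizes, latencies_ms):
            """Search rate (ms per item) and intercept from the best linear fit of
            mean response latency against the number of items in the display."""
            slope, intercept = np.polyfit(set_sizes, latencies_ms, 1)
            return slope, intercept

        # Hypothetical mean latencies for displays of 1-5 letters in a
        # forced-choice task versus a yes-no task.
        set_sizes = np.array([1, 2, 3, 4, 5])
        forced_choice = np.array([540, 610, 675, 745, 810])   # ms
        yes_no        = np.array([450, 480, 515, 540, 570])   # ms

        for label, rt in [("forced-choice", forced_choice), ("yes-no", yes_no)]:
            rate, base = search_rate(set_sizes, rt)
            print(f"{label:>13}: {rate:.0f} ms/item (intercept {base:.0f} ms)")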

  7. Operator-centric design patterns for information visualization software

    NASA Astrophysics Data System (ADS)

    Xie, Zaixian; Guo, Zhenyu; Ward, Matthew O.; Rundensteiner, Elke A.

    2010-01-01

    Design patterns have proven to be a useful means to make the process of designing, developing, and reusing software systems more efficient. In the area of information visualization, researchers have proposed design patterns for different functional components of the visualization pipeline. Since many visualization techniques need to display derived data as well as raw data, the data transformation stage is very important in the pipeline, yet existing design patterns are, in general, not sufficient to implement these data transformation techniques. In this paper, we propose two design patterns, operator-centric transformation and data modifier, to facilitate the design of data transformations for information visualization systems. The key idea is to use operators to describe the data derivation and introduce data modifiers to represent the derived data. We also show that many interaction techniques can be regarded as operators as defined here, thus these two design patterns could support a wide range of visualization techniques. In addition, we describe a third design pattern, modifier-based visual mapping, that can generate visual abstraction via linking data modifiers to visual attributes. We also present a framework based on these three design patterns that supports coordinated multiple views. Several examples of multivariate visualizations are discussed to show that our design patterns and framework can improve the reusability and extensibility of information visualization systems. Finally, we explain how we have ported an existing visualization tool (XmdvTool) from its old data-centric structure to a new structure based on the above design patterns and framework.
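
    A compact illustration of the operator-centric transformation, data-modifier, and modifier-based visual mapping ideas (the original work targets XmdvTool's C++ pipeline; the classes and column names below are hypothetical): an operator derives values from the raw records and attaches them as a named modifier, which a visual mapping can then consume without the raw data ever being rewritten.

        from dataclasses import dataclass, field
        from typing import Dict, List

        @dataclass
        class Dataset:
            """Raw records plus named modifiers holding derived (transformed) data."""
            records: List[dict]
            modifiers: Dict[str, list] = field(default_factory=dict)

        class Operator:
            """Operator-centric transformation: consumes a dataset and attaches its
            derived values as a data modifier instead of rewriting the raw data."""
            name = "identity"
            def apply(self, data: Dataset) -> Dataset:
                raise NotImplementedError

        class ZScoreOperator(Operator):
            def __init__(self, column: str):
                self.column = column
                self.name = f"zscore:{column}"
            def apply(self, data: Dataset) -> Dataset:
                values = [r[self.column] for r in data.records]
                mean = sum(values) / len(values)
                sd = (sum((v - mean) ** 2 for v in values) / len(values)) ** 0.5 or 1.0
                data.modifiers[self.name] = [(v - mean) / sd for v in values]
                return data

        def modifier_based_visual_mapping(data: Dataset, modifier: str, attribute: str):
            """Map a derived modifier onto a visual attribute (here: just report it)."""
            return [{attribute: v} for v in data.modifiers[modifier]]

        ds = Dataset(records=[{"mpg": 21}, {"mpg": 30}, {"mpg": 18}, {"mpg": 27}])
        ds = ZScoreOperator("mpg").apply(ds)
        print(modifier_based_visual_mapping(ds, "zscore:mpg", "color_intensity"))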

  8. Development of logical search and visual scanning in children.

    PubMed

    Dickerson, D J; Goldfield, E C

    1981-11-01

    Children at 5, 6, 7, and 8 years of age were told a story in which a doll traveled through a physical model of a house. The doll had a toy in one room and, then, three rooms after, discovered it to be missing. Tests were made of logical search (i.e., the tendency, when asked to find the toy, to restrict search to the critical area between the rooms). The presence of visual cues that marked the critical area was manipulated, and the visual scanning of these cues was monitored. Children in the two younger groups exhibited virtually no logical search either when cues were absent or present, 7-year-old children showed logical search only when cues were present, and 8-year-old children showed logical search both when cues were absent and present. In the cues-present conditions logical search was related to visual scanning of the cues. PMID:7308741

  9. Visual search using realistic camouflage: countershading is highly effective at deterring search.

    PubMed

    Penacchio, Olivier; Lovell, George; Sanghera, Simon; Cuthill, Innes; Ruxton, Graeme; Harris, Julie

    2015-01-01

    One of the most widespread patterns of colouration in the animal kingdom is countershading, a gradation of colour in which body parts that face a higher light intensity are darker. Countershading may help counterbalance the shadowing created by directional light, and, hence, reduce 3D object recognition via shape-from-shading. There is evidence that other animals, as well as humans, derive information on shape from shading. Here, we assessed experimentally the effect of optimising countershading camouflage on detection speed and accuracy, to explore whether countershading needs to be fine-tuned to achieve crypsis. We used a computational 3D world that included ecologically realistic lighting patterns. We defined 3D scenes with elliptical 'distractor' leaves and an ellipsoid target object. The scenes were rendered with different types of illumination and the target objects were endowed with different levels of camouflage: none at all, a countershading pattern optimized for the light distribution of the scene and target orientation in space, or optimized for a different illuminant. Participants (N=12) were asked to detect the target 3D object in the scene as fast as possible. The results showed a very significant effect of countershading camouflage on detection rate and accuracy. The extent to which the countershading pattern departed from the optimal pattern for the actual lighting condition and orientation of the target object had a strong effect on detection performance. This experiment showed that appropriate countershading camouflage strongly interferes with visual search by decreasing detection rate and accuracy. A field predation experiment using birds, based on similar stimuli, showed similar effects. Taken together, this suggests that countershading obstructs efficient visual search across species and reduces visibility, thus enhancing survival in prey animals that adopt it. Meeting abstract presented at VSS 2015. PMID:26326656

  10. Asynchronous parallel pattern search for nonlinear optimization

    SciTech Connect

    P. D. Hough; T. G. Kolda; V. J. Torczon

    2000-01-01

    Parallel pattern search (PPS) can be quite useful for engineering optimization problems characterized by a small number of variables (say 10--50) and by expensive objective function evaluations such as complex simulations that take from minutes to hours to run. However, PPS, which was originally designed for execution on homogeneous and tightly-coupled parallel machines, is not well suited to the more heterogeneous, loosely-coupled, and even fault-prone parallel systems available today. Specifically, PPS is hindered by synchronization penalties and cannot recover in the event of a failure. The authors introduce a new asynchronous and fault-tolerant parallel pattern search (APPS) method and demonstrate its effectiveness on both simple test problems and some engineering optimization problems.
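
    For orientation, a basic synchronous compass/pattern search is sketched below; the asynchrony and fault tolerance that are the paper's actual contribution are only noted in a comment, and the smooth test objective stands in for an expensive simulation.

        import numpy as np

        def pattern_search(f, x0, step=1.0, shrink=0.5, tol=1e-6, max_iter=2000):
            """Basic (synchronous) compass/pattern search: poll the 2n coordinate
            directions around the current point, move to any improving point,
            otherwise shrink the step.  The asynchronous, fault-tolerant variant in
            the paper farms these polls out to parallel workers instead."""
            x, fx = np.asarray(x0, dtype=float), f(x0)
            n = len(x)
            directions = np.vstack([np.eye(n), -np.eye(n)])
            for _ in range(max_iter):
                improved = False
                for d in directions:
                    trial = x + step * d
                    ft = f(trial)
                    if ft < fx:
                        x, fx, improved = trial, ft, True
                        break
                if not improved:
                    step *= shrink
                    if step < tol:
                        break
            return x, fx

        # Example on a smooth test objective (stand-in for an expensive simulation).
        rosenbrock = lambda v: (1 - v[0]) ** 2 + 100 * (v[1] - v[0] ** 2) ** 2
        x_best, f_best = pattern_search(rosenbrock, [-1.2, 1.0], step=0.5)
        print("best point:", np.round(x_best, 3), "objective:", round(f_best, 5))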

  11. The Serial Process in Visual Search

    ERIC Educational Resources Information Center

    Gilden, David L.; Thornton, Thomas L.; Marusich, Laura R.

    2010-01-01

    The conditions for serial search are described. A multiple target search methodology (Thornton & Gilden, 2007) is used to home in on the simplest target/distractor contrast that effectively mandates a serial scheduling of attentional resources. It is found that serial search is required when (a) targets and distractors are mirror twins, and (b)…

  13. Global Image Dissimilarity in Macaque Inferotemporal Cortex Predicts Human Visual Search Efficiency

    PubMed Central

    Sripati, Arun P.; Olson, Carl R.

    2010-01-01

    Finding a target in a visual scene can be easy or difficult depending on the nature of the distractors. Research in humans has suggested that search is more difficult the more similar the target and distractors are to each other. However, it has not yielded an objective definition of similarity. We hypothesized that visual search performance depends on similarity as determined by the degree to which two images elicit overlapping patterns of neuronal activity in visual cortex. To test this idea, we recorded from neurons in monkey inferotemporal cortex (IT) and assessed visual search performance in humans using pairs of images formed from the same local features in different global arrangements. The ability of IT neurons to discriminate between two images was strongly predictive of the ability of humans to discriminate between them during visual search, accounting overall for 90% of the variance in human performance. A simple physical measure of global similarity – the degree of overlap between the coarse footprints of a pair of images – largely explains both the neuronal and the behavioral results. To explain the relation between population activity and search behavior, we propose a model in which the efficiency of global oddball search depends on contrast-enhancing lateral interactions in high-order visual cortex. PMID:20107054

  14. Do People Take Stimulus Correlations into Account in Visual Search?

    PubMed Central

    Bhardwaj, Manisha; van den Berg, Ronald

    2016-01-01

    In laboratory visual search experiments, distractors are often statistically independent of each other. However, stimuli in more naturalistic settings are often correlated and rarely independent. Here, we examine whether human observers take stimulus correlations into account in orientation target detection. We find that they do, although probably not optimally. In particular, it seems that low distractor correlations are overestimated. Our results might contribute to bridging the gap between artificial and natural visual search tasks. PMID:26963498
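
    To make the manipulation concrete, distractor orientations with a controlled pairwise correlation can be drawn from an equicorrelated multivariate Gaussian, as in the hypothetical sketch below; the item count, standard deviation, zero-degree reference, and correlation value are illustrative and not taken from the study.

    ```python
    import numpy as np

    def correlated_distractors(n_items=4, rho=0.5, sd_deg=15.0, n_trials=1000, seed=0):
        """Draw distractor orientations (deg) whose pairwise correlation is rho,
        using an equicorrelated multivariate Gaussian around a 0-deg reference."""
        rng = np.random.default_rng(seed)
        cov = sd_deg ** 2 * ((1 - rho) * np.eye(n_items)
                             + rho * np.ones((n_items, n_items)))
        return rng.multivariate_normal(np.zeros(n_items), cov, size=n_trials)

    orients = correlated_distractors(rho=0.8)
    print(np.corrcoef(orients[:, 0], orients[:, 1])[0, 1])  # close to 0.8
    ```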

  15. Coarse-to-fine eye movement strategy in visual search.

    PubMed

    Over, E A B; Hooge, I T C; Vlaskamp, B N S; Erkelens, C J

    2007-08-01

    Oculomotor behavior contributes importantly to visual search. Saccadic eye movements can direct the fovea to potentially interesting parts of the visual field. Ensuing stable fixations enable the visual system to analyze those parts. The visual system may use fixation duration and saccadic amplitude as optimizers for visual search performance. Here we investigate whether the time courses of fixation duration and saccade amplitude depend on the subject's knowledge of the search stimulus, in particular target conspicuity. We analyzed 65,000 saccades and fixations in a search experiment for (possibly camouflaged) military vehicles of unknown type and size. Mean saccade amplitude decreased and mean fixation duration increased gradually as a function of the ordinal saccade and fixation number. In addition, we analyzed 162,000 saccades and fixations recorded during a search experiment in which the location of the target was the only unknown. Whether target conspicuity was constant or varied appeared to have minor influence on the time courses of fixation duration and saccade amplitude. We hypothesize an intrinsic coarse-to-fine strategy for visual search that is even used when such a strategy is not optimal. PMID:17617434

  16. Visual Search by Children with and without ADHD

    ERIC Educational Resources Information Center

    Mullane, Jennifer C.; Klein, Raymond M.

    2008-01-01

    Objective: To summarize the literature that has employed visual search tasks to assess automatic and effortful selective visual attention in children with and without ADHD. Method: Seven studies with a combined sample of 180 children with ADHD (M age = 10.9) and 193 normally developing children (M age = 10.8) are located. Results: Using a

  17. Conjunctive Visual Search in Individuals with and without Mental Retardation

    ERIC Educational Resources Information Center

    Carlin, Michael; Chrysler, Christina; Sullivan, Kate

    2007-01-01

    A comprehensive understanding of the basic visual and cognitive abilities of individuals with mental retardation is critical for understanding the basis of mental retardation and for the design of remediation programs. We assessed visual search abilities in individuals with mild mental retardation and in MA- and CA-matched comparison groups. Our…

  19. Changing Perspective: Zooming in and out during Visual Search

    ERIC Educational Resources Information Center

    Solman, Grayden J. F.; Cheyne, J. Allan; Smilek, Daniel

    2013-01-01

    Laboratory studies of visual search are generally conducted in contexts with a static observer vantage point, constrained by a fixation cross or a headrest. In contrast, in many naturalistic search settings, observers freely adjust their vantage point by physically moving through space. In two experiments, we evaluate behavior during free vantage…

  20. Pip and Pop: Nonspatial Auditory Signals Improve Spatial Visual Search

    ERIC Educational Resources Information Center

    Van der Burg, Erik; Olivers, Christian N. L.; Bronkhorst, Adelbert W.; Theeuwes, Jan

    2008-01-01

    Searching for an object within a cluttered, continuously changing environment can be a very time-consuming process. The authors show that a simple auditory pip drastically decreases search times for a synchronized visual object that is normally very difficult to find. This effect occurs even though the pip contains no information on the location…

  1. Emotional expressions and visual search efficiency: specificity and effects of anxiety symptoms.

    PubMed

    Olatunji, Bunmi O; Ciesielski, Bethany G; Armstrong, Thomas; Zald, David H

    2011-10-01

    Although facial expressions are thought to vary in their functional impact on perceivers, experimental demonstrations of the differential effects of facial expressions on behavior are lacking. In the present study, we examined the effects of exposure to facial expressions on visual search efficiency. Participants (n = 31) searched for a target in a 12-location circle array after exposure to an angry, disgusted, fearful, happy, or neutral facial expression for 100 ms or 500 ms. Consistent with predictions, exposure to a fearful expression prior to visual search resulted in faster target identification compared to exposure to other facial expressions. The effects of other facial expressions on visual search did not differ from each other. The fear-facilitating effect on visual search efficiency was observed at 500-ms but not at 100-ms presentations, suggesting a specific temporal course of the facilitation. Subsequent analysis also revealed that individual differences in fear of negative evaluation, trait anxiety, and obsessive-compulsive symptoms possess a differential pattern of association with visual search efficiency. The experimental and clinical implications of these findings are discussed. PMID:21517160

  2. Individual Differences and Metacognitive Knowledge of Visual Search Strategy

    PubMed Central

    Proulx, Michael J.

    2011-01-01

    A crucial ability for an organism is to orient toward important objects and to ignore temporarily irrelevant objects. Attention provides the perceptual selectivity necessary to filter an overwhelming input of sensory information to allow for efficient object detection. Although much research has examined visual search and the ‘template’ of attentional set that allows for target detection, the behavior of individual subjects often reveals the limits of experimental control of attention. Few studies have examined important aspects such as individual differences and metacognitive strategies. The present study analyzes the data from two visual search experiments for a conjunctively defined target (Proulx, 2007). The data revealed attentional capture blindness, individual differences in search strategies, and a significant rate of metacognitive errors for the assessment of the strategies employed. These results highlight a challenge for visual attention studies to account for individual differences in search behavior and distractibility, and participants that do not (or are unable to) follow instructions. PMID:22066030

  3. Searching for inhibition of return in visual search: a review.

    PubMed

    Wang, Zhiguo; Klein, Raymond M

    2010-01-01

    Studies that followed the covert and overt probe-following-search paradigms of Klein (1988) and Klein and MacInnes (1999) to explore inhibition of return (IOR) in search are analyzed and evaluated. An IOR effect is consistently observed when the search display (or scene) remains visible during probing, and it lasts for at least 1000 ms or about four previously inspected items (or locations). These findings support the idea that IOR facilitates foraging by discouraging orienting toward previously examined regions and items. Methodological and conceptual issues are discussed, leading to methodological recommendations and suggestions for experimentation. PMID:19932128

  4. A Summary Statistic Representation in Peripheral Vision Explains Visual Search

    PubMed Central

    Rosenholtz, Ruth; Huang, Jie; Raj, Alvin; Balas, Benjamin J.; Ilie, Livia

    2014-01-01

    Vision is an active process: we repeatedly move our eyes to seek out objects of interest and explore our environment. Visual search experiments capture aspects of this process, by having subjects look for a target within a background of distractors. Search speed often correlates with target-distractor discriminability; search is faster when the target and distractors look quite different. However, there are notable exceptions. A given discriminability can yield efficient searches (where the target seems to “pop-out”) as well as inefficient ones (where additional distractors make search significantly slower and more difficult). Search is often more difficult when finding the target requires distinguishing a particular configuration or conjunction of features. Search asymmetries abound. These puzzling results have fueled three decades of theoretical and experimental studies. We argue that the key issue in search is the processing of image patches in the periphery, where visual representation is characterized by summary statistics computed over a sizable pooling region. By quantifying these statistics, we predict a set of classic search results, as well as peripheral discriminability of crowded patches such as those found in search displays. PMID:22523401
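
    As a schematic illustration of the pooling idea only (not the authors' model, which pools a much richer set of texture statistics over overlapping regions), the sketch below computes simple first-order statistics in regions whose size grows with eccentricity; the grid spacing, base size, and growth rate are arbitrary placeholder values.

    ```python
    import numpy as np

    def pooled_summary_stats(image, fix_y, fix_x, spacing=16, base=8, growth=0.5):
        """Schematic stand-in for a peripheral summary-statistic representation:
        at a grid of locations, pool over a neighborhood whose size grows with
        eccentricity (distance from fixation) and keep only simple statistics
        (mean, std)."""
        h, w = image.shape
        stats = []
        for cy in range(0, h, spacing):
            for cx in range(0, w, spacing):
                ecc = np.hypot(cx - fix_x, cy - fix_y)
                half = int(base + growth * ecc) // 2   # pooling radius grows with eccentricity
                patch = image[max(cy - half, 0):cy + half + 1,
                              max(cx - half, 0):cx + half + 1]
                stats.append((cy, cx, patch.mean(), patch.std()))
        return stats
    ```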

  5. Visual Search in a Multi-Element Asynchronous Dynamic (MAD) World

    ERIC Educational Resources Information Center

    Kunar, Melina A.; Watson, Derrick G.

    2011-01-01

    In visual search tasks participants search for a target among distractors in strictly controlled displays. We show that visual search principles observed in these tasks do not necessarily apply in more ecologically valid search conditions, using dynamic and complex displays. A multi-element asynchronous dynamic (MAD) visual search was developed in…

  6. Hiding and finding: the relationship between visual concealment and visual search.

    PubMed

    Smilek, Daniel; Weinheimer, Laura; Kwan, Donna; Reynolds, Mike; Kingstone, Alan

    2009-11-01

    As an initial step toward developing a theory of visual concealment, we assessed whether people would use factors known to influence visual search difficulty when the degree of concealment of objects among distractors was varied. In Experiment 1, participants arranged search objects (shapes, emotional faces, and graphemes) to create displays in which the targets were in plain sight but were either easy or hard to find. Analyses of easy and hard displays created during Experiment 1 revealed that the participants reliably used factors known to influence search difficulty (e.g., eccentricity, target-distractor similarity, presence/absence of a feature) to vary the difficulty of search across displays. In Experiment 2, a new participant group searched for the targets in the displays created by the participants in Experiment 1. Results indicated that search was more difficult in the hard than in the easy condition. In Experiments 3 and 4, participants used presence versus absence of a feature to vary search difficulty with several novel stimulus sets. Taken together, the results reveal a close link between the factors that govern concealment and the factors known to influence search difficulty, suggesting that a visual search theory can be extended to form the basis of a theory of visual concealment. PMID:19933563

  7. The effect of face inversion on the detection of emotional faces in visual search.

    PubMed

    Savage, Ruth A; Lipp, Ottmar V

    2015-01-01

    Past literature has indicated that face inversion either attenuates emotion detection advantages in visual search, implying that detection of emotional expressions requires holistic face processing, or has no effect, implying that expression detection is feature based. Across six experiments that utilised different task designs, ranging from simple (single poser, single set size) to complex (multiple posers, multiple set sizes), and stimuli drawn from different databases, significant emotion detection advantages were found for both upright and inverted faces. Consistent with past research, the nature of the expression detection advantage, anger superiority (Experiments 1, 2 and 6) or happiness superiority (Experiments 3, 4 and 5), differed across stimulus sets. However both patterns were evident for upright and inverted faces. These results indicate that face inversion does not interfere with visual search for emotional expressions, and suggest that expression detection in visual search may rely on feature-based mechanisms. PMID:25229360

  8. The impact of expert visual guidance on trainee visual search strategy, visual attention and motor skills.

    PubMed

    Leff, Daniel R; James, David R C; Orihuela-Espina, Felipe; Kwok, Ka-Wai; Sun, Loi Wah; Mylonas, George; Athanasiou, Thanos; Darzi, Ara W; Yang, Guang-Zhong

    2015-01-01

    Minimally invasive and robotic surgery changes the capacity for surgical mentors to guide their trainees with the control customary to open surgery. This neuroergonomic study aims to assess a "Collaborative Gaze Channel" (CGC); which detects trainer gaze-behavior and displays the point of regard to the trainee. A randomized crossover study was conducted in which twenty subjects performed a simulated robotic surgical task necessitating collaboration either with verbal (control condition) or visual guidance with CGC (study condition). Trainee occipito-parietal (O-P) cortical function was assessed with optical topography (OT) and gaze-behavior was evaluated using video-oculography. Performance during gaze-assistance was significantly superior [biopsy number: (mean ± SD): control = 5.6 ± 1.8 vs. CGC = 6.6 ± 2.0; p < 0.05] and was associated with significantly lower O-P cortical activity [ΔHbO2 mMol × cm [median (IQR)] control = 2.5 (12.0) vs. CGC 0.63 (11.2), p < 0.001]. A random effect model (REM) confirmed the association between guidance mode and O-P excitation. Network cost and global efficiency were not significantly influenced by guidance mode. A gaze channel enhances performance, modulates visual search, and alleviates the burden in brain centers subserving visual attention and does not induce changes in the trainee's O-P functional network observable with the current OT technique. The results imply that through visual guidance, attentional resources may be liberated, potentially improving the capability of trainees to attend to other safety critical events during the procedure. PMID:26528160

  9. The impact of expert visual guidance on trainee visual search strategy, visual attention and motor skills

    PubMed Central

    Leff, Daniel R.; James, David R. C.; Orihuela-Espina, Felipe; Kwok, Ka-Wai; Sun, Loi Wah; Mylonas, George; Athanasiou, Thanos; Darzi, Ara W.; Yang, Guang-Zhong

    2015-01-01

    Minimally invasive and robotic surgery changes the capacity for surgical mentors to guide their trainees with the control customary to open surgery. This neuroergonomic study aims to assess a “Collaborative Gaze Channel” (CGC); which detects trainer gaze-behavior and displays the point of regard to the trainee. A randomized crossover study was conducted in which twenty subjects performed a simulated robotic surgical task necessitating collaboration either with verbal (control condition) or visual guidance with CGC (study condition). Trainee occipito-parietal (O-P) cortical function was assessed with optical topography (OT) and gaze-behavior was evaluated using video-oculography. Performance during gaze-assistance was significantly superior [biopsy number: (mean ± SD): control = 5.6 ± 1.8 vs. CGC = 6.6 ± 2.0; p < 0.05] and was associated with significantly lower O-P cortical activity [ΔHbO2 mMol × cm [median (IQR)] control = 2.5 (12.0) vs. CGC 0.63 (11.2), p < 0.001]. A random effect model (REM) confirmed the association between guidance mode and O-P excitation. Network cost and global efficiency were not significantly influenced by guidance mode. A gaze channel enhances performance, modulates visual search, and alleviates the burden in brain centers subserving visual attention and does not induce changes in the trainee’s O-P functional network observable with the current OT technique. The results imply that through visual guidance, attentional resources may be liberated, potentially improving the capability of trainees to attend to other safety critical events during the procedure. PMID:26528160

  10. Parallel and Serial Processes in Visual Search

    ERIC Educational Resources Information Center

    Thornton, Thomas L.; Gilden, David L.

    2007-01-01

    A long-standing issue in the study of how people acquire visual information centers around the scheduling and deployment of attentional resources: Is the process serial, or is it parallel? A substantial empirical effort has been dedicated to resolving this issue. However, the results remain largely inconclusive because the methodologies that have…

  11. Visual Search and the Collapse of Categorization

    ERIC Educational Resources Information Center

    Smith, J. David; Redford, Joshua S.; Gent, Lauren C.; Washburn, David A.

    2005-01-01

    Categorization researchers typically present single objects to be categorized. But real-world categorization often involves object recognition within complex scenes. It is unknown how the processes of categorization stand up to visual complexity or why they fail facing it. The authors filled this research gap by blending the categorization and…

  12. Design and Implementation of Cancellation Tasks for Visual Search Strategies and Visual Attention in School Children

    ERIC Educational Resources Information Center

    Wang, Tsui-Ying; Huang, Ho-Chuan; Huang, Hsiu-Shuang

    2006-01-01

    We propose a computer-assisted cancellation test system (CACTS) to understand the visual attention performance and visual search strategies in school children. The main aim of this paper is to present our design and development of the CACTS and demonstrate some ways in which computer techniques can allow the educator not only to obtain more…

  13. Patterns of Bibliographic Searching among Israeli High School Students.

    ERIC Educational Resources Information Center

    Shoham, Snunith; Getz, Irith

    1988-01-01

    This study looked for patterns in the search tactics and behavior of Israeli high school students. A sample of 200 students that were engaged in final projects filled out questionnaires to reconstruct their bibliographic searches. The relation between search patterns and student characteristics--bibliographic instruction, project adviser, and home…

  14. Conjunctive visual search in individuals with and without mental retardation.

    PubMed

    Carlin, Michael; Chrysler, Christina; Sullivan, Kate

    2007-01-01

    A comprehensive understanding of the basic visual and cognitive abilities of individuals with mental retardation is critical for understanding the basis of mental retardation and for the design of remediation programs. We assessed visual search abilities in individuals with mild mental retardation and in MA- and CA-matched comparison groups. Our goal was to determine the effect of decreasing target-distracter disparities on visual search efficiency. Results showed that search rates for the group with mental retardation and the MA-matched comparisons were more negatively affected by decreasing disparities than were those of the CA-matched group. The group with mental retardation and the MA-matched group performed similarly on all tasks. Implications for theory and application are discussed. PMID:17181391

  15. Synaesthetic colours do not camouflage form in visual search

    PubMed Central

    Gheri, C; Chopping, S; Morgan, M.J

    2008-01-01

    One of the major issues in synaesthesia research is to identify the level of processing involved in the formation of the subjective colours experienced by synaesthetes: are they perceptual phenomena or are they due to memory and association learning? To address this question, we tested whether the colours reported by a group of grapheme-colour synaesthetes (previously studied in a functional magnetic resonance imaging experiment) influenced them in a visual search task. As well as using a condition where synaesthetic colours should have aided visual search, we introduced a condition where the colours experienced by synaesthetes would be expected to make them worse than controls. We found no evidence for differences between synaesthetes and normal controls, either when colours should have helped them or when they should have hindered. We conclude that the colours reported by our population of synaesthetes are not equivalent to perceptual signals, but arise at a cognitive level where they are unable to affect visual search. PMID:18182374

  16. Learned face-voice pairings facilitate visual search

    PubMed Central

    Zweig, L. Jacob; Suzuki, Satoru; Grabowecky, Marcia

    2014-01-01

    Voices provide a rich source of information that is important for identifying individuals and for social interaction. During search for a face in a crowd, voices often accompany visual information and they facilitate localization of the sought individual. However, it is unclear whether this facilitation occurs primarily because the voice cues the location of the face or because it also increases the salience of the associated face. Here we demonstrate that a voice that provides no location information nonetheless facilitates visual search for an associated face. We trained novel face/voice associations and verified learning using a two-alternative forced-choice task in which participants had to correctly match a presented voice to the associated face. Following training, participants searched for a previously learned target face among other faces while hearing one of the following sounds (localized at the center of the display): a congruent-learned voice, an incongruent but familiar voice, an unlearned and unfamiliar voice, or a time-reversed voice. Only the congruent-learned voice speeded visual search for the associated face. This result suggests that voices facilitate visual detection of associated faces, potentially by increasing their visual salience, and that the underlying crossmodal associations can be established through brief training. PMID:25023955

  17. Losing the trees for the forest in dynamic visual search.

    PubMed

    Jardine, Nicole L; Moore, Cathleen M

    2016-05-01

    Representing temporally continuous objects across change (e.g., in position) requires integration of newly sampled visual information with existing object representations. We asked what consequences representational updating has for visual search. In this dynamic visual search task, bars rotated around their central axis. Observers searched for a single episodic target state (oblique bar among vertical and horizontal bars). Search was efficient when the target display was presented as an isolated static display. Performance declined to near chance, however, when the same display was a single state of a dynamically changing scene (Experiment 1), as though temporal selection of the target display from the stream of stimulation failed entirely (Experiment 3). The deficit is attributable neither to masking (Experiment 2), nor to a lack of a temporal marker for the target display (Experiment 4). The deficit was partially reduced by visually marking the target display with unique feature information (Experiment 5). We suggest that representational updating causes a loss of access to instantaneous state information in search. Similar to spatially crowded displays that are perceived as textures (Parkes, Lund, Angelucci, Solomon, & Morgan, 2001), we propose a temporal version of the trees (instantaneous orientation information) being lost for the forest (rotating bars). PMID:26689307

  18. Functional MRI mapping of visual function and selective attention for performance assessment and presurgical planning using conjunctive visual search

    PubMed Central

    Parker, Jason G; Zalusky, Eric J; Kirbas, Cemil

    2014-01-01

    Background Accurate mapping of visual function and selective attention using fMRI is important in the study of human performance as well as in presurgical treatment planning of lesions in or near visual centers of the brain. Conjunctive visual search (CVS) is a useful tool for mapping visual function during fMRI because of its greater activation extent compared with high-capacity parallel search processes. Aims The purpose of this work was to develop and evaluate a CVS that was capable of generating consistent activation in the basic and higher level visual areas of the brain by using a high number of distractors as well as an optimized contrast condition. Materials and methods Images from 10 healthy volunteers were analyzed and brain regions of greatest activation and deactivation were determined using a nonbiased decomposition of the results at the hemisphere, lobe, and gyrus levels. The results were quantified in terms of activation and deactivation extent and mean z-statistic. Results The proposed CVS was found to generate robust activation of the occipital lobe, as well as regions in the middle frontal gyrus associated with coordinating eye movements and in regions of the insula associated with task-level control and focal attention. As expected, the task demonstrated deactivation patterns commonly implicated in the default-mode network. Further deactivation was noted in the posterior region of the cerebellum, most likely associated with the formation of optimal search strategy. Conclusion We believe the task will be useful in studies of visual and selective attention in the neuroscience community as well as in mapping visual function in clinical fMRI. PMID:24683515

  19. Visual Exploratory Search of Relationship Graphs on Smartphones

    PubMed Central

    Ouyang, Jianquan; Zheng, Hao; Kong, Fanbin; Liu, Tianming

    2013-01-01

    This paper presents a novel framework for Visual Exploratory Search of Relationship Graphs on Smartphones (VESRGS) that is composed of three major components: inference and representation of semantic relationship graphs on the Web via meta-search, visual exploratory search of relationship graphs through both querying and browsing strategies, and human-computer interactions via the multi-touch interface and mobile Internet on smartphones. In comparison with traditional lookup search methodologies, the proposed VESRGS system is characterized with the following perceived advantages. 1) It infers rich semantic relationships between the querying keywords and other related concepts from large-scale meta-search results from Google, Yahoo! and Bing search engines, and represents semantic relationships via graphs; 2) the exploratory search approach empowers users to naturally and effectively explore, adventure and discover knowledge in a rich information world of interlinked relationship graphs in a personalized fashion; 3) it effectively takes the advantages of smartphones’ user-friendly interfaces and ubiquitous Internet connection and portability. Our extensive experimental results have demonstrated that the VESRGS framework can significantly improve the users’ capability of seeking the most relevant relationship information to their own specific needs. We envision that the VESRGS framework can be a starting point for future exploration of novel, effective search strategies in the mobile Internet era. PMID:24223936

  20. Visual exploratory search of relationship graphs on smartphones.

    PubMed

    Ouyang, Jianquan; Zheng, Hao; Kong, Fanbin; Liu, Tianming

    2013-01-01

    This paper presents a novel framework for Visual Exploratory Search of Relationship Graphs on Smartphones (VESRGS) that is composed of three major components: inference and representation of semantic relationship graphs on the Web via meta-search, visual exploratory search of relationship graphs through both querying and browsing strategies, and human-computer interactions via the multi-touch interface and mobile Internet on smartphones. In comparison with traditional lookup search methodologies, the proposed VESRGS system is characterized with the following perceived advantages. 1) It infers rich semantic relationships between the querying keywords and other related concepts from large-scale meta-search results from Google, Yahoo! and Bing search engines, and represents semantic relationships via graphs; 2) the exploratory search approach empowers users to naturally and effectively explore, adventure and discover knowledge in a rich information world of interlinked relationship graphs in a personalized fashion; 3) it effectively takes the advantages of smartphones' user-friendly interfaces and ubiquitous Internet connection and portability. Our extensive experimental results have demonstrated that the VESRGS framework can significantly improve the users' capability of seeking the most relevant relationship information to their own specific needs. We envision that the VESRGS framework can be a starting point for future exploration of novel, effective search strategies in the mobile Internet era. PMID:24223936

  1. Perceptual basis of redundancy gains in visual pop-out search.

    PubMed

    Töllner, Thomas; Zehetleitner, Michael; Krummenacher, Joseph; Müller, Hermann J

    2011-01-01

    The redundant-signals effect (RSE) refers to a speed-up of RT when the response is triggered by two, rather than just one, response-relevant target elements. Although there is agreement that in the visual modality RSEs observed with dimensionally redundant signals originating from the same location are generated by coactive processing architectures, there has been a debate as to the exact stage(s)--preattentive versus postselective--of processing at which coactivation arises. To determine the origin(s) of redundancy gains in visual pop-out search, the present study combined mental chronometry with electrophysiological markers that reflect purely preattentive perceptual (posterior-contralateral negativity [PCN]), preattentive and postselective perceptual plus response selection-related (stimulus-locked lateralized readiness potential [LRP]), or purely response production-related processes (response-locked LRP). As expected, there was an RSE on target detection RTs, with evidence for coactivation. At the electrophysiological level, this pattern was mirrored by an RSE in PCN latencies, whereas stimulus-locked LRP latencies showed no RSE over and above the PCN effect. Also, there was no RSE on the response-locked LRPs. This pattern demonstrates a major contribution of preattentive perceptual processing stages to the RSE in visual pop-out search, consistent with parallel-coactive coding of target signals in multiple visual dimensions [Müller, H. J., Heller, D., & Ziegler, J. Visual search for singleton feature targets within and across feature dimensions. PMID:20044891
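
    Coactivation in redundant-target designs is commonly assessed against Miller's race-model inequality; the sketch below illustrates that standard test, although the record does not state that this exact analysis was used here. The time grid and variable names are illustrative.

    ```python
    import numpy as np

    def race_model_violation(rt_single_a, rt_single_b, rt_redundant, t_grid=None):
        """Check Miller's race-model inequality: under coactivation, the redundant-target
        CDF should exceed the sum of the two single-target CDFs at some time t."""
        if t_grid is None:
            t_grid = np.linspace(150, 1000, 200)  # ms
        cdf = lambda rts, t: np.mean(np.asarray(rts)[:, None] <= t, axis=0)
        bound = np.clip(cdf(rt_single_a, t_grid) + cdf(rt_single_b, t_grid), 0, 1)
        violation = cdf(rt_redundant, t_grid) - bound
        return t_grid, violation   # positive values indicate a race-model violation
    ```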

  2. Rapid Resumption of Interrupted Search Is Independent of Age-Related Improvements in Visual Search

    ERIC Educational Resources Information Center

    Lleras, Alejandro; Porporino, Mafalda; Burack, Jacob A.; Enns, James T.

    2011-01-01

    In this study, 7-19-year-olds performed an interrupted visual search task in two experiments. Our question was whether the tendency to respond within 500 ms after a second glimpse of a display (the "rapid resumption" effect ["Psychological Science", 16 (2005) 684-688]) would increase with age in the same way as overall search efficiency. The…

  3. Measuring Search Efficiency in Complex Visual Search Tasks: Global and Local Clutter

    ERIC Educational Resources Information Center

    Beck, Melissa R.; Lohrenz, Maura C.; Trafton, J. Gregory

    2010-01-01

    Set size and crowding affect search efficiency by limiting attention for recognition and attention against competition; however, these factors can be difficult to quantify in complex search tasks. The current experiments use a quantitative measure of the amount and variability of visual information (i.e., clutter) in highly complex stimuli (i.e.,…

  4. The effect of a visual indicator on rate of visual search: Evidence for processing control

    NASA Technical Reports Server (NTRS)

    Holmgren, J. E.

    1974-01-01

    Search rates were estimated from response latencies in a visual search task of the type used by Atkinson et al. (1969), in which a subject searches a small set of letters to determine the presence or absence of a predesignated target. Half of the visual displays contained a marker above one of the letters. The marked letter was the only one that had to be checked to determine whether or not the display contained the target. The presence of a marker in a display significantly increased the estimated rate of search, but the data clearly indicated that subjects did not restrict processing to the marked item. Letters in the vicinity of the marker were also processed. These results were interpreted as showing that subjects are able to exercise some degree of control over the search process in this type of task.

  5. Visual Empirical Region of Influence (VERI) Pattern Recognition Algorithms

    SciTech Connect

    Osbourn, Gordon C.; Martinez, Rubel F.; Bartholomew, John W.

    2002-05-01

    We developed new pattern recognition (PR) algorithms based on a human visual perception model. We named these algorithms Visual Empirical Region of Influence (VERI) algorithms. To compare the new algorithm's effectiveness against other PR algorithms, we benchmarked their clustering capabilities with a standard set of two-dimensional data that is well known in the PR community. The VERI algorithm succeeded in clustering all the data correctly. No existing algorithm had previously clustered all the patterns in the data set successfully. The commands to execute VERI algorithms are quite difficult to master when executed from a DOS command line. The algorithm requires several parameters to operate correctly. From our own experience we realized that if we wanted to provide a new data analysis tool to the PR community, we would have to make the tool powerful, yet easy and intuitive to use. That was our motivation for developing graphical user interfaces (GUIs) to the VERI algorithms. We developed GUIs to control the VERI algorithm in a single-pass mode and in an optimization mode. We also developed a visualization technique that allows users to graphically animate and visually inspect multi-dimensional data after it has been classified by the VERI algorithms. The visualization package is integrated into the single-pass interface. Both the single-pass interface and the optimization interface are part of the PR software package we have developed and make available to other users. The single-pass mode only finds PR results for the sets of features in the data set that are manually requested by the user. The optimization mode uses a brute-force method of searching through the combinations of features in a data set for the features that produce the best pattern recognition results. With a small number of features in a data set an exact solution can be determined. However, the number of possible combinations increases exponentially with the number of features, and an alternate means of finding a solution must be found. We developed and implemented a technique for finding solutions in data sets with both small and large numbers of features. The VERI interface tools were written using the Tcl/Tk GUI programming language, version 8.1. Although the Tcl/Tk packages are designed to run on multiple computer platforms, we have concentrated our efforts on developing a user interface for the ubiquitous DOS environment. The VERI algorithms are compiled, executable programs. The interfaces run the VERI algorithms in Leave-One-Out mode using the Euclidean metric.

  7. The role of memory for visual search in scenes.

    PubMed

    Le-Hoa Võ, Melissa; Wolfe, Jeremy M

    2015-03-01

    Many daily activities involve looking for something. The ease with which these searches are performed often allows one to forget that searching represents complex interactions between visual attention and memory. Although a clear understanding exists of how search efficiency will be influenced by visual features of targets and their surrounding distractors or by the number of items in the display, the role of memory in search is less well understood. Contextual cueing studies have shown that implicit memory for repeated item configurations can facilitate search in artificial displays. When searching more naturalistic environments, other forms of memory come into play. For instance, semantic memory provides useful information about which objects are typically found where within a scene, and episodic scene memory provides information about where a particular object was seen the last time a particular scene was viewed. In this paper, we will review work on these topics, with special emphasis on the role of memory in guiding search in organized, real-world scenes. PMID:25684693

  8. Visual search for arbitrary objects in real scenes

    PubMed Central

    Alvarez, George A.; Rosenholtz, Ruth; Kuzmova, Yoana I.; Sherman, Ashley M.

    2011-01-01

    How efficient is visual search in real scenes? In searches for targets among arrays of randomly placed distractors, efficiency is often indexed by the slope of the reaction time (RT) × Set Size function. However, it may be impossible to define set size for real scenes. As an approximation, we hand-labeled 100 indoor scenes and used the number of labeled regions as a surrogate for set size. In Experiment 1, observers searched for named objects (a chair, bowl, etc.). With set size defined as the number of labeled regions, search was very efficient (~5 ms/item). When we controlled for a possible guessing strategy in Experiment 2, slopes increased somewhat (~15 ms/item), but they were much shallower than search for a random object among other distinctive objects outside of a scene setting (Exp. 3: ~40 ms/item). In Experiments 4–6, observers searched repeatedly through the same scene for different objects. Increased familiarity with scenes had modest effects on RTs, while repetition of target items had large effects (>500 ms). We propose that visual search in scenes is efficient because scene-specific forms of attentional guidance can eliminate most regions from the “functional set size” of items that could possibly be the target. PMID:21671156
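
    The efficiency index used here, the slope of the RT x set-size function, is a simple least-squares fit; a minimal illustration with made-up reaction times follows.

    ```python
    import numpy as np

    def search_slope(set_sizes, rts_ms):
        """Least-squares slope (ms/item) and intercept (ms) of the RT x set-size function."""
        slope, intercept = np.polyfit(set_sizes, rts_ms, deg=1)
        return slope, intercept

    # Example: mean correct RTs typical of an efficient search (~5 ms/item).
    print(search_slope([3, 6, 12, 18], [520, 540, 570, 595]))  # roughly (4.9, 508)
    ```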

  9. The role of memory for visual search in scenes

    PubMed Central

    Võ, Melissa Le-Hoa; Wolfe, Jeremy M.

    2014-01-01

    Many daily activities involve looking for something. The ease with which these searches are performed often allows one to forget that searching represents complex interactions between visual attention and memory. While a clear understanding exists of how search efficiency will be influenced by visual features of targets and their surrounding distractors or by the number of items in the display, the role of memory in search is less well understood. Contextual cueing studies have shown that implicit memory for repeated item configurations can facilitate search in artificial displays. When searching more naturalistic environments, other forms of memory come into play. For instance, semantic memory provides useful information about which objects are typically found where within a scene, and episodic scene memory provides information about where a particular object was seen the last time a particular scene was viewed. In this paper, we will review work on these topics, with special emphasis on the role of memory in guiding search in organized, real-world scenes. PMID:25684693

  10. Hemispatial neglect on visual search tasks in Alzheimer's disease.

    PubMed

    Mendez, M F; Cherrier, M M; Cymerman, J S

    1997-07-01

    Abnormal visual attention may underlie certain visuospatial difficulties in patients with Alzheimer's disease (AD). These patients have hypometabolism and neuropathology in parietal cortex. Given the role of parietal function for visuospatial attention, patients with AD may have relative hemispatial neglect masked by other cognitive disturbances. Fifteen patients with mild-to-moderate AD and 15 healthy elderly controls matched for age, sex, and education were compared on four measures of neglect: the visual search of a complex picture, a letter cancellation task, the Schenkenberg line bisection test, and a computerized line bisection task. Compared with controls, the group with AD was significantly impaired overall in attending to left hemispace on both picture search (F[1,56] = 11.27, p < 0.05) and cancellation tasks (F[1,112] = 12.68, p < 0.01); however, a subgroup of patients with AD had disproportionate difficulty in attending to right hemispace. The performance of the groups did not differ on either of the line bisection tasks regardless of the hand used. In AD, hemispatial neglect on visual search tasks may relate to difficulty in disengaging attention or in visual exploration, as well as to the severity of the disease. Future investigations may implicate neglect in visually related deficits in AD, for example, the prominent difficulty with left turns while driving a car. PMID:9297714

  11. Visual Detection of Multi-Letter Patterns.

    ERIC Educational Resources Information Center

    Staller, Joshua D.; Lappin, Joseph S.

    1981-01-01

    In three experiments, this study addressed two basic questions about the detection of multiletter patterns: (1) How is the detection of a multiletter pattern related to the detection of its individual components? (2) How is the detection of a sequence of letters influenced by the observer's familiarity with that sequence? (Author/BW)

  12. Attention during visual search: The benefit of bilingualism

    PubMed Central

    Friesen, Deanna C; Latman, Vered; Calvo, Alejandra; Bialystok, Ellen

    2015-01-01

    Aims and Objectives/Purpose/Research Questions Following reports showing bilingual advantages in executive control (EC) performance, the current study investigated the role of selective attention as a foundational skill that might underlie these advantages. Design/Methodology/Approach Bilingual and monolingual young adults performed a visual search task by determining whether a target shape was present amid distractor shapes. Task difficulty was manipulated by search type (feature or conjunction) and by the number and discriminability of the distractors. In feature searches, the target (e.g., green triangle) differed on a single dimension (e.g., color) from the distractors (e.g., yellow triangles); in conjunction searches, two types of distractors (e.g., pink circles and turquoise squares) each differed from the target (e.g., turquoise circle) on a single but different dimension (e.g., color or shape). Data and Analysis Reaction time and accuracy data from 109 young adults (53 monolinguals and 56 bilinguals) were analyzed using a repeated-measures analysis of variance. Group membership, search type, number and discriminability of distractors were the independent variables. Findings/Conclusions Participants identified the target more quickly in the feature searches, when the target was highly discriminable from the distractors and when there were fewer distractors. Importantly, although monolinguals and bilinguals performed equivalently on the feature searches, bilinguals were significantly faster than monolinguals in identifying the target in the more difficult conjunction search, providing evidence for better control of visual attention in bilinguals. Originality Unlike previous studies on bilingual visual attention, the current study found a bilingual attention advantage in a paradigm that did not include a Stroop-like manipulation to set up false expectations. Significance/Implications Thus, our findings indicate that the need to resolve explicit conflict or overcome false expectations is unnecessary for observing a bilingual advantage in selective attention. Observing this advantage in a fundamental skill suggests that it may underlie higher order bilingual advantages in EC. PMID:26640399

  13. Eye-Search: A web-based therapy that improves visual search in hemianopia

    PubMed Central

    Ong, Yean-Hoon; Jacquin-Courtois, Sophie; Gorgoraptis, Nikos; Bays, Paul M; Husain, Masud; Leff, Alexander P

    2015-01-01

    Persisting hemianopia frequently complicates lesions of the posterior cerebral hemispheres, leaving patients impaired on a range of key activities of daily living. Practice-based therapies designed to induce compensatory eye movements can improve hemianopic patients' visual function, but are not readily available. We used a web-based therapy (Eye-Search) that retrains visual search saccades into patients' blind hemifield. A group of 78 suitable hemianopic patients took part. After therapy (800 trials over 11 days), search times into their impaired hemifield improved by an average of 24%. Patients also reported improvements in a subset of visually guided everyday activities, suggesting that Eye-Search therapy affects real-world outcomes. PMID:25642437

  14. Early activation of object names in visual search.

    PubMed

    Meyer, Antje S; Belke, Eva; Telling, Anna L; Humphreys, Glyn W

    2007-08-01

    In a visual search experiment, participants had to decide whether or not a target object was present in a four-object search array. One of these objects could be a semantically related competitor (e.g., shirt for the target trousers) or a conceptually unrelated object with the same name as the target-for example, bat (baseball) for the target bat (animal). In the control condition, the related competitor was replaced by an unrelated object. The participants' response latencies and eye movements demonstrated that the two types of related competitors had similar effects: Competitors attracted the participants' visual attention and thereby delayed positive and negative decisions. The results imply that semantic and name information associated with the objects becomes rapidly available and affects the allocation of visual attention. PMID:17972738

  15. Entrainment of Human Alpha Oscillations Selectively Enhances Visual Conjunction Search

    PubMed Central

    Müller, Notger G.; Vellage, Anne-Katrin; Heinze, Hans-Jochen; Zaehle, Tino

    2015-01-01

    The functional role of the alpha-rhythm which dominates the human electroencephalogram (EEG) is unclear. It has been related to visual processing, attentional selection and object coherence, respectively. Here we tested the interaction of alpha oscillations of the human brain with visual search tasks that differed in their attentional demands (pre-attentive vs. attentive) and also in the necessity to establish object coherence (conjunction vs. single feature). Between pre- and post-assessment elderly subjects received 20 min/d of repetitive transcranial alternating current stimulation (tACS) over the occipital cortex adjusted to their individual alpha frequency over five consecutive days. Compared to sham the entrained alpha oscillations led to a selective, set size independent improvement in the conjunction search task performance but not in the easy or in the hard feature search task. These findings suggest that cortical alpha oscillations play a specific role in establishing object coherence through suppression of distracting objects. PMID:26606255

  16. The Mechanisms Underlying the ASD Advantage in Visual Search.

    PubMed

    Kaldy, Zsuzsa; Giserman, Ivy; Carter, Alice S; Blaser, Erik

    2016-05-01

    A number of studies have demonstrated that individuals with autism spectrum disorders (ASDs) are faster or more successful than typically developing control participants at various visual-attentional tasks (for reviews, see Dakin and Frith in Neuron 48:497-507, 2005; Simmons et al. in Vis Res 49:2705-2739, 2009). This "ASD advantage" was first identified in the domain of visual search by Plaisted et al. (J Child Psychol Psychiatry 39:777-783, 1998). Here we survey the findings of visual search studies from the past 15 years that contrasted the performance of individuals with and without ASD. Although there are some minor caveats, the overall consensus is that-across development and a broad range of symptom severity-individuals with ASD reliably outperform controls on visual search. The etiology of the ASD advantage has not been formally specified, but has been commonly attributed to 'enhanced perceptual discrimination', a superior ability to visually discriminate between targets and distractors in such tasks (e.g. O'Riordan in Cognition 77:81-96, 2000). As well, there is considerable evidence for impairments of the attentional network in ASD (for a review, see Keehn et al. in J Child Psychol Psychiatry 37:164-183, 2013). We discuss some recent results from our laboratory that support an attentional, rather than perceptual explanation for the ASD advantage in visual search. We speculate that this new conceptualization may offer a better understanding of some of the behavioral symptoms associated with ASD, such as over-focusing and restricted interests. PMID:24091470

  17. Visual cluster analysis and pattern recognition methods

    DOEpatents

    Osbourn, Gordon Cecil; Martinez, Rubel Francisco

    2001-01-01

    A method of clustering using a novel template to define a region of influence. Using neighboring approximation methods, computation times can be significantly reduced. The template and method are applicable and improve pattern recognition techniques.
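
    The patented template is not reproduced in this record, so the sketch below substitutes a simple stand-in region of influence: two points are linked only if an enlarged disc spanning them contains no third point, and clusters are the connected components of the resulting graph. The disc shape and scale factor are illustrative assumptions, not the VERI template.

    ```python
    import numpy as np
    from scipy.sparse import csr_matrix
    from scipy.sparse.csgraph import connected_components

    def region_of_influence_clusters(points, scale=1.5):
        """Link two points when no third point falls inside the disc centred on their
        midpoint with radius scale * (half the pair distance); clusters are the
        connected components of that graph."""
        pts = np.asarray(points, dtype=float)
        n = len(pts)
        adj = np.zeros((n, n), dtype=int)
        for i in range(n):
            for j in range(i + 1, n):
                mid = (pts[i] + pts[j]) / 2.0
                r2 = scale ** 2 * np.sum((pts[i] - pts[j]) ** 2) / 4.0
                others = np.delete(np.arange(n), [i, j])
                if others.size == 0 or np.all(np.sum((pts[others] - mid) ** 2, axis=1) > r2):
                    adj[i, j] = adj[j, i] = 1
        n_clusters, labels = connected_components(csr_matrix(adj), directed=False)
        return n_clusters, labels

    # Two tight pairs separated by a gap come back as two clusters.
    print(region_of_influence_clusters([[0, 0], [1, 0], [10, 0], [11, 0]]))
    ```

    With scale=1.0 the region reduces to the Gabriel disc, whose graph is always connected; enlarging the region is what allows separate clusters to emerge.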

  18. Animation of orthogonal texture patterns for vector field visualization.

    PubMed

    Bachthaler, Sven; Weiskopf, Daniel

    2008-01-01

    This paper introduces orthogonal vector field visualization on 2D manifolds: a representation by lines that are perpendicular to the input vector field. Line patterns are generated by line integral convolution (LIC). This visualization is combined with animation based on motion along the vector field. This decoupling of the line direction from the direction of animation allows us to choose the spatial frequencies along the direction of motion independently from the length scales along the LIC line patterns. Vision research indicates that local motion detectors are tuned to certain spatial frequencies of textures, and the above decoupling enables us to generate spatial frequencies optimized for motion perception. Furthermore, we introduce a combined visualization that employs orthogonal LIC patterns together with conventional, tangential streamline LIC patterns in order to benefit from the advantages of these two visualization approaches. In addition, a filtering process is described to achieve a consistent and temporally coherent animation of orthogonal vector field visualization. Different filter kernels and filter methods are compared and discussed in terms of visualization quality and speed. We present respective visualization algorithms for 2D planar vector fields and tangential vector fields on curved surfaces, and demonstrate that those algorithms lend themselves to efficient and interactive GPU implementations. PMID:18467751
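
    A toy implementation of the underlying LIC step helps make the "orthogonal" idea concrete: the same convolution is simply run on the 90-degree-rotated field. The pure-Python sketch below is didactic (unvectorized and slow) and uses fixed unit steps rather than the filtered, temporally coherent animation the paper describes; all parameters are illustrative.

    ```python
    import numpy as np

    def lic(vx, vy, noise, length=15):
        """Minimal line integral convolution: average a noise texture along short
        streamlines of the (vx, vy) field. For orthogonal patterns, pass the
        rotated field (vx, vy) -> (-vy, vx)."""
        h, w = noise.shape
        out = np.zeros_like(noise, dtype=float)
        for y in range(h):
            for x in range(w):
                acc, n = 0.0, 0
                for sign in (1.0, -1.0):        # integrate forward and backward
                    px, py = float(x), float(y)
                    for _ in range(length):
                        dx, dy = vx[int(py), int(px)], vy[int(py), int(px)]
                        norm = np.hypot(dx, dy) + 1e-9
                        px += sign * dx / norm
                        py += sign * dy / norm
                        if not (0 <= px < w and 0 <= py < h):
                            break
                        acc += noise[int(py), int(px)]
                        n += 1
                out[y, x] = acc / max(n, 1)
        return out

    # Example: a circular field visualized with tangential vs. orthogonal LIC.
    yy, xx = np.mgrid[0:128, 0:128].astype(float)
    vx, vy = -(yy - 64), (xx - 64)
    noise = np.random.rand(128, 128)
    tangential = lic(vx, vy, noise)
    orthogonal = lic(-vy, vx, noise)   # 90-degree rotated field gives orthogonal stripes
    ```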

  19. Searching for Pulsars Using Image Pattern Recognition

    NASA Astrophysics Data System (ADS)

    Zhu, W. W.; Berndsen, A.; Madsen, E. C.; Tan, M.; Stairs, I. H.; Brazier, A.; Lazarus, P.; Lynch, R.; Scholz, P.; Stovall, K.; Ransom, S. M.; Banaszak, S.; Biwer, C. M.; Cohen, S.; Dartez, L. P.; Flanigan, J.; Lunsford, G.; Martinez, J. G.; Mata, A.; Rohr, M.; Walker, A.; Allen, B.; Bhat, N. D. R.; Bogdanov, S.; Camilo, F.; Chatterjee, S.; Cordes, J. M.; Crawford, F.; Deneva, J. S.; Desvignes, G.; Ferdman, R. D.; Freire, P. C. C.; Hessels, J. W. T.; Jenet, F. A.; Kaplan, D. L.; Kaspi, V. M.; Knispel, B.; Lee, K. J.; van Leeuwen, J.; Lyne, A. G.; McLaughlin, M. A.; Siemens, X.; Spitler, L. G.; Venkataraman, A.

    2014-02-01

    In the modern era of big data, many fields of astronomy are generating huge volumes of data, the analysis of which can sometimes be the limiting factor in research. Fortunately, computer scientists have developed powerful data-mining techniques that can be applied to various fields. In this paper, we present a novel artificial intelligence (AI) program that identifies pulsars from recent surveys by using image pattern recognition with deep neural nets—the PICS (Pulsar Image-based Classification System) AI. The AI mimics human experts and distinguishes pulsars from noise and interference by looking for patterns from candidate plots. Different from other pulsar selection programs that search for expected patterns, the PICS AI is taught the salient features of different pulsars from a set of human-labeled candidates through machine learning. The training candidates are collected from the Pulsar Arecibo L-band Feed Array (PALFA) survey. The information from each pulsar candidate is synthesized in four diagnostic plots, which consist of image data with up to thousands of pixels. The AI takes these data from each candidate as its input and uses thousands of such candidates to train its ~9000 neurons. The deep neural networks in this AI system grant it superior ability to recognize various types of pulsars as well as their harmonic signals. The trained AI's performance has been validated with a large set of candidates from a different pulsar survey, the Green Bank North Celestial Cap survey. In this completely independent test, the PICS ranked 264 out of 277 pulsar-related candidates, including all 56 previously known pulsars and 208 of their harmonics, in the top 961 (1%) of 90,008 test candidates, missing only 13 harmonics. The first non-pulsar candidate appears at rank 187, following 45 pulsars and 141 harmonics. In other words, 100% of the pulsars were ranked in the top 1% of all candidates, while 80% were ranked higher than any noise or interference. The performance of this system can be improved over time as more training data are accumulated. This AI system has been integrated into the PALFA survey pipeline and has discovered six new pulsars to date.

  20. Searching for pulsars using image pattern recognition

    SciTech Connect

    Zhu, W. W.; Berndsen, A.; Madsen, E. C.; Tan, M.; Stairs, I. H.; Brazier, A.; Lazarus, P.; Lynch, R.; Scholz, P.; Stovall, K.; Cohen, S.; Dartez, L. P.; Lunsford, G.; Martinez, J. G.; Mata, A.; Ransom, S. M.; Banaszak, S.; Biwer, C. M.; Flanigan, J.; Rohr, M. E-mail: berndsen@phas.ubc.ca; and others

    2014-02-01

    In the modern era of big data, many fields of astronomy are generating huge volumes of data, the analysis of which can sometimes be the limiting factor in research. Fortunately, computer scientists have developed powerful data-mining techniques that can be applied to various fields. In this paper, we present a novel artificial intelligence (AI) program that identifies pulsars from recent surveys by using image pattern recognition with deep neural nets—the PICS (Pulsar Image-based Classification System) AI. The AI mimics human experts and distinguishes pulsars from noise and interference by looking for patterns from candidate plots. Different from other pulsar selection programs that search for expected patterns, the PICS AI is taught the salient features of different pulsars from a set of human-labeled candidates through machine learning. The training candidates are collected from the Pulsar Arecibo L-band Feed Array (PALFA) survey. The information from each pulsar candidate is synthesized in four diagnostic plots, which consist of image data with up to thousands of pixels. The AI takes these data from each candidate as its input and uses thousands of such candidates to train its ∼9000 neurons. The deep neural networks in this AI system grant it superior ability to recognize various types of pulsars as well as their harmonic signals. The trained AI's performance has been validated with a large set of candidates from a different pulsar survey, the Green Bank North Celestial Cap survey. In this completely independent test, the PICS ranked 264 out of 277 pulsar-related candidates, including all 56 previously known pulsars and 208 of their harmonics, in the top 961 (1%) of 90,008 test candidates, missing only 13 harmonics. The first non-pulsar candidate appears at rank 187, following 45 pulsars and 141 harmonics. In other words, 100% of the pulsars were ranked in the top 1% of all candidates, while 80% were ranked higher than any noise or interference. The performance of this system can be improved over time as more training data are accumulated. This AI system has been integrated into the PALFA survey pipeline and has discovered six new pulsars to date.

  1. A bayesian optimal foraging model of human visual search.

    PubMed

    Cain, Matthew S; Vul, Edward; Clark, Kait; Mitroff, Stephen R

    2012-09-01

    Real-world visual searches often contain a variable and unknown number of targets. Such searches present difficult metacognitive challenges, as searchers must decide when to stop looking for additional targets, which results in high miss rates in multiple-target searches. In the study reported here, we quantified human strategies in multiple-target search via an ecological optimal foraging model and investigated whether searchers adapt their strategies to complex target-distribution statistics. Separate groups of individuals searched displays with the number of targets per trial sampled from different geometric distributions but with the same overall target prevalence. As predicted by optimal foraging theory, results showed that individuals searched longer when they expected more targets to be present and adjusted their expectations on-line during each search by taking into account the higher-order, across-trial target distributions. However, compared with modeled ideal observers, participants systematically responded as if the target distribution were more uniform than it was, which suggests that training could improve multiple-target search performance. PMID:22868494

  2. Visual Object Pattern Separation Varies in Older Adults

    ERIC Educational Resources Information Center

    Holden, Heather M.; Toner, Chelsea; Pirogovsky, Eva; Kirwan, C. Brock; Gilbert, Paul E.

    2013-01-01

    Young and nondemented older adults completed a visual object continuous recognition memory task in which some stimuli (lures) were similar but not identical to previously presented objects. The lures were hypothesized to result in increased interference and increased pattern separation demand. To examine variability in object pattern separation…

  3. Memorizing and Copying Visual Patterns: A Piagetian Interpretation.

    ERIC Educational Resources Information Center

    Chap, Janet Blum; Ross, Bruce M.

    1979-01-01

    In order to determine whether mistakes committed by younger children are the result of retention mistakes rather than faulty perceptual encoding, twenty children (6, 8, 10, and 12 years old) reconstructed two visual patterns from immediate memory, while twenty other children (5 and 6 years old) reconstructed the identical patterns by direct…

  4. Sequential pattern data mining and visualization

    SciTech Connect

    Wong, Pak Chung; Jurrus, Elizabeth R.; Cowley, Wendy E.; Foote, Harlan P.; Thomas, James J.

    2011-12-06

    One or more processors (22) are operated to extract a number of different event identifiers therefrom. These processors (22) are further operable to determine a number of display locations, each representative of one of the different identifiers and a corresponding time. The display locations are grouped into sets, each corresponding to a different one of several event sequences (330a, 330b, 330c, 330d, 330e). An output is generated corresponding to a visualization (320) of the event sequences (330a, 330b, 330c, 330d, 330e).

  5. Sequential pattern data mining and visualization

    DOEpatents

    Wong, Pak Chung; Jurrus, Elizabeth R.; Cowley, Wendy E.; Foote, Harlan P.; Thomas, James J.

    2009-05-26

    One or more processors (22) are operated to extract a number of different event identifiers therefrom. These processors (22) are further operable to determine a number of display locations, each representative of one of the different identifiers and a corresponding time. The display locations are grouped into sets, each corresponding to a different one of several event sequences (330a, 330b, 330c, 330d, 330e). An output is generated corresponding to a visualization (320) of the event sequences (330a, 330b, 330c, 330d, 330e).

  6. The Efficiency of a Visual Skills Training Program on Visual Search Performance

    PubMed Central

    Krzepota, Justyna; Zwierko, Teresa; Puchalska-Niedbał, Lidia; Markiewicz, Mikołaj; Florkiewicz, Beata; Lubiński, Wojciech

    2015-01-01

    In this study, we analyzed whether visual skills can be developed by specifically targeted training of visual search. The aim was to investigate whether, for how long, and to what extent a training program for visual functions could improve visual search. The study involved 24 healthy students from Szczecin University, divided into an experimental group (12) and a control group (12). In addition to the regular sports and recreational activities of the curriculum, the experimental group also completed 8 weeks of visual-function training, 3 times a week for 45 min. The Signal Test of the Vienna Test System was performed four times: before entering the study, after the first 4 weeks of the experiment, immediately after its completion, and 4 weeks after the study terminated. The results showed that the 8-week perceptual training program significantly changed the time course of visual detection time. For changes in visual detection time, the first factor, Group, was significant as a main effect (F(1,22)=6.49, p<0.05), as was the second factor, Training (F(3,66)=5.06, p<0.01); the Group × Training interaction was F(3,66)=6.82 (p<0.001). Similarly, for the number of correct reactions, there was a main effect of Group (F(1,22)=23.40, p<0.001), a main effect of Training (F(3,66)=11.60, p<0.001), and a significant Group × Training interaction (F(3,66)=10.33, p<0.001). Our study suggests that 8 weeks of visual-function training can improve visual search performance. PMID:26240666

  7. Macular degeneration affects eye movement behavior during visual search.

    PubMed

    Van der Stigchel, Stefan; Bethlehem, Richard A I; Klein, Barrie P; Berendschot, Tos T J M; Nijboer, Tanja C W; Dumoulin, Serge O

    2013-01-01

    Patients with a scotoma in their central vision (e.g., due to macular degeneration, MD) commonly adopt a strategy to direct the eyes such that the image falls onto a peripheral location on the retina. This location is referred to as the preferred retinal locus (PRL). Although previous research has investigated the characteristics of this PRL, it is unclear whether eye movement metrics are modulated by peripheral viewing with a PRL as measured during a visual search paradigm. To this end, we tested four MD patients in a visual search paradigm and contrasted their performance with a healthy control group and a healthy control group performing the same experiment with a simulated scotoma. The experiment contained two conditions. In the first condition the target was an unfilled circle hidden among c-shaped distractors (serial condition) and in the second condition the target was a filled circle (pop-out condition). Saccadic search latencies for the MD group were significantly longer in both conditions compared to both control groups. Results of a subsequent experiment indicated that this difference between the MD and the control groups could not be explained by a difference in target selection sensitivity. Furthermore, search behavior of MD patients was associated with saccades with smaller amplitudes toward the scotoma, an increased intersaccadic interval and an increased number of eye movements necessary to locate the target. Some of these characteristics, such as the increased intersaccadic interval, were also observed in the simulation group, which indicate that these characteristics are related to the peripheral viewing itself. We suggest that the combination of the central scotoma and peripheral viewing can explain the altered search behavior and no behavioral evidence was found for a possible reorganization of the visual system associated with the use of a PRL. Thus the switch from a fovea-based to a PRL-based reference frame impairs search efficiency. PMID:24027546

  8. Time Course of Target Recognition in Visual Search

    PubMed Central

    Kotowicz, Andreas; Rutishauser, Ueli; Koch, Christof

    2009-01-01

    Visual search is a ubiquitous task of great importance: it allows us to quickly find the objects that we are looking for. During active search for an object (target), eye movements are made to different parts of the scene. Fixation locations are chosen based on a combination of information about the target and the visual input. At the end of a successful search, the eyes typically fixate on the target. But does this imply that target identification occurs while looking at it? The duration of a typical fixation (∼170 ms) and neuronal latencies of both the oculomotor system and the visual stream indicate that there might not be enough time to do so. Previous studies have suggested the following solution to this dilemma: the target is identified extrafoveally and this event will trigger a saccade towards the target location. However, this has not been experimentally verified. Here we test the hypothesis that subjects recognize the target before they look at it using a search display of oriented colored bars. Using a gaze-contingent real-time technique, we prematurely stopped search shortly after subjects fixated the target. Afterwards, we asked subjects to identify the target location. We find that subjects can identify the target location even when fixating on the target for less than 10 ms. Longer fixations on the target do not increase detection performance but increase confidence. In contrast, subjects cannot perform this task if they are not allowed to move their eyes. Thus, information about the target during conjunction search for colored oriented bars can, in some circumstances, be acquired at least one fixation ahead of reaching the target. The final fixation serves to increase confidence rather than performance, illustrating a distinct role of the final fixation for the subjective judgment of confidence rather than accuracy. PMID:20428512

  9. Perspective: n-type oxide thermoelectrics via visual search strategies

    NASA Astrophysics Data System (ADS)

    Xing, Guangzong; Sun, Jifeng; Ong, Khuong P.; Fan, Xiaofeng; Zheng, Weitao; Singh, David J.

    2016-05-01

    We discuss and present search strategies for finding new thermoelectric compositions based on first principles electronic structure and transport calculations. We illustrate them by application to a search for potential n-type oxide thermoelectric materials. This includes a screen based on visualization of electronic energy isosurfaces. We report compounds that show potential as thermoelectric materials along with detailed properties, including SrTiO3, which is a known thermoelectric, and appropriately doped KNbO3 and rutile TiO2.

  10. When do I quit? The search termination problem in visual search.

    PubMed

    Wolfe, Jeremy M

    2012-01-01

    In visual search tasks, observers look for targets in displays or scenes containing distracting, non-target items. Most of the research on this topic has concerned the finding of those targets. Search termination is a less thoroughly studied topic. When is it time to abandon the current search? The answer is fairly straightforward when the one and only target has been found (There are my keys.). The problem is more vexed if nothing has been found (When is it time to stop looking for a weapon at the airport checkpoint?) or when the number of targets is unknown (Have we found all the tumors?). This chapter reviews the development of ideas about quitting time in visual search and offers an outline of our current theory. PMID:23437634

  11. Information-Limited Parallel Processing in Difficult Heterogeneous Covert Visual Search

    ERIC Educational Resources Information Center

    Dosher, Barbara Anne; Han, Songmei; Lu, Zhong-Lin

    2010-01-01

    Difficult visual search is often attributed to time-limited serial attention operations, although neural computations in the early visual system are parallel. Using probabilistic search models (Dosher, Han, & Lu, 2004) and a full time-course analysis of the dynamics of covert visual search, we distinguish unlimited capacity parallel versus serial…

  13. Visual Acceleration Perception for Simple and Complex Motion Patterns.

    PubMed

    Mueller, Alexandra S; Timney, Brian

    2016-01-01

    Humans are able to judge whether a target is accelerating in many viewing contexts, but it is an open question how the motion pattern per se affects visual acceleration perception. We measured acceleration and deceleration detection using patterns of random dots with horizontal (simpler) or radial motion (more visually complex). The results suggest that we detect acceleration better when viewing radial optic flow than horizontal translation. However, the direction within each type of pattern has no effect on performance and observers detect acceleration and deceleration similarly within each condition. We conclude that sensitivity to the presence of acceleration is generally higher for more complex patterns, regardless of the direction within each type of pattern or the sign of acceleration. PMID:26901879

  14. Visual Acceleration Perception for Simple and Complex Motion Patterns

    PubMed Central

    Mueller, Alexandra S.; Timney, Brian

    2016-01-01

    Humans are able to judge whether a target is accelerating in many viewing contexts, but it is an open question how the motion pattern per se affects visual acceleration perception. We measured acceleration and deceleration detection using patterns of random dots with horizontal (simpler) or radial motion (more visually complex). The results suggest that we detect acceleration better when viewing radial optic flow than horizontal translation. However, the direction within each type of pattern has no effect on performance and observers detect acceleration and deceleration similarly within each condition. We conclude that sensitivity to the presence of acceleration is generally higher for more complex patterns, regardless of the direction within each type of pattern or the sign of acceleration. PMID:26901879

  15. Invisible Calibration Pattern based on Human Visual Perception

    NASA Astrophysics Data System (ADS)

    Takimoto, Hironori; Yoshimori, Seiki; Mitsukura, Yasue; Fukumi, Minoru

    In this paper, we propose an arrangement and detection method for an invisible calibration pattern based on characteristics of human visual perception. A calibration pattern is arranged around the contents in which invisible data are embedded, serving as feature points between the original image and the scanned image for normalization of the scanned image. Conventional methods, however, interfere with the page layout and artwork of the contents. Moreover, conventional visible patterns reveal the position of the embedded data to third parties, so visible calibration patterns are not suitable for security services. The properties of human visual perception most important to the proposed method are the spectral luminous efficiency characteristic and the chromatic spatial frequency characteristic. In addition, with the proposed calibration pattern the background color surrounding the contents is not restricted to a uniform color. It is suggested that the proposed method protects the page layout and artwork.

  16. Image pattern recognition supporting interactive analysis and graphical visualization

    NASA Technical Reports Server (NTRS)

    Coggins, James M.

    1992-01-01

    Image Pattern Recognition attempts to infer properties of the world from image data. Such capabilities are crucial for making measurements from satellite or telescope images related to Earth and space science problems. Such measurements can be the required product itself, or the measurements can be used as input to a computer graphics system for visualization purposes. At present, the field of image pattern recognition lacks a unified scientific structure for developing and evaluating image pattern recognition applications. The overall goal of this project is to begin developing such a structure. This report summarizes results of a 3-year research effort in image pattern recognition addressing the following three principal aims: (1) to create a software foundation for the research and identify image pattern recognition problems in Earth and space science; (2) to develop image measurement operations based on Artificial Visual Systems; and (3) to develop multiscale image descriptions for use in interactive image analysis.

  17. Reading and Visual Search: A Developmental Study in Normal Children

    PubMed Central

    Seassau, Magali; Bucci, Maria-Pia

    2013-01-01

    Studies dealing with developmental aspects of binocular eye movement behaviour during reading are scarce. In this study we have explored binocular strategies during reading and during visual search tasks in a large population of normal young readers. Binocular eye movements were recorded using an infrared video-oculography system in sixty-nine children (aged 6 to 15) and in a group of 10 adults (aged 24 to 39). The main findings are (i) in both tasks the number of progressive saccades (to the right) and regressive saccades (to the left) decreases with age; (ii) the amplitude of progressive saccades increases with age in the reading task only; (iii) in both tasks, the duration of fixations as well as the total duration of the task decreases with age; (iv) in both tasks, the amplitude of disconjugacy recorded during and after the saccades decreases with age; (v) children are significantly more accurate in reading than in visual search after 10 years of age. The data reported here confirm and expand previous studies on children's reading. The new finding is that younger children show poorer coordination than adults, both while reading and while performing a visual search task. Both reading skills and binocular saccade coordination improve with age, and children reach a similar level to adults after the age of 10. This finding is most likely related to the fact that learning mechanisms responsible for saccade yoking develop during childhood until adolescence. PMID:23894627

  18. Top-down guidance in visual search for facial expressions.

    PubMed

    Hahn, Sowon; Gronlund, Scott D

    2007-02-01

    Using a visual search paradigm, we investigated how a top-down goal modified attentional bias for threatening facial expressions. In two experiments, participants searched for a facial expression either based on stimulus characteristics or a top-down goal. In Experiment 1 participants searched for a discrepant facial expression in a homogenous crowd of faces. Consistent with previous research, we obtained a shallower response time (RT) slope when the target face was angry than when it was happy. In Experiment 2, participants searched for a specific type of facial expression (allowing a top-down goal). When the display included a target, we found a shallower RT slope for the angry than for the happy face search. However, when an angry or happy face was present in the display in opposition to the task goal, we obtained equivalent RT slopes, suggesting that the mere presence of an angry face in opposition to the task goal did not support the well-known angry face superiority effect. Furthermore, RT distribution analyses supported the special status of an angry face only when it was combined with the top-down goal. On the basis of these results, we suggest that a threatening facial expression may guide attention as a high-priority stimulus in the absence of a specific goal; however, in the presence of a specific goal, the efficiency of facial expression search is dependent on the combined influence of a top-down goal and the stimulus characteristics. PMID:17546747

  19. Automatic guidance of attention during real-world visual search.

    PubMed

    Seidl-Rathkopf, Katharina N; Turk-Browne, Nicholas B; Kastner, Sabine

    2015-08-01

    Looking for objects in cluttered natural environments is a frequent task in everyday life. This process can be difficult, because the features, locations, and times of appearance of relevant objects often are not known in advance. Thus, a mechanism by which attention is automatically biased toward information that is potentially relevant may be helpful. We tested for such a mechanism across five experiments by engaging participants in real-world visual search and then assessing attentional capture for information that was related to the search set but was otherwise irrelevant. Isolated objects captured attention while preparing to search for objects from the same category embedded in a scene, as revealed by lower detection performance (Experiment 1A). This capture effect was driven by a central processing bottleneck rather than the withdrawal of spatial attention (Experiment 1B), occurred automatically even in a secondary task (Experiment 2A), and reflected enhancement of matching information rather than suppression of nonmatching information (Experiment 2B). Finally, attentional capture extended to objects that were semantically associated with the target category (Experiment 3). We conclude that attention is efficiently drawn towards a wide range of information that may be relevant for an upcoming real-world visual search. This mechanism may be adaptive, allowing us to find information useful for our behavioral goals in the face of uncertainty. PMID:25898897

  20. Neural Representations of Contextual Guidance in Visual Search of Real-World Scenes

    PubMed Central

    Preston, Tim J.; Guo, Fei; Das, Koel; Giesbrecht, Barry; Eckstein, Miguel P.

    2014-01-01

    Exploiting scene context and object-object co-occurrence is critical in guiding eye movements and facilitating visual search, yet the mediating neural mechanisms are unknown. We used functional magnetic resonance imaging while observers searched for target objects in scenes and used multivariate pattern analyses (MVPA) to show that the lateral occipital complex (LOC) can predict the coarse spatial location of observers’ expectations about the likely location of 213 different targets absent from the scenes. In addition, we found weaker but significant representations of context location in an area related to the orienting of attention (intraparietal sulcus, IPS) as well as a region related to scene processing (retrosplenial cortex, RSC). Importantly, the degree of agreement among 100 independent raters about the likely location to contain a target object in a scene correlated with LOC’s ability to predict the contextual location while weaker but significant effects were found in IPS, RSC, the human motion area, and early visual areas (V1, V3v). When contextual information was made irrelevant to observers’ behavioral task, the MVPA analysis of LOC and the other areas’ activity ceased to predict the location of context. Thus, our findings suggest that the likely locations of targets in scenes are represented in various visual areas with LOC playing a key role in contextual guidance during visual search of objects in real scenes. PMID:23637176

  1. Neural substrates for visual pattern recognition learning in Igo.

    PubMed

    Itoh, Kosuke; Kitamura, Hideaki; Fujii, Yukihiko; Nakada, Tsutomu

    2008-08-28

    Different contexts require different visual pattern recognitions even for identical retinal inputs, and acquiring expertise in various visual-cognitive skills requires long-term training to become capable of recognizing relevant visual patterns in otherwise ambiguous stimuli. This 3-Tesla fMRI experiment exploited shikatsu-mondai (life-or-death problems) in the Oriental board game of Igo (Go) to identify the neural substrates supporting this gradual and adaptive learning. In shikatsu-mondai, the player adds stones to the board with the objective of making, or preventing the opponent from making nigan (two eyes), or the topology of figure of eight, with these stones. Without learning the game, passive viewing of shikatsu-mondai activated the occipito-temporal cortices, reflecting visual processing without the recognition of nigan. Several days after two-hour training, passive viewing of the same stimuli additionally activated the premotor area, intraparietal sulcus, and a visual area near the junction of the (left) intraparietal and transverse occipital sulci, demonstrating plastic changes in neuronal responsivity to the stimuli that contained indications of nigan. Behavioral tests confirmed that the participants had successfully learned to recognize nigan and solve the problems. In the newly activated regions, the level of neural activity while viewing the problems correlated positively with the level of achievement in learning. These results conformed to the hypothesis that recognition of a newly learned visual pattern is supported by the activities of fronto-parietal and visual cortical neurons that interact via newly formed functional connections among these regions. These connections would provide the medium by which the fronto-parietal system modulates visual cortical activity to attain behaviorally relevant perceptions. PMID:18621033

  2. Perceptual similarity of visual patterns predicts dynamic neural activation patterns measured with MEG.

    PubMed

    Wardle, Susan G; Kriegeskorte, Nikolaus; Grootswagers, Tijl; Khaligh-Razavi, Seyed-Mahdi; Carlson, Thomas A

    2016-05-15

    Perceptual similarity is a cognitive judgment that represents the end-stage of a complex cascade of hierarchical processing throughout visual cortex. Previous studies have shown a correspondence between the similarity of coarse-scale fMRI activation patterns and the perceived similarity of visual stimuli, suggesting that visual objects that appear similar also share similar underlying patterns of neural activation. Here we explore the temporal relationship between the human brain's time-varying representation of visual patterns and behavioral judgments of perceptual similarity. The visual stimuli were abstract patterns constructed from identical perceptual units (oriented Gabor patches) so that each pattern had a unique global form or perceptual 'Gestalt'. The visual stimuli were decodable from evoked neural activation patterns measured with magnetoencephalography (MEG), however, stimuli differed in the similarity of their neural representation as estimated by differences in decodability. Early after stimulus onset (from 50ms), a model based on retinotopic organization predicted the representational similarity of the visual stimuli. Following the peak correlation between the retinotopic model and neural data at 80ms, the neural representations quickly evolved so that retinotopy no longer provided a sufficient account of the brain's time-varying representation of the stimuli. Overall the strongest predictor of the brain's representation was a model based on human judgments of perceptual similarity, which reached the limits of the maximum correlation with the neural data defined by the 'noise ceiling'. Our results show that large-scale brain activation patterns contain a neural signature for the perceptual Gestalt of composite visual features, and demonstrate a strong correspondence between perception and complex patterns of brain activity. PMID:26899210
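
    The abstract's core analysis, comparing a model representational dissimilarity matrix (RDM) against a neural one, can be sketched generically as below. The feature vectors here are synthetic placeholders; in the study the neural RDM comes from MEG decoding accuracies and the candidate models include retinotopy and perceptual-similarity judgments.

    ```python
    import numpy as np
    from scipy.spatial.distance import pdist
    from scipy.stats import spearmanr

    rng = np.random.default_rng(1)
    n_stimuli = 16
    model_features = rng.normal(size=(n_stimuli, 10))            # e.g. a retinotopic model
    neural_patterns = model_features @ rng.normal(size=(10, 50)) \
                      + 0.5 * rng.normal(size=(n_stimuli, 50))   # noisy "neural" data

    model_rdm = pdist(model_features, metric="correlation")      # condensed upper triangle
    neural_rdm = pdist(neural_patterns, metric="correlation")
    rho, p = spearmanr(model_rdm, neural_rdm)
    print(f"model-neural RDM correlation: rho={rho:.2f}, p={p:.3g}")
    ```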

  3. Recognizing patterns of visual field loss using unsupervised machine learning

    NASA Astrophysics Data System (ADS)

    Yousefi, Siamak; Goldbaum, Michael H.; Zangwill, Linda M.; Medeiros, Felipe A.; Bowd, Christopher

    2014-03-01

    Glaucoma is a potentially blinding optic neuropathy that results in a decrease in visual sensitivity. Visual field abnormalities (decreased visual sensitivity on psychophysical tests) are the primary means of glaucoma diagnosis. One form of visual field testing is Frequency Doubling Technology (FDT) that tests sensitivity at 52 points within the visual field. Like other psychophysical tests used in clinical practice, FDT results yield specific patterns of defect indicative of the disease. We used a Gaussian Mixture Model with Expectation Maximization (GEM; EM is used to estimate the model parameters) to automatically separate FDT data into clusters of normal and abnormal eyes. Principal component analysis (PCA) was used to decompose each cluster into different axes (patterns). FDT measurements were obtained from 1,190 eyes with normal FDT results and 786 eyes with abnormal (i.e., glaucomatous) FDT results, recruited from a university-based, longitudinal, multi-center, clinical study on glaucoma. The GEM input was the 52-point FDT threshold sensitivities for all eyes. The optimal GEM model separated the FDT fields into 3 clusters. Cluster 1 contained 94% normal fields (94% specificity), and clusters 2 and 3 combined contained 77% abnormal fields (77% sensitivity). For clusters 1, 2, and 3 the optimal numbers of PCA-identified axes were 2, 2, and 5, respectively. GEM with PCA successfully separated FDT fields from healthy and glaucoma eyes and identified familiar glaucomatous patterns of loss.
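
    A minimal sketch of the clustering-plus-axes pipeline described above, using scikit-learn's GaussianMixture (fit by EM) and PCA on synthetic 52-point sensitivity vectors. The data, component count, and axis counts are assumptions for illustration, not the study's FDT measurements.

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    normal_fields = rng.normal(30, 2, size=(1190, 52))    # healthy-like sensitivities
    abnormal_fields = rng.normal(24, 6, size=(786, 52))    # depressed, more variable
    fields = np.vstack([normal_fields, abnormal_fields])

    gem = GaussianMixture(n_components=3, covariance_type="full", random_state=0)
    cluster = gem.fit_predict(fields)

    # Decompose each cluster into its main axes (patterns) with PCA.
    for c in range(3):
        members = fields[cluster == c]
        pca = PCA(n_components=min(5, len(members) - 1)).fit(members)
        print(f"cluster {c}: {len(members)} fields, "
              f"top-2 axes explain {pca.explained_variance_ratio_[:2].sum():.0%} of variance")
    ```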

  4. Recognizing patterns of visual field loss using unsupervised machine learning

    PubMed Central

    Yousefi, Siamak; Goldbaum, Michael H.; Zangwill, Linda M.; Medeiros, Felipe A.; Bowd, Christopher

    2014-01-01

    Glaucoma is a potentially blinding optic neuropathy that results in a decrease in visual sensitivity. Visual field abnormalities (decreased visual sensitivity on psychophysical tests) are the primary means of glaucoma diagnosis. One form of visual field testing is Frequency Doubling Technology (FDT) that tests sensitivity at 52 points within the visual field. Like other psychophysical tests used in clinical practice, FDT results yield specific patterns of defect indicative of the disease. We used a Gaussian Mixture Model with Expectation Maximization (GEM; EM is used to estimate the model parameters) to automatically separate FDT data into clusters of normal and abnormal eyes. Principal component analysis (PCA) was used to decompose each cluster into different axes (patterns). FDT measurements were obtained from 1,190 eyes with normal FDT results and 786 eyes with abnormal (i.e., glaucomatous) FDT results, recruited from a university-based, longitudinal, multi-center, clinical study on glaucoma. The GEM input was the 52-point FDT threshold sensitivities for all eyes. The optimal GEM model separated the FDT fields into 3 clusters. Cluster 1 contained 94% normal fields (94% specificity), and clusters 2 and 3 combined contained 77% abnormal fields (77% sensitivity). For clusters 1, 2, and 3 the optimal numbers of PCA-identified axes were 2, 2, and 5, respectively. GEM with PCA successfully separated FDT fields from healthy and glaucoma eyes and identified familiar glaucomatous patterns of loss. PMID:25593676

  5. Visual search strategies and decision making in baseball batting.

    PubMed

    Takeuchi, Takayuki; Inomata, Kimihiro

    2009-06-01

    The goal was to examine the differences in visual search strategies between expert and nonexpert baseball batters during the preparatory phase of a pitcher's pitching, and the accuracy and timing of swing judgments during the ball's trajectory. 14 members of a college team (Expert group), and graduate and college students (Nonexpert group), were asked to observe 10 pitches thrown by a pitcher and respond by pushing a button attached to a bat when they thought the bat should be swung to meet the ball (swing judgment). Their eye movements, accuracy, and the timing of the swing judgment were measured. The Expert group shifted their point of observation from the proximal part of the body such as the head, chest, or trunk of the pitcher to the pitching arm and the release point before the pitcher released a ball, while the gaze point of the Nonexpert group visually focused on the head and the face. The accuracy in swing judgments of the Expert group was significantly higher, and the timing of their swing judgments was significantly earlier. Expert baseball batters used visual search strategies to gaze at specific cues (the pitching arm of the pitcher) and were more accurate and relatively quicker at decision making than Nonexpert batters. PMID:19725330

  6. Visual Object Pattern Separation Deficits in Nondemented Older Adults

    ERIC Educational Resources Information Center

    Toner, Chelsea K.; Pirogovsky, Eva; Kirwan, C. Brock; Gilbert, Paul E.

    2009-01-01

    Young and nondemented older adults were tested on a continuous recognition memory task requiring visual pattern separation. During the task, some objects were repeated across trials and some objects, referred to as lures, were presented that were similar to previously presented objects. The lures resulted in increased interference and an increased…

  7. Discovering Visual Scanning Patterns in a Computerized Cancellation Test

    ERIC Educational Resources Information Center

    Huang, Ho-Chuan; Wang, Tsui-Ying

    2013-01-01

    The purpose of this study was to develop an attention sequential mining mechanism for investigating the sequential patterns of children's visual scanning process in a computerized cancellation test. Participants had to locate and cancel the target amongst other non-targets in a structured form, and a random form with Chinese stimuli. Twenty-three…

  8. Fractal Analysis of Radiologists Visual Scanning Pattern in Screening Mammography

    SciTech Connect

    Alamudun, Folami T; Yoon, Hong-Jun; Hudson, Kathy; Morin-Ducote, Garnetta; Tourassi, Georgia

    2015-01-01

    Several investigators have examined radiologists' visual scanning patterns with respect to features such as total time examining a case, time to initially hit true lesions, number of hits, etc. The purpose of this study was to examine the complexity of radiologists' visual scanning patterns when viewing 4-view mammographic cases, as they typically do in clinical practice. Gaze data were collected from 10 readers (3 breast imaging experts and 7 radiology residents) while reviewing 100 screening mammograms (24 normal, 26 benign, 50 malignant). The radiologists' scanpaths across the 4 mammographic views were mapped to a single 2-D image plane. Then, fractal analysis was applied to the derived scanpaths using the box counting method. For each case, the complexity of each radiologist's scanpath was estimated using the fractal dimension. The association between gaze complexity, case pathology, case density, and radiologist experience was evaluated using a 3-factor fixed effects ANOVA. The ANOVA showed that case pathology, breast density, and experience level are all independent predictors of visual scanning pattern complexity. Visual scanning patterns are significantly different for benign and malignant cases than for normal cases, as well as when breast parenchyma density changes.
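
    A hedged sketch of the box-counting estimate of fractal dimension referenced above: rasterize the scanpath samples onto a binary grid, count occupied boxes at several scales, and fit log(count) against log(1/size). Grid size, scales, and the synthetic gaze samples are illustrative choices, not the study's settings.

    ```python
    import numpy as np

    def box_counting_dimension(points, grid=512, sizes=(2, 4, 8, 16, 32, 64)):
        """points: (N, 2) array of x, y positions normalized to [0, 1)."""
        img = np.zeros((grid, grid), dtype=bool)
        ij = np.clip((points * grid).astype(int), 0, grid - 1)
        img[ij[:, 1], ij[:, 0]] = True
        counts = []
        for s in sizes:
            # count boxes of side s that contain at least one scanpath sample
            blocks = img.reshape(grid // s, s, grid // s, s).any(axis=(1, 3))
            counts.append(blocks.sum())
        slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
        return slope

    rng = np.random.default_rng(2)
    scanpath = rng.random((2000, 2))           # synthetic gaze samples
    print(f"estimated fractal dimension: {box_counting_dimension(scanpath):.2f}")
    ```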

  9. Differences between fovea and parafovea in visual search processes.

    PubMed

    Fiorentini, A

    1989-01-01

    Visual objects that differ from the surroundings in some simple feature, e.g. colour or line orientation, or in some shape parameters ("textons", Julesz, 1986) are believed to be detected in parallel from different locations in the visual field without requiring a serial search process. Tachistoscopic presentations of textures were used to compare the time course of search processes in the fovea and parafovea. Detection of targets differing in a simple feature (line orientation or line crossings) from the surrounding elements was found to have a time course typical of parallel processing for coarse textures extending into the parafovea. For fine textures confined to the fovea, the time course was suggestive of a serial search process even for these textons. These findings are consistent with the hypothesis that parallel processing of lines or crossings is subserved by a coarse network of detectors with relatively large receptive fields and low resolution. For the counting of coloured spots in a background of a different colour, the parafovea has the same time requirements as the fovea. PMID:2617862

  10. Visual tracking method based on cuckoo search algorithm

    NASA Astrophysics Data System (ADS)

    Gao, Ming-Liang; Yin, Li-Ju; Zou, Guo-Feng; Li, Hai-Tao; Liu, Wei

    2015-07-01

    Cuckoo search (CS) is a new meta-heuristic optimization algorithm based on the obligate brood parasitic behavior of some cuckoo species in combination with the Lévy flight behavior of some birds and fruit flies. It has been found to be efficient in solving global optimization problems. An application of CS to the visual tracking problem is presented. The relationship between optimization and visual tracking is studied comparatively, and the sensitivity and adjustment of the CS parameters in the tracking system are studied experimentally. To demonstrate the tracking ability of a CS-based tracker, a comparative study of the tracking accuracy and speed of the CS-based tracker against six state-of-the-art trackers, namely particle filter, meanshift, PSO, ensemble tracker, fragments tracker, and compressive tracker, is presented. Comparative results show that the CS-based tracker outperforms the other trackers.
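
    For context, here is a compact, generic cuckoo-search sketch (Lévy-flight moves plus abandoning a fraction of the worst nests), shown minimizing a toy function rather than driving a tracker. The step scale, nest count, and abandonment probability are common textbook defaults, not the parameters tuned in the paper.

    ```python
    import numpy as np
    from math import gamma, pi, sin

    def levy_step(size, beta=1.5, rng=None):
        """Draw Lévy-distributed steps via Mantegna's algorithm."""
        rng = rng or np.random.default_rng()
        sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
                 (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
        u = rng.normal(0.0, sigma, size)
        v = rng.normal(0.0, 1.0, size)
        return u / np.abs(v) ** (1 / beta)

    def cuckoo_search(f, lower, upper, n_nests=15, pa=0.25, iters=200, seed=0):
        rng = np.random.default_rng(seed)
        lo, hi = np.asarray(lower, float), np.asarray(upper, float)
        nests = rng.uniform(lo, hi, (n_nests, lo.size))
        fit = np.apply_along_axis(f, 1, nests)
        best = nests[fit.argmin()].copy()
        for _ in range(iters):
            # Lévy-flight moves biased towards the current best nest
            new = np.clip(nests + 0.01 * levy_step(nests.shape, rng=rng) * (nests - best), lo, hi)
            new_fit = np.apply_along_axis(f, 1, new)
            better = new_fit < fit
            nests[better], fit[better] = new[better], new_fit[better]
            # abandon a fraction pa of nests and rebuild them at random positions
            abandon = rng.random(n_nests) < pa
            if abandon.any():
                nests[abandon] = rng.uniform(lo, hi, (abandon.sum(), lo.size))
                fit[abandon] = np.apply_along_axis(f, 1, nests[abandon])
            best = nests[fit.argmin()].copy()
        return best, fit.min()

    best, value = cuckoo_search(lambda x: np.sum((x - 3.0) ** 2), [-10, -10], [10, 10])
    print(best, value)   # should approach [3, 3] and 0
    ```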

  11. Visualizing Information in the Biological Sciences: Using WebTheme to Visualize Internet Search Results

    SciTech Connect

    Buxton, Karen A.; Lembo, Mary Frances

    2003-08-11

    Information visualization is an effective method for displaying large data sets in a pictorial or graphical format. The visualization aids researchers and analysts in understanding data by evaluating the content and grouping documents together around themes and concepts. With the ever-growing amount of information available on the Internet, additional methods are needed to analyze and interpret data. WebTheme allows users to harvest thousands of web pages and automatically organize and visualize their contents. WebTheme is an interactive web-based product that provides a new way to investigate and understand large volumes of HTML text-based information. It has the ability to harvest data from the World Wide Web using search terms and selected search engines or by following URLs chosen by the user. WebTheme enables users to rapidly identify themes and concepts found among thousands of pages of text harvested and provides a suite of tools to further explore and analyze special areas of interest within a data set. WebTheme was developed at Pacific Northwest National Laboratory (PNNL) for NASA as a method for generating meaningful, thematic, and interactive visualizations. Through a collaboration with the Laboratory's Information Science and Engineering (IS&E) group, information specialists are providing demonstrations of WebTheme and assisting researchers in analyzing their results. This paper will provide a brief overview of the WebTheme product, and the ways in which the Hanford Technical Library's information specialists are assisting researchers in using this product.

  12. Toward unsupervised outbreak detection through visual perception of new patterns

    PubMed Central

    Lévy, Pierre P; Valleron, Alain-Jacques

    2009-01-01

    Background. Statistical algorithms are routinely used to detect outbreaks of well-defined syndromes, such as influenza-like illness. These methods cannot be applied to the detection of emerging diseases for which no preexisting information is available. This paper presents a method aimed at facilitating the detection of outbreaks, when there is no a priori knowledge of the clinical presentation of cases. Methods. The method uses a visual representation of the symptoms and diseases coded during a patient consultation according to the International Classification of Primary Care 2nd version (ICPC-2). The surveillance data are transformed into color-coded cells, ranging from white to red, reflecting the increasing frequency of observed signs. They are placed in a graphic reference frame mimicking body anatomy. Simple visual observation of color-change patterns over time, concerning a single code or a combination of codes, enables detection in the setting of interest. Results. The method is demonstrated through retrospective analyses of two data sets: description of the patients referred to the hospital by their general practitioners (GPs) participating in the French Sentinel Network and description of patients directly consulting at a hospital emergency department (HED). Informative image color-change alert patterns emerged in both cases: the health consequences of the August 2003 heat wave were visualized with GPs' data (but passed unnoticed with conventional surveillance systems), and the flu epidemics, which are routinely detected by standard statistical techniques, were recognized visually with HED data. Conclusion. Using human visual pattern-recognition capacities to detect the onset of unexpected health events implies a convenient image representation of epidemiological surveillance and well-trained "epidemiology watchers". Once these two conditions are met, one could imagine that the epidemiology watchers could signal epidemiological alerts, based on "image walls" presenting the local, regional and/or national surveillance patterns, with specialized field epidemiologists assigned to validate the signals detected. PMID:19515246
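
    As a hedged illustration of the white-to-red encoding described above, the snippet below interpolates each cell's colour from white (rare) to red (frequent) according to the observed frequency of a code. The code names, counts, and flat layout are placeholders; the paper arranges the cells in a reference frame mimicking body anatomy.

    ```python
    import numpy as np

    def frequency_to_rgb(counts):
        """Map raw counts to RGB rows interpolating white (0) -> red (max)."""
        counts = np.asarray(counts, dtype=float)
        level = counts / counts.max() if counts.max() > 0 else counts
        white, red = np.array([1.0, 1.0, 1.0]), np.array([1.0, 0.0, 0.0])
        return (1 - level)[:, None] * white + level[:, None] * red

    weekly_counts = {"cough": 40, "fever": 65, "rash": 3, "headache": 12}
    colors = frequency_to_rgb(list(weekly_counts.values()))
    for code, rgb in zip(weekly_counts, colors):
        print(f"{code:9s} -> RGB {tuple(round(c, 2) for c in rgb)}")
    ```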

  13. Pattern-visual evoked potentials in thinner abusers.

    PubMed

    Poblano, A; Lope Huerta, M; Martínez, J M; Falcón, H D

    1996-01-01

    Organic solvents cause injury to the lipids of neuronal and glial membranes. A well-known characteristic of workers exposed to thinner is optic neuropathy. We looked for neurophysiologic signs of visual damage in patients identified as thinner abusers. Pattern-reversal visual evoked potentials were recorded in 34 thinner-abuser patients and 30 controls. P-100 wave latency was found to be longer in abusers than in control subjects. The results point to the possibility of central alterations in thinner abusers despite the absence of clinical symptoms. PMID:8987190

  14. "Hot" Facilitation of "Cool" Processing: Emotional Distraction Can Enhance Priming of Visual Search

    ERIC Educational Resources Information Center

    Kristjansson, Arni; Oladottir, Berglind; Most, Steven B.

    2013-01-01

    Emotional stimuli often capture attention and disrupt effortful cognitive processing. However, cognitive processes vary in the degree to which they require effort. We investigated the impact of emotional pictures on visual search and on automatic priming of search. Observers performed visual search after task-irrelevant neutral or emotionally…

  15. Characterizing the development of visual search expertise in pathology residents viewing whole slide images.

    PubMed

    Krupinski, Elizabeth A; Graham, Anna R; Weinstein, Ronald S

    2013-03-01

    The goal of this study was to examine and characterize changes in the ways that pathology residents examine digital whole slide images as they progress through the residency training. A series of 20 digitized breast biopsy whole slide images (half benign and half malignant biopsies) were individually shown to 4 pathology residents at four points in time--at the beginning of their first, second, third, and fourth years of residency. Their task was to examine each image and select three areas that they would most want to zoom in on in order to view the diagnostic detail at higher resolution. Eye position was recorded as they scanned each whole slide image at low magnification. The data indicate that with each successive year of experience, the residents' search patterns do change. Overall, with time, it takes significantly less time to view an individual slide and decide where to zoom, significantly fewer fixations are generated overall, and there is less examination of nondiagnostic areas. Essentially, the residents' search becomes much more efficient. These findings are similar to those in radiology, and support the theory that an important aspect of the development of expertise is improved pattern recognition (taking in more information during the initial Gestalt or gist view) as well as improved allocation of attention and visual processing resources. Progression in improvements in visual search strategies was similar, but not identical, for the 4 residents. PMID:22835956

  16. Enhanced Visual Search in Infancy Predicts Emerging Autism Symptoms.

    PubMed

    Gliga, Teodora; Bedford, Rachael; Charman, Tony; Johnson, Mark H

    2015-06-29

    In addition to core symptoms, i.e., social interaction and communication difficulties and restricted and repetitive behaviors, autism is also characterized by aspects of superior perception. One well-replicated finding is that of superior performance in visual search tasks, in which participants have to indicate the presence of an odd-one-out element among a number of foils. Whether these aspects of superior perception contribute to the emergence of core autism symptoms remains debated. Perceptual and social interaction atypicalities could reflect co-expressed but biologically independent pathologies, as suggested by a "fractionable" phenotype model of autism. A developmental test of this hypothesis is now made possible by longitudinal cohorts of infants at high risk, such as younger siblings of children with autism spectrum disorder (ASD). Around 20% of younger siblings are diagnosed with autism themselves, and up to another 30% manifest elevated levels of autism symptoms. We used eye tracking to measure spontaneous orienting to letter targets (O, S, V, and +) presented among distractors (the letter X; Figure 1). At 9 and 15 months, emerging autism symptoms were assessed using the Autism Observation Scale for Infants (AOSI), and at 2 years of age, they were assessed using the Autism Diagnostic Observation Schedule (ADOS). Enhanced visual search performance at 9 months predicted a higher level of autism symptoms at 15 months and at 2 years. Infant perceptual atypicalities are thus intrinsically linked to the emerging autism phenotype. PMID:26073135

  17. Exploring Visual Search as a Paradigm for Predicting Medication Errors.

    PubMed

    Roque, Nelson; Wright, Timothy; Boot, Walter

    2015-01-01

    According to the FDA, over a million injuries occur each year in the United States due to medication errors. Furthermore, medication management is considered an Instrumental Activity of Daily Living (IADL), and inability to manage one's medications can threaten an individual's independence. These errors occur for a number of reasons, including similar sounding/looking medication names and labels, but also due to pills that look extremely similar. We present initial work looking at whether a visual search paradigm might be used to help predict medication mix-ups. We extracted images of pills from the NIH's Pillbox database, and observers were asked to search for a target pill among similar or dissimilar distractors. Set size was also manipulated (3, 6, 9 items). Results indicated that search slopes may serve as a sensitive and continuous measure of pill confusability. For example, two pills that were extremely similar in terms of color, shape, and size produced relatively steep search slopes (28 ms/item target present, 80ms/item absent), while two pills that were similar in shape and size, but different in color produced parallel search slopes (< 2 ms/item for present and absent conditions). In this case, neither target present nor absent conditions indicated a significant effect of set size (t(27) = -.22, p = .83 and t(27) = .59, p = .56, respectively). Overall, these results indicate potential for this paradigm to be used to study and predict medication errors, and future work will extend findings to take into account changes in acuity and color perception associated with advancing age by testing younger and older adults. Meeting abstract presented at VSS 2015. PMID:26326559
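
    The slopes quoted above come from regressing response time on set size; the toy sketch below shows that computation with invented numbers, purely to make the ms/item measure concrete.

    ```python
    import numpy as np

    set_sizes = np.array([3, 6, 9])
    mean_rt_ms = np.array([620, 770, 905])       # hypothetical target-present means

    slope, intercept = np.polyfit(set_sizes, mean_rt_ms, 1)
    print(f"search slope: {slope:.1f} ms/item (intercept {intercept:.0f} ms)")
    # Slopes near zero suggest parallel ("pop-out") search; steep slopes suggest
    # the pills are confusable and search proceeds more serially.
    ```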

  18. Object-based auditory facilitation of visual search for pictures and words with frequent and rare targets.

    PubMed

    Iordanescu, Lucica; Grabowecky, Marcia; Suzuki, Satoru

    2011-06-01

    Auditory and visual processes demonstrably enhance each other based on spatial and temporal coincidence. Our recent results on visual search have shown that auditory signals also enhance visual salience of specific objects based on multimodal experience. For example, we tend to see an object (e.g., a cat) and simultaneously hear its characteristic sound (e.g., "meow"), to name an object when we see it, and to vocalize a word when we read it, but we do not tend to see a word (e.g., cat) and simultaneously hear the characteristic sound (e.g., "meow") of the named object. If auditory-visual enhancements occur based on this pattern of experiential associations, playing a characteristic sound (e.g., "meow") should facilitate visual search for the corresponding object (e.g., an image of a cat), hearing a name should facilitate visual search for both the corresponding object and corresponding word, but playing a characteristic sound should not facilitate visual search for the name of the corresponding object. Our present and prior results together confirmed these experiential association predictions. We also recently showed that the underlying object-based auditory-visual interactions occur rapidly (within 220ms) and guide initial saccades towards target objects. If object-based auditory-visual enhancements are automatic and persistent, an interesting application would be to use characteristic sounds to facilitate visual search when targets are rare, such as during baggage screening. Our participants searched for a gun among other objects when a gun was presented on only 10% of the trials. The search time was speeded when a gun sound was played on every trial (primarily on gun-absent trials); importantly, playing gun sounds facilitated both gun-present and gun-absent responses, suggesting that object-based auditory-visual enhancements persistently increase the detectability of guns rather than simply biasing gun-present responses. Thus, object-based auditory-visual interactions that derive from experiential associations rapidly and persistently increase visual salience of corresponding objects. PMID:20864070

  19. Patterns in the sky: Natural visualization of aircraft flow fields

    NASA Technical Reports Server (NTRS)

    Campbell, James F.; Chambers, Joseph R.

    1994-01-01

    The objective of the current publication is to present the collection of flight photographs to illustrate the types of flow patterns that were visualized and to present qualitative correlations with computational and wind tunnel results. Initially in section 2, the condensation process is discussed, including a review of relative humidity, vapor pressure, and factors which determine the presence of visible condensate. Next, outputs from computer code calculations are postprocessed by using water-vapor relationships to determine if computed values of relative humidity in the local flow field correlate with the qualitative features of the in-flight condensation patterns. The photographs are then presented in section 3 by flow type and subsequently in section 4 by aircraft type to demonstrate the variety of condensed flow fields that was visualized for a wide range of aircraft and flight maneuvers.

  20. Pattern Visual Evoked Potentials Elicited by Organic Electroluminescence Screen

    PubMed Central

    Matsumoto, Celso Soiti; Shinoda, Kei; Matsumoto, Harue; Funada, Hideaki; Minoda, Haruka

    2014-01-01

    Purpose. To determine whether organic electroluminescence (OLED) screens can be used as visual stimulators to elicit pattern-reversal visual evoked potentials (p-VEPs). Method. Checkerboard patterns were generated on a conventional cathode-ray tube (S710, Compaq Computer Co., USA) screen and on an OLED (17 inches, 320 × 230 mm, PVM-1741, Sony, Tokyo, Japan) screen. The time course of the luminance changes of each monitor was measured with a photodiode. The p-VEPs elicited by these two screens were recorded from 15 eyes of 9 healthy volunteers (22.0 ± 0.8 years). Results. The OLED screen had a constant time delay from the onset of the trigger signal to the start of the luminescence change. The delay during the reversal phase from black to white for the pattern was 1.0 msec on the cathode-ray tube (CRT) screen and 0.5 msec on the OLED screen. No significant differences in the amplitudes of P100 and the implicit times of N75 and P100 were observed in the p-VEPs elicited by the CRT and the OLED screens. Conclusion. The OLED screen can be used as a visual stimulator to elicit p-VEPs; however the time delay and the specific properties in the luminance change must be taken into account. PMID:25197652

  1. Characterization of Visual Scanning Patterns in Air Traffic Control

    PubMed Central

    McClung, Sarah N.; Kang, Ziho

    2016-01-01

    Characterization of air traffic controllers' (ATCs') visual scanning strategies is a challenging issue due to the dynamic movement of multiple aircraft and increasing complexity of scanpaths (order of eye fixations and saccades) over time. Additionally, terminologies and methods are lacking to accurately characterize the eye tracking data into simplified visual scanning strategies linguistically expressed by ATCs. As an intermediate step to automate the characterization classification process, we (1) defined and developed new concepts to systematically filter complex visual scanpaths into simpler and more manageable forms and (2) developed procedures to map visual scanpaths with linguistic inputs to reduce the human judgement bias during interrater agreement. The developed concepts and procedures were applied to investigating the visual scanpaths of expert ATCs using scenarios with different aircraft congestion levels. Furthermore, oculomotor trends were analyzed to identify the influence of aircraft congestion on scan time and number of comparisons among aircraft. The findings show that (1) the scanpaths filtered at the highest intensity led to more consistent mapping with the ATCs' linguistic inputs, (2) the pattern classification occurrences differed between scenarios, and (3) increasing aircraft congestion caused increased scan times and aircraft pairwise comparisons. The results provide a foundation for better characterizing complex scanpaths in a dynamic task and automating the analysis process.

  2. Age-related changes in conjunctive visual search in children with and without ASD.

    PubMed

    Iarocci, Grace; Armstrong, Kimberly

    2014-04-01

    Visual-spatial strengths observed among people with autism spectrum disorder (ASD) may be associated with increased efficiency of selective attention mechanisms such as visual search. In a series of studies, researchers examined the visual search of targets that share features with distractors in a visual array and concluded that people with ASD showed enhanced performance on visual search tasks. However, methodological limitations, the small sample sizes, and the lack of developmental analysis have tempered the interpretations of these results. In this study, we specifically addressed age-related changes in visual search. We examined conjunctive visual search in groups of children with (n = 34) and without ASD (n = 35) at 7-9 years of age when visual search performance is beginning to improve, and later, at 10-12 years, when performance has improved. The results were consistent with previous developmental findings; 10- to 12-year-old children were significantly faster visual searchers than their 7- to 9-year-old counterparts. However, we found no evidence of enhanced search performance among the children with ASD at either the younger or older ages. More research is needed to understand the development of visual search in both children with and without ASD. PMID:24574200

  3. Memory under pressure: secondary-task effects on contextual cueing of visual search.

    PubMed

    Annac, Efsun; Manginelli, Angela A; Pollmann, Stefan; Shi, Zhuanghua; Müller, Hermann J; Geyer, Thomas

    2013-01-01

    Repeated display configurations improve visual search. Recently, the question has arisen whether this contextual cueing effect (Chun & Jiang, 1998) is itself mediated by attention, both in terms of selectivity and processing resources deployed. While it is accepted that selective attention modulates contextual cueing (Jiang & Leung, 2005), there is an ongoing debate whether the cueing effect is affected by a secondary working memory (WM) task, specifically at which stage WM influences the cueing effect: the acquisition of configural associations (e.g., Travis, Mattingley, & Dux, 2013) versus the expression of learned associations (e.g., Manginelli, Langer, Klose, & Pollmann, 2013). The present study re-investigated this issue. Observers performed a visual search in combination with a spatial WM task. The latter was applied on either early or late search trials--so as to examine whether WM load hampers the acquisition of or retrieval from contextual memory. Additionally, the WM and search tasks were performed either temporally in parallel or in succession--so as to permit the effects of spatial WM load to be dissociated from those of executive load. The secondary WM task was found to affect cueing in late, but not early, experimental trials--though only when the search and WM tasks were performed in parallel. This pattern suggests that contextual cueing involves a spatial WM resource, with spatial WM providing a workspace linking the current search array with configural long-term memory; as a result, occupying this workspace by a secondary WM task hampers the expression of learned configural associations. PMID:24190911

  4. Relationships among balance, visual search, and lacrosse-shot accuracy.

    PubMed

    Marsh, Darrin W; Richard, Leon A; Verre, Arlene B; Myers, Jay

    2010-06-01

    The purpose of this study was to examine variables that may contribute to shot accuracy in women's college lacrosse. A convenience sample of 15 healthy women's National Collegiate Athletic Association Division III College lacrosse players aged 18-23 (mean+/-SD, 20.27+/-1.67) participated in the study. Four experimental variables were examined: balance, visual search, hand grip strength, and shoulder joint position sense. Balance was measured by the Biodex Stability System (BSS), and visual search was measured by the Trail-Making Test Part A (TMTA) and Trail-Making Test Part B (TMTB). Hand-grip strength was measured by a standard hand dynamometer, and shoulder joint position sense was measured using a modified inclinometer. All measures were taken in an indoor setting. These experimental variables were then compared with lacrosse-shot error that was measured indoors using a high-speed video camera recorder and a specialized L-shaped apparatus. A Stalker radar gun measured lacrosse-shot velocity. The mean lacrosse-shot error was 15.17 cm with a mean lacrosse-shot velocity of 17.14 m/s (38.35 mph). Lower scores on the BSS level 8 eyes open (BSS L8 E/O) test and TMTB were positively related to less lacrosse-shot error (r=0.760, p=0.011) and (r=0.519, p=0.048), respectively. Relations were not significant between lacrosse-shot error and grip strength (r=0.191, p=0.496), lacrosse-shot error and BSS level 8 eyes closed (BSS L8 E/C) (r=0.501, p=0.102), lacrosse-shot error and BSS level 4 eyes open (BSS L4 E/O) (r=0.313, p=0.378), lacrosse-shot error and BSS level 4 eyes closed (BSS L4 E/C) (r=-0.029, p=0.936), lacrosse-shot error and shoulder joint position sense (r=-0.509, p=0.055), and between lacrosse-shot error and TMTA (r=0.375, p=0.168). The results reveal that greater levels of shot accuracy may be related to greater levels of visual search and balance ability in women's college lacrosse athletes. PMID:20508452

  5. Spontaneous pattern formation and pinning in the visual cortex

    NASA Astrophysics Data System (ADS)

    Baker, Tanya I.

    Bifurcation theory and perturbation theory can be combined with a knowledge of the underlying circuitry of the visual cortex to produce an elegant story explaining the phenomenon of visual hallucinations. A key insight is the application of an important set of ideas concerning spontaneous pattern formation introduced by Turing in 1952. The basic mechanism is a diffusion driven linear instability favoring a particular wavelength that determines the size of the ensuing stripe or spot periodicity of the emerging spatial pattern. Competition between short range excitation and longer range inhibition in the connectivity profile of cortical neurons provides the difference in diffusion length scales necessary for the Turing mechanism to occur and has been proven by Ermentrout and Cowan to be sufficient to explain the generation of a subset of reported geometric hallucinations. Incorporating further details of the cortical circuitry, namely that neurons are also weakly connected to other neurons sharing a particular stimulus orientation or spatial frequency preference at even longer ranges and the resulting shift-twist symmetry of the neuronal connectivity, improves the story. We expand this approach in order to be able to include the tuned responses of cortical neurons to additional visual stimulus features such as motion, color and disparity. We apply a study of nonlinear dynamics similar to the analysis of wave propagation in a crystalline lattice to demonstrate how a spatial pattern formed through the Turing instability can be pinned to the geometric layout of various feature preferences. The perturbation analysis is analogous to solving the Schrödinger equation in a weak periodic potential. Competition between the local isotropic connections which produce patterns of activity via the Turing mechanism and the weaker patchy lateral connections that depend on a neuron's particular set of feature preferences creates long-wavelength effects analogous to commensurate-incommensurate transitions found in fluid systems under a spatially periodic driving force. In this way we hope to better understand how the intrinsic architecture of the visual cortex can generate patterns of activity that underlie visual hallucinations.
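
    The diffusion-driven instability invoked here can be summarized by the standard linear-stability calculation for a neural field with difference-of-Gaussians connectivity. The sketch below uses generic notation and is not the specific model analyzed in this work.

```latex
% Activity a(x,t) with lateral coupling w and gain function f:
\partial_t a(x,t) = -a(x,t) + \int w(x-x')\, f\bigl(a(x',t)\bigr)\, dx'
% Difference-of-Gaussians coupling: short-range excitation, longer-range inhibition
w(x) = \frac{A_e}{\sqrt{2\pi}\,\sigma_e} e^{-x^2/2\sigma_e^2}
     - \frac{A_i}{\sqrt{2\pi}\,\sigma_i} e^{-x^2/2\sigma_i^2}, \qquad \sigma_e < \sigma_i
% Linearizing about a uniform state a_0 and inserting modes e^{ikx + \lambda t}:
\lambda(k) = -1 + f'(a_0)\,\hat{w}(k), \qquad
\hat{w}(k) = A_e\, e^{-\sigma_e^2 k^2/2} - A_i\, e^{-\sigma_i^2 k^2/2}
% The homogeneous state first destabilizes at k_* = \arg\max_k \hat{w}(k) once
% f'(a_0)\,\hat{w}(k_*) > 1, selecting a pattern with spatial period 2\pi / k_*.
```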

  6. Task Specificity and the Influence of Memory on Visual Search: Comment on Vo and Wolfe (2012)

    ERIC Educational Resources Information Center

    Hollingworth, Andrew

    2012-01-01

    Recent results from Vo and Wolfe (2012b) suggest that the application of memory to visual search may be task specific: Previous experience searching for an object facilitated later search for that object, but object information acquired during a different task did not appear to transfer to search. The latter inference depended on evidence that a…

  7. Innovative pattern reversal displays for visual electrophysiological studies.

    PubMed

    Toft-Nielsen, J; Bohorquez, J; Ozdamar, O

    2011-01-01

    Pattern Reversal (PR) stimulation is a frequently used tool in the evaluation of the visual pathway. The PR stimulus consists of a field of black and white segments (usually checks or bars) of constant luminance, which change phase (black to white and white to black) at a given reversal rate. The Pattern Electroretinogram (PERG) is a biological potential that is evoked from the retina upon viewing a PR display. Likewise, the Pattern Visual Evoked Potential (PVEP) is a biological potential recorded from the occipital cortex when viewing a PR display. Typically, PR stimuli are presented on a Cathode Ray Tube (CRT) or Liquid Crystal Display (LCD) monitor. This paper presents three modalities to generate pattern reversal stimuli. The three methods are as follows: a display consisting of an array of Light Emitting Diodes (LEDs), a display comprised of two miniature projectors, and a display utilizing a modified LCD display in conjunction with a variable polarizer. The proposed stimulators allow for the recording of PERG and PVEP waveforms at much higher rates than are possible with conventional stimulators. Additionally, all three of the alternative PR displays will be able to take advantage of advanced analysis techniques, such as the recently developed Continuous Loop Averaging Deconvolution (CLAD) algorithm. PMID:22254729

  8. Pupil diameter reflects uncertainty in attentional selection during visual search

    PubMed Central

    Geng, Joy J.; Blumenfeld, Zachary; Tyson, Terence L.; Minzenberg, Michael J.

    2015-01-01

    Pupil diameter has long been used as a metric of cognitive processing. However, recent advances suggest that the cognitive sources of change in pupil size may reflect LC-NE function and the calculation of unexpected uncertainty in decision processes (Aston-Jones and Cohen, 2005; Yu and Dayan, 2005). In the current experiments, we explored the role of uncertainty in attentional selection on task-evoked changes in pupil diameter during visual search. We found that task-evoked changes in pupil diameter were related to uncertainty during attentional selection as measured by reaction time (RT) and performance accuracy (Experiments 1-2). Control analyses demonstrated that the results are unlikely to be due to error monitoring or response uncertainty. Our results suggest that pupil diameter can be used as an implicit metric of uncertainty in ongoing attentional selection requiring effortful control processes. PMID:26300759

  9. Searching for the right word: Hybrid visual and memory search for words.

    PubMed

    Boettcher, Sage E P; Wolfe, Jeremy M

    2015-05-01

    In "hybrid search" (Wolfe Psychological Science, 23(7), 698-703, 2012), observers search through visual space for any of multiple targets held in memory. With photorealistic objects as the stimuli, response times (RTs) increase linearly with the visual set size and logarithmically with the memory set size, even when over 100 items are committed to memory. It is well-established that pictures of objects are particularly easy to memorize (Brady, Konkle, Alvarez, & Oliva Proceedings of the National Academy of Sciences, 105, 14325-14329, 2008). Would hybrid-search performance be similar if the targets were words or phrases, in which word order can be important, so that the processes of memorization might be different? In Experiment 1, observers memorized 2, 4, 8, or 16 words in four different blocks. After passing a memory test, confirming their memorization of the list, the observers searched for these words in visual displays containing two to 16 words. Replicating Wolfe (Psychological Science, 23(7), 698-703, 2012), the RTs increased linearly with the visual set size and logarithmically with the length of the word list. The word lists of Experiment 1 were random. In Experiment 2, words were drawn from phrases that observers reported knowing by heart (e.g., "London Bridge is falling down"). Observers were asked to provide four phrases, ranging in length from two words to no less than 20 words (range 21-86). All words longer than two characters from the phrase constituted the target list. Distractor words were matched for length and frequency. Even with these strongly ordered lists, the results again replicated the curvilinear function of memory set size seen in hybrid search. One might expect to find serial position effects, perhaps reducing the RTs for the first (primacy) and/or the last (recency) members of a list (Atkinson & Shiffrin, 1968; Murdock Journal of Experimental Psychology, 64, 482-488, 1962). Surprisingly, we showed no reliable effects of word order. Thus, in "London Bridge is falling down," "London" and "down" were found no faster than "falling." PMID:25788035
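
    The reported scaling (RTs linear in the visual set size and logarithmic in the memory set size) corresponds to a simple regression model. The sketch below fits that model with ordinary least squares; the numbers are made up for illustration and are not data from the study.

```python
import numpy as np

# Hypothetical (visual set size, memory set size, mean RT in ms) observations
vis = np.array([2, 4, 8, 16, 2, 4, 8, 16], dtype=float)
mem = np.array([2, 2, 2, 2, 16, 16, 16, 16], dtype=float)
rt  = np.array([620, 700, 880, 1210, 760, 845, 1010, 1350], dtype=float)

# RT = a + b * visual_set_size + c * log2(memory_set_size)
X = np.column_stack([np.ones_like(vis), vis, np.log2(mem)])
coef, *_ = np.linalg.lstsq(X, rt, rcond=None)
a, b, c = coef
print(f"intercept={a:.0f} ms, {b:.1f} ms per visual item, {c:.0f} ms per doubling of the memory set")
```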

  10. Enhanced Visual Search in Infancy Predicts Emerging Autism Symptoms

    PubMed Central

    Gliga, Teodora; Bedford, Rachael; Charman, Tony; Johnson, Mark H.; Baron-Cohen, Simon; Bolton, Patrick; Cheung, Celeste; Davies, Kim; Liew, Michelle; Fernandes, Janice; Gammer, Issy; Maris, Helen; Salomone, Erica; Pasco, Greg; Pickles, Andrew; Ribeiro, Helena; Tucker, Leslie

    2015-01-01

    Summary In addition to core symptoms, i.e., social interaction and communication difficulties and restricted and repetitive behaviors, autism is also characterized by aspects of superior perception [1]. One well-replicated finding is that of superior performance in visual search tasks, in which participants have to indicate the presence of an odd-one-out element among a number of foils [2–5]. Whether these aspects of superior perception contribute to the emergence of core autism symptoms remains debated [4, 6]. Perceptual and social interaction atypicalities could reflect co-expressed but biologically independent pathologies, as suggested by a "fractionable" phenotype model of autism [7]. A developmental test of this hypothesis is now made possible by longitudinal cohorts of infants at high risk, such as younger siblings of children with autism spectrum disorder (ASD). Around 20% of younger siblings are diagnosed with autism themselves [8], and up to another 30% manifest elevated levels of autism symptoms [9]. We used eye tracking to measure spontaneous orienting to letter targets (O, S, V, and +) presented among distractors (the letter X; Figure 1). At 9 and 15 months, emerging autism symptoms were assessed using the Autism Observation Scale for Infants (AOSI; [10]), and at 2 years of age, they were assessed using the Autism Diagnostic Observation Schedule (ADOS; [11]). Enhanced visual search performance at 9 months predicted a higher level of autism symptoms at 15 months and at 2 years. Infant perceptual atypicalities are thus intrinsically linked to the emerging autism phenotype. PMID:26073135

  11. Recovery of Visual Search following Moderate to Severe Traumatic Brain Injury

    PubMed Central

    Schmitter-Edgecombe, Maureen; Robertson, Kayela

    2015-01-01

    Introduction Deficits in attentional abilities can significantly impact rehabilitation and recovery from traumatic brain injury (TBI). This study investigated the nature and recovery of pre-attentive (parallel) and attentive (serial) visual search abilities after TBI. Methods Participants were 40 individuals with moderate to severe TBI who were tested following emergence from post-traumatic amnesia and approximately 8 months post-injury, as well as 40 age and education matched controls. Pre-attentive (automatic) and attentive (controlled) visual search situations were created by manipulating the saliency of the target item amongst distractor items in visual displays. The relationship between pre-attentive and attentive visual search rates and follow-up community integration was also explored. Results The results revealed intact parallel (automatic) processing skills in the TBI group both post-acutely and at follow-up. In contrast, when attentional demands on visual search were increased by reducing the saliency of the target, the TBI group demonstrated poorer performances compared to the control group both post-acutely and 8 months post-injury. Neither pre-attentive nor attentive visual search slope values correlated with follow-up community integration. Conclusions These results suggest that utilizing intact pre-attentive visual search skills during rehabilitation may help to reduce high mental workload situations, thereby improving the rehabilitation process. For example, making commonly used objects more salient in the environment should increase reliance on more automatic visual search processes and reduce visual search time for individuals with TBI. PMID:25671675

  12. Visualization of oxygen distribution patterns caused by coral and algae.

    PubMed

    Haas, Andreas F; Gregg, Allison K; Smith, Jennifer E; Abieri, Maria L; Hatay, Mark; Rohwer, Forest

    2013-01-01

    Planar optodes were used to visualize oxygen distribution patterns associated with a coral reef associated green algae (Chaetomorpha sp.) and a hermatypic coral (Favia sp.) separately, as standalone organisms, and placed in close proximity mimicking coral-algal interactions. Oxygen patterns were assessed in light and dark conditions and under varying flow regimes. The images show discrete high oxygen concentration regions above the organisms during lighted periods and low oxygen in the dark. Size and orientation of these areas were dependent on flow regime. For corals and algae in close proximity the 2D optodes show areas of extremely low oxygen concentration at the interaction interfaces under both dark (18.4 ± 7.7 µmol O2 L(-1)) and daylight (97.9 ± 27.5 µmol O2 L(-1)) conditions. These images present the first two-dimensional visualization of oxygen gradients generated by benthic reef algae and corals under varying flow conditions and provide a 2D depiction of previously observed hypoxic zones at coral algae interfaces. This approach allows for visualization of locally confined, distinctive alterations of oxygen concentrations facilitated by benthic organisms and provides compelling evidence for hypoxic conditions at coral-algae interaction zones. PMID:23882443

  13. Spatial and temporal dynamics of visual search tasks distinguish subtypes of unilateral spatial neglect: Comparison of two cases with viewer-centered and stimulus-centered neglect.

    PubMed

    Mizuno, Katsuhiro; Kato, Kenji; Tsuji, Tetsuya; Shindo, Keiichiro; Kobayashi, Yukiko; Liu, Meigen

    2016-08-01

    We developed a computerised test to evaluate unilateral spatial neglect (USN) using a touchscreen display, and estimated the spatial and temporal patterns of visual search in USN patients. The results between a viewer-centered USN patient and a stimulus-centered USN patient were compared. Two right-brain-damaged patients with USN, a patient without USN, and 16 healthy subjects performed a simple cancellation test, the circle test, a visuomotor search test, and a visual search test. According to the results of the circle test, one USN patient had stimulus-centered neglect and one had viewer-centered neglect. The spatial and temporal patterns of these two USN patients were compared. The spatial and temporal patterns of cancellation were different in the stimulus-centered USN patient and the viewer-centered USN patient. The viewer-centered USN patient completed the simple cancellation task, but paused when transferring from the right side to the left side of the display. Unexpectedly, this patient did not exhibit rightward attention bias on the visuomotor and visual search tests, but the stimulus-centered USN patient did. The computer-based assessment system provided information on the dynamic visual search strategy of patients with USN. The spatial and temporal patterns of cancellation and visual search were different across the two patients with different subtypes of neglect. PMID:26059555

  14. Expectations developed over multiple timescales facilitate visual search performance

    PubMed Central

    Gekas, Nikos; Seitz, Aaron R.; Seriès, Peggy

    2015-01-01

    Our perception of the world is strongly influenced by our expectations, and a question of key importance is how the visual system develops and updates its expectations through interaction with the environment. We used a visual search task to investigate how expectations of different timescales (from the last few trials to hours to long-term statistics of natural scenes) interact to alter perception. We presented human observers with low-contrast white dots at 12 possible locations equally spaced on a circle, and we asked them to simultaneously identify the presence and location of the dots while manipulating their expectations by presenting stimuli at some locations more frequently than others. Our findings suggest that there are strong acuity differences between absolute target locations (e.g., horizontal vs. vertical) and preexisting long-term biases influencing observers' detection and localization performance, respectively. On top of these, subjects quickly learned about the stimulus distribution, which improved their detection performance but caused increased false alarms at the most frequently presented stimulus locations. Recent exposure to a stimulus resulted in significantly improved detection performance and significantly more false alarms but only at locations at which it was more probable that a stimulus would be presented. Our results can be modeled and understood within a Bayesian framework in terms of a near-optimal integration of sensory evidence with rapidly learned statistical priors, which are skewed toward the very recent history of trials and may help understanding the time scale of developing expectations at the neural level. PMID:26200891
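
    The Bayesian reading of these results (noisy sensory evidence combined with a location prior skewed toward recent stimulus history) can be illustrated with a toy model of the 12-location display. The prior-update rule, contrast and noise level below are assumptions for illustration, not the authors' fitted model.

```python
import numpy as np

rng = np.random.default_rng(0)
n_loc, contrast, noise_sd = 12, 0.4, 0.3

# Prior over target location, skewed toward recently frequent locations
# (exponential forgetting is an assumed update rule).
prior = np.full(n_loc, 1.0 / n_loc)
for presented in [2, 2, 5, 2, 9, 2]:          # hypothetical recent stimulus history
    prior = 0.9 * prior + 0.1 * np.eye(n_loc)[presented]
prior /= prior.sum()

# One trial: a low-contrast dot at location 2, corrupted by sensory noise everywhere.
signal = np.zeros(n_loc)
signal[2] = contrast
obs = signal + rng.normal(0.0, noise_sd, n_loc)

# Per-location likelihood ratio for "target at i" vs "no target at i"
# under a Gaussian noise model (constant factors dropped).
like = np.exp(contrast * obs / noise_sd**2)
post = prior * like
post /= post.sum()
print("MAP location:", int(post.argmax()))
```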

  15. Case role filling as a side effect of visual search

    SciTech Connect

    Marburger, H.; Wahlster, W.

    1983-01-01

    This paper addresses the problem of generating communicatively adequate extended responses in the absence of specific knowledge concerning the intentions of the questioner. The authors formulate and justify a heuristic for the selection of optional deep case slots not contained in the question as candidates for the additional information contained in an extended response. It is shown that, in a visually present domain of discourse, case role filling for the construction of an extended response can be regarded as a side effect of the visual search necessary to answer a question containing a locomotion verb. The paper describes the various representation constructions used in the German language dialog system HAM-ANS for dealing with the semantics of locomotion verbs and illustrates their use in generating extended responses. In particular, it outlines the structure of the geometrical scene description, the representation of events in a logic-oriented semantic representation language, the case-frame lexicon and the representation of the referential semantics based on the flavor system. The emphasis is on a detailed presentation of the application of object-oriented programming methods for coping with the semantics of locomotion verbs. The process of generating an extended response is illustrated by an extensively annotated trace. 13 references.

  16. CiteRivers: Visual Analytics of Citation Patterns.

    PubMed

    Heimerl, Florian; Han, Qi; Koch, Steffen; Ertl, Thomas

    2016-01-01

    The exploration and analysis of scientific literature collections is an important task for effective knowledge management. Past interest in such document sets has spurred the development of numerous visualization approaches for their interactive analysis. They either focus on the textual content of publications, or on document metadata including authors and citations. Previously presented approaches for citation analysis aim primarily at the visualization of the structure of citation networks and their exploration. We extend the state-of-the-art by presenting an approach for the interactive visual analysis of the contents of scientific documents, and combine it with a new and flexible technique to analyze their citations. This technique facilitates user-steered aggregation of citations which are linked to the content of the citing publications using a highly interactive visualization approach. Through enriching the approach with additional interactive views of other important aspects of the data, we support the exploration of the dataset over time and enable users to analyze citation patterns, spot trends, and track long-term developments. We demonstrate the strengths of our approach through a use case and discuss it based on expert user feedback. PMID:26529699

  17. Efficient visual search of videos cast as text retrieval.

    PubMed

    Sivic, Josef; Zisserman, Andrew

    2009-04-01

    We describe an approach to object retrieval which searches for and localizes all the occurrences of an object in a video, given a query image of the object. The object is represented by a set of viewpoint invariant region descriptors so that recognition can proceed successfully despite changes in viewpoint, illumination and partial occlusion. The temporal continuity of the video within a shot is used to track the regions in order to reject those that are unstable. Efficient retrieval is achieved by employing methods from statistical text retrieval, including inverted file systems, and text and document frequency weightings. This requires a visual analogy of a word which is provided here by vector quantizing the region descriptors. The final ranking also depends on the spatial layout of the regions. The result is that retrieval is immediate, returning a ranked list of shots in the manner of Google. We report results for object retrieval on the full length feature films 'Groundhog Day', 'Casablanca' and 'Run Lola Run', including searches from within the movie and specified by external images downloaded from the Internet. We investigate retrieval performance with respect to different quantizations of region descriptors and compare the performance of several ranking measures. Performance is also compared to a baseline method implementing standard frame to frame matching. PMID:19229077
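
    The "visual word" analogy can be sketched with generic tools: vector-quantize region descriptors with k-means to form a vocabulary, represent each shot as a tf-idf weighted word histogram, and rank shots by cosine similarity to the query image's histogram. The sketch below substitutes random vectors for real viewpoint-invariant descriptors; it illustrates the retrieval scheme, not the authors' implementation.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.metrics.pairwise import cosine_similarity

rng = np.random.default_rng(1)
descriptors = rng.normal(size=(5000, 128))      # stand-in for 128-D region descriptors
vocab = KMeans(n_clusters=200, n_init=4, random_state=1).fit(descriptors)   # visual vocabulary

def bow_histogram(desc):
    """Histogram of visual-word counts for one shot's descriptors."""
    words = vocab.predict(desc)
    return np.bincount(words, minlength=vocab.n_clusters)

shots = [rng.normal(size=(rng.integers(100, 400), 128)) for _ in range(50)]  # descriptors per shot
counts = np.vstack([bow_histogram(d) for d in shots])

tfidf = TfidfTransformer().fit(counts)          # document-frequency weighting over the shots
index = tfidf.transform(counts)                 # tf-idf weighted shot representations

query = tfidf.transform(bow_histogram(rng.normal(size=(150, 128)))[None, :])
ranking = cosine_similarity(query, index).ravel().argsort()[::-1]
print("top-ranked shots:", ranking[:5])
```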

  18. Active sensing in the categorization of visual patterns.

    PubMed

    Yang, Scott Cheng-Hsin; Lengyel, Máté; Wolpert, Daniel M

    2016-01-01

    Interpreting visual scenes typically requires us to accumulate information from multiple locations in a scene. Using a novel gaze-contingent paradigm in a visual categorization task, we show that participants' scan paths follow an active sensing strategy that incorporates information already acquired about the scene and knowledge of the statistical structure of patterns. Intriguingly, categorization performance was markedly improved when locations were revealed to participants by an optimal Bayesian active sensor algorithm. By using a combination of a Bayesian ideal observer and the active sensor algorithm, we estimate that a major portion of this apparent suboptimality of fixation locations arises from prior biases, perceptual noise and inaccuracies in eye movements, and the central process of selecting fixation locations is around 70% efficient in our task. Our results suggest that participants select eye movements with the goal of maximizing information about abstract categories that require the integration of information from multiple locations. PMID:26880546

  19. Visual pattern memory requires foraging function in the central complex of Drosophila

    PubMed Central

    Wang, Zhipeng; Pan, Yufeng; Li, Weizhe; Jiang, Huoqing; Chatzimanolis, Lazaros; Chang, Jianhong; Gong, Zhefeng; Liu, Li

    2008-01-01

    The role of the foraging (for) gene, which encodes a cyclic guanosine-3′,5′-monophosphate (cGMP)-dependent protein kinase (PKG), in food-search behavior in Drosophila has been intensively studied. However, its functions in other complex behaviors have not been well-characterized. Here, we show experimentally in Drosophila that the for gene is required in the operant visual learning paradigm. Visual pattern memory was normal in a natural variant rover (forR) but was impaired in another natural variant sitter (forS), which has a lower PKG level. Memory defects in forS flies could be rescued by either constitutive or adult-limited expression of for in the fan-shaped body. Interestingly, we showed that such rescue also occurred when for was expressed in the ellipsoid body. Additionally, expression of for in the fifth layer of the fan-shaped body restored sufficient memory for the pattern parameter “elevation” but not for “contour orientation,” whereas expression of for in the ellipsoid body restored sufficient memory for both parameters. Our study defines a Drosophila model for further understanding the role of cGMP-PKG signaling in associative learning/memory and the neural circuit underlying this for-dependent visual pattern memory. PMID:18310460

  20. Adaptive two-scale edge detection for visual pattern processing

    NASA Astrophysics Data System (ADS)

    Rahman, Zia-Ur; Jobson, Daniel J.; Woodell, Glenn A.

    2009-09-01

    Adaptive methods are defined and experimentally studied for a two-scale edge detection process that mimics human visual perception of edges and is inspired by the parvocellular (P) and magnocellular (M) physiological subsystems of natural vision. This two-channel processing consists of a high spatial acuity/coarse contrast channel (P) and a coarse acuity/fine contrast (M) channel. We perform edge detection after a very strong nonlinear image enhancement that uses smart Retinex image processing. Two conditions that arise from this enhancement demand adaptiveness in edge detection. These conditions are the presence of random noise further exacerbated by the enhancement process and the equally random occurrence of dense textural visual information. We examine how to best deal with both phenomena with an automatic adaptive computation that treats both high noise and dense textures as too much information and gracefully shifts from small-scale to medium-scale edge pattern priorities. This shift is accomplished by using different edge-enhancement schemes that correspond with the P- and M-channels of the human visual system. We also examine the case of adapting to a third image condition, namely too little visual information, and automatically adjust edge-detection sensitivities when sparse feature information is encountered. When this methodology is applied to a sequence of images of the same scene but with varying exposures and lighting conditions, this edge-detection process produces pattern constancy that is very useful for several imaging applications that rely on image classification in variable imaging conditions.
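
    The two-channel idea (a fine-scale channel that yields to a coarser one wherever noise or dense texture produces "too much information") can be approximated with gradient operators at two Gaussian scales plus a local busyness measure. The channel definitions and the threshold below are assumptions for illustration, not the Retinex-based pipeline of the paper.

```python
import numpy as np
from scipy import ndimage

def two_scale_edges(img, fine_sigma=1.0, coarse_sigma=3.0, busy_threshold=0.15):
    """Edge map that prefers the fine (P-like) channel but falls back to the
    coarse (M-like) channel where the fine channel is too busy (noise/texture)."""
    fine   = ndimage.gaussian_gradient_magnitude(img, sigma=fine_sigma)
    coarse = ndimage.gaussian_gradient_magnitude(img, sigma=coarse_sigma)
    # local density of fine-scale edge energy, used as the busyness measure
    busyness = ndimage.uniform_filter(fine / (fine.max() + 1e-9), size=15)
    return np.where(busyness > busy_threshold, coarse, fine)

img = np.random.default_rng(0).random((128, 128))   # stand-in for an enhanced image
edges = two_scale_edges(img)
print(edges.shape)
```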

  1. Flow pattern visualization in a mimic anaerobic digester using CFD.

    PubMed

    Vesvikar, Mehul S; Al-Dahhan, Muthanna

    2005-03-20

    Three-dimensional steady-state computational fluid dynamics (CFD) simulations were performed in mimic anaerobic digesters to visualize their flow pattern and obtain hydrodynamic parameters. The mixing in the digester was provided by sparging gas at three different flow rates. The gas phase was simulated with air and the liquid phase with water. The CFD results were first evaluated using experimental data obtained by computer automated radioactive particle tracking (CARPT). The simulation results in terms of overall flow pattern, location of circulation cells and stagnant regions, trends of liquid velocity profiles, and volume of dead zones agree reasonably well with the experimental data. CFD simulations were also performed on different digester configurations. The effects of changing draft tube size, clearance, and shape of the tank bottoms were calculated to evaluate the effect of digester design on its flow pattern. Changing the draft tube clearance and height had no influence on the flow pattern or dead regions volume. However, increasing the draft tube diameter or incorporating a conical bottom design helped in reducing the volume of the dead zones as compared to a flat-bottom digester. The simulations showed that the gas flow rate sparged by a single point (0.5 cm diameter) sparger does not have an appreciable effect on the flow pattern of the digesters at the range of gas flow rates used. PMID:15685599

  2. Is There a Limit to the Superiority of Individuals with ASD in Visual Search?

    ERIC Educational Resources Information Center

    Hessels, Roy S.; Hooge, Ignace T. C.; Snijders, Tineke M.; Kemner, Chantal

    2014-01-01

    Superiority in visual search for individuals diagnosed with autism spectrum disorder (ASD) is a well-reported finding. We administered two visual search tasks to individuals with ASD and matched controls. One showed no difference between the groups, and one did show the expected superior performance for individuals with ASD. These results offer an…

  3. Preemption Effects in Visual Search: Evidence for Low-Level Grouping.

    ERIC Educational Resources Information Center

    Rensink, Ronald A.; Enns, James T.

    1995-01-01

    Eight experiments, each with 10 observers in each condition, show that the visual search for Mueller-Lyer stimuli is based on complete configurations rather than component segments with preemption by low-level groups. Results support the view that rapid visual search can only access higher level, more ecologically relevant structures. (SLD)

  4. Toddlers with Autism Spectrum Disorder Are More Successful at Visual Search than Typically Developing Toddlers

    ERIC Educational Resources Information Center

    Kaldy, Zsuzsa; Kraper, Catherine; Carter, Alice S.; Blaser, Erik

    2011-01-01

    Plaisted, O'Riordan and colleagues (Plaisted, O'Riordan & Baron-Cohen, 1998; O'Riordan, 2004) showed that school-age children and adults with Autism Spectrum Disorder (ASD) are faster at finding targets in certain types of visual search tasks than typical controls. Currently though, there is very little known about the visual search skills of very…

  5. The effect of search condition and advertising type on visual attention to Internet advertising.

    PubMed

    Kim, Gho; Lee, Jang-Han

    2011-05-01

    This research was conducted to examine the level of consumers' visual attention to Internet advertising. It was predicted that consumers' search type would influence visual attention to advertising. Specifically, it was predicted that more attention to advertising would be attracted in the exploratory search condition than in the goal-directed search condition. It was also predicted that there would be a difference in visual attention depending on the advertisement type (advertising type: text vs. pictorial advertising). An eye tracker was used for measurement. Results revealed that search condition and advertising type influenced advertising effectiveness. PMID:20973730

  6. The role of object categories in hybrid visual and memory search

    PubMed Central

    Cunningham, Corbin A.; Wolfe, Jeremy M.

    2014-01-01

    In hybrid search, observers (Os) search for any of several possible targets in a visual display containing distracting items and, perhaps, a target. Wolfe (2012) found that response times (RT) in such tasks increased linearly with increases in the number of items in the display. However, RT increased linearly with the log of the number of items in the memory set. In earlier work, all items in the memory set were unique instances (e.g. this apple in this pose). Typical real world tasks involve more broadly defined sets of stimuli (e.g. any "apple" or, perhaps, "fruit"). The present experiments show how sets or categories of targets are handled in joint visual and memory search. In Experiment 1, searching for a digit among letters was not like searching for targets from a 10-item memory set, though searching for targets from an N-item memory set of arbitrary alphanumeric characters was like searching for targets from an N-item memory set of arbitrary objects. In Experiment 2, Os searched for any instance of N sets or categories held in memory. This hybrid search was harder than search for specific objects. However, memory search remained logarithmic. Experiment 3 illustrates the interaction of visual guidance and memory search when a subset of visual stimuli is drawn from a target category. Furthermore, we outline a conceptual model, supported by our results, defining the core components that would be necessary to support such categorical hybrid searches. PMID:24661054

  7. The Importance of Slow Consistent Movement when Searching for Hard-to-Find Targets in Real-World Visual Search.

    PubMed

    Riggs, Charlotte; Cornes, Katherine; Godwin, Hayward; Guest, Richard; Donnelly, Nick

    2015-01-01

    Various real-world tasks require careful and exhaustive visual search. For example, searching for forensic evidence or signs of hidden threats (what we call hard-to-find targets). Here, we examine how search accuracy for hard-to-find targets is influenced by search behaviour. Participants searched for coins set amongst a 5m x 15m (defined as x and y axes respectively) piece of grassland. The grassland contained natural distractors of leaves and flowers and was not manicured. Coins were visually detectable from standing height. There was no time limit to the task and participants were instructed to search until they were confident they had completed their search. On average, participants detected 45% (SD=23%) of the targets and took 7:23 (SD=4:44) minutes to complete their search. Participants' movement over space and time was recorded as a series of time-stamped x, y coordinates using a Total Station theodolite. To quantify their search behaviour, the x- and y-coordinates of participants' physical locations as they searched the grassland were converted into the frequency domain using a Fourier transform. Decreases in dominant frequencies, a measure of the time before turning during search, resulted in increased response accuracy as well as increased search times. Furthermore, decreases in the number of iterations, defined by the total search time divided by the dominant frequency, also resulted in increased accuracy and search times. Comparing distance between the two most dominant frequency peaks provided a measure of consistency of movement over time. This measure showed that more variable search was associated with slower search times but no improvement in accuracy. Throughout our analyses, these results were true for the y-axis but not the x-axis. At least with respect to the present task, accurate search for hard-to-find targets is dependent on conducting search at a slow consistent speed where changes in direction are minimised. Meeting abstract presented at VSS 2015. PMID:26327043
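
    The frequency-domain measures described here (the dominant frequency of movement along the long axis of the search area, from which the iteration count follows) can be reproduced in outline from a time-stamped coordinate trace. The synthetic trace and uniform sampling below are assumptions; this is a sketch of the analysis, not the authors' exact pipeline.

```python
import numpy as np

# Time-stamped positions from the theodolite: t (s) and y (m); synthetic example here.
t = np.linspace(0, 443, 2000)                          # roughly 7:23 min of search
y = 7.5 + 7.0 * np.sin(2 * np.pi * 0.02 * t)           # sweeping up and down the 15 m axis
y += np.random.default_rng(0).normal(0, 0.3, t.size)   # positional jitter

# Sampling is uniform here, so a plain FFT applies; real traces may need resampling first.
dt = t[1] - t[0]
spectrum = np.abs(np.fft.rfft(y - y.mean()))
freqs = np.fft.rfftfreq(y.size, d=dt)
dominant = freqs[spectrum[1:].argmax() + 1]            # skip the zero-frequency bin
print(f"dominant frequency ~ {dominant:.3f} Hz, i.e. one sweep every {1/dominant:.0f} s")
```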

  8. Visual-auditory integration for visual search: a behavioral study in barn owls

    PubMed Central

    Hazan, Yael; Kra, Yonatan; Yarin, Inna; Wagner, Hermann; Gutfreund, Yoram

    2015-01-01

    Barn owls are nocturnal predators that rely on both vision and hearing for survival. The optic tectum of barn owls, a midbrain structure involved in selective attention, has been used as a model for studying visual-auditory integration at the neuronal level. However, behavioral data on visual-auditory integration in barn owls are lacking. The goal of this study was to examine if the integration of visual and auditory signals contributes to the process of guiding attention toward salient stimuli. We attached miniature wireless video cameras on barn owls’ heads (OwlCam) to track their target of gaze. We first provide evidence that the area centralis (a retinal area with a maximal density of photoreceptors) is used as a functional fovea in barn owls. Thus, by mapping the projection of the area centralis on the OwlCam’s video frame, it is possible to extract the target of gaze. For the experiment, owls were positioned on a high perch and four food items were scattered in a large arena on the floor. In addition, a hidden loudspeaker was positioned in the arena. The positions of the food items and speaker were changed every session. Video sequences from the OwlCam were saved for offline analysis while the owls spontaneously scanned the room and the food items with abrupt gaze shifts (head saccades). From time to time during the experiment, a brief sound was emitted from the speaker. The fixation points immediately following the sounds were extracted and the distances between the gaze position and the nearest items and loudspeaker were measured. The head saccades were rarely toward the location of the sound source but to salient visual features in the room, such as the door knob or the food items. However, among the food items, the one closest to the loudspeaker had the highest probability of attracting a gaze shift. This result supports the notion that auditory signals are integrated with visual information for the selection of the next visual search target. PMID:25762905

  9. Polygon cluster pattern recognition based on new visual distance

    NASA Astrophysics Data System (ADS)

    Shuai, Yun; Shuai, Haiyan; Ni, Lin

    2007-06-01

    The pattern recognition of polygon clusters is one of the most prominent problems in spatial data mining. This paper investigates the problem by combining spatial cognition principles and the Gestalt principles of visual recognition with spatial clustering methods, and makes two contributions. First, it substantially refines the concept of "visual distance": the definition takes into account not only Euclidean distance, orientation difference and size discrepancy, but also, crucially, the similarity of object shapes, and the distance is computed over a Delaunay triangulation structure. Second, the study adopts spatial clustering based on a minimum spanning tree (MST); the design of the pruning algorithm introduces an automatic data-layering mechanism together with a simulated annealing optimization. This study also provides a new line of research for GIS development: GIS is an interdisciplinary field whose research methods should remain open and diverse, and mature techniques from related disciplines can be introduced into GIS provided they are adapted to its principles as a spatial cognition science.
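
    The pipeline described (a composite "visual distance" evaluated over a Delaunay triangulation, followed by MST-based clustering with pruning) can be sketched with generic geometry tools. The distance weighting below is an arbitrary stand-in for the paper's visual-distance model, and a simple long-edge cut replaces its data-layering and simulated-annealing pruning.

```python
import numpy as np
from scipy.spatial import Delaunay
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import minimum_spanning_tree, connected_components

rng = np.random.default_rng(2)
centroids = rng.random((40, 2)) * 100          # polygon centroids
sizes = rng.random(40)                         # normalized polygon areas (stand-in attribute)

tri = Delaunay(centroids)
edges = set()
for simplex in tri.simplices:                  # collect unique Delaunay edges
    for i in range(3):
        a, b = sorted((simplex[i], simplex[(i + 1) % 3]))
        edges.add((a, b))

rows, cols, weights = [], [], []
for a, b in edges:
    d = np.linalg.norm(centroids[a] - centroids[b])
    w = d * (1 + abs(sizes[a] - sizes[b]))     # assumed composite "visual distance"
    rows.append(a); cols.append(b); weights.append(w)

graph = coo_matrix((weights, (rows, cols)), shape=(40, 40))
mst = minimum_spanning_tree(graph).tocoo()

# prune edges much longer than the MST average to obtain clusters
keep = mst.data < 1.5 * mst.data.mean()
pruned = coo_matrix((mst.data[keep], (mst.row[keep], mst.col[keep])), shape=(40, 40))
n_clusters, labels = connected_components(pruned, directed=False)
print(n_clusters, "clusters")
```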

  10. Overcoming hurdles in translating visual search research between the lab and the field.

    PubMed

    Clark, Kait; Cain, Matthew S; Adamo, Stephen H; Mitroff, Stephen R

    2012-01-01

    Research in visual search can be vital to improving performance in careers such as radiology and airport security screening. In these applied, or "field," searches, accuracy is critical, and misses are potentially fatal; however, despite the importance of performing optimally, radiological and airport security searches are nevertheless flawed. Extensive basic research in visual search has revealed cognitive mechanisms responsible for successful visual search as well as a variety of factors that tend to inhibit or improve performance. Ideally, the knowledge gained from such laboratory-based research could be directly applied to field searches, but several obstacles stand in the way of straightforward translation; the tightly controlled visual searches performed in the lab can be drastically different from field searches. For example, they can differ in terms of the nature of the stimuli, the environment in which the search is taking place, and the experience and characteristics of the searchers themselves. The goal of this chapter is to discuss these differences and how they can present hurdles to translating lab-based research to field-based searches. Specifically, most search tasks in the lab entail searching for only one target per trial, and the targets occur relatively frequently, but field searches may contain an unknown and unlimited number of targets, and the occurrence of targets can be rare. Additionally, participants in lab-based search experiments often perform under neutral conditions and have no formal training or experience in search tasks; conversely, career searchers may be influenced by the motivation to perform well or anxiety about missing a target, and they have undergone formal training and accumulated significant experience searching. This chapter discusses recent work that has investigated the impacts of these differences to determine how each factor can influence search performance. Knowledge gained from the scientific exploration of search can be applied to field searches but only when considering and controlling for the differences between lab and field. PMID:23437633

  11. Using visual analytics model for pattern matching in surveillance data

    NASA Astrophysics Data System (ADS)

    Habibi, Mohammad S.

    2013-03-01

    In a persistent surveillance system, a huge amount of data is collected continuously and significant details are labeled for future reference. In this paper a method to summarize video data by identifying events based on this tagged information is explained, leading to a concise description of behavior within a section of extended recordings. Efficient retrieval of various events thus becomes the foundation for determining a pattern in surveillance system observations, both in its extended and fragmented versions. The patterns consisting of spatiotemporal semantic contents are extracted and classified by application of video data mining on generated ontology, and can be matched based on analysts' interest and rules set forth for decision making. The proposed extraction and classification method uses query by example for retrieving similar events containing relevant features, and is carried out by data aggregation. Since structured data forms the majority of surveillance information, this Visual Analytics model employs a KD-Tree approach to group patterns in variant space and time, thus making it convenient to identify and match any abnormal burst of pattern detected in a surveillance video. Several experimental videos were presented to viewers to analyze independently, and their analyses were compared with the results obtained in this paper to demonstrate the efficiency and effectiveness of the proposed technique.
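
    Query-by-example over structured event records with a KD-Tree can be sketched as follows; the descriptor layout (hour, location, duration, object count) and the example query are purely illustrative assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(3)
# each tagged event as a numeric descriptor: [hour, zone-x, zone-y, duration, object-count]
events = rng.random((10_000, 5)) * [24, 100, 100, 600, 20]

# normalize so no single dimension dominates the distance
lo, hi = events.min(axis=0), events.max(axis=0)
scaled = (events - lo) / (hi - lo)
tree = cKDTree(scaled)

query = (np.array([2.0, 60, 40, 450, 12]) - lo) / (hi - lo)   # example: a late-night event
dist, idx = tree.query(query, k=5)                            # five most similar stored events
print(idx)
```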

  12. Visual Search in Typically Developing Toddlers and Toddlers with Fragile X or Williams Syndrome

    ERIC Educational Resources Information Center

    Scerif, Gaia; Cornish, Kim; Wilding, John; Driver, Jon; Karmiloff-Smith, Annette

    2004-01-01

    Visual selective attention is the ability to attend to relevant visual information and ignore irrelevant stimuli. Little is known about its typical and atypical development in early childhood. Experiment 1 investigates typically developing toddlers' visual search for multiple targets on a touch-screen. Time to hit a target, distance between…

  13. Visual Working Memory Supports the Inhibition of Previously Processed Information: Evidence from Preview Search

    ERIC Educational Resources Information Center

    Al-Aidroos, Naseem; Emrich, Stephen M.; Ferber, Susanne; Pratt, Jay

    2012-01-01

    In four experiments we assessed whether visual working memory (VWM) maintains a record of previously processed visual information, allowing old information to be inhibited, and new information to be prioritized. Specifically, we evaluated whether VWM contributes to the inhibition (i.e., visual marking) of previewed distractors in a preview search.…

  14. Dynamic Analysis and Pattern Visualization of Forest Fires

    PubMed Central

    Lopes, António M.; Tenreiro Machado, J. A.

    2014-01-01

    This paper analyses forest fires from the perspective of dynamical systems. Forest fires exhibit complex correlations in size, space and time, revealing features often present in complex systems, such as the absence of a characteristic length-scale, or the emergence of long range correlations and persistent memory. This study addresses a public domain forest fires catalogue, containing information on events for Portugal during the period from 1980 up to 2012. The data is analysed on an annual basis, modelling the occurrences as sequences of Dirac impulses with amplitude proportional to the burnt area. First, we consider mutual information to correlate annual patterns. We use visualization trees, generated by hierarchical clustering algorithms, in order to compare and to extract relationships among the data. Second, we adopt the Multidimensional Scaling (MDS) visualization tool. MDS generates maps where each object corresponds to a point. Objects that are perceived to be similar to each other are placed on the map forming clusters. The results are analysed in order to extract relationships among the data and to identify forest fire patterns. PMID:25137393
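
    The two visualization steps (hierarchical clustering of inter-annual similarities into a tree, and an MDS map that places similar years close together) can be sketched with standard tools. The synthetic impulse series and the correlation-based dissimilarity below stand in for the paper's data and its mutual-information measure.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import squareform
from sklearn.manifold import MDS

rng = np.random.default_rng(6)
years = [str(y) for y in range(1980, 2013)]
# stand-in data: daily burnt area per year, modelled as sparse impulses
series = rng.gamma(0.3, 50.0, size=(len(years), 365)) * (rng.random((len(years), 365)) > 0.97)

corr = np.corrcoef(series)
dissim = 1.0 - corr                                    # correlation distance between years
np.fill_diagonal(dissim, 0.0)

tree = linkage(squareform(dissim, checks=False), method="average")   # feeds a dendrogram (visualization tree)
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(dissim)                   # 2-D MDS map of the years
print(coords[:3])
```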

  15. Animating streamlines with repeated asymmetric patterns for steady flow visualization

    NASA Astrophysics Data System (ADS)

    Yeh, Chih-Kuo; Liu, Zhanping; Lee, Tong-Yee

    2012-01-01

    Animation provides intuitive cueing for revealing essential spatial-temporal features of data in scientific visualization. This paper explores the design of Repeated Asymmetric Patterns (RAPs) in animating evenly-spaced color-mapped streamlines for dense accurate visualization of complex steady flows. We present a smooth cyclic variable-speed RAP animation model that performs velocity (magnitude) integral luminance transition on streamlines. This model is extended with inter-streamline synchronization in luminance varying along the tangential direction to emulate orthogonal advancing waves from a geometry-based flow representation, and then with evenly-spaced hue differing in the orthogonal direction to construct tangential flow streaks. To weave these two mutually dual sets of patterns, we propose an energy-decreasing strategy that adopts an iterative yet efficient procedure for determining the luminance phase and hue of each streamline in HSL color space. We also employ adaptive luminance interleaving in the direction perpendicular to the flow to increase the contrast between streamlines.

  16. Tools for visualizing landscape pattern for large geographic areas

    SciTech Connect

    Timmins, S.P.; Hunsaker, C.T.

    1993-10-01

    Landscape pattern can be modelled on a grid with polygons constructed from cells that share edges. Although this model only allows connections in four directions, programming is convenient because both coordinates and attributes take discrete integer values. A typical raster land-cover data set is a multimegabyte matrix of byte values derived by classification of images or gridding of maps. Each matrix may have thousands of raster polygons (patches), many of them islands inside other larger patches. These data sets have complex topology that can overwhelm vector geographic information systems. The goal is to develop tools to quantify change in the landscape structure in terms of the shape and spatial distribution of patches. Three milestones toward this goal are (1) creating polygon topology on a grid, (2) visualizing patches, and (3) analyzing shape and pattern. An efficient algorithm has been developed to locate patches, measure area and perimeter, and establish patch topology. A powerful visualization system with an extensible programming language is used to write procedures to display images and perform analysis.
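
    Locating patches and measuring their area on a 4-connected grid can be sketched with standard connected-component labeling; a simple perimeter count (exposed cell edges) is included. The random mask below stands in for a classified land-cover grid; this is a generic illustration, not the report's implementation.

```python
import numpy as np
from scipy import ndimage

grid = (np.random.default_rng(4).random((200, 200)) > 0.6)   # stand-in land-cover class mask

# 4-connected patches (cells sharing an edge), matching the raster polygon model described above
structure = np.array([[0, 1, 0], [1, 1, 1], [0, 1, 0]])
labels, n_patches = ndimage.label(grid, structure=structure)
areas = ndimage.sum(grid, labels, index=np.arange(1, n_patches + 1))

def perimeter(mask):
    """Count cell edges where a patch cell meets a non-patch cell or the map edge."""
    padded = np.pad(mask, 1)
    return sum(np.sum(padded & ~np.roll(padded, shift, axis))
               for shift, axis in [(1, 0), (-1, 0), (1, 1), (-1, 1)])

print(n_patches, "patches; area of patch 1 =", int(areas[0]),
      "cells; perimeter of patch 1 =", perimeter(labels == 1))
```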

  17. Transformation of an uncertain video search pipeline to a sketch-based visual analytics loop.

    PubMed

    Legg, Philip A; Chung, David H S; Parry, Matthew L; Bown, Rhodri; Jones, Mark W; Griffiths, Iwan W; Chen, Min

    2013-12-01

    Traditional sketch-based image or video search systems rely on machine learning concepts as their core technology. However, in many applications, machine learning alone is impractical since videos may not be semantically annotated sufficiently, there may be a lack of suitable training data, and the search requirements of the user may frequently change for different tasks. In this work, we develop a visual analytics system that overcomes the shortcomings of the traditional approach. We make use of a sketch-based interface to enable users to specify search requirements in a flexible manner without depending on semantic annotation. We employ active machine learning to train different analytical models for different types of search requirements. We use visualization to facilitate knowledge discovery at the different stages of visual analytics. This includes visualizing the parameter space of the trained model, visualizing the search space to support interactive browsing, visualizing candidature search results to support rapid interaction for active learning while minimizing watching videos, and visualizing aggregated information of the search results. We demonstrate the system for searching spatiotemporal attributes from sports video to identify key instances of the team and player performance. PMID:24051777

  18. High or Low Target Prevalence Increases the Dual-Target Cost in Visual Search

    ERIC Educational Resources Information Center

    Menneer, Tamaryn; Donnelly, Nick; Godwin, Hayward J.; Cave, Kyle R.

    2010-01-01

    Previous studies have demonstrated a dual-target cost in visual search. In the current study, the relationship between search for one and search for two targets was investigated to examine the effects of target prevalence and practice. Color-shape conjunction stimuli were used with response time, accuracy and signal detection measures. Performance…

  1. Searching for Signs, Symbols, and Icons: Effects of Time of Day, Visual Complexity, and Grouping

    ERIC Educational Resources Information Center

    McDougall, Sine; Tyrer, Victoria; Folkard, Simon

    2006-01-01

    Searching for icons, symbols, or signs is an integral part of tasks involving computer or radar displays, head-up displays in aircraft, or attending to road traffic signs. Icons therefore need to be designed to optimize search times, taking into account the factors likely to slow down visual search. Three factors likely to adversely affect visual…

  2. Electrophysiological measurement of information flow during visual search.

    PubMed

    Cosman, Joshua D; Arita, Jason T; Ianni, Julianna D; Woodman, Geoffrey F

    2016-04-01

    The temporal relationship between different stages of cognitive processing is long debated. This debate is ongoing, primarily because it is often difficult to measure the time course of multiple cognitive processes simultaneously. We employed a manipulation that allowed us to isolate ERP components related to perceptual processing, working memory, and response preparation, and then examined the temporal relationship between these components while observers performed a visual search task. We found that, when response speed and accuracy were equally stressed, our index of perceptual processing ended before both the transfer of information into working memory and response preparation began. However, when we stressed speed over accuracy, response preparation began before the completion of perceptual processing or transfer of information into working memory on trials with the fastest reaction times. These findings show that individuals can control the flow of information transmission between stages, either waiting for perceptual processing to be completed before preparing a response or configuring these stages to overlap in time. PMID:26669285

  3. Exploiting visual search theory to infer social interactions

    NASA Astrophysics Data System (ADS)

    Rota, Paolo; Dang-Nguyen, Duc-Tien; Conci, Nicola; Sebe, Nicu

    2013-03-01

    In this paper we propose a new method to infer human social interactions using techniques typically adopted in the literature for visual search and information retrieval. The main information we use to discriminate among different types of interactions is provided by proxemics cues acquired by a tracker, which allow us to distinguish between intentional and casual interactions. The proxemics information is acquired through the analysis of two different metrics: on the one hand we observe the current distance between subjects, and on the other hand we measure the O-space synergy between subjects. The obtained values are taken at every time step over a temporal sliding window and processed in the Discrete Fourier Transform (DFT) domain. The features are then merged into a single array and clustered using the K-means algorithm. The clusters are reorganized, using a second, larger temporal window, into a Bag-of-Words representation so as to build the feature vector that feeds the SVM classifier.
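
    A rough Python illustration of this feature pipeline, with assumed array shapes and window sizes; the O-space synergy values are treated as a given per-frame signal and the final SVM classification stage is omitted:

        import numpy as np
        from sklearn.cluster import KMeans

        def interaction_features(distances, synergies, win=32, step=8):
            """distances, synergies: 1-D arrays of per-frame proxemics values for one dyad."""
            feats = []
            for start in range(0, len(distances) - win + 1, step):
                d = distances[start:start + win]
                s = synergies[start:start + win]
                # magnitude spectra of both cues over the temporal sliding window
                feats.append(np.concatenate([np.abs(np.fft.rfft(d)), np.abs(np.fft.rfft(s))]))
            return np.array(feats)

        # toy usage: two synthetic cue streams, clustered into 3 putative interaction types
        t = np.linspace(0, 10, 500)
        dist = 1.5 + 0.5 * np.sin(2 * np.pi * 0.3 * t)
        syn = 0.2 + 0.1 * np.cos(2 * np.pi * 0.3 * t)
        X = interaction_features(dist, syn)
        labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)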

  4. Immaturity of the Oculomotor Saccade and Vergence Interaction in Dyslexic Children: Evidence from a Reading and Visual Search Study

    PubMed Central

    Bucci, Maria Pia; Nassibi, Naziha; Gerard, Christophe-Loic; Bui-Quoc, Emmanuel; Seassau, Magali

    2012-01-01

    Studies comparing binocular eye movements during reading and visual search in dyslexic children are, to our knowledge, nonexistent. In the present study we examined ocular motor characteristics in dyslexic children versus two groups of non-dyslexic children, one matched on chronological age and one on reading age. Binocular eye movements were recorded by an infrared system (mobileEBT®, e(ye)BRAIN) in twelve dyslexic children (mean age 11 years old) and in groups of chronological age-matched (N = 9) and reading age-matched (N = 10) non-dyslexic children. Two visual tasks were used: text reading and visual search. Independently of the task, the ocular motor behavior of dyslexic children is similar to that reported for reading age-matched non-dyslexic children: more numerous and longer fixations, as well as poor quality of binocular coordination during and after the saccades. In contrast, chronological age-matched non-dyslexic children showed fewer and shorter fixations in the reading task than in the visual search task; furthermore, their saccades were well yoked in both tasks. The atypical eye movement patterns observed in dyslexic children suggest a deficiency in visual attentional processing as well as an immaturity of the interaction between the oculomotor saccade and vergence systems. PMID:22438934

  5. The Role of Prediction In Perception: Evidence From Interrupted Visual Search

    PubMed Central

    Mereu, Stefania; Zacks, Jeffrey M.; Kurby, Christopher A.; Lleras, Alejandro

    2014-01-01

    Recent studies of rapid resumption (an observer's ability to quickly resume a visual search after an interruption) suggest that predictions underlie visual perception. Previous studies showed that when the search display changes unpredictably after the interruption, rapid resumption disappears. This conclusion is at odds with our everyday experience, in which the visual system seems quite efficient despite continuous changes in the visual scene; in the real world, however, changes can typically be anticipated on the basis of previous knowledge. The present study aimed to evaluate whether changes to the visual display can be incorporated into perceptual hypotheses if observers are allowed to anticipate such changes. Results strongly suggest that an interrupted visual search can be rapidly resumed even when information in the display has changed after the interruption, so long as participants can not only anticipate the changes but are also aware that such changes might occur. PMID:24820440

  6. Task-Dependent Changes in Frontal-Parietal Activation and Connectivity During Visual Search.

    PubMed

    Maximo, Jose O; Neupane, Ajaya; Saxena, Nitesh; Joseph, Robert M; Kana, Rajesh K

    2016-05-01

    Visual search is an important skill for navigating our environment and locating objects (targets) among distractors. Efficient, fast target detection involves a reciprocal interaction between a viewer's attentional resources and salient target characteristics. The neural correlates of visual search have been extensively investigated over the last decades, suggesting the involvement of a frontal-parietal network comprising the frontal eye fields (FEFs) and intraparietal sulcus (IPS). In addition, activity and connectivity within this network change as visual search becomes more complex and demanding. The current functional magnetic resonance imaging study examined the modulation of the frontal-parietal network in response to cognitive demand in 22 healthy adult participants. In addition to brain activity, changes in functional connectivity and effective connectivity in this network were examined in response to easy and difficult visual search. Results revealed significantly increased activation in FEF, IPS, and the supplementary motor area, more so in difficult than in easy search. Functional and effective connectivity analyses showed enhanced connectivity in the frontal-parietal network during difficult search and enhanced information transfer from the left to the right hemisphere during the difficult search process. Our overall findings suggest that cognitive demand significantly increases brain resources across all three measures of brain processing. In sum, goal-directed visual search engages a network of frontal-parietal areas that are modulated in relation to cognitive demand. PMID:26729050

  7. The role of visual pattern persistence in bistable stroboscopic motion.

    PubMed

    Breitmeyer, B G; Ritter, A

    1986-01-01

    Two alternating frames, each consisting of three square elements, were used to study bistable stroboscopic motion percepts. Bistable percepts were obtained that depend on the interstimulus interval (ISI) between the alternating frames. At short ISIs only end-to-end element motion was observed, and at longer ISIs only group motion was perceived. It was found that the progressive ISI-dependent transitions from element to group motion depended on element size and frame duration. These dependencies are predictable from the systematic influence that these variables are also known to exert on visual pattern persistence, indicating that such persistence contributes to determining which percept dominates during bistable stroboscopic motion sequences. These findings bear on recent attempts to conceptually relate bistable motion percepts to short-range stroboscopic motion processes. PMID:3617522

  8. Widespread correlation patterns of fMRI signal across visual cortex reflect eccentricity organization.

    PubMed

    Arcaro, Michael J; Honey, Christopher J; Mruczek, Ryan E B; Kastner, Sabine; Hasson, Uri

    2015-01-01

    The human visual system can be divided into over two-dozen distinct areas, each of which contains a topographic map of the visual field. A fundamental question in vision neuroscience is how the visual system integrates information from the environment across different areas. Using neuroimaging, we investigated the spatial pattern of correlated BOLD signal across eight visual areas on data collected during rest conditions and during naturalistic movie viewing. The correlation pattern between areas reflected the underlying receptive field organization with higher correlations between cortical sites containing overlapping representations of visual space. In addition, the correlation pattern reflected the underlying widespread eccentricity organization of visual cortex, in which the highest correlations were observed for cortical sites with iso-eccentricity representations including regions with non-overlapping representations of visual space. This eccentricity-based correlation pattern appears to be part of an intrinsic functional architecture that supports the integration of information across functionally specialized visual areas. PMID:25695154
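
    As a generic illustration of the core measurement (not the authors' full pipeline), the between-site correlation matrix can be computed directly from BOLD time series for a set of cortical sites; ordering the sites by preferred eccentricity would expose the iso-eccentricity structure described above:

        import numpy as np

        def correlation_matrix(bold):
            """bold: (n_sites, n_timepoints) array of BOLD time series."""
            return np.corrcoef(bold)

        # toy usage: 6 sites sharing a common signal, 200 timepoints
        rng = np.random.default_rng(0)
        shared = rng.standard_normal(200)
        bold = 0.5 * shared + rng.standard_normal((6, 200))
        print(correlation_matrix(bold).round(2))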

  9. Visual Iconic Patterns of Instant Messaging: Steps Towards Understanding Visual Conversations

    NASA Astrophysics Data System (ADS)

    Bays, Hillary

    An Instant Messaging (IM) conversation is a dynamic communication register made up of text, images, animation and sound played out on a screen, with potentially several parallel conversations and activities all within a physical environment. This article first examines how best to capture this unique gestalt using in situ recording techniques (video, screen capture, XML logs) that highlight the micro-phenomenal level of the exchange and the macro-social level of the interaction. Of particular interest are smileys, first as cultural artifacts in CMC in general and then as linguistic markers. A brief taxonomy of these markers is proposed in an attempt to clarify the frequency and patterns of their use. Focus is then placed on their importance as perceptual cues that facilitate communication while also serving emotive and emphatic functions. We try to demonstrate that the use of smileys and animation is not arbitrary but an organized, structured interactional practice. Finally, we discuss how the study of visual markers in IM could inform the study of other visual conversation codes, such as sign languages, which also involve co-produced physical behavior, suggesting the possibility of a visual phonology.

  10. Visual search for features and conjunctions following declines in the useful field of view

    PubMed Central

    Cosman, Joshua D.; Lees, Monica N.; Lee, John D.; Rizzo, Matthew; Vecera, Shaun P.

    2013-01-01

    Background/Study Context Typical measures for assessing the useful field (UFOV) of view involve many components of attention. The objective of the current experiment was to examine differences in visual search efficiency for older individuals with and without UFOV impairment. Methods The authors used a computerized screening instrument to assess the useful field of view and to characterize participants as having an impaired or normal UFOV. Participants also performed two visual search tasks, a feature search (e.g., search for a green target among red distractors) or a conjunction search (e.g., a green target with a gap on its left or right side among red distractors with gaps on the left or right and green distractors with gaps on the top or bottom). Results Visual search performance did not differ between UFOV impaired and unimpaired individuals when searching for a basic feature. However, search efficiency was lower for impaired individuals than unimpaired individuals when searching for a conjunction of features. Conclusion The results suggest that UFOV decline in normal aging is associated with conjunction search. This finding suggests that the underlying cause of UFOV decline may arise from an overall decline in attentional efficiency. Because the useful field of view is a reliable predictor of driving safety, the results suggest that decline in the everyday visual behavior of older adults might arise from attentional declines. PMID:22830667

  11. Dementia alters standing postural adaptation during a visual search task in older adult men

    PubMed Central

    Jordan, Azizah J.; McCarten, J. Riley; Rottunda, Susan; Stoffregen, Thomas A.; Manor, Brad; Wade, Michael G.

    2015-01-01

    This study investigated the effects of dementia on standing postural adaptation during performance of a visual search task. We recruited 16 older adults with dementia and 15 without dementia. Postural sway was assessed by recording medial-lateral (ML) and anterior-posterior (AP) center-of-pressure displacement when standing with and without a visual search task, i.e., counting target letter frequency within a block of displayed randomized letters. ML sway variability was significantly higher in those with dementia during visual search as compared to those without dementia and compared to both groups during the control condition. AP sway variability was significantly greater in those with dementia as compared to those without dementia, irrespective of task condition. In the ML direction, the absolute and percent change in sway variability between the control condition and visual search (i.e., postural adaptation) was greater in those with dementia as compared to those without. In contrast, postural adaptation to visual search was similar between groups in the AP direction. As compared to those without dementia, those with dementia identified fewer letters on the visual task. In the non-dementia group only, greater increases in postural adaptation in both the ML and AP directions correlated with lower performance on the visual task. The observed relationship between postural adaptation during the visual search task and visual search task performance, seen in the non-dementia group only, suggests a critical link between perception and action. Dementia reduces the capacity to perform a visual-based task while standing and thus appears to disrupt this perception-action synergy. PMID:25770830

  12. Threat modulation of visual search efficiency in PTSD: A comparison of distinct stimulus categories.

    PubMed

    Olatunji, Bunmi O; Armstrong, Thomas; Bilsky, Sarah A; Zhao, Mimi

    2015-10-30

    Although an attentional bias for threat has been implicated in posttraumatic stress disorder (PTSD), the cues that best facilitate this bias are unclear. Some studies utilize images and others utilize facial expressions that communicate threat, but the comparability of these two types of stimuli in PTSD is unclear. The present study contrasted the effects of images and expressions with the same valence on visual search among veterans with PTSD and controls. Overall, PTSD patients had slower visual search speed than controls. Images caused greater disruption of visual search than expressions, and emotional content modulated this effect, with larger differences between images and expressions arising for more negatively valenced stimuli. However, this effect was not observed with the maximum number of items in the search array. Differences in visual search speed for images versus expressions varied significantly between PTSD patients and controls only for anger and only at the moderate level of task difficulty. Specifically, visual search speed did not differ significantly between PTSD patients and controls when they were exposed to angry expressions, but PTSD patients displayed significantly slower visual search than controls when exposed to anger images. The implications of these findings for better understanding emotion-modulated attention in PTSD are discussed. PMID:26254798

  13. Computer vision enhances mobile eye-tracking to expose expert cognition in natural-scene visual-search tasks

    NASA Astrophysics Data System (ADS)

    Keane, Tommy P.; Cahill, Nathan D.; Tarduno, John A.; Jacobs, Robert A.; Pelz, Jeff B.

    2014-02-01

    Mobile eye-tracking provides the fairly unique opportunity to record and elucidate cognition in action. In our research, we are searching for patterns in, and distinctions between, the visual-search performance of experts and novices in the geosciences. Traveling to regions formed by various geological processes as part of an introductory field studies course in geology, we record the prima facie gaze patterns of experts and novices when they are asked to determine the modes of geological activity that have formed the scene-view presented to them. Recording eye video and scene video in natural settings generates complex imagery that requires advanced applications of computer vision research to generate registrations and mappings between the views of separate observers. By developing such mappings, we can place many observers into a single mathematical space in which to spatio-temporally analyze inter- and intra-subject fixations, saccades, and head motions. While working towards perfecting these mappings, we developed an updated experimental setup that allows us to statistically analyze intra-subject eye-movement events without the need for a common domain. Through such analyses we are finding statistical differences between novices and experts in these visual-search tasks. In the course of this research we have developed a unified, open-source software framework for the processing, visualization, and interaction of mobile eye-tracking and high-resolution panoramic imagery.
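
    One plausible building block for such mappings, sketched here under the assumption that OpenCV ORB features suffice for the registration (the study's actual computer-vision pipeline is not reproduced): estimate a homography from a scene-camera frame to a reference panorama and project fixation coordinates into that common frame:

        import cv2
        import numpy as np

        def map_fixations_to_panorama(frame, panorama, fixations_xy):
            """Map (x, y) fixation pixels in `frame` into the coordinate system of `panorama`."""
            orb = cv2.ORB_create(2000)
            kf, df = orb.detectAndCompute(frame, None)
            kp, dp = orb.detectAndCompute(panorama, None)
            matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
            matches = sorted(matcher.match(df, dp), key=lambda m: m.distance)[:200]
            src = np.float32([kf[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
            dst = np.float32([kp[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
            H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)   # robust to bad matches
            pts = np.float32(fixations_xy).reshape(-1, 1, 2)
            return cv2.perspectiveTransform(pts, H).reshape(-1, 2)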

  14. Timing of speech and display affects the linguistic mediation of visual search.

    PubMed

    Chiu, Eric M; Spivey, Michael J

    2014-01-01

    Recent studies have shown that, instead of a dichotomy between parallel and serial search strategies, in many instances we see a combination of both strategies being utilized. Consequently, computational models and theoretical accounts of visual search processing have evolved from traditional serial-parallel descriptions to a continuum from 'efficient' to 'inefficient' search. One finding consistent with this blurring of the serial-parallel distinction is that concurrent spoken linguistic input influences the efficiency of visual search. In our first experiment we replicate those findings using a between-subjects design. Next, we use a localist attractor network to simulate the results from the first experiment, and then employ the network to make quantitative predictions about the influence of subtle timing differences in real-time language processing on visual search. These model predictions are then tested and confirmed in our second experiment. The results provide further evidence toward understanding linguistically mediated influences on real-time visual search processing and support an interactive processing account of visual search and language comprehension. PMID:25154286

  15. Plans, Patterns, and Move Categories Guiding a Highly Selective Search

    NASA Astrophysics Data System (ADS)

    Trippen, Gerhard

    In this paper we present our ideas for an Arimaa-playing program (also called a bot) that uses plans and pattern matching to guide a highly selective search. We restrict move generation to moves in certain move categories to reduce the number of moves considered by the bot significantly. Arimaa is a modern board game that can be played with a standard Chess set. However, the rules of the game are not at all like those of Chess. Furthermore, Arimaa was designed to be as simple and intuitive as possible for humans, yet challenging for computers. While all established Arimaa bots use alpha-beta search with a variety of pruning techniques and other heuristics ending in an extensive positional leaf node evaluation, our new bot, Rat, starts with a positional evaluation of the current position. Based on features found in the current position, supported by pattern matching using a directed position graph, our bot Rat decides which of a given set of plans to follow. The plan then dictates what types of moves can be chosen. This is another major difference from bots that generate "all" possible moves for a particular position. Rat is only allowed to generate moves that belong to certain categories. Leaf nodes are evaluated only by a straightforward material evaluation to help avoid moves that lose material. This highly selective search looks, on average, at only 5 moves out of 5,000 to over 40,000 possible moves in a middle game position.
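
    The control flow can be summarized in a short, game-agnostic sketch; this is an illustration of the idea, not Rat's code, and choose_plan, gen_moves, apply_move, and material are caller-supplied callables:

        def selective_search(pos, depth, choose_plan, plans, gen_moves, apply_move, material):
            """choose_plan(pos) -> plan name; plans: {plan: [move categories]};
            gen_moves(pos, cat) -> moves in that category; material(pos) -> leaf score."""
            if depth == 0:
                return material(pos)                  # leaves: material evaluation only
            categories = plans[choose_plan(pos)]      # the chosen plan dictates admissible move types
            moves = [m for c in categories for m in gen_moves(pos, c)]
            if not moves:
                return material(pos)
            # negamax over the small, category-restricted move list
            return max(-selective_search(apply_move(pos, m), depth - 1,
                                         choose_plan, plans, gen_moves, apply_move, material)
                       for m in moves)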

  16. Context matters: the structure of task goals affects accuracy in multiple-target visual search.

    PubMed

    Clark, Kait; Cain, Matthew S; Adcock, R Alison; Mitroff, Stephen R

    2014-05-01

    Career visual searchers such as radiologists and airport security screeners strive to conduct accurate visual searches, but despite extensive training, errors still occur. A key difference between searches in radiology and in airport security is the structure of the search task: radiologists typically scan a set number of medical images (fixed objective), whereas airport security screeners typically search X-rays for a specified time period (fixed duration). Might these structural differences affect accuracy? We compared performance on a search task administered under constraints that approximated either radiology or airport security. Some displays contained more than one target because the presence of multiple targets is an established source of errors for career searchers, and accuracy for additional targets tends to be especially sensitive to contextual conditions. Results indicate that participants searching within the fixed-objective framework produced more multiple-target search errors; thus, adopting a fixed-duration framework could improve accuracy for career searchers. PMID:23957930

  17. Visual search in scenes involves selective and non-selective pathways

    PubMed Central

    Wolfe, Jeremy M; Vo, Melissa L-H; Evans, Karla K; Greene, Michelle R

    2010-01-01

    How do we find objects in scenes? For decades, visual search models have been built on experiments in which observers search for targets presented among distractor items that are isolated and randomly arranged on blank backgrounds. Are these models relevant to search in continuous scenes? This paper argues that the mechanisms that govern artificial, laboratory search tasks do play a role in visual search in scenes. However, scene-based information is used to guide search in ways that had no place in earlier models. Search in scenes may be best explained by a dual-path model: a “selective” path in which candidate objects must be individually selected for recognition, and a “non-selective” path in which information can be extracted from global and statistical properties of the scene. PMID:21227734

  18. Strategies of the honeybee Apis mellifera during visual search for vertical targets presented at various heights: a role for spatial attention?

    PubMed Central

    Morawetz, Linde; Chittka, Lars; Spaethe, Johannes

    2014-01-01

    When honeybees are presented with a colour discrimination task, they tend to choose swiftly and accurately when objects are presented in the ventral part of their frontal visual field. In contrast, poor performance is observed when objects appear in the dorsal part. Here we investigate if this asymmetry is caused by fixed search patterns or if bees can use alternative search mechanisms such as spatial attention, which allows flexible focusing on different areas of the visual field. We asked individual honeybees to choose an orange rewarded target among blue distractors. Target and distractors were presented in the ventral visual field, the dorsal field or both. Bees presented with targets in the ventral visual field consistently had the highest search efficiency, with rapid decisions, high accuracy and direct flight paths. In contrast, search performance for dorsally located targets was inaccurate and slow at the beginning of the test phase, but bees increased their search performance significantly after a few learning trials: they found the target faster, made fewer errors and flew in a straight line towards the target. However, bees needed thrice as long to improve the search for a dorsally located target when the target’s position changed randomly between the ventral and the dorsal visual field. We propose that honeybees form expectations of the location of the target’s appearance and adapt their search strategy accordingly. Different possible mechanisms of this behavioural adaptation are discussed. PMID:25254109

  19. Different predictors of multiple-target search accuracy between nonprofessional and professional visual searchers.

    PubMed

    Biggs, Adam T; Mitroff, Stephen R

    2014-01-01

    Visual search, locating target items among distractors, underlies daily activities ranging from critical tasks (e.g., looking for dangerous objects during security screening) to commonplace ones (e.g., finding your friends in a crowded bar). Both professional and nonprofessional individuals conduct visual searches, and the present investigation is aimed at understanding how they perform similarly and differently. We administered a multiple-target visual search task to both professional (airport security officers) and nonprofessional participants (members of the Duke University community) to determine how search abilities differ between these populations and what factors might predict accuracy. There were minimal overall accuracy differences, although the professionals were generally slower to respond. However, the factors that predicted accuracy varied drastically between groups; variability in search consistency (how similarly an individual searched from trial to trial in terms of speed) best explained accuracy for professional searchers (more consistent professionals were more accurate), whereas search speed (how long an individual took to complete a search when no targets were present) best explained accuracy for nonprofessional searchers (slower nonprofessionals were more accurate). These findings suggest that professional searchers may utilize different search strategies from those of nonprofessionals, and that search consistency, in particular, may provide a valuable tool for enhancing professional search accuracy. PMID:24266390

  20. Development of a flow visualization apparatus. [to study convection flow patterns

    NASA Technical Reports Server (NTRS)

    Spradley, L. W.

    1975-01-01

    The use of an optical flow visualization device for studying convection flow patterns was investigated. The investigation considered use of a shadowgraph, schlieren and other means for visualizing the flow. A laboratory model was set up to provide data on the proper optics and photography procedures to best visualize the flow. A preliminary design of a flow visualization system is provided as a result of the study. Recommendations are given for a flight test program utilizing the flow visualization apparatus.

  1. Dynamic Modulation of Local Population Activity by Rhythm Phase in Human Occipital Cortex During a Visual Search Task

    PubMed Central

    Miller, Kai J.; Hermes, Dora; Honey, Christopher J.; Sharma, Mohit; Rao, Rajesh P. N.; den Nijs, Marcel; Fetz, Eberhard E.; Sejnowski, Terrence J.; Hebb, Adam O.; Ojemann, Jeffrey G.; Makeig, Scott; Leuthardt, Eric C.

    2010-01-01

    Brain rhythms are more than just passive phenomena in visual cortex. For the first time, we show that the physiology underlying brain rhythms actively suppresses and releases cortical areas on a second-to-second basis during visual processing. Furthermore, their influence is specific at the scale of individual gyri. We quantified the interaction between broadband spectral change and brain rhythms on a second-to-second basis in electrocorticographic (ECoG) measurement of brain surface potentials in five human subjects during a visual search task. Comparison of visual search epochs with a blank screen baseline revealed changes in the raw potential, the amplitude of rhythmic activity, and in the decoupled broadband spectral amplitude. We present new methods to characterize the intensity and preferred phase of coupling between broadband power and band-limited rhythms, and to estimate the magnitude of rhythm-to-broadband modulation on a trial-by-trial basis. These tools revealed numerous coupling motifs between the phase of low-frequency (δ, θ, α, β, and γ band) rhythms and the amplitude of broadband spectral change. In the θ and β ranges, the coupling of phase to broadband change is dynamic during visual processing, decreasing in some occipital areas and increasing in others, in a gyrally specific pattern. Finally, we demonstrate that the rhythms interact with one another across frequency ranges, and across cortical sites. PMID:21119778
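
    A generic way to quantify this kind of rhythm-to-broadband coupling is sketched below with SciPy; the band edges are arbitrary and the estimator is a standard amplitude-weighted phase vector, not necessarily the authors' exact method:

        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert

        def coupling(signal, fs, phase_band=(4, 8), amp_band=(70, 150)):
            """Return (coupling intensity, preferred phase) between rhythm phase and high-band amplitude."""
            def bandpass(x, lo, hi):
                b, a = butter(3, [lo / (fs / 2), hi / (fs / 2)], btype='band')
                return filtfilt(b, a, x)
            phase = np.angle(hilbert(bandpass(signal, *phase_band)))
            amp = np.abs(hilbert(bandpass(signal, *amp_band)))   # proxy for broadband power
            z = np.mean(amp * np.exp(1j * phase)) / np.mean(amp)
            return np.abs(z), np.angle(z)

        # toy usage on a synthetic theta-modulated high-gamma signal
        fs = 1000
        t = np.arange(0, 10, 1 / fs)
        rng = np.random.default_rng(0)
        theta = np.sin(2 * np.pi * 6 * t)
        sig = theta + (1 + theta) * 0.2 * np.sin(2 * np.pi * 100 * t) + 0.1 * rng.standard_normal(t.size)
        print(coupling(sig, fs))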

  2. Central and peripheral vision loss differentially affects contextual cueing in visual search.

    PubMed

    Geringswald, Franziska; Pollmann, Stefan

    2015-09-01

    Visual search for targets in repeated displays is more efficient than search for the same targets in random distractor layouts. Previous work has shown that this contextual cueing is severely impaired under central vision loss. Here, we investigated whether central vision loss, simulated with gaze-contingent displays, prevents the incidental learning of contextual cues or the expression of learning, that is, the guidance of search by learned target-distractor configurations. Visual search with a central scotoma reduced contextual cueing both with respect to search times and gaze parameters. However, when the scotoma was subsequently removed, contextual cueing was observed in a comparable magnitude as for controls who had searched without scotoma simulation throughout the experiment. This indicated that search with a central scotoma did not prevent incidental context learning, but interfered with search guidance by learned contexts. We discuss the role of visuospatial working memory load as source of this interference. In contrast to central vision loss, peripheral vision loss was expected to prevent spatial configuration learning itself, because the restricted search window did not allow the integration of invariant local configurations with the global display layout. This expectation was confirmed in that visual search with a simulated peripheral scotoma eliminated contextual cueing not only in the initial learning phase with scotoma, but also in the subsequent test phase without scotoma. PMID:25867615

  3. The Effects of Presentation Method and Information Density on Visual Search Ability and Working Memory Load

    ERIC Educational Resources Information Center

    Chang, Ting-Wen; Kinshuk; Chen, Nian-Shing; Yu, Pao-Ta

    2012-01-01

    This study investigates the effects of successive and simultaneous information presentation methods on learner's visual search ability and working memory load for different information densities. Since the processing of information in the brain depends on the capacity of visual short-term memory (VSTM), the limited information processing capacity…

  4. Detection of Emotional Faces: Salient Physical Features Guide Effective Visual Search

    ERIC Educational Resources Information Center

    Calvo, Manuel G.; Nummenmaa, Lauri

    2008-01-01

    In this study, the authors investigated how salient visual features capture attention and facilitate detection of emotional facial expressions. In a visual search task, a target emotional face (happy, disgusted, fearful, angry, sad, or surprised) was presented in an array of neutral faces. Faster detection of happy and, to a lesser extent,…

  6. Hand Movement Deviations in a Visual Search Task with Cross Modal Cuing

    ERIC Educational Resources Information Center

    Aslan, Asli; Aslan, Hurol

    2007-01-01

    The purpose of this study is to demonstrate the cross-modal effects of an auditory organization on a visual search task and to investigate the influence of the level of detail in instructions describing or hinting at the associations between auditory stimuli and the possible locations of a visual target. In addition to measuring the participants'…

  7. Person perception informs understanding of cognition during visual search.

    PubMed

    Brennan, Allison A; Watson, Marcus R; Kingstone, Alan; Enns, James T

    2011-08-01

    Does person perception (the impressions we form from watching others) hold clues to the mental states of people engaged in cognitive tasks? We investigated this with a two-phase method: in Phase 1, participants searched on a computer screen (Experiment 1) or in an office (Experiment 2); in Phase 2, other participants rated the searchers' video-recorded behavior. The results showed that blind raters are sensitive to individual differences in search proficiency and search strategy, as well as to environmental factors affecting search difficulty. Also, different behaviors were linked to search success in each setting: eye movement frequency predicted successful search on a computer screen; head movement frequency predicted search success in an office. In both settings, an active search strategy and positive emotional expressions were linked to search success. These data indicate that person perception informs cognition beyond the scope of performance measures, offering the potential for new measurements of cognition that are both rich and unobtrusive. PMID:21626239

  8. Visual Search is Guided to Categorically Defined Targets

    PubMed Central

    Yang, Hyejin; Zelinsky, Gregory J.

    2009-01-01

    To determine whether categorical search is guided we had subjects search for teddy bear targets either with a target preview (specific condition) or without (categorical condition). Distractors were random realistic objects. Although subjects searched longer and made more eye movements in the categorical condition, targets were fixated far sooner than was expected by chance. By varying target repetition we also determined that this categorical guidance was not due to guidance from specific previously viewed targets. We conclude that search is guided to categorically-defined targets, and that this guidance uses a categorical model composed of features common to the target class. PMID:19500615

  9. Visual height intolerance and acrophobia: clinical characteristics and comorbidity patterns.

    PubMed

    Kapfhammer, Hans-Peter; Huppert, Doreen; Grill, Eva; Fitz, Werner; Brandt, Thomas

    2015-08-01

    The purpose of this study was to estimate the general-population lifetime and point prevalence of visual height intolerance (vHI) and acrophobia, to define their clinical characteristics, and to determine their anxious and depressive comorbidities. A case-control study was conducted within a German population-based cross-sectional telephone survey. A representative sample of 2,012 individuals aged 14 and above was selected. Defined neurological conditions (migraine, Menière's disease, motion sickness), symptom pattern, age of first manifestation, precipitating height stimuli, course of illness, psychosocial impairment, and comorbidity patterns (anxiety conditions and depressive disorders according to DSM-IV-TR) were assessed for vHI and acrophobia. The lifetime prevalence of vHI was 28.5% (women 32.4%, men 24.5%). Initial attacks occurred predominantly (36%) in the second decade. A rapid generalization to other height stimuli and a chronic course of illness with at least moderate impairment were observed. A total of 22.5% of individuals with vHI experienced the intensity of panic attacks. The lifetime prevalence of acrophobia was 6.4% (women 8.6%, men 4.1%), and the point prevalence was 2.0% (women 2.8%, men 1.1%). vHI, and even more so acrophobia, were associated with high rates of comorbid anxious and depressive conditions. Migraine was both a significant predictor of later acrophobia and a significant consequence of previous acrophobia. vHI affects nearly a third of the general population; in more than 20% of these persons, vHI occasionally develops into panic attacks, and in 6.4% it escalates to acrophobia. Symptoms and degree of social impairment form a continuum of mild to seriously distressing conditions in susceptible subjects. PMID:25262317

  10. Pattern visual evoked potential (PVEP) evaluation in hypothyroidism.

    PubMed

    Nazliel, B; Akbay, E; Irkeç, C; Yetkin, I; Ersoy, R; Törüner, F

    2002-12-01

    Dysfunction of the central nervous system (CNS) is an important consequence of thyroid hormone deficiency. Evoked potentials such as visual evoked potentials (VEP) provide a reliable and objective measure of function in the related sensory system and tracts. In this study, pattern-shift VEP (PVEP) recordings were performed on 48 newly diagnosed hypothyroid patients; 24 had sub-clinical and 24 had overt hypothyroidism. None of the patients had clinical symptoms or signs referable to CNS dysfunction. Their mean age was 44+/-12 yr. The response to pattern stimulation in the normal control subjects was a triphasic waveform with a prominent positive wave (P100) with a peak latency of 84-105 (mean: 96+/-4) milliseconds (ms). In patients with hypothyroidism the mean P100 latency was 97+/-6 ms, and the difference between the 2 groups was not statistically significant (p>0.05). Latencies exceeding the control mean by more than 2.5 SD were defined as abnormal. By this criterion, 6 (12.5%) patients demonstrated abnormal PVEP on at least one tested side. Previous studies conducted on small patient populations reported a high percentage of VEP abnormalities in hypothyroid patients; this was not confirmed by our study. We believe abnormalities of PVEP will be more prominent in untreated patients in the advanced stage of the disease, or in patients who have neurological involvement such as apathy, impaired memory or cerebellar dysfunction. Further studies in a more clearly defined and selected patient population are needed to settle this issue. PMID:12553554

  11. Central and Peripheral Vision Loss Differentially Affects Contextual Cueing in Visual Search

    ERIC Educational Resources Information Center

    Geringswald, Franziska; Pollmann, Stefan

    2015-01-01

    Visual search for targets in repeated displays is more efficient than search for the same targets in random distractor layouts. Previous work has shown that this contextual cueing is severely impaired under central vision loss. Here, we investigated whether central vision loss, simulated with gaze-contingent displays, prevents the incidental…

  12. Contextual Cueing in Multiconjunction Visual Search Is Dependent on Color- and Configuration-Based Intertrial Contingencies

    ERIC Educational Resources Information Center

    Geyer, Thomas; Shi, Zhuanghua; Muller, Hermann J.

    2010-01-01

    Three experiments examined memory-based guidance of visual search using a modified version of the contextual-cueing paradigm (Jiang & Chun, 2001). The target, if present, was a conjunction of color and orientation, with target (and distractor) features randomly varying across trials (multiconjunction search). Under these conditions, reaction times…

  13. Serial and Parallel Attentive Visual Searches: Evidence from Cumulative Distribution Functions of Response Times

    ERIC Educational Resources Information Center

    Sung, Kyongje

    2008-01-01

    Participants searched a visual display for a target among distractors. Each of 3 experiments tested a condition proposed to require attention and for which certain models propose a serial search. Serial versus parallel processing was tested by examining effects on response time means and cumulative distribution functions. In 2 conditions, the…

  14. Cortical Dynamics of Contextually Cued Attentive Visual Learning and Search: Spatial and Object Evidence Accumulation

    ERIC Educational Resources Information Center

    Huang, Tsung-Ren; Grossberg, Stephen

    2010-01-01

    How do humans use target-predictive contextual information to facilitate visual search? How are consistently paired scenic objects and positions learned and used to more efficiently guide search in familiar scenes? For example, humans can learn that a certain combination of objects may define a context for a kitchen and trigger a more efficient…

  16. Mouse visual neocortex supports multiple stereotyped patterns of microcircuit activity.

    PubMed

    Sadovsky, Alexander J; MacLean, Jason N

    2014-06-01

    Spiking correlations between neocortical neurons provide insight into the underlying synaptic connectivity that defines cortical microcircuitry. Here, using two-photon calcium fluorescence imaging, we observed the simultaneous dynamics of hundreds of neurons in slices of mouse primary visual cortex (V1). Consistent with a balance of excitation and inhibition, V1 dynamics were characterized by a linear scaling between firing rate and circuit size. Using lagged firing correlations between neurons, we generated functional wiring diagrams to evaluate the topological features of V1 microcircuitry. We found that circuit connectivity exhibited both cyclic graph motifs, indicating recurrent wiring, and acyclic graph motifs, indicating feedforward wiring. After overlaying the functional wiring diagrams onto the imaged field of view, we found properties consistent with Rentian scaling: wiring diagrams were topologically efficient because they minimized wiring with a modular architecture. Within single imaged fields of view, V1 contained multiple discrete circuits that were overlapping and highly interdigitated but were still distinct from one another. The majority of neurons that were shared between circuits displayed peri-event spiking activity whose timing was specific to the active circuit, whereas spike times for a smaller percentage of neurons were invariant to circuit identity. These data provide evidence that V1 microcircuitry exhibits balanced dynamics, is efficiently arranged in anatomical space, and is capable of supporting a diversity of multineuron spike firing patterns from overlapping sets of neurons. PMID:24899701
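
    A bare-bones sketch of turning lagged firing correlations into a directed functional wiring diagram; the bin size, normalization, and significance threshold used here are placeholders, and the published analysis is more involved:

        import numpy as np
        import networkx as nx

        def functional_graph(spike_counts, threshold=0.3):
            """spike_counts: (n_neurons, n_bins) binned activity; edge i -> j if i leads j by one bin."""
            x = spike_counts - spike_counts.mean(axis=1, keepdims=True)
            x = x / (x.std(axis=1, keepdims=True) + 1e-12)
            lead, lag = x[:, :-1], x[:, 1:]
            # lagged correlation matrix: row i leads, column j follows by one bin
            c = lead @ lag.T / lead.shape[1]
            g = nx.DiGraph()
            g.add_nodes_from(range(len(x)))
            for i, j in zip(*np.where(c > threshold)):
                if i != j:
                    g.add_edge(int(i), int(j), weight=float(c[i, j]))
            return g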

  18. Common Visual Pattern Discovery via Nonlinear Mean Shift Clustering.

    PubMed

    Wang, Linbo; Tang, Dong; Guo, Yanwen; Do, Minh N

    2015-12-01

    Discovering common visual patterns (CVPs) from two images is a challenging task due to geometric and photometric deformations as well as noise and clutter. The problem generally boils down to recovering correspondences of local invariant features, and is conventionally addressed by graph-based quadratic optimization approaches, which often suffer from high computational cost. In this paper, we propose an efficient approach by viewing the problem from a novel perspective. In particular, we consider each CVP as a common object in two images with a group of coherently deformed local regions. A geometric space with a matrix Lie group structure is constructed by stacking up transformations estimated from initially appearance-matched local interest region pairs. This is followed by a mean shift clustering stage to group together transformations that lie close in this space. Joining together, within each input image, the regions associated with transformations of the same group forms two large regions sharing a similar geometric configuration, which naturally yields a CVP. To account for the non-Euclidean nature of the matrix Lie group, mean shift vectors are derived in the corresponding Lie algebra vector space with a newly provided effective distance measure. Extensive experiments on single and multiple common object discovery tasks as well as near-duplicate image retrieval verify the robustness and efficiency of the proposed approach. PMID:26415176
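
    A deliberately simplified sketch of the idea, using plain Euclidean mean shift on per-match transform parameters (translation, log-scale, rotation) rather than the paper's Lie-algebra formulation; OpenCV ORB matches stand in for the appearance-matched interest regions:

        import cv2
        import numpy as np
        from sklearn.cluster import MeanShift

        def match_transforms(img1, img2):
            """One rough similarity-transform vector per appearance match between the two images."""
            orb = cv2.ORB_create(1500)
            k1, d1 = orb.detectAndCompute(img1, None)
            k2, d2 = orb.detectAndCompute(img2, None)
            matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
            rows = []
            for m in matches:
                p, q = k1[m.queryIdx], k2[m.trainIdx]
                rows.append([q.pt[0] - p.pt[0], q.pt[1] - p.pt[1],     # translation (pixels)
                             np.log(q.size / p.size),                  # log scale change
                             np.deg2rad(q.angle - p.angle)])           # rotation (radians)
            return np.array(rows), matches

        def common_pattern_clusters(img1, img2, bandwidth=30.0):
            """Cluster matches whose transforms agree; each cluster is a candidate common pattern."""
            feats, matches = match_transforms(img1, img2)
            labels = MeanShift(bandwidth=bandwidth).fit_predict(feats)
            return labels, matches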

  19. Visualization of flow patterning in high-speed centrifugal microfluidics

    NASA Astrophysics Data System (ADS)

    Grumann, Markus; Brenner, Thilo; Beer, Christian; Zengerle, Roland; Ducrée, Jens

    2005-02-01

    This work presents a new experimental setup for capturing images of centrifugally driven flows in disk-based microchannels rotating at high frequencies of up to 150 Hz. To still achieve micron-scale resolution, smearing effects are minimized by a microscope-mounted CCD camera featuring an extremely short minimum exposure time of only 100 ns. Image capture is controlled by a real-time PC board which sends delayed trigger signals to the CCD camera and to a stroboscopic flash upon receiving the zero-crossing signal of the rotating disk. The common delay of the trigger signals is electronically adjusted according to the spinning frequency, which appreciably improves the stability of the captured image sequences. Another computer is equipped with a fast framegrabber PC board to acquire the image data directly from the CCD camera. A maximum spatial resolution ranging between 4.5 μm at rest and 10 μm at a rotation frequency of 150 Hz is achieved. Even at high frequencies of rotation, image smearing does not significantly impair the contrast. Using this setup, the Coriolis-induced patterning of two liquid flows in 300-μm-wide channels rotating at 100 Hz is visualized at a spatial resolution better than 10 μm.
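
    The triggering arithmetic implied above can be sketched in a few lines; the numbers in the example are illustrative (a point 30 mm from the center smears by roughly 2.8 μm during a 100 ns exposure at 150 Hz):

        import math

        def trigger_delay(angle_deg, spin_hz):
            """Seconds to wait after the zero-crossing so a channel at angle_deg is under the optics."""
            return (angle_deg / 360.0) / spin_hz

        def motion_blur(radius_m, spin_hz, exposure_s):
            """Tangential distance (m) travelled by a point at radius_m during the exposure."""
            return 2 * math.pi * radius_m * spin_hz * exposure_s

        print(trigger_delay(90, 150), motion_blur(0.03, 150, 100e-9))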

  20. A pyramidal neural network for visual pattern recognition.

    PubMed

    Phung, Son Lam; Bouzerdoum, Abdesselam

    2007-03-01

    In this paper, we propose a new neural architecture for the classification of visual patterns that is motivated by two concepts: image pyramids and local receptive fields. The new architecture, called the pyramidal neural network (PyraNet), has a hierarchical structure with two types of processing layers: pyramidal layers and one-dimensional (1-D) layers. In the new network, nonlinear two-dimensional (2-D) neurons are trained to perform both image feature extraction and dimensionality reduction. We present and analyze five training methods for PyraNet [gradient descent (GD), gradient descent with momentum, resilient backpropagation (RPROP), Polak-Ribiere conjugate gradient (CG), and Levenberg-Marquardt (LM)] and two choices of error function [mean squared error (MSE) and cross-entropy (CE)]. In this paper, we apply PyraNet to determine gender from a facial image, and compare its performance on the standard facial recognition technology (FERET) database with three classifiers: the convolutional neural network (NN), the k-nearest neighbor (k-NN), and the support vector machine (SVM). PMID:17385623
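
    A structural sketch of the pyramidal idea (non-overlapping receptive fields, arbitrary layer sizes, forward pass only; PyraNet itself uses overlapping receptive fields and the training schemes listed above):

        import numpy as np

        def pyramid_layer(x, w, r):
            """x: (H, W) map; w: (H, W) per-input weights; r: receptive-field size (H, W divisible by r)."""
            h, wd = x.shape
            y = (x * w).reshape(h // r, r, wd // r, r).sum(axis=(1, 3))   # weighted sum per r x r block
            return np.tanh(y)

        def pyranet_forward(img, weights_2d, w_out, b_out, r=2):
            a = img
            for w in weights_2d:                  # stack of pyramidal layers: extraction + reduction
                a = pyramid_layer(a, w, r)
            a = a.ravel()                         # 1-D layer on the flattened top map
            return 1.0 / (1.0 + np.exp(-(w_out @ a + b_out)))   # e.g. probability of one gender class

        # toy usage on a random 32x32 "face"
        rng = np.random.default_rng(0)
        img = rng.random((32, 32))
        weights = [rng.standard_normal((32, 32)) * 0.1, rng.standard_normal((16, 16)) * 0.1]
        p = pyranet_forward(img, weights, rng.standard_normal(8 * 8) * 0.1, 0.0)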

  1. Use of an augmented-vision device for visual search by patients with tunnel vision

    PubMed Central

    Luo, Gang; Peli, Eli

    2006-01-01

    Purpose: To study the effect of an augmented-vision device that superimposes minified contour images over natural vision on the visual search performance of patients with tunnel vision. Methods: Twelve subjects with tunnel vision searched for targets presented outside their visual fields (VF) on a blank background under three cue conditions (with contour cues provided by the device, with auditory cues, and without cues). Three subjects (VF: 8° to 11° wide) carried out the search over a 90° × 74° area, and nine subjects (VF: 7° to 16° wide) over a 66° × 52° area. Eye and head movements were recorded for performance analyses that included directness of search path, search time, and gaze speed. Results: Directness of the search path was greatly and significantly improved when the contour or auditory cues were provided, in both the larger- and smaller-area searches. When using the device, a significant reduction in search time (28%~74%) was demonstrated by all 3 subjects in the larger-area search and by subjects with VF wider than 10° in the smaller-area search (average 22%). Directness and gaze speed accounted for 90% of the variability in search time. Conclusions: While performance improvement with the device for the larger search area was obvious, whether it was helpful for the smaller search area depended on VF and gaze speed. As improvement in directness was demonstrated, increased gaze speed, which could result from further training and adaptation to the device, might enable patients with small VFs to benefit from the device for visual search tasks. PMID:16936136
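
    One plausible formulation of the directness measure (the study's exact definition is not reproduced here) is the ratio of the straight-line distance from the starting gaze position to the target over the total path length travelled; 1.0 indicates a perfectly direct search:

        import numpy as np

        def directness(gaze_xy, target_xy):
            """gaze_xy: (n_samples, 2) gaze positions in degrees; target_xy: (2,) target position."""
            gaze_xy = np.asarray(gaze_xy, dtype=float)
            ideal = np.linalg.norm(np.asarray(target_xy, dtype=float) - gaze_xy[0])
            travelled = np.sum(np.linalg.norm(np.diff(gaze_xy, axis=0), axis=1))
            return ideal / travelled if travelled > 0 else np.nan

        # e.g. a zig-zag gaze path to a target at (30, 0) degrees
        path = [(0, 0), (10, 8), (18, -6), (30, 0)]
        print(directness(path, (30, 0)))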

  2. How Temporal and Spatial Aspects of Presenting Visualizations Affect Learning about Locomotion Patterns

    ERIC Educational Resources Information Center

    Imhof, Birgit; Scheiter, Katharina; Edelmann, Jorg; Gerjets, Peter

    2012-01-01

    Two studies investigated the effectiveness of dynamic and static visualizations for a perceptual learning task (locomotion pattern classification). In Study 1, seventy-five students viewed either dynamic, static-sequential, or static-simultaneous visualizations. For tasks of intermediate difficulty, dynamic visualizations led to better…

  4. Attributes of subtle cues for facilitating visual search in augmented reality.

    PubMed

    Lu, Weiquan; Duh, Henry Been-Lirn; Feiner, Steven; Zhao, Qi

    2014-03-01

    Goal-oriented visual search is performed when a person intentionally seeks a target in the visual environment. In augmented reality (AR) environments, visual search can be facilitated by augmenting virtual cues in the person's field of view. Traditional use of explicit AR cues can potentially degrade visual search performance due to the creation of distortions in the scene. An alternative to explicit cueing, known as subtle cueing, has been proposed as a clutter-neutral method to enhance visual search in video-see-through AR. However, the effects of subtle cueing are still not well understood, and more research is required to determine the optimal methods of applying subtle cueing in AR. We performed two experiments to investigate the variables of scene clutter, subtle cue opacity, size, and shape on visual search performance. We introduce a novel method of experimentally manipulating the scene clutter variable in a natural scene while controlling for other variables. The findings provide supporting evidence for the subtlety of the cue, and show that the clutter conditions of the scene can be used both as a global classifier, as well as a local performance measure. PMID:24434221

  5. Parametric Modeling of Visual Search Efficiency in Real Scenes

    PubMed Central

    Zhang, Xing; Li, Qingquan; Zou, Qin; Fang, Zhixiang; Zhou, Baoding

    2015-01-01

    How should the efficiency of searching for real objects in real scenes be measured? Traditionally, when searching for artificial targets, e.g., letters or rectangles, among distractors, efficiency is measured by a reaction time (RT) × Set Size function. However, it is not clear whether the set size of real scenes is as effective a parameter for measuring search efficiency as the set size of artificial scenes. The present study investigated search efficiency in real scenes based on a combination of low-level features, e.g., visible size and target-flanker separation factors, and high-level features, e.g., category effect and target template. Visible size refers to the pixel number of visible parts of an object in a scene, whereas separation is defined as the sum of the flank distances from a target to the nearest distractors. During the experiment, observers searched for targets in various urban scenes, using pictures as the target templates. The results indicated that the effect of the set size in real scenes decreased according to the variances of other factors, e.g., visible size and separation. Increasing visible size and separation factors increased search efficiency. Based on these results, an RT × Visible Size × Separation function was proposed. These results suggest that the proposed function is a practicable predictor of search efficiency in real scenes. PMID:26030908
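
    A hedged sketch of fitting this kind of RT function; a log-linear form is chosen purely for illustration and is not the authors' exact parametric model:

        import numpy as np
        from sklearn.linear_model import LinearRegression

        def fit_rt_model(rt, visible_size, separation):
            """rt (s), visible_size (pixels), separation (pixels): 1-D arrays, one entry per trial."""
            X = np.column_stack([np.log(visible_size), np.log(separation)])
            model = LinearRegression().fit(X, rt)
            return model   # model.coef_: effect of each factor; model.intercept_: baseline RT

        # synthetic demo: RT decreases as visible size and separation grow
        rng = np.random.default_rng(1)
        vs = rng.uniform(200, 5000, 300)
        sep = rng.uniform(20, 400, 300)
        rt = 2.5 - 0.15 * np.log(vs) - 0.10 * np.log(sep) + 0.05 * rng.standard_normal(300)
        m = fit_rt_model(rt, vs, sep)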

  6. Pop-out in visual search of moving targets in the archer fish.

    PubMed

    Ben-Tov, Mor; Donchin, Opher; Ben-Shahar, Ohad; Segev, Ronen

    2015-01-01

    Pop-out in visual search reflects the capacity of observers to rapidly detect visual targets independent of the number of distracting objects in the background. Although it may be beneficial to most animals, pop-out behaviour has been observed only in mammals, where neural correlates are found in primary visual cortex as contextually modulated neurons that encode aspects of saliency. Here we show that archer fish can also utilize this important search mechanism by exhibiting pop-out of moving targets. We explore neural correlates of this behaviour and report the presence of contextually modulated neurons in the optic tectum that may constitute the neural substrate for a saliency map. Furthermore, we find that both behaving fish and neural responses exhibit additive responses to multiple visual features. These findings suggest that similar neural computations underlie pop-out behaviour in mammals and fish, and that pop-out may be a universal search mechanism across all vertebrates. PMID:25753807

  7. Visual Search Performance in the Autism Spectrum II: The Radial Frequency Search Task with Additional Segmentation Cues

    ERIC Educational Resources Information Center

    Almeida, Renita A.; Dickinson, J. Edwin; Maybery, Murray T.; Badcock, Johanna C.; Badcock, David R.

    2010-01-01

    The Embedded Figures Test (EFT) requires detecting a shape within a complex background and individuals with autism or high Autism-spectrum Quotient (AQ) scores are faster and more accurate on this task than controls. This research aimed to uncover the visual processes producing this difference. Previously we developed a search task using radial…

  8. Effects of targets embedded within words in a visual search task

    PubMed Central

    Grabbe, Jeremy W.

    2014-01-01

    Visual search performance can be negatively affected when both targets and distracters share a dimension relevant to the task. This study examined whether visual search performance would be influenced by distracters that affect a dimension irrelevant to the task. In Experiment 1, target letters were embedded within a word appearing in the letter string of a letter search task. Experiment 2 compared targets embedded in words to targets embedded in nonwords. Experiment 3 compared targets embedded in words to a condition in which a word was present in a letter string, but the target letter, although in the letter string, was not embedded within the word. The results showed that visual search performance was negatively affected when a target appeared within a high-frequency word. These results suggest that the interaction and effectiveness of distracters are not merely dependent upon common features of the target and distracters, but can be affected by word frequency (a dimension not related to the task demands). PMID:24855497

  9. Performance in a Visual Search Task Uniquely Predicts Reading Abilities in Third-Grade Hong Kong Chinese Children

    ERIC Educational Resources Information Center

    Liu, Duo; Chen, Xi; Chung, Kevin K. H.

    2015-01-01

    This study examined the relation between the performance in a visual search task and reading ability in 92 third-grade Hong Kong Chinese children. The visual search task, which is considered a measure of visual-spatial attention, accounted for unique variance in Chinese character reading after controlling for age, nonverbal intelligence,…

  10. Adaptation of video game UVW mapping to 3D visualization of gene expression patterns

    NASA Astrophysics Data System (ADS)

    Vize, Peter D.; Gerth, Victor E.

    2007-01-01

    Analysis of gene expression patterns within an organism plays a critical role in associating genes with biological processes in both health and disease. During embryonic development, the analysis and comparison of different gene expression patterns allows biologists to identify candidate genes that may regulate the formation of normal tissues and organs and to search for genes associated with congenital diseases. No two individual embryos, or organs, are exactly the same shape or size, so comparing spatial gene expression in one embryo to that in another is difficult. We will present our efforts in comparing gene expression data collected using both volumetric and projection approaches. Volumetric data is highly accurate but difficult to process and compare. Projection methods use UV mapping to align texture maps to standardized spatial frameworks. This approach is less accurate but is very rapid and requires very little processing. We have built a database of over 180 3D models depicting gene expression patterns mapped onto the surface of spline-based embryo models. Gene expression data in different models can easily be compared to determine common regions of activity. Visualization software, in both Java and OpenGL, optimized for viewing 3D gene expression data will also be demonstrated.

  11. Theta burst stimulation improves overt visual search in spatial neglect independently of attentional load.

    PubMed

    Cazzoli, Dario; Rosenthal, Clive R; Kennard, Christopher; Zito, Giuseppe A; Hopfner, Simone; Müri, René M; Nyffeler, Thomas

    2015-12-01

    Visual neglect is considerably exacerbated by increases in visual attentional load. These detrimental effects of attentional load are hypothesised to be dependent on an interplay between dysfunctional inter-hemispheric inhibitory dynamics and load-related modulation of activity in cortical areas such as the posterior parietal cortex (PPC). Continuous Theta Burst Stimulation (cTBS) over the contralesional PPC reduces neglect severity. It is unknown, however, whether such positive effects also operate in the presence of the detrimental effects of heightened attentional load. Here, we examined the effects of cTBS on neglect severity in overt visual search (i.e., with eye movements), as a function of high and low visual attentional load conditions. Performance was assessed on the basis of target detection rates and eye movements, in a computerised visual search task and in two paper-pencil tasks. cTBS significantly ameliorated target detection performance, independently of attentional load. These ameliorative effects were significantly larger in the high than the low load condition, thereby equating target detection across both conditions. Eye movement analyses revealed that the improvements were mediated by a redeployment of visual fixations to the contralesional visual field. These findings represent a substantive advance, because cTBS led to an unprecedented amelioration of overt search efficiency that was independent of visual attentional load. PMID:26547867

  12. Computational assessment of visual search strategies in volumetric medical images.

    PubMed

    Wen, Gezheng; Aizenman, Avigael; Drew, Trafton; Wolfe, Jeremy M; Haygood, Tamara Miner; Markey, Mia K

    2016-01-01

    When searching through volumetric images [e.g., computed tomography (CT)], radiologists appear to use two different search strategies: "drilling" (restrict eye movements to a small region of the image while quickly scrolling through slices), or "scanning" (search over large areas at a given depth before moving on to the next slice). To computationally identify the type of image information that is used in these two strategies, 23 naïve observers were instructed with either "drilling" or "scanning" when searching for target T's in 20 volumes of faux lung CTs. We computed saliency maps using both classical two-dimensional (2-D) saliency, and a three-dimensional (3-D) dynamic saliency that captures the characteristics of scrolling through slices. Comparing observers' gaze distributions with the saliency maps showed that search strategy alters the type of saliency that attracts fixations. Drillers' fixations aligned better with dynamic saliency and scanners with 2-D saliency. The computed saliency was greater for detected targets than for missed targets. Similar results were observed in data from 19 radiologists who searched five stacks of clinical chest CTs for lung nodules. Dynamic saliency may be superior to the 2-D saliency for detecting targets embedded in volumetric images, and thus "drilling" may be more efficient than "scanning." PMID:26759815
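
    The record identifies the saliency models only as "classical 2-D saliency" and a "3-D dynamic saliency". As a hedged sketch of the comparison step, the code below uses a simple center-surround contrast map as a stand-in for a 2-D saliency map and checks whether saliency at fixated locations exceeds the image-wide average; the saliency proxy, fixation coordinates, and image are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
slice_img = rng.random((256, 256))        # stand-in for one image slice

# Crude center-surround contrast as a 2-D saliency proxy, normalised to [0, 1].
center = gaussian_filter(slice_img, sigma=2)
surround = gaussian_filter(slice_img, sigma=16)
saliency = np.abs(center - surround)
saliency /= saliency.max()

# Hypothetical fixation coordinates (row, col) from an eye tracker.
fixations = np.array([[40, 200], [120, 60], [130, 70], [200, 180]])

fixated = saliency[fixations[:, 0], fixations[:, 1]].mean()
baseline = saliency.mean()                # chance level: average saliency over the slice
print(f"mean saliency at fixations: {fixated:.3f} (chance level: {baseline:.3f})")
```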

  13. Sensitive tint visualization of resonance patterns in glass plate

    NASA Astrophysics Data System (ADS)

    Yamamoto, Ken; Izuno, Kana; Aoyanagi, Masafumi

    2012-05-01

    Photoelastic visualization can be used to establish vibrational modes of solid transparent materials having complicated longitudinal and shear strains. On the other hand, determining the sign of a stress field by photoelastic visualization is difficult. Color visualization of resonance vibrational modes of a glass plate using stroboscopic photoelasticity with a sensitive tint plate is described. This technique makes it possible to determine the sign of the stress in acoustic fields.

  14. A ground-like surface facilitates visual search in chimpanzees (Pan troglodytes)

    PubMed Central

    Imura, Tomoko; Tomonaga, Masaki

    2013-01-01

    Ground surfaces play an important role in terrestrial species' locomotion and ability to manipulate objects. In humans, ground surfaces have been found to offer significant advantages in distance perception and visual-search tasks (“ground dominance”). The present study used a comparative perspective to investigate the ground-dominance effect in chimpanzees, a species that spends time both on the ground and in trees. During the experiments chimpanzees and humans engaged in a search for a cube on a computer screen; the target cube was darker than other cubes. The search items were arranged on a ground-like or ceiling-like surface, which was defined by texture gradients and shading. The findings indicate that a ground-like, but not a ceiling-like, surface facilitated the search for a difference in luminance among both chimpanzees and humans. Our findings suggest the operation of a ground-dominance effect on visual search in both species. PMID:23917381

  15. A ground-like surface facilitates visual search in chimpanzees (Pan troglodytes).

    PubMed

    Imura, Tomoko; Tomonaga, Masaki

    2013-01-01

    Ground surfaces play an important role in terrestrial species' locomotion and ability to manipulate objects. In humans, ground surfaces have been found to offer significant advantages in distance perception and visual-search tasks ("ground dominance"). The present study used a comparative perspective to investigate the ground-dominance effect in chimpanzees, a species that spends time both on the ground and in trees. During the experiments chimpanzees and humans engaged in a search for a cube on a computer screen; the target cube was darker than other cubes. The search items were arranged on a ground-like or ceiling-like surface, which was defined by texture gradients and shading. The findings indicate that a ground-like, but not a ceiling-like, surface facilitated the search for a difference in luminance among both chimpanzees and humans. Our findings suggest the operation of a ground-dominance effect on visual search in both species. PMID:23917381

  16. Playing shooter and driving videogames improves top-down guidance in visual search.

    PubMed

    Wu, Sijing; Spence, Ian

    2013-05-01

    Playing action videogames is known to improve visual spatial attention and related skills. Here, we showed that playing action videogames also improves classic visual search, as well as the ability to locate targets in a dual search that mimics certain aspects of an action videogame. In Experiment 1A, first-person shooter (FPS) videogame players were faster than nonplayers in both feature search and conjunction search, and in Experiment 1B, they were faster and more accurate in a peripheral search and identification task while simultaneously performing a central search. In Experiment 2, we showed that 10 h of play could improve the performance of nonplayers on each of these tasks. Three different genres of videogames were used for training: two action games and a 3-D puzzle game. Participants who played an action game (either an FPS or a driving game) achieved greater gains on all search tasks than did those who trained using the puzzle game. Feature searches were faster after playing an action videogame, suggesting that players developed a better target template to guide search in a top-down manner. The results of the dual search suggest that, in addition to enhancing the ability to divide attention, playing an action game improves the top-down guidance of attention to possible target locations. The results have practical implications for the development of training tools to improve perceptual and cognitive skills. PMID:23460295

  17. Predicting search time in visually cluttered scenes using the fuzzy logic approach

    NASA Astrophysics Data System (ADS)

    Meitzler, Thomas J.; Sohn, Euijung; Singh, Harpreet; Elgarhi, Abdelakrim; Nam, Deok H.

    2001-09-01

    The mean search time of observers searching for targets in visual scenes with clutter is computed using the fuzzy logic approach (FLA). The FLA is presented as a robust method for the computation of search times and/or probabilities of detection for treated vehicles. The Mamdani/Assilian and Sugeno models have been investigated and are compared. The Search_2 dataset from TNO is used to build and validate the fuzzy logic approach for target detection modeling. The input parameters are local luminance, range, aspect, width, and wavelet edge points, and the single output is search time. The Mamdani/Assilian model gave predicted mean search times for data not used in the training set that had a 0.957 correlation to the field search times. The data set is reduced using a clustering method, then modeled using the FLA, and the results are compared to experiment.
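
    The membership functions and rule base of the reported model are not included in this record. The sketch below only illustrates the kind of Mamdani-style inference the abstract refers to, with a single toy input (a normalised clutter level) and search time as the output, hand-rolled in NumPy; the membership shapes, rules, and numbers are assumptions for illustration, not the published model.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet at a and c and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

t = np.linspace(0.0, 60.0, 601)  # output universe: search time in seconds

def predict_search_time(clutter):
    """Toy Mamdani inference: low clutter -> short search time, high clutter -> long search time."""
    clutter_low = tri(clutter, -0.6, 0.0, 0.6)    # degree to which the crisp input is "low"
    clutter_high = tri(clutter, 0.4, 1.0, 1.6)    # degree to which it is "high"

    time_short = tri(t, 0.0, 5.0, 15.0)           # output fuzzy sets
    time_long = tri(t, 20.0, 40.0, 60.0)

    # Min implication per rule, max aggregation across rules, centroid defuzzification.
    aggregated = np.maximum(np.minimum(clutter_low, time_short),
                            np.minimum(clutter_high, time_long))
    return (t * aggregated).sum() / aggregated.sum()

for c in (0.1, 0.5, 0.9):
    print(f"clutter = {c:.1f} -> predicted search time {predict_search_time(c):.1f} s")
```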

  18. Supplementary eye field during visual search: salience, cognitive control, and performance monitoring.

    PubMed

    Purcell, Braden A; Weigand, Pauline K; Schall, Jeffrey D

    2012-07-25

    How supplementary eye field (SEF) contributes to visual search is unknown. Inputs from cortical and subcortical structures known to represent visual salience suggest that SEF may serve as an additional node in this network. This hypothesis was tested by recording action potentials and local field potentials (LFPs) in two monkeys performing an efficient pop-out visual search task. Target selection modulation, tuning width, and response magnitude of spikes and LFP in SEF were compared with those in frontal eye field. Surprisingly, only ∼2% of SEF neurons and ∼8% of SEF LFP sites selected the location of the search target. The absence of salience in SEF may be due to an absence of appropriate visual afferents, which suggests that these inputs are a necessary anatomical feature of areas representing salience. We also tested whether SEF contributes to overcoming the automatic tendency to respond to a primed color when the target identity switches during priming of pop-out. Very few SEF neurons or LFP sites modulated in association with performance deficits following target switches. However, a subset of SEF neurons and LFPs exhibited strong modulation following erroneous saccades to a distractor. Altogether, these results suggest that SEF plays a limited role in controlling ongoing visual search behavior, but may play a larger role in monitoring search performance. PMID:22836261

  19. Acute exercise and aerobic fitness influence selective attention during visual search.

    PubMed

    Bullock, Tom; Giesbrecht, Barry

    2014-01-01

    Successful goal-directed behavior relies on a human attention system that is flexible and able to adapt to different conditions of physiological stress. However, the effects of physical activity on multiple aspects of selective attention, and whether these effects are mediated by aerobic capacity, remain unclear. The aim of the present study was to investigate the effects of a prolonged bout of physical activity on visual search performance and perceptual distraction. Two groups of participants completed a hybrid visual search flanker/response competition task in an initial baseline session and then at 17-min intervals over a 2 h 16 min test period. Participants assigned to the exercise group engaged in steady-state aerobic exercise between completing blocks of the visual task, whereas participants assigned to the control group rested in between blocks. The key result was a correlation between individual differences in aerobic capacity and visual search performance, such that those individuals who were more fit performed the search task more quickly. Critically, this relationship only emerged in the exercise group after the physical activity had begun. The relationship was not present in either group at baseline and never emerged in the control group during the test period, suggesting that under these task demands, aerobic capacity may be an important determinant of visual search performance under physical stress. The results enhance current understanding about the relationship between exercise and cognition, and also inform current models of selective attention. PMID:25426094

  1. Compensatory strategies following visual search training in patients with homonymous hemianopia: an eye movement study

    PubMed Central

    Pambakian, Alidz L. M.; Kennard, Christopher

    2010-01-01

    A total of 29 patients with homonymous visual field defects without neglect practised visual search in 20 daily sessions, over a period of 4 weeks. Patients searched for a single randomly positioned target amongst distractors displayed for 3 s. After training patients demonstrated significantly shorter reaction times for search stimuli (Pambakian et al. in J Neurol Neurosurg Psychiatry 75:1443–1448, 2004). In this study, patients achieved improved search efficiency after training by altering their oculomotor behaviour in the following ways: (1) patients directed a higher proportion of fixations into the hemispace containing the target, (2) patients were quicker to saccade into the hemifield containing the target if the initial saccade had been made into the opposite hemifield, (3) patients made fewer transitions from one hemifield to another before locating the target, (4) patients made a larger initial saccade, although the direction of the initial saccade did not change as a result of training, (5) patients acquired a larger visual lobe in their blind hemifield after training. Patients also required fewer saccades to locate the target after training reflecting improved search efficiency. All these changes were confined to the training period and maintained at follow-up. Taken together these results suggest that visual training facilitates the development of specific compensatory eye movement strategies in patients with homonymous visual field defects. PMID:20556413
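
    The oculomotor measures listed in this record can be derived directly from the recorded fixation sequence. The sketch below computes two of them, the proportion of fixations falling in the hemifield containing the target and the number of transitions between hemifields, from an ordered list of horizontal fixation positions; the coordinates, the vertical-midline convention, and the function name are illustrative assumptions rather than the study's analysis code.

```python
def search_measures(fixations_x, target_x, midline=0.0):
    """Proportion of fixations in the target hemifield and number of hemifield transitions.

    fixations_x: horizontal fixation positions in temporal order (negative = left of midline).
    target_x:    horizontal position of the search target.
    """
    target_side = target_x >= midline
    sides = [x >= midline for x in fixations_x]
    proportion_target_side = sum(side == target_side for side in sides) / len(sides)
    transitions = sum(a != b for a, b in zip(sides, sides[1:]))
    return proportion_target_side, transitions

# Hypothetical scan path (degrees of visual angle) for a target at +8 degrees.
scan = [-5.0, -2.0, 3.0, -1.0, 6.5, 7.8]
prop, trans = search_measures(scan, target_x=8.0)
print(f"fixations in target hemifield: {prop:.2f}, hemifield transitions: {trans}")
```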

  2. The effects of distractors and spatial precues on covert visual search in macaque.

    PubMed

    Lee, Byeong-Taek; McPeek, Robert M

    2013-01-14

    Covert visual search has been studied extensively in humans, and has been used as a tool for understanding visual attention and cueing effects. In contrast, much less is known about covert search performance in monkeys, despite the fact that much of our understanding of the neural mechanisms of attention is based on these animals. In this study, we characterize the covert visual search performance of monkeys by training them to discriminate the orientation of a briefly-presented, peripheral Landolt-C target embedded within an array of distractor stimuli while maintaining fixation. We found that target discrimination performance declined steeply as the number of distractors increased when the target and distractors were of the same color, but not when the target was an odd color (color pop-out). Performance was also strongly affected by peripheral spatial precues presented before target onset, with better performance seen when the precue coincided with the target location (valid precue) than when it did not (invalid precue). Moreover, the effectiveness of valid precues was greatest when the delay between precue and target was short (∼80-100 ms), and gradually declined with longer delays, consistent with a transient component to the cueing effect. Discrimination performance was also significantly affected by prior knowledge of the target location in the absence of explicit visual precues. These results demonstrate that covert visual search performance in macaques is very similar to that of humans, indicating that the macaque provides an appropriate model for understanding the neural mechanisms of covert search. PMID:23099048

  3. Supplementary eye field during visual search: Salience, cognitive control, and performance monitoring

    PubMed Central

    Purcell, Braden A.; Weigand, Pauline K.; Schall, Jeffrey D.

    2012-01-01

    How supplementary eye field (SEF) contributes to visual search is unknown. Inputs from cortical and subcortical structures known to represent visual salience suggest that SEF may serve as an additional node in this network. This hypothesis was tested by recording action potentials and local field potentials (LFP) in two monkeys performing an efficient pop-out visual search task. Target selection modulation, tuning width, and response magnitude of spikes and LFP in SEF were compared with those in frontal eye field. Surprisingly, only ~2% of SEF neurons and ~8% of SEF LFP sites selected the location of the search target. The absence of salience in SEF may be due to an absence of appropriate visual afferents, which suggests that these inputs are a necessary anatomical feature of areas representing salience. We also tested whether SEF contributes to overcoming the automatic tendency to respond to a primed color when the target identity switches during priming of pop-out. Very few SEF neurons or LFP sites modulated in association with performance deficits following target switches. However, a subset of SEF neurons and LFP exhibited strong modulation following erroneous saccades to a distractor. Altogether, these results suggest that SEF plays a limited role in controlling ongoing visual search behavior, but may play a larger role in monitoring search performance. PMID:22836261

  4. Searching for Meaning: Visual Culture from an Anthropological Perspective

    ERIC Educational Resources Information Center

    Stokrocki, Mary

    2006-01-01

    In this article, the author discusses the importance of Viktor Lowenfeld's influence on her research, describes visual anthropology, gives examples of her research, and examines the implications of this type of research for teachers. The author regards Lowenfeld's (1952/1939) early work with children in Austria as a form of participant observation…

  5. Learning to Recognize Patterns: Changes in the Visual Field with Familiarity

    NASA Astrophysics Data System (ADS)

    Bebko, James M.; Uchikawa, Keiji; Saida, Shinya; Ikeda, Mitsuo

    1995-01-01

    Two studies were conducted to investigate changes which take place in the visual information processing of novel stimuli as they become familiar. Japanese writing characters (Hiragana and Kanji) which were unfamiliar to two native English-speaking subjects were presented using a moving window technique to restrict their visual fields. Study time for visual recognition was recorded across repeated sessions, and with varying visual field restrictions. The critical visual field was defined as the size of the visual field beyond which further increases did not improve the speed of recognition performance. In the first study, when the Hiragana patterns were novel, subjects needed to see about half of the entire pattern simultaneously to maintain optimal performance. However, the critical visual field size decreased as familiarity with the patterns increased. These results were replicated in the second study with more complex Kanji characters. In addition, the critical field size decreased as pattern complexity decreased. We propose a three-component model of pattern perception. In the first stage a representation of the stimulus must be constructed by the subject, and restriction of the visual field interferes dramatically with this component when stimuli are unfamiliar. With increased familiarity, subjects become able to reconstruct a previous representation from very small, unique segments of the pattern, analogous to the informativeness areas hypothesized by Loftus and Mackworth [J. Exp. Psychol., 4 (1978) 565].
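
    The critical visual field defined in this record can be estimated from recognition times measured at several window sizes. The sketch below picks the smallest window whose study time is within a tolerance of the time obtained with the largest (effectively unrestricted) window; the tolerance criterion and the numbers are illustrative assumptions, not the authors' procedure.

```python
def critical_visual_field(window_sizes, recognition_times, tolerance=0.05):
    """Smallest window size whose recognition time is within `tolerance` (proportionally)
    of the time obtained with the largest window."""
    pairs = sorted(zip(window_sizes, recognition_times))
    asymptote = pairs[-1][1]                     # time with the largest window
    for size, time in pairs:
        if time <= asymptote * (1.0 + tolerance):
            return size
    return pairs[-1][0]

# Hypothetical data: window size (% of the pattern visible) vs. mean study time (s).
sizes = [10, 25, 50, 75, 100]
times = [9.0, 5.5, 3.1, 3.0, 2.9]
print("estimated critical visual field:", critical_visual_field(sizes, times), "% of the pattern")
```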

  6. Searching the Visual Arts: An Analysis of Online Information Access.

    ERIC Educational Resources Information Center

    Brady, Darlene; Serban, William

    1981-01-01

    A search for stained glass bibliographic information using DIALINDEX identified 57 DIALOG files from a variety of subject categories and 646 citations as relevant. Files include applied science, biological sciences, chemistry, engineering, environment/pollution, people, business research, and public affairs. Eleven figures illustrate the search…

  7. Visualizing Document Classification: A Search Aid for the Digital Library.

    ERIC Educational Resources Information Center

    Lieu, Yew-Huey; Dantzig, Paul; Sachs, Martin; Corey, James T.; Hinnebusch, Mark T.; Damashek, Marc; Cohen, Jonathan

    2000-01-01

    Discusses access to digital libraries on the World Wide Web via Web browsers and describes the design of a language-independent document classification system to help users of the Florida Center for Library Automation analyze search query results. Highlights include similarity scores, clustering, graphical representation of document similarity,…

  8. Mapping the Color Space of Saccadic Selectivity in Visual Search

    ERIC Educational Resources Information Center

    Xu, Yun; Higgins, Emily C.; Xiao, Mei; Pomplun, Marc

    2007-01-01

    Color coding is used to guide attention in computer displays for such critical tasks as baggage screening or air traffic control. It has been shown that a display object attracts more attention if its color is more similar to the color for which one is searching. However, what does "similar" precisely mean? Can we predict the amount of attention

  9. Gene prediction by pattern recognition and homology search

    SciTech Connect

    Xu, Y.; Uberbacher, E.C.

    1996-05-01

    This paper presents an algorithm for combining pattern recognition-based exon prediction and database homology search in gene model construction. The goal is to use homologous genes or partial genes existing in the database as reference models while constructing (multiple) gene models from exon candidates predicted by pattern recognition methods. A unified framework for gene modeling is used for genes ranging from situations with strong homology to no homology in the database. To maximally use the homology information available, the algorithm applies homology on three levels: (1) exon candidate evaluation, (2) gene-segment construction with a reference model, and (3) (complete) gene modeling. Preliminary testing has been done on the algorithm. Test results show that (a) perfect gene modeling can be expected when the initial exon predictions are reasonably good and a strong homology exists in the database; (b) homology (not necessarily strong) in general helps improve the accuracy of gene modeling; (c) multiple gene modeling becomes feasible when homology exists in the database for the involved genes.
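
    The record describes homology being applied at three levels but gives no implementation detail. The sketch below illustrates only the first level, exon candidate evaluation, by boosting pattern-recognition scores in proportion to how much of each candidate is covered by database homology hits; the scoring rule, weight, and coordinates are illustrative assumptions, not the published algorithm.

```python
def rescore_exons(exon_candidates, homology_hits, weight=0.5):
    """Combine pattern-recognition exon scores with homology evidence.

    exon_candidates: list of (start, end, score) intervals from the pattern recognizer.
    homology_hits:   list of (start, end) intervals matched in the sequence database.
    Returns candidates re-scored as score + weight * fraction covered by homology,
    sorted from strongest to weakest.
    """
    rescored = []
    for start, end, score in exon_candidates:
        covered = sum(max(0, min(end, h_end) - max(start, h_start))
                      for h_start, h_end in homology_hits)
        fraction = min(covered / (end - start), 1.0) if end > start else 0.0
        rescored.append((start, end, score + weight * fraction))
    return sorted(rescored, key=lambda candidate: candidate[2], reverse=True)

# Hypothetical exon candidates and homology intervals (genomic coordinates).
candidates = [(100, 250, 0.60), (400, 520, 0.55), (900, 990, 0.70)]
hits = [(120, 260), (410, 500)]
for start, end, score in rescore_exons(candidates, hits):
    print(f"exon {start}-{end}: combined score {score:.2f}")
```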

  10. Training older adults to search more effectively: scanning strategy and visual search in dynamic displays.

    PubMed

    Becic, Ensar; Boot, Walter R; Kramer, Arthur F

    2008-06-01

    The authors examined the ability of older adults to modify their search strategies to detect changes in dynamic displays. Older adults who made few eye movements during search (i.e., covert searchers) were faster and more accurate compared with individuals who made many eye movements (i.e., overt searchers). When overt searchers were instructed to adopt a covert search strategy, target detection performance increased to the level of natural covert searchers. Similarly, covert searchers instructed to search overtly exhibited a decrease in target detection performance. These data suggest that with instructions and minimal practice, older adults can ameliorate the cost of a poor search strategy. PMID:18573020

  11. Disturbance of visual search by stimulating to posterior parietal cortex in the brain using transcranial magnetic stimulation

    NASA Astrophysics Data System (ADS)

    Iramina, Keiji; Ge, Sheng; Hyodo, Akira; Hayami, Takehito; Ueno, Shoogo

    2009-04-01

    In this study, we applied transcranial magnetic stimulation (TMS) to investigate the temporal aspect of the functional processing of visual attention. Although the right posterior parietal cortex (PPC) is known to play a role in certain visual search tasks, little is known about the timing of its involvement. Three visual search tasks of different difficulty were carried out: an "easy feature task," a "hard feature task," and a "conjunction task." To investigate the temporal aspect of PPC involvement in visual search, we applied various stimulus onset asynchronies (SOAs) and measured visual search reaction times. Magnetic stimulation was applied over the right or left PPC with a figure-eight coil. The results show that reaction times in the hard feature task were longer than those in the easy feature task. At SOA = 150 ms, target-present reaction times increased significantly when TMS pulses were applied, compared with the no-TMS condition. This suggests that the right PPC is involved in visual search at about 150 ms after visual stimulus presentation: magnetic stimulation of the right PPC disturbed visual search processing, whereas stimulation of the left PPC had no effect.

  12. Climate and colored walls: in search of visual comfort

    NASA Astrophysics Data System (ADS)

    Arrarte-Grau, Malvina

    2002-06-01

    The quality of natural light, the landscape surrounds and the techniques of construction are important factors in the selection of architectural colors. Observation of exterior walls in differentiated climates allows the recognition of particularities in the use of color which satisfy the need for visual comfort. Separated by 2000 kilometers along the coast of Peru, Lima and Mancora, at 12° and 4° respectively, are well defined in their climatic characteristics: in Mancora, sunlight causes high reflection; in Lima, overcast sky and high humidity cause glare. The study of building color effects at these locations serves to illustrate that color values may be controlled in order to achieve visual comfort and contribute to color identity.

  13. Looking and listening: A comparison of intertrial repetition effects in visual and auditory search tasks.

    PubMed

    Klein, Michael D; Stolz, Jennifer A

    2015-08-01

    Previous research shows that performance on pop-out search tasks is facilitated when the target and distractors repeat across trials compared to when they switch. This phenomenon has been shown for many different types of visual stimuli. We tested whether the effect would extend beyond visual stimuli to the auditory modality. Using a temporal search task that has previously been shown to elicit priming of pop-out with visual stimuli (Yashar & Lamy, Psychological Science, 21(2), 243-251, 2010), we showed that priming of pop-out does occur with auditory stimuli and has characteristics similar to those of an analogous visual task. These results suggest that either the same or similar mechanisms might underlie priming of pop-out in both modalities. PMID:25944447

  14. Abnormal early brain responses during visual search are evident in schizophrenia but not bipolar affective disorder.

    PubMed

    VanMeerten, Nicolaas J; Dubke, Rachel E; Stanwyck, John J; Kang, Seung Suk; Sponheim, Scott R

    2016-01-01

    People with schizophrenia show deficits in processing visual stimuli but neural abnormalities underlying the deficits are unclear and it is unknown whether such functional brain abnormalities are present in other severe mental disorders or in individuals who carry genetic liability for schizophrenia. To better characterize brain responses underlying visual search deficits and test their specificity to schizophrenia we gathered behavioral and electrophysiological responses during visual search (i.e., Span of Apprehension [SOA] task) from 38 people with schizophrenia, 31 people with bipolar disorder, 58 biological relatives of people with schizophrenia, 37 biological relatives of people with bipolar disorder, and 65 non-psychiatric control participants. Through subtracting neural responses associated with purely sensory aspects of the stimuli we found that people with schizophrenia exhibited reduced early posterior task-related neural responses (i.e., Span Endogenous Negativity [SEN]) while other groups showed normative responses. People with schizophrenia exhibited longer reaction times than controls during visual search but nearly identical accuracy. Those individuals with schizophrenia who had larger SENs performed more efficiently (i.e., shorter reaction times) on the SOA task suggesting that modulation of early visual cortical responses facilitated their visual search. People with schizophrenia also exhibited a diminished P300 response compared to other groups. Unaffected first-degree relatives of people with bipolar disorder and schizophrenia showed an amplified N1 response over posterior brain regions in comparison to other groups. Diminished early posterior brain responses are associated with impaired visual search in schizophrenia and appear to be specifically associated with the neuropathology of schizophrenia. PMID:26603466

  15. Visual Servoing: A technology in search of an application

    SciTech Connect

    Feddema, J.T.

    1994-05-01

    Considerable research has been performed on Robotic Visual Servoing (RVS) over the past decade. Using real-time visual feedback, researchers have demonstrated that robotic systems can pick up moving parts, insert bolts, apply sealant, and guide vehicles. With the rapid improvements being made in computing and image processing hardware, one would expect that every robot manufacturer would have an RVS option by the end of the 1990s. So why aren't the Fanucs, ABBs, Adepts, and Motomans of the world investing heavily in RVS? I would suggest four reasons: cost, complexity, reliability, and lack of demand. Solutions to the first three are approaching the point where RVS could be commercially available; however, the lack of demand is keeping RVS from becoming a reality in the near future. A new set of applications is needed to focus near term RVS development. These must be applications which currently do not have solutions. Once developed and working in one application area, the technology is more likely to quickly spread to other areas. DOE has several applications that are looking for technological solutions, such as agile weapons production, weapons disassembly, decontamination and dismantlement of nuclear facilities, and hazardous waste remediation. This paper will examine a few of these areas and suggest directions for application-driven visual servoing research.

  16. Raster-based visualization of abnormal association patterns in marine environments

    NASA Astrophysics Data System (ADS)

    Li, Lianwei; Xue, Cunjin; Liu, Jian; Wang, Zhenjie; Qin, Lijuan

    2014-01-01

    The visualization in a single view of abnormal association patterns obtained from mining lengthy marine raster datasets presents a great challenge for traditional visualization techniques. On the basis of the representation model of marine abnormal association patterns, an interactive visualization framework is designed with three complementary components: three-dimensional pie charts, two-dimensional variation maps, and triple-layer mosaics; the details of their implementation steps are given. The combination of the three components allows users to request visualization of the association patterns from global to detailed scales. The three-dimensional pie chart component visualizes the locations where more marine environmental parameters are interrelated and shows the parameters that are involved. The two-dimensional variation map component gives the spatial distribution of interactions between each marine environmental parameter and other parameters. The triple-layer mosaics component displays the detailed association patterns at locations specified by the users. Finally, the effectiveness and the efficiency of the proposed visualization framework are demonstrated using a prototype system with three visualization interfaces based on ArcEngine 10.0, and the abnormal association patterns among marine environmental parameters in the Pacific Ocean are visualized.

  17. Visual search in typically developing toddlers and toddlers with Fragile X or Williams syndrome.

    PubMed

    Scerif, Gaia; Cornish, Kim; Wilding, John; Driver, Jon; Karmiloff-Smith, Annette

    2004-02-01

    Visual selective attention is the ability to attend to relevant visual information and ignore irrelevant stimuli. Little is known about its typical and atypical development in early childhood. Experiment 1 investigates typically developing toddlers' visual search for multiple targets on a touch-screen. Time to hit a target, distance between successively touched items, accuracy and error types revealed changes in 2- and 3-year-olds' vulnerability to manipulations of the search display. Experiment 2 examined search performance by toddlers with Fragile X syndrome (FXS) or Williams syndrome (WS). Both of these groups produced equivalent mean time and distance per touch as typically developing toddlers matched by chronological or mental age, but both produced a larger number of errors. Toddlers with WS confused distractors with targets more than the other groups, while toddlers with FXS perseverated on previously found targets. These findings provide information on how visual search typically develops in toddlers, and reveal distinct search deficits for atypically developing toddlers. PMID:15323123

  18. Examining perceptual and conceptual set biases in multiple-target visual search.

    PubMed

    Biggs, Adam T; Adamo, Stephen H; Dowd, Emma Wu; Mitroff, Stephen R

    2015-04-01

    Visual search is a common practice conducted countless times every day, and one important aspect of visual search is that multiple targets can appear in a single search array. For example, an X-ray image of airport luggage could contain both a water bottle and a gun. Searchers are more likely to miss additional targets after locating a first target in multiple-target searches, which presents a potential problem: If airport security officers were to find a water bottle, would they then be more likely to miss a gun? One hypothetical cause of multiple-target search errors is that searchers become biased to detect additional targets that are similar to a found target, and therefore become less likely to find additional targets that are dissimilar to the first target. This particular hypothesis has received theoretical, but little empirical, support. In the present study, we tested the bounds of this idea by utilizing "big data" obtained from the mobile application Airport Scanner. Multiple-target search errors were substantially reduced when the two targets were identical, suggesting that the first-found target did indeed create biases during subsequent search. Further analyses delineated the nature of the biases, revealing both a perceptual set bias (i.e., a bias to find additional targets with features similar to those of the first-found target) and a conceptual set bias (i.e., a bias to find additional targets with a conceptual relationship to the first-found target). These biases are discussed in terms of the implications for visual-search theories and applications for professional visual searchers. PMID:25678271

  19. A Comparison of the Visual Attention Patterns of People with Aphasia and Adults without Neurological Conditions for Camera-Engaged and Task-Engaged Visual Scenes

    ERIC Educational Resources Information Center

    Thiessen, Amber; Beukelman, David; Hux, Karen; Longenecker, Maria

    2016-01-01

    Purpose: The purpose of the study was to compare the visual attention patterns of adults with aphasia and adults without neurological conditions when viewing visual scenes with 2 types of engagement. Method: Eye-tracking technology was used to measure the visual attention patterns of 10 adults with aphasia and 10 adults without neurological…

  20. Memory for found targets interferes with subsequent performance in multiple-target visual search.

    PubMed

    Cain, Matthew S; Mitroff, Stephen R

    2013-10-01

    Multiple-target visual searches--when more than 1 target can appear in a given search display--are commonplace in radiology, airport security screening, and the military. Whereas 1 target is often found accurately, additional targets are more likely to be missed in multiple-target searches. To better understand this decrement in 2nd-target detection, here we examined 2 potential forms of interference that can arise from finding a 1st target: interference from the perceptual salience of the 1st target (a now highly relevant distractor in a known location) and interference from a newly created memory representation for the 1st target. Here, we found that removing found targets from the display or making them salient and easily segregated color singletons improved subsequent search accuracy. However, replacing found targets with random distractor items did not improve subsequent search accuracy. Removing and highlighting found targets likely reduced both a target's visual salience and its memory load, whereas replacing a target removed its visual salience but not its representation in memory. Collectively, the current experiments suggest that the working memory load of a found target has a larger effect on subsequent search accuracy than does its perceptual salience. PMID:23163788

  1. The Visual Hemifield Asymmetry in the Spatial Blink during Singleton Search and Feature Search

    ERIC Educational Resources Information Center

    Burnham, Bryan R.; Rozell, Cassandra A.; Kasper, Alex; Bianco, Nicole E.; Delliturri, Antony

    2011-01-01

    The present study examined a visual field asymmetry in the contingent capture of attention that was previously observed by Du and Abrams (2010). In our first experiment, color singleton distractors that matched the color of a to-be-detected target produced a stronger capture of attention when they appeared in the left visual hemifield than in the…

  2. Binocular saccade coordination in reading and visual search: a developmental study in typical reader and dyslexic children

    PubMed Central

    Seassau, Magali; Gérard, Christophe Loic; Bui-Quoc, Emmanuel; Bucci, Maria Pia

    2014-01-01

    Studies dealing with developmental aspects of binocular eye movement behavior during reading are scarce. In this study we have explored binocular strategies during reading and visual search tasks in a large population of dyslexic and typical readers. Binocular eye movements were recorded using a video-oculography system in 43 dyslexic children (aged 8–13) and in a group of 42 age-matched typical readers. The main findings are: (i) ocular motor characteristics of dyslexic children are impaired in comparison to those reported in typical children in the reading task; (ii) a developmental effect exists in reading in control children, whereas in dyslexic children the effect of development was observed only on fixation durations; and (iii) ocular motor behavior in the visual search tasks is similar for dyslexic children and for typical readers, except for the disconjugacy during and after the saccade: dyslexic children are impaired in comparison to typical children. The data reported here confirm and expand previous studies on children's reading. Both reading skills and binocular saccade coordination improve with age in typical readers. The atypical eye movement patterns observed in dyslexic children suggest a deficiency in visual attentional processing as well as an impairment of the interaction between the ocular motor saccade and vergence systems. PMID:25400559

  3. The right hemisphere is dominant in organization of visual search-A study in stroke patients.

    PubMed

    Ten Brink, Antonia F; Matthijs Biesbroek, J; Kuijf, Hugo J; Van der Stigchel, Stefan; Oort, Quirien; Visser-Meily, Johanna M A; Nijboer, Tanja C W

    2016-05-01

    Cancellation tasks are widely used for diagnosis of lateralized attentional deficits in stroke patients. A disorganized fashion of target cancellation has been hypothesized to reflect disturbed spatial exploration. In the current study we aimed to examine which lesion locations result in disorganized visual search during cancellation tasks, in order to determine which brain areas are involved in search organization. A computerized shape cancellation task was administered in 78 stroke patients. As an index of search organization, the number of intersections between the paths connecting consecutively crossed targets was computed (i.e., the intersections rate). This measure is known to accurately depict disorganized visual search in a stroke population. Ischemic lesions were delineated on CT or MRI images. Assumption-free voxel-based lesion-symptom mapping and region of interest-based analyses were used to determine the grey and white matter anatomical correlates of the intersections rate as a continuous measure. The right lateral occipital cortex, superior parietal lobule, postcentral gyrus, superior temporal gyrus, middle temporal gyrus, supramarginal gyrus, inferior longitudinal fasciculus, first branch of the superior longitudinal fasciculus (SLF I), and the inferior fronto-occipital fasciculus were related to search organization. To conclude, a clear right hemispheric dominance for search organization was revealed. Further, the correlates of disorganized search overlap with regions that have previously been associated with conjunctive search and spatial working memory. This suggests that disorganized visual search is caused by disturbed spatial processes, rather than deficits in high-level executive function or planning, which would be expected to be more related to frontal regions. PMID:26876010
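
    The intersections measure described here can be computed directly from the ordered coordinates of the crossed targets. The sketch below counts how often the straight paths between consecutively cancelled targets cross one another (ignoring adjacent paths, which share an endpoint) and divides by the number of paths; the coordinates and the exact normalisation are illustrative assumptions, not the study's implementation.

```python
from itertools import combinations

def _orient(p, q, r):
    """Sign of the cross product (q - p) x (r - p): >0 left turn, <0 right turn, 0 collinear."""
    return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])

def segments_cross(a, b, c, d):
    """True if segment ab properly crosses segment cd (collinear overlaps are ignored)."""
    d1, d2 = _orient(c, d, a), _orient(c, d, b)
    d3, d4 = _orient(a, b, c), _orient(a, b, d)
    return d1 * d2 < 0 and d3 * d4 < 0

def intersections_rate(cancelled):
    """Crossings between non-adjacent cancellation paths, divided by the number of paths."""
    paths = list(zip(cancelled, cancelled[1:]))
    crossings = sum(segments_cross(*paths[i], *paths[j])
                    for i, j in combinations(range(len(paths)), 2)
                    if j > i + 1)              # adjacent paths share an endpoint; skip them
    return crossings / len(paths)

# Hypothetical cancellation order (x, y) on a cancellation sheet.
order = [(10, 10), (200, 100), (10, 100), (200, 10)]
print("intersections rate:", intersections_rate(order))
```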

  4. Inter-trial priming does not affect attentional priority in asymmetric visual search

    PubMed Central

    Amunts, Liana; Yashar, Amit; Lamy, Dominique

    2014-01-01

    Visual search is considerably speeded when the target's characteristics remain constant across successive selections. Here, we investigated whether such inter-trial priming increases the target's attentional priority, by examining whether target repetition reduces search efficiency during serial search. As the study of inter-trial priming requires the target and distractors to exchange roles unpredictably, it has mostly been confined to singleton searches, which typically yield efficient search. We therefore resorted to two singleton searches known to yield relatively inefficient performance, that is, searches in which the target does not pop out. Participants searched for a veridical angry face among neutral ones or vice-versa, either upright or inverted (Experiment 1) or for a Q among Os or vice-versa (Experiment 2). In both experiments, we found substantial intertrial priming that did not improve search efficiency. In addition, intertrial priming was asymmetric and occurred only when the more salient target repeated. We conclude that intertrial priming does not modulate attentional priority allocation and that it occurs in asymmetric search only when the target is characterized by an additional feature that is consciously perceived. PMID:25221536

  5. Contrasting vertical and horizontal representations of affect in emotional visual search.

    PubMed

    Damjanovic, Ljubica; Santiago, Julio

    2016-02-01

    Independent lines of evidence suggest that the representation of emotional evaluation recruits both vertical and horizontal spatial mappings. These two spatial mappings differ in their experiential origins and their productivity, and available data suggest that they differ in their saliency. Yet, no study has so far compared their relative strength in an attentional orienting reaction time task that affords the simultaneous manifestation of both types of mapping. Here, we investigated this question using a visual search task with emotional faces. We presented angry and happy face targets and neutral distracter faces in top, bottom, left, and right locations on the computer screen. Conceptual congruency effects were observed along the vertical dimension supporting the 'up = good' metaphor, but not along the horizontal dimension. This asymmetrical processing pattern was observed when faces were presented in a cropped (Experiment 1) and whole (Experiment 2) format. These findings suggest that the 'up = good' metaphor is more salient and readily activated than the 'right = good' metaphor, and that the former outcompetes the latter when the task context affords the simultaneous activation of both mappings. PMID:26106061

  6. The Mouse Model of Down Syndrome Ts65Dn Presents Visual Deficits as Assessed by Pattern Visual Evoked Potentials

    PubMed Central

    Scott-McKean, Jonah Jacob; Chang, Bo; Hurd, Ronald E.; Nusinowitz, Steven; Schmidt, Cecilia; Davisson, Muriel T.

    2010-01-01

    Purpose. The Ts65Dn mouse is the most complete widely available animal model of Down syndrome (DS). Quantitative information was generated about visual function in the Ts65Dn mouse by investigating their visual capabilities by means of electroretinography (ERG) and patterned visual evoked potentials (pVEPs). Methods. pVEPs were recorded directly from specific regions of the binocular visual cortex of anesthetized mice in response to horizontal sinusoidal gratings of different spatial frequency, contrast, and luminance generated by a specialized video card and presented on a 21-in. computer display suitably linearized by gamma correction. Results. ERG assessments indicated no significant deficit in retinal physiology in Ts65Dn mice compared with euploid control mice. The Ts65Dn mice were found to exhibit deficits in luminance threshold, spatial resolution, and contrast threshold, compared with the euploid control mice. The behavioral counterparts of these parameters are luminance sensitivity, visual acuity, and the inverse of contrast sensitivity, respectively. Conclusions. DS includes various phenotypes associated with the visual system, including deficits in visual acuity, accommodation, and contrast sensitivity. The present study provides electrophysiological evidence of visual deficits in Ts65Dn mice that are similar to those reported in persons with DS. These findings strengthen the role of the Ts65Dn mouse as a model for DS. Also, given the historical assumption of integrity of the visual system in most behavioral assessments of Ts65Dn mice, such as the hidden-platform component of the Morris water maze, the visual deficits described herein may represent a significant confounding factor in the interpretation of results from such experiments. PMID:20130276

  7. Searching for a major locus for male pattern baldness (MPB)

    SciTech Connect

    Anker, R.; Eisen, A.Z.; Donis-Keller, H.

    1994-09-01

    Male pattern baldness (MPB) is a common trait in post-pubertal males. Approximately 50% of adult males present some degree of MPB by age 50. According to the classification provided by Hamilton in 1951 and modified by Norwood in 1975, the trait itself is a continuum that ranges from mild (Type I) to severe (Type VII) cases. In addition, there is extensive variability for the age of onset. The role of androgens in allowing the expression of this trait in males has been well established. This phenotype is uncommonly expressed in females. The high prevalence of the trait, the distribution of MPB as a continuous trait, and several non-allelic mutations identified in the mouse capable of affecting hair pattern, suggest that MPB is genetically heterogeneous. In order to reduce the probability of multiple non-allelic MPB genes within a pedigree, we selected 9 families in which MPB appears to segregate exclusively through the paternal lineage as compared to bilineal pedigrees. There are 32 males expressing this phenotype and females are treated as phenotype unknown. In general, affected individuals expressed the trait before 30 years of age with a severity of at least Type III or IV. We assumed an autosomal dominant model, with a gene frequency of 1/20 for the affected allele, and 90% penetrance. Simulation studies using the SLINK program with these pedigrees showed that these families would be sufficient to detect linkage under the assumption of a single major locus. If heterogeneity is present, the current resource does not have sufficient power to detect linkage at a statistically significant level, although candidate regions of the genome could be identified for further studies with additional pedigrees. Using 53 highly informative microsatellite markers, and a subset of 7 families, we have screened 30% of the genome. This search included several regions where candidate genes for MPB are located.

  8. Visual Intelligence: Using the Deep Patterns of Visual Language to Build Cognitive Skills

    ERIC Educational Resources Information Center

    Sibbet, David

    2008-01-01

    Thirty years of work as a graphic facilitator listening visually to people in every kind of organization has convinced the author that visual intelligence is a key to navigating an information economy rich with multimedia. He also believes that theory and disciplines developed by practitioners in this new field hold special promise for educators…

  9. Reward association facilitates distractor suppression in human visual search.

    PubMed

    Gong, Mengyuan; Yang, Feitong; Li, Sheng

    2016-04-01

    Although valuable objects are attractive in nature, people often encounter situations where they would prefer to avoid such distraction while focusing on the task goal. Contrary to the typical effect of attentional capture by a reward-associated item, we provide evidence for a facilitation effect derived from the active suppression of a high reward-associated stimulus when cuing its identity as distractor before the display of search arrays. Selection of the target is shown to be significantly faster when the distractors were in high reward-associated colour than those in low reward-associated or non-rewarded colours. This behavioural reward effect was associated with two neural signatures before the onset of the search display: the increased frontal theta oscillation and the strengthened top-down modulation from frontal to anterior temporal regions. The former suggests an enhanced working memory representation for the reward-associated stimulus and the increased need for cognitive control to override Pavlovian bias, whereas the latter indicates that the boost of inhibitory control is realized through a frontal top-down mechanism. These results suggest a mechanism in which the enhanced working memory representation of a reward-associated feature is integrated with task demands to modify attentional priority during active distractor suppression and benefit behavioural performance. PMID:26797805

  10. Exploration on Building of Visualization Platform to Innovate Business Operation Pattern of Supply Chain Finance

    NASA Astrophysics Data System (ADS)

    He, Xiangjun; Tang, Lingyun

    Supply Chain Finance, as a new financing pattern, has attracted broad attention from scholars at home and abroad since its emergence. This paper describes the author's understanding of supply chain finance, classifies its business patterns in China from different perspectives, analyzes the existing problems and deficiencies of these patterns, and finally proposes building a visualization platform to innovate the business operation patterns and risk control modes of domestic supply chain finance.

  11. Use Patterns of Visual Cues in Computer-Mediated Communication

    ERIC Educational Resources Information Center

    Bolliger, Doris U.

    2009-01-01

    Communication in the virtual environment can be challenging for participants because it lacks physical presence and nonverbal elements. Participants may have difficulties expressing their intentions and emotions in a primarily text-based course. Therefore, the use of visual communication elements such as pictographic and typographic marks can be

  12. Assessing the benefits of stereoscopic displays to visual search: methodology and initial findings

    NASA Astrophysics Data System (ADS)

    Godwin, Hayward J.; Holliman, Nick S.; Menneer, Tamaryn; Liversedge, Simon P.; Cave, Kyle R.; Donnelly, Nicholas

    2015-03-01

    Visual search is a task that is carried out in a number of important security and health related scenarios (e.g., X-ray baggage screening, radiography). With recent and ongoing developments in the technology available to present images to observers in stereoscopic depth, there has been increasing interest in assessing whether depth information can be used in complex search tasks to improve search performance. Here we outline the methodology that we developed, along with both software and hardware information, in order to assess visual search performance in complex, overlapping stimuli that also contained depth information. In doing so, our goal is to foster further research along these lines in the future. We also provide an overview with initial results of the experiments that we have conducted involving participants searching stimuli that contain overlapping objects presented on different depth planes to one another. Thus far, we have found that depth information does improve the speed (but not accuracy) of search, but only when the stimuli are highly complex and contain a significant degree of overlap. Depth information may therefore aid real-world search tasks that involve the examination of complex, overlapping stimuli.

  13. Predicting search time in visual scenes using the fuzzy logic approach

    NASA Astrophysics Data System (ADS)

    Meitzler, Thomas J.; Sohn, Eui J.; Singh, Harpreet; Elgarhi, Abdelakrim

    1999-07-01

    The mean search time of observers looking for targets in visual scenes with clutter is computed using the Fuzzy Logic Approach (FLA). The FLA is presented by the authors as a robust method for computing search times and/or probabilities of detection for signature management decisions. The Mamdani/Assilian and Sugeno models have been investigated and are compared. A 44-image data set from TNO is used to build and validate the fuzzy logic model for detection. The input parameters are local luminance, range, aspect, width, and wavelet edge points; the single output is search time. The Mamdani/Assilian model gave predicted mean search times, for data not used in the training set, that had a 0.957 correlation with the field search times. The data set is reduced using a clustering method, then modeled using the FLA, and the results are compared to experiment.
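
    As a rough illustration of how such a fuzzy inference model can be assembled, the sketch below builds a tiny Mamdani-style system with the scikit-fuzzy control API (pip install scikit-fuzzy). The variable names, universes, membership functions, and rules are assumptions made for demonstration only; they are not the TNO-validated model described above.

```python
# Illustrative Mamdani-style fuzzy model relating image metrics to search time.
import numpy as np
import skfuzzy as fuzz
from skfuzzy import control as ctrl

# Two example inputs (the study uses more: local luminance, range, aspect,
# width, and wavelet edge points); universes are normalized to [0, 1].
contrast = ctrl.Antecedent(np.linspace(0, 1, 101), 'local_contrast')
clutter = ctrl.Antecedent(np.linspace(0, 1, 101), 'clutter')
search_time = ctrl.Consequent(np.linspace(0, 60, 121), 'search_time_s')

# automf(3) labels the terms 'poor' / 'average' / 'good' (low / mid / high).
contrast.automf(3)
clutter.automf(3)
search_time['fast'] = fuzz.trimf(search_time.universe, [0, 0, 15])
search_time['medium'] = fuzz.trimf(search_time.universe, [5, 20, 40])
search_time['slow'] = fuzz.trimf(search_time.universe, [25, 60, 60])

rules = [
    ctrl.Rule(contrast['good'] & clutter['poor'], search_time['fast']),
    ctrl.Rule(contrast['average'], search_time['medium']),
    ctrl.Rule(contrast['poor'] | clutter['good'], search_time['slow']),
]

sim = ctrl.ControlSystemSimulation(ctrl.ControlSystem(rules))
sim.input['local_contrast'] = 0.35
sim.input['clutter'] = 0.7
sim.compute()
print(f"Predicted mean search time: {sim.output['search_time_s']:.1f} s")
```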

  14. Display format and highlight validity effects on search performance using complex visual displays

    NASA Technical Reports Server (NTRS)

    Donner, Kimberly A.; Mckay, Tim; O'Brien, Kevin M.; Rudisill, Marianne

    1991-01-01

    Display format and highlight validity have been shown to affect visual display search performance; however, previous studies were conducted on small, artificial displays of alphanumeric stimuli. A study manipulating these variables was conducted using realistic, complex Space Shuttle information displays. A 2x2x3 within-subjects analysis of variance found that search times were faster for items in reformatted displays than in current displays. The significant format by highlight validity interaction showed little difference in response time between current and reformatted displays when highlighting was valid; however, under the no-highlight or invalid-highlight conditions, search times were faster with reformatted displays. The benefits of highlighting and reformatting displays to enhance search, and the necessity of considering highlight validity and format characteristics in tandem when predicting search performance, are discussed.

  15. Visual Search and Emotion: How Children with Autism Spectrum Disorders Scan Emotional Scenes

    ERIC Educational Resources Information Center

    Maccari, Lisa; Pasini, Augusto; Caroli, Emanuela; Rosa, Caterina; Marotta, Andrea; Martella, Diana; Fuentes, Luis J.; Casagrande, Maria

    2014-01-01

    This study assessed visual search abilities, tested through the flicker task, in children diagnosed with autism spectrum disorders (ASDs). Twenty-two children diagnosed with ASD and 22 matched typically developing (TD) children were told to detect changes in objects of central interest or objects of marginal interest (MI) embedded in either…

  16. Visual Search Asymmetries within Color-Coded and Intensity-Coded Displays

    ERIC Educational Resources Information Center

    Yamani, Yusuke; McCarley, Jason S.

    2010-01-01

    Color and intensity coding provide perceptual cues to segregate categories of objects within a visual display, allowing operators to search more efficiently for needed information. Even within a perceptually distinct subset of display elements, however, it may often be useful to prioritize items representing urgent or task-critical information.…

  17. What Are the Shapes of Response Time Distributions in Visual Search?

    ERIC Educational Resources Information Center

    Palmer, Evan M.; Horowitz, Todd S.; Torralba, Antonio; Wolfe, Jeremy M.

    2011-01-01

    Many visual search experiments measure response time (RT) as their primary dependent variable. Analyses typically focus on mean (or median) RT. However, given enough data, the RT distribution can be a rich source of information. For this paper, we collected about 500 trials per cell per observer for both target-present and target-absent displays…

  18. Visual Search for Object Orientation Can Be Modulated by Canonical Orientation

    ERIC Educational Resources Information Center

    Ballaz, Cecile; Boutsen, Luc; Peyrin, Carole; Humphreys, Glyn W.; Marendaz, Christian

    2005-01-01

    The authors studied the influence of canonical orientation on visual search for object orientation. Displays consisted of pictures of animals whose axis of elongation was either vertical or tilted in their canonical orientation. Target orientation could be either congruent or incongruent with the object's canonical orientation. In Experiment 1,…

  19. How You Move Is What You See: Action Planning Biases Selection in Visual Search

    ERIC Educational Resources Information Center

    Wykowska, Agnieszka; Schubo, Anna; Hommel, Bernhard

    2009-01-01

    Three experiments investigated the impact of planning and preparing a manual grasping or pointing movement on feature detection in a visual search task. The authors hypothesized that action planning may prime perceptual dimensions that provide information for the open parameters of that action. Indeed, preparing for grasping facilitated detection…

  20. Implicit short- and long-term memory direct our gaze in visual search.

    PubMed

    Kruijne, Wouter; Meeter, Martijn

    2016-04-01

    Visual attention is strongly affected by the past: both by recent experience and by long-term regularities in the environment that are encoded in and retrieved from memory. In visual search, intertrial repetition of targets causes speeded response times (short-term priming). Similarly, targets that are presented more often than others may facilitate search, even long after the frequency bias is no longer present (long-term priming). In this study, we investigated whether such short-term and long-term priming depend on dissociable mechanisms. By recording eye movements while participants searched for one of two conjunction targets, we explored at what stages of visual search different forms of priming manifest. We found both long- and short-term priming effects. Long-term priming persisted long after the bias was removed, and was found even in participants who were unaware of a color bias. Short- and long-term priming affected the same stage of the task; both biased eye movements towards targets with the primed color, starting with the first eye movement. Neither form of priming affected the response phase of a trial, but response repetition did. The results strongly suggest that both long- and short-term memory can implicitly modulate feedforward visual processing. PMID:26754811

  1. Visual Search Asymmetries within Color-Coded and Intensity-Coded Displays

    ERIC Educational Resources Information Center

    Yamani, Yusuke; McCarley, Jason S.

    2010-01-01

    Color and intensity coding provide perceptual cues to segregate categories of objects within a visual display, allowing operators to search more efficiently for needed information. Even within a perceptually distinct subset of display elements, however, it may often be useful to prioritize items representing urgent or task-critical information.

  2. Low Target Prevalence Is a Stubborn Source of Errors in Visual Search Tasks

    ERIC Educational Resources Information Center

    Wolfe, Jeremy M.; Horowitz, Todd S.; Van Wert, Michael J.; Kenner, Naomi M.; Place, Skyler S.; Kibbi, Nour

    2007-01-01

    In visual search tasks, observers look for targets in displays containing distractors. Likelihood that targets will be missed varies with target prevalence, the frequency with which targets are presented across trials. Miss error rates are much higher at low target prevalence (1%-2%) than at high prevalence (50%). Unfortunately, low prevalence is…

  4. The Development of Visual Search in Infancy: Attention to Faces versus Salience

    ERIC Educational Resources Information Center

    Kwon, Mee-Kyoung; Setoodehnia, Mielle; Baek, Jongsoo; Luck, Steven J.; Oakes, Lisa M.

    2016-01-01

    Four experiments examined how faces compete with physically salient stimuli for the control of attention in 4-, 6-, and 8-month-old infants (N = 117 total). Three computational models were used to quantify physical salience. We presented infants with visual search arrays containing a face and familiar object(s), such as shoes and flowers. Six- and…

  5. Effect of pattern complexity on the visual span for Chinese and alphabet characters.

    PubMed

    Wang, Hui; He, Xuanzi; Legge, Gordon E

    2014-01-01

    The visual span for reading is the number of letters that can be recognized without moving the eyes and is hypothesized to impose a sensory limitation on reading speed. Factors affecting the size of the visual span have been studied using alphabet letters. There may be common constraints applying to recognition of other scripts. The aim of this study was to extend the concept of the visual span to Chinese characters and to examine the effect of the greater complexity of these characters. We measured visual spans for Chinese characters and alphabet letters in the central vision of bilingual subjects. Perimetric complexity was used as a metric to quantify the pattern complexity of binary character images. The visual span tests were conducted with four sets of stimuli differing in complexity--lowercase alphabet letters and three groups of Chinese characters. We found that the size of visual spans decreased with increasing complexity, ranging from 10.5 characters for alphabet letters to 4.5 characters for the most complex Chinese characters studied. A decomposition analysis revealed that crowding was the dominant factor limiting the size of the visual span, and the amount of crowding increased with complexity. Errors in the spatial arrangement of characters (mislocations) had a secondary effect. We conclude that pattern complexity has a major effect on the size of the visual span, mediated in large part by crowding. Measuring the visual span for Chinese characters is likely to have high relevance to understanding visual constraints on Chinese reading performance. PMID:24993020
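
    The sketch below computes perimetric complexity for a binary character image under one common definition (squared perimeter divided by foreground "ink" area; some formulations additionally divide by 4π). It is an illustrative assumption of the metric's form rather than the exact preprocessing used in the study, and it relies on scikit-image's perimeter estimator.

```python
import numpy as np
from skimage.measure import perimeter

def perimetric_complexity(binary_img: np.ndarray, normalize_4pi: bool = False) -> float:
    """binary_img: 2D boolean array, True where the character has 'ink'."""
    ink_area = binary_img.sum()
    if ink_area == 0:
        raise ValueError("Image contains no foreground pixels.")
    boundary = perimeter(binary_img, 8)        # perimeter estimate, 8-connectivity
    complexity = boundary ** 2 / ink_area
    return complexity / (4 * np.pi) if normalize_4pi else complexity

# Toy example: a filled 20x20 square (thin or ornate glyphs score much higher).
square = np.zeros((40, 40), dtype=bool)
square[10:30, 10:30] = True
print(round(perimetric_complexity(square), 2))
```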

  6. Modeling cognitive effects on visual search for targets in cluttered backgrounds

    NASA Astrophysics Data System (ADS)

    Snorrason, Magnus; Ruda, Harald; Hoffman, James

    1998-07-01

    To understand how a human operator performs visual search in complex scenes, it is necessary to take into account top-down cognitive biases in addition to bottom-up visual saliency effects. We constructed a model to elucidate the relationship between saliency and cognitive effects in the domain of visual search for distant targets in photo-realistic images of cluttered scenes. In this domain, detecting targets is difficult and requires high visual acuity. Sufficient acuity is only available near the fixation point, i.e. in the fovea. Hence, the choice of fixation points is the most important determinant of whether targets get detected. We developed a model that predicts the 2D distribution of fixation probabilities directly from an image. Fixation probabilities were computed as a function of local contrast (saliency effect) and proximity to the horizon (cognitive effect: distant targets are more likely to be found close to the horizon). For validation, the model's predictions were compared to ensemble statistics of subjects' actual fixation locations, collected with an eye-tracker. The model's predictions correlated well with the observed data. Disabling the horizon-proximity functionality of the model significantly degraded prediction accuracy, demonstrating that cognitive effects must be accounted for when modeling visual search.
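
    A minimal sketch of the general idea, assuming a simple multiplicative combination: a local-contrast map is weighted by a Gaussian falloff around the horizon row and normalized into a fixation-probability map. The weighting function, window size, and bandwidth below are illustrative choices, not the model's actual equations.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fixation_probability_map(image: np.ndarray, horizon_row: int,
                             window: int = 15, horizon_sigma: float = 40.0) -> np.ndarray:
    """image: 2D grayscale array; horizon_row: row index of the horizon line."""
    img = image.astype(float)
    # Bottom-up term: local RMS contrast, sqrt(E[x^2] - E[x]^2) in a sliding window.
    mean = uniform_filter(img, size=window)
    mean_sq = uniform_filter(img ** 2, size=window)
    contrast = np.sqrt(np.clip(mean_sq - mean ** 2, 0.0, None))

    # Top-down term: weight each row by proximity to the horizon (Gaussian falloff).
    rows = np.arange(image.shape[0])[:, None]
    horizon_weight = np.exp(-0.5 * ((rows - horizon_row) / horizon_sigma) ** 2)

    prob = contrast * horizon_weight
    return prob / prob.sum()                  # normalize to a probability map

# Toy example: random "terrain" with the horizon one third of the way down.
rng = np.random.default_rng(0)
p = fixation_probability_map(rng.normal(size=(240, 320)), horizon_row=80)
print(p.shape, round(float(p.sum()), 6))      # (240, 320) 1.0
```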

  7. Is There a Weekly Pattern for Health Searches on Wikipedia and Is the Pattern Unique to Health Topics?

    PubMed Central

    Lau, Annie YS; Wynn, Rolf

    2015-01-01

    Background Online health information–seeking behaviors have been reported to be more common at the beginning of the workweek. This behavior pattern has been interpreted as a kind of “healthy new start” or “fresh start” due to regrets or attempts to compensate for unhealthy behavior or poor choices made during the weekend. However, the observations regarding the most common health information–seeking day were based only on the analyses of users’ behaviors with websites on health or on online health-related searches. We wanted to confirm if this pattern could be found in searches of Wikipedia on health-related topics and also if this search pattern was unique to health-related topics or if it could represent a more general pattern of online information searching—which could be of relevance even beyond the health sector. Objective The aim was to examine the degree to which the search pattern described previously was specific to health-related information seeking or whether similar patterns could be found in other types of information-seeking behavior. Methods We extracted the number of searches performed on Wikipedia in the Norwegian language for 911 days for the most common sexually transmitted diseases (chlamydia, gonorrhea, herpes, human immunodeficiency virus [HIV], and acquired immune deficiency syndrome [AIDS]), other health-related topics (influenza, diabetes, and menopause), and 2 nonhealth-related topics (footballer Lionel Messi and pop singer Justin Bieber). The search dates were classified according to the day of the week and ANOVA tests were used to compare the average number of hits per day of the week. Results The ANOVA tests showed that the sexually transmitted disease queries had their highest peaks on Tuesdays (P<.001) and the fewest searches on Saturdays. The other health topics also showed a weekly pattern, with the highest peaks early in the week and lower numbers on Saturdays (P<.001). Footballer Lionel Messi had the highest mean number of hits on Tuesdays and Wednesdays, whereas pop singer Justin Bieber had the most hits on Tuesdays. Both these tracked search queries also showed significantly lower numbers on Saturdays (P<.001). Conclusions Our study supports prior studies finding an increase in health information searching at the beginning of the workweek. However, we also found a similar pattern for 2 randomly chosen nonhealth-related terms, which may suggest that the search pattern is not unique to health-related searches. The results are potentially relevant beyond the field of health and our preliminary findings need to be further explored in future studies involving a broader range of nonhealth-related searches. PMID:26693859
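
    The core analysis amounts to grouping daily query counts by weekday and comparing the group means with a one-way ANOVA. The sketch below reproduces that shape on synthetic counts; the 911-day span matches the study, but the weekday effects and noise are made up for illustration.

```python
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(0)
dates = pd.date_range("2013-01-01", periods=911, freq="D")   # 911 days, as in the study
weekday_effect = {"Monday": 120, "Tuesday": 130, "Wednesday": 115, "Thursday": 110,
                  "Friday": 105, "Saturday": 85, "Sunday": 95}
daily = pd.DataFrame({"date": dates})
daily["weekday"] = daily["date"].dt.day_name()
daily["hits"] = daily["weekday"].map(weekday_effect) + rng.normal(0, 10, len(daily))

# Descriptive view used to spot the early-week peak and the Saturday trough.
print(daily.groupby("weekday")["hits"].mean().sort_values(ascending=False))

# One-way ANOVA comparing mean hits across the seven weekdays.
groups = [g["hits"].to_numpy() for _, g in daily.groupby("weekday")]
f_stat, p_value = stats.f_oneway(*groups)
print(f"F = {f_stat:.2f}, p = {p_value:.3g}")
```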

  8. Earthdata Search: Combining New Services and Technologies for Earth Science Data Discovery, Visualization, and Access

    NASA Astrophysics Data System (ADS)

    Quinn, P.; Pilone, D.

    2014-12-01

    A host of new services are revolutionizing discovery, visualization, and access of NASA's Earth science data holdings. At the same time, web browsers have become far more capable and open source libraries have grown to take advantage of these capabilities. Earthdata Search is a web application which combines modern browser features with the latest Earthdata services from NASA to produce a cutting-edge search and access client with features far beyond what was possible only a couple of years ago. Earthdata Search provides data discovery through the Common Metadata Repository (CMR), which provides a high-speed REST API for searching across hundreds of millions of data granules using temporal, spatial, and other constraints. It produces data visualizations by combining CMR data with Global Imagery Browse Services (GIBS) image tiles. Earthdata Search renders its visualizations using custom plugins built on Leaflet.js, a lightweight mobile-friendly open source web mapping library. The client further features an SVG-based interactive timeline view of search results. For data access, Earthdata Search provides easy temporal and spatial subsetting as well as format conversion by making use of OPeNDAP. While the client hopes to drive adoption of these services and standards, it provides fallback behavior for working with data that has not yet adopted them. This allows the client to remain on the cutting-edge of service offerings while still boasting a catalog containing thousands of data collections. In this session, we will walk through Earthdata Search and explain how it incorporates these new technologies and service offerings.
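
    For readers who want to try the granule search directly, the sketch below issues a CMR granule query with temporal and bounding-box constraints, roughly what Earthdata Search does behind the scenes. The collection short name and parameter values are placeholders; consult the CMR API documentation for the full parameter set.

```python
import requests

CMR_GRANULES = "https://cmr.earthdata.nasa.gov/search/granules.json"

params = {
    "short_name": "EXAMPLE_COLLECTION",      # placeholder; use a real collection short name
    "temporal": "2014-01-01T00:00:00Z,2014-12-31T23:59:59Z",
    "bounding_box": "-125,24,-66,50",        # W,S,E,N (continental US, illustrative)
    "page_size": 10,
}

resp = requests.get(CMR_GRANULES, params=params, timeout=30)
resp.raise_for_status()
# The JSON response follows an Atom-like layout: feed -> entry list of granules.
for entry in resp.json().get("feed", {}).get("entry", []):
    print(entry.get("title"))
```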

  9. The Importance of the Eye Area in Face Identification Abilities and Visual Search Strategies in Persons with Asperger Syndrome

    ERIC Educational Resources Information Center

    Falkmer, Marita; Larsson, Matilda; Bjallmark, Anna; Falkmer, Torbjorn

    2010-01-01

    Partly claimed to explain social difficulties observed in people with Asperger syndrome, face identification and visual search strategies become important. Previous research findings are, however, disparate. In order to explore face identification abilities and visual search strategies, with special focus on the importance of the eye area, 24…

  10. VisualRank: applying PageRank to large-scale image search.

    PubMed

    Jing, Yushi; Baluja, Shumeet

    2008-11-01

    Because of the relative ease in understanding and processing text, commercial image-search systems often rely on techniques that are largely indistinguishable from text-search. Recently, academic studies have demonstrated the effectiveness of employing image-based features to provide alternative or additional signals. However, it remains uncertain whether such techniques will generalize to a large number of popular web queries, and whether the potential improvement to search quality warrants the additional computational cost. In this work, we cast the image-ranking problem into the task of identifying "authority" nodes on an inferred visual similarity graph and propose VisualRank to analyze the visual link structures among images. The images found to be "authorities" are chosen as those that answer the image-queries well. To understand the performance of such an approach in a real system, we conducted a series of large-scale experiments based on the task of retrieving images for 2000 of the most popular products queries. Our experimental results show significant improvement, in terms of user satisfaction and relevancy, in comparison to the most recent Google Image Search results. Maintaining modest computational cost is vital to ensuring that this procedure can be used in practice; we describe the techniques required to make this system practical for large scale deployment in commercial search engines. PMID:18787237
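
    A minimal sketch of the core idea: treat images as nodes of a weighted visual-similarity graph and take the stationary PageRank scores as authority estimates. The random similarity matrix and the edge threshold below are stand-ins; VisualRank derives similarities from image features rather than random numbers.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(42)
n_images = 8
sim = rng.random((n_images, n_images))
sim = (sim + sim.T) / 2           # make similarity symmetric
np.fill_diagonal(sim, 0.0)        # no self-links

# Keep only sufficiently similar pairs as edges (threshold is illustrative).
graph = nx.Graph()
graph.add_nodes_from(range(n_images))
for i in range(n_images):
    for j in range(i + 1, n_images):
        if sim[i, j] > 0.5:
            graph.add_edge(i, j, weight=float(sim[i, j]))

# Weighted PageRank: high-scoring nodes are the "authority" images.
scores = nx.pagerank(graph, alpha=0.85, weight="weight")
for img, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"image {img}: {score:.3f}")
```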

  11. Electrophysiological evidence that top-down knowledge controls working memory processing for subsequent visual search.

    PubMed

    Kawashima, Tomoya; Matsumoto, Eriko

    2016-03-23

    Items in working memory guide visual attention toward a memory-matching object. Recent studies have shown that when searching for an object this attentional guidance can be modulated by knowing the probability that the target will match an item in working memory. Here, we recorded the P3 and contralateral delay activity to investigate how top-down knowledge controls the processing of working memory items. Participants performed a memory task (recognition only) and a memory-or-search task (recognition or visual search) in which they were asked to maintain two colored oriented bars in working memory. For visual search, we manipulated the probability that the target had the same color as the memorized items (0, 50, or 100%). Participants knew the probabilities before the task. Target detection in the 100% match condition was faster than in the 50% match condition, indicating that participants used their knowledge of the probabilities. We found that the P3 amplitude in the 100% condition was larger than in the other conditions and that contralateral delay activity amplitude did not vary across conditions. These results suggest that more attention was allocated to the memory items when observers knew in advance that their color would likely match a target. This led to better search performance despite using qualitatively equal working memory representations. PMID:26872100

  12. Cortical dynamics of contextually cued attentive visual learning and search: spatial and object evidence accumulation.

    PubMed

    Huang, Tsung-Ren; Grossberg, Stephen

    2010-10-01

    How do humans use target-predictive contextual information to facilitate visual search? How are consistently paired scenic objects and positions learned and used to more efficiently guide search in familiar scenes? For example, humans can learn that a certain combination of objects may define a context for a kitchen and trigger a more efficient search for a typical object, such as a sink, in that context. The ARTSCENE Search model is developed to illustrate the neural mechanisms of such memory-based context learning and guidance and to explain challenging behavioral data on positive-negative, spatial-object, and local-distant cueing effects during visual search, as well as related neuroanatomical, neurophysiological, and neuroimaging data. The model proposes how global scene layout at a first glance rapidly forms a hypothesis about the target location. This hypothesis is then incrementally refined as a scene is scanned with saccadic eye movements. The model simulates the interactive dynamics of object and spatial contextual cueing and attention in the cortical What and Where streams starting from early visual areas through medial temporal lobe to prefrontal cortex. After learning, model dorsolateral prefrontal cortex (area 46) primes possible target locations in posterior parietal cortex based on goal-modulated percepts of spatial scene gist that are represented in parahippocampal cortex. Model ventral prefrontal cortex (area 47/12) primes possible target identities in inferior temporal cortex based on the history of viewed objects represented in perirhinal cortex. PMID:21038974

  13. Centre-of-Gravity Fixations in Visual Search: When Looking at Nothing Helps to Find Something.

    PubMed

    Venini, Dustin; Remington, Roger W; Horstmann, Gernot; Becker, Stefanie I

    2014-01-01

    In visual search, some fixations are made between stimuli on empty regions, commonly referred to as "centre-of-gravity" fixations (henceforth: COG fixations). Previous studies have shown that observers with task expertise show more COG fixations than novices. This led to the view that COG fixations reflect simultaneous encoding of multiple stimuli, allowing more efficient processing of task-related items. The present study tested whether COG fixations also aid performance in visual search tasks with unfamiliar and abstract stimuli. Moreover, to provide evidence for the multiple-item processing view, we analysed the effects of COG fixations on the number and dwell times of stimulus fixations. The results showed that (1) search efficiency increased with increasing COG fixations even in search for unfamiliar stimuli and in the absence of special higher-order skills, (2) COG fixations reliably reduced the number of stimulus fixations and their dwell times, indicating processing of multiple distractors, and (3) the proportion of COG fixations was dynamically adapted to potential information gain of COG locations. A second experiment showed that COG fixations are diminished when stimulus positions unpredictably vary across trials. Together, the results support the multiple-item processing view, which has important implications for current theories of visual search. PMID:25002972

  14. Active visual search in non-stationary scenes: coping with temporal variability and uncertainty

    NASA Astrophysics Data System (ADS)

    Ušćumlić, Marija; Blankertz, Benjamin

    2016-02-01

    Objective. State-of-the-art experiments for studying neural processes underlying visual cognition often constrain sensory inputs (e.g., static images) and our behavior (e.g., fixed eye-gaze, long eye fixations), isolating or simplifying the interaction of neural processes. Motivated by the non-stationarity of our natural visual environment, we investigated the electroencephalography (EEG) correlates of visual recognition while participants overtly performed visual search in non-stationary scenes. We hypothesized that visual effects (such as those typically used in human-computer interfaces) may increase temporal uncertainty (with reference to fixation onset) of cognition-related EEG activity in an active search task and therefore require novel techniques for single-trial detection. Approach. We addressed fixation-related EEG activity in an active search task with respect to stimulus-appearance styles and dynamics. Alongside popping-up stimuli, our experimental study embraces two composite appearance styles based on fading-in, enlarging, and motion effects. Additionally, we explored whether the knowledge obtained in the pop-up experimental setting can be exploited to boost the EEG-based intention-decoding performance when facing transitional changes of visual content. Main results. The results confirmed our initial hypothesis that the dynamic of visual content can increase temporal uncertainty of the cognition-related EEG activity in active search with respect to fixation onset. This temporal uncertainty challenges the pivotal aim to keep the decoding performance constant irrespective of visual effects. Importantly, the proposed approach for EEG decoding based on knowledge transfer between the different experimental settings gave a promising performance. Significance. Our study demonstrates that the non-stationarity of visual scenes is an important factor in the evolution of cognitive processes, as well as in the dynamic of ocular behavior (i.e., dwell time and fixation duration) in an active search task. In addition, our method to improve single-trial detection performance in this adverse scenario is an important step in making brain-computer interfacing technology available for human-computer interaction applications.

  15. Visual cluster analysis and pattern recognition template and methods

    DOEpatents

    Osbourn, G.C.; Martinez, R.F.

    1999-05-04

    A method of clustering using a novel template to define a region of influence is disclosed. Using neighboring approximation methods, computation times can be significantly reduced. The template and method are applicable and improve pattern recognition techniques. 30 figs.

  16. Visual cluster analysis and pattern recognition template and methods

    DOEpatents

    Osbourn, Gordon Cecil; Martinez, Rubel Francisco

    1999-01-01

    A method of clustering using a novel template to define a region of influence. Using neighboring approximation methods, computation times can be significantly reduced. The template and method are applicable and improve pattern recognition techniques.

  17. Visual cluster analysis and pattern recognition template and methods

    SciTech Connect

    Osbourn, G.C.; Martinez, R.F.

    1993-12-31

    This invention is comprised of a method of clustering using a novel template to define a region of influence. Using neighboring approximation methods, computation times can be significantly reduced. The template and method are applicable and improve pattern recognition techniques.

  18. Hypothesis Support Mechanism for Mid-Level Visual Pattern Recognition

    NASA Technical Reports Server (NTRS)

    Amador, Jose J (Inventor)

    2007-01-01

    A method of mid-level pattern recognition provides for a pose invariant Hough Transform by parametrizing pairs of points in a pattern with respect to at least two reference points, thereby providing a parameter table that is scale- or rotation-invariant. A corresponding inverse transform may be applied to test hypothesized matches in an image and a distance transform utilized to quantify the level of match.
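
    The following sketch illustrates only the general principle suggested by the abstract, namely that expressing pattern points in a coordinate frame defined by two reference points yields parameters invariant to translation, rotation, and uniform scaling. It is not the patented method itself, and all names and values are illustrative.

```python
import numpy as np

def invariant_table(points: np.ndarray, ref_a: np.ndarray, ref_b: np.ndarray) -> np.ndarray:
    """Return each point's coordinates in the frame with origin ref_a,
    x-axis along (ref_b - ref_a), and unit length |ref_b - ref_a|."""
    basis_x = ref_b - ref_a
    scale = np.linalg.norm(basis_x)
    basis_x = basis_x / scale
    basis_y = np.array([-basis_x[1], basis_x[0]])     # perpendicular axis
    rel = (points - ref_a) / scale
    return np.stack([rel @ basis_x, rel @ basis_y], axis=1)

pts = np.array([[1.0, 1.0], [2.0, 3.0], [4.0, 0.5]])
table = invariant_table(pts, pts[0], pts[1])

# Apply an arbitrary rotation + scale + translation; the table is unchanged.
theta, s, t = 0.7, 2.5, np.array([10.0, -3.0])
R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
pts2 = s * pts @ R.T + t
print(np.allclose(table, invariant_table(pts2, pts2[0], pts2[1])))  # True
```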

  19. A Globally Convergent Augmented Lagrangian Pattern Search Algorithm for Optimization with General Constraints and Simple Bounds

    NASA Technical Reports Server (NTRS)

    Lewis, Robert Michael; Torczon, Virginia

    1998-01-01

    We give a pattern search adaptation of an augmented Lagrangian method due to Conn, Gould, and Toint. The algorithm proceeds by successive bound constrained minimization of an augmented Lagrangian. In the pattern search adaptation we solve this subproblem approximately using a bound constrained pattern search method. The stopping criterion proposed by Conn, Gould, and Toint for the solution of this subproblem requires explicit knowledge of derivatives. Such information is presumed absent in pattern search methods; however, we show how we can replace this with a stopping criterion based on the pattern size in a way that preserves the convergence properties of the original algorithm. In this way we proceed by successive, inexact, bound constrained minimization without knowing exactly how inexact the minimization is. So far as we know, this is the first provably convergent direct search method for general nonlinear programming.
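
    A toy sketch of the overall scheme, under simplifying assumptions: an outer augmented-Lagrangian loop whose bound-constrained subproblems are solved inexactly by compass (pattern) search, with the inner stopping rule tied to the pattern size and tightened across outer iterations. It omits the safeguards and convergence machinery of the actual algorithm, and the test problem is illustrative.

```python
import numpy as np

def pattern_search(phi, x, lower, upper, step, tol):
    """Minimize phi over the box [lower, upper]; stop when the pattern size < tol."""
    fx = phi(x)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for sign in (+1.0, -1.0):
                trial = x.copy()
                trial[i] = np.clip(trial[i] + sign * step, lower[i], upper[i])
                ft = phi(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
        if not improved:
            step *= 0.5       # shrink the pattern
    return x

def augmented_lagrangian(f, c, x0, lower, upper, mu=10.0, lam=0.0, outer_iters=20):
    x, tol = np.asarray(x0, float), 1e-2
    for _ in range(outer_iters):
        phi = lambda z: f(z) + lam * c(z) + 0.5 * mu * c(z) ** 2
        x = pattern_search(phi, x, lower, upper, step=0.25, tol=tol)
        lam += mu * c(x)      # first-order multiplier update
        tol *= 0.5            # tighten the inner (pattern-size) stopping rule
    return x

# Toy problem: min (x0-1)^2 + (x1-2.5)^2  s.t.  x0 + x1 = 2,  0 <= x <= 3.
f = lambda z: (z[0] - 1.0) ** 2 + (z[1] - 2.5) ** 2
c = lambda z: z[0] + z[1] - 2.0
print(augmented_lagrangian(f, c, x0=[1.5, 1.5], lower=np.zeros(2), upper=3 * np.ones(2)))
# Expected to approach [0.25, 1.75].
```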

  20. Neural correlates of context-dependent feature conjunction learning in visual search tasks.

    PubMed

    Reavis, Eric A; Frank, Sebastian M; Greenlee, Mark W; Tse, Peter U

    2016-06-01

    Many perceptual learning experiments show that repeated exposure to a basic visual feature such as a specific orientation or spatial frequency can modify perception of that feature, and that those perceptual changes are associated with changes in neural tuning early in visual processing. Such perceptual learning effects thus exert a bottom-up influence on subsequent stimulus processing, independent of task-demands or endogenous influences (e.g., volitional attention). However, it is unclear whether such bottom-up changes in perception can occur as more complex stimuli such as conjunctions of visual features are learned. It is not known whether changes in the efficiency with which people learn to process feature conjunctions in a task (e.g., visual search) reflect true bottom-up perceptual learning versus top-down, task-related learning (e.g., learning better control of endogenous attention). Here we show that feature conjunction learning in visual search leads to bottom-up changes in stimulus processing. First, using fMRI, we demonstrate that conjunction learning in visual search has a distinct neural signature: an increase in target-evoked activity relative to distractor-evoked activity (i.e., a relative increase in target salience). Second, we demonstrate that after learning, this neural signature is still evident even when participants passively view learned stimuli while performing an unrelated, attention-demanding task. This suggests that conjunction learning results in altered bottom-up perceptual processing of the learned conjunction stimuli (i.e., a perceptual change independent of the task). We further show that the acquired change in target-evoked activity is contextually dependent on the presence of distractors, suggesting that search array Gestalts are learned. Hum Brain Mapp 37:2319-2330, 2016. © 2016 Wiley Periodicals, Inc. PMID:26970441

  1. Epistemic Beliefs, Online Search Strategies, and Behavioral Patterns While Exploring Socioscientific Issues

    NASA Astrophysics Data System (ADS)

    Hsu, Chung-Yuan; Tsai, Meng-Jung; Hou, Huei-Tse; Tsai, Chin-Chung

    2014-06-01

    Online information searching tasks are usually implemented in a technology-enhanced science curriculum or merged in an inquiry-based science curriculum. The purpose of this study was to examine the role students' different levels of scientific epistemic beliefs (SEBs) play in their online information searching strategies and behaviors. Based on the measurement of an SEB survey, 42 undergraduate and graduate students in Taiwan were recruited from a pool of 240 students and were divided into sophisticated and naïve SEB groups. The students' self-perceived online searching strategies were evaluated by the Online Information Searching Strategies Inventory, and their search behaviors were recorded by screen-capture videos. A sequential analysis was further used to analyze the students' searching behavioral patterns. The results showed that those students with more sophisticated SEBs tended to employ more advanced online searching strategies and to demonstrate a more metacognitive searching pattern.

  2. Age-related differences in visual search in dynamic displays.

    PubMed

    Becic, Ensar; Kramer, Arthur F; Boot, Walter R

    2007-03-01

    The authors examined the ability of younger and older adults to detect changes in dynamic displays. Older and younger adults viewed displays containing numerous moving objects and were asked to respond when a new object was added to the display. Accuracy, response times, and eye movements were recorded. For both younger and older participants, the number of eye movements accounted for a large proportion of variance in transient detection performance. Participants who actively searched for the change performed significantly worse than did participants who employed a passive or covert scan strategy, indicating that passive scanning may be a beneficial strategy in certain dynamic environments. The cost of an active scan strategy was especially high for older participants in terms of both accuracy and response times. However, older adults who employed a passive or covert scan strategy showed greater improvement, relative to older active searchers, than did younger adults. These results highlight the importance of individual differences in scanning strategy in real-world dynamic, cluttered environments. PMID:17385984

  3. Learning From Data: Recognizing Glaucomatous Defect Patterns and Detecting Progression From Visual Field Measurements

    PubMed Central

    Yousefi, Siamak; Goldbaum, Michael H.; Balasubramanian, Madhusudhanan; Medeiros, Felipe A.; Zangwill, Linda M.; Liebmann, Jeffrey M.; Girkin, Christopher A.; Weinreb, Robert N.

    2014-01-01

    A hierarchical approach to learn from visual field data was adopted to identify glaucomatous visual field defect patterns and to detect glaucomatous progression. The analysis pipeline included three stages, namely, clustering, glaucoma boundary limit detection, and glaucoma progression detection testing. First, cross-sectional visual field tests collected from each subject were clustered using a mixture of Gaussians and model parameters were estimated using expectation maximization. The visual field clusters were further estimated to recognize glaucomatous visual field defect patterns by decomposing each cluster into several axes. The glaucoma visual field defect patterns along each axis then were identified. To derive a definition of progression, the longitudinal visual fields of stable glaucoma eyes on the abnormal cluster axes were projected and the slope was approximated using linear regression (LR) to determine the confidence limit of each axis. For glaucoma progression detection, the longitudinal visual fields of each eye on the abnormal cluster axes were projected and the slope was approximated by LR. Progression was assigned if the progression rate was greater than the boundary limit of the stable eyes; otherwise, stability was assumed. The proposed method was compared to a recently developed progression detection method and to clinically available glaucoma progression detection software. The clinical accuracy of the proposed pipeline was as good as or better than the currently available methods. PMID:24710816
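
    A schematic sketch of the pipeline's shape on synthetic data, assuming standard library components: Gaussian-mixture clustering fitted by EM, projection of longitudinal fields onto a cluster axis, slope estimation by linear regression, and a progression call against a limit derived from stable eyes. The data, the axis definition, and the threshold are all illustrative stand-ins for the clinical procedure described above.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
fields = rng.normal(size=(300, 52))                     # synthetic cross-sectional visual fields

# (1) Cluster the cross-sectional fields with an EM-fitted Gaussian mixture.
gmm = GaussianMixture(n_components=3, random_state=0).fit(fields)

# (2) Take a principal axis of one cluster's covariance as its "defect axis".
axis = np.linalg.eigh(gmm.covariances_[0])[1][:, -1]    # leading eigenvector (unit norm)

def slope_on_axis(series: np.ndarray, times: np.ndarray) -> float:
    """Project a (visits x 52) longitudinal series onto the axis, regress on time."""
    projection = series @ axis
    return LinearRegression().fit(times.reshape(-1, 1), projection).coef_[0]

# (3) Boundary limit from stable eyes (synthetic series with no true trend).
times = np.arange(6, dtype=float)
stable_slopes = [slope_on_axis(rng.normal(size=(6, 52)), times) for _ in range(100)]
limit = np.percentile(stable_slopes, 95)

# (4) A synthetic progressing eye: steady drift along the defect axis across visits.
progressing = rng.normal(size=(6, 52)) + 1.5 * np.outer(times, axis)
verdict = "progression" if slope_on_axis(progressing, times) > limit else "stable"
print(f"limit = {limit:.2f}, verdict = {verdict}")
```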

  4. Footprints: A Visual Search Tool that Supports Discovery and Coverage Tracking.

    PubMed

    Isaacs, Ellen; Domico, Kelly; Ahern, Shane; Bart, Eugene; Singhal, Mudita

    2014-12-01

    Searching a large document collection to learn about a broad subject involves the iterative process of figuring out what to ask, filtering the results, identifying useful documents, and deciding when one has covered enough material to stop searching. We are calling this activity "discoverage," discovery of relevant material and tracking coverage of that material. We built a visual analytic tool called Footprints that uses multiple coordinated visualizations to help users navigate through the discoverage process. To support discovery, Footprints displays topics extracted from documents that provide an overview of the search space and are used to construct searches visuospatially. Footprints allows users to triage their search results by assigning a status to each document (To Read, Read, Useful), and those status markings are shown on interactive histograms depicting the user's coverage through the documents across dates, sources, and topics. Coverage histograms help users notice biases in their search and fill any gaps in their analytic process. To create Footprints, we used a highly iterative, user-centered approach in which we conducted many evaluations during both the design and implementation stages and continually modified the design in response to feedback. PMID:26356893

  5. Pattern generator system as a versatile visual stimulator.

    PubMed

    Kramer, D A; Greene, H L; Jagadeesh, J M

    1992-09-01

    We have designed and implemented a Motorola 68000 microprocessor-based pattern generator system (PGS) that uses a color video display terminal (VDT) to provide light stimuli to the intact vertebrate retina. This communication is intended for those who are considering acquisition of a commercial retinal stimulator or those who are custom designing their own pattern generator system. The discussion surveys the features to be included as well as design factors which must be considered in such a device. The memory organization of the PGS allows multiple, complex stimulus patterns, consisting of one or more disks, annuli, bars, or gratings, to flash or modulate in intensity according to a pre-defined function. In addition, patterns can move smoothly in any direction at selectable, uniform speeds without the re-drawing of video memory. The presence of a 12-bit A/D converter internal to the PGS allows a dynamic change in stimulus position, speed or pattern based upon physiological feedback. A physically realistic image size (0.9 cm²) and resolution (20 μm/pixel) in the retinal plane are achieved with simple intervening optics. The video field rate of 60 Hz is above the flicker fusion frequency for most vertebrate animals and does not induce artifacts in cellular responses. The PGS operating in a PC-based environment meets the requirements of a versatile optical stimulator for investigations in retinal electrophysiology. PMID:1474846

  6. The downside of choice: Having a choice benefits enjoyment, but at a cost to efficiency and time in visual search.

    PubMed

    Kunar, Melina A; Ariyabandu, Surani; Jami, Zaffran

    2016-04-01

    The efficiency of how people search for an item in visual search has, traditionally, been thought to depend on bottom-up or top-down guidance cues. However, recent research has shown that the rate at which people visually search through a display is also affected by cognitive strategies. In this study, we investigated the role of choice in visual search, by asking whether giving people a choice alters both preference for a cognitively neutral task and search behavior. Two visual search conditions were examined: one in which participants were given a choice of visual search task (the choice condition), and one in which participants did not have a choice (the no-choice condition). The results showed that the participants in the choice condition rated the task as both more enjoyable and likeable than did the participants in the no-choice condition. However, despite their preferences, actual search performance was slower and less efficient in the choice condition than in the no-choice condition (Exp. 1). Experiment 2 showed that the difference in search performance between the choice and no-choice conditions disappeared when central executive processes became occupied with a task-switching task. These data concur with a choice-impaired hypothesis of search, in which having a choice leads to more motivated, active search involving executive processes. PMID:26892010

  7. Increased Vulnerability to Pattern-Related Visual Stress in Myalgic Encephalomyelitis.

    PubMed

    Wilson, Rachel L; Paterson, Kevin B; Hutchinson, Claire V

    2015-12-01

    The objective of this study was to determine vulnerability to pattern-related visual stress in Myalgic Encephalomyelitis/Chronic Fatigue Syndrome (ME/CFS). A total of 20 ME/CFS patients and 20 matched (age, gender) controls were recruited to the study. Pattern-related visual stress was determined using the Pattern Glare Test. Participants viewed three patterns, the spatial frequencies (SF) of which were 0.3 (low-SF), 2.3 (mid-SF), and 9.4 (high-SF) cycles per degree (c/deg). They reported the number of distortions they experienced when viewing each pattern. ME/CFS patients exhibited significantly higher pattern glare scores than controls for the mid-SF pattern. Mid-high SF differences were also significantly higher in patients than controls. These findings provide evidence of altered visual perception in ME/CFS. Pattern-related visual stress may represent an identifiable clinical feature of ME/CFS that will prove useful in its diagnosis. However, further research is required to establish if these symptoms reflect ME/CFS-related changes in the functioning of sensory neural pathways. PMID:26562880

  8. Visualizing a High Recall Search Strategy Output for Undergraduates in an Exploration Stage of Researching a Term Paper.

    ERIC Educational Resources Information Center

    Cole, Charles; Mandelblatt, Bertie; Stevenson, John

    2002-01-01

    Discusses high recall search strategies for undergraduates and how to overcome information overload that results. Highlights include word-based versus visual-based schemes; five summarization and visualization schemes for presenting information retrieval citation output; and results of a study that recommend visualization schemes geared toward…

  9. Through The Looking (Google) Glass: Attentional Costs in Distracted Visual Search.

    PubMed

    Lewis, Joanna; Neider, Mark

    2015-01-01

    Devices using a Heads-Up-Display (HUD), such as Google Glass (GG), provide users with a wide range of informational content, often while that user is engaged in a concurrent task. It is unclear, however, how such information might interfere with attentional processes. Here, we evaluated how a secondary task load presented on GG affects selective attention mechanisms. Participants completed a visual search task for an oriented T target among L distractors (50 or 80 set size) on a computer screen. Our primary manipulation was the nature of a secondary task via the use (or non-use) of GG. More specifically, participants performed the search task while they either did not wear GG (control condition), wore GG with no information presented on it, or wore the GG with a word presented on it. Additionally, we also manipulated the instructions given to the participant regarding the relevance of the information presented on the GG (e.g., useful, irrelevant, or ignore). When words were presented on the GG, we tested for recognition memory with a surprise recognition task composed of 50% new and old words following the visual search task. We found an RT cost during visual search associated with simply wearing GG compared to when participants searched without wearing GG (~258ms) and when secondary information was presented as compared to wearing GG with no information presented (~225ms). We found no interaction of search set size and GG condition, nor was there an effect of GG condition on search accuracy. Recognition memory was significantly above chance in all instruction conditions; even when participants were instructed that information presented on the GG should be ignored, there was still evidence that the information was processed. Overall, our findings suggest that information presented on HUDs, such as GG, may induce performance costs on concurrent tasks requiring selective attention. Meeting abstract presented at VSS 2015. PMID:26327048

  10. Production and perception rules underlying visual patterns: effects of symmetry and hierarchy

    PubMed Central

    Westphal-Fitch, Gesche; Huber, Ludwig; Gómez, Juan Carlos; Fitch, W. Tecumseh

    2012-01-01

    Formal language theory has been extended to two-dimensional patterns, but little is known about two-dimensional pattern perception. We first examined spontaneous two-dimensional visual pattern production by humans, gathered using a novel touch screen approach. Both spontaneous creative production and subsequent aesthetic ratings show that humans prefer ordered, symmetrical patterns over random patterns. We then further explored pattern-parsing abilities in different human groups, and compared them with pigeons. We generated visual plane patterns based on rules varying in complexity. All human groups tested, including children and individuals diagnosed with autism spectrum disorder (ASD), were able to detect violations of all production rules tested. Our ASD participants detected pattern violations with the same speed and accuracy as matched controls. Children's ability to detect violations of a relatively complex rotational rule correlated with age, whereas their ability to detect violations of a simple translational rule did not. By contrast, even with extensive training, pigeons were unable to detect orientation-based structural violations, suggesting that, unlike humans, they did not learn the underlying structural rules. Visual two-dimensional patterns offer a promising new formally-grounded way to investigate pattern production and perception in general, widely applicable across species and age groups. PMID:22688636

  11. Time Curves: Folding Time to Visualize Patterns of Temporal Evolution in Data.

    PubMed

    Bach, Benjamin; Shi, Conglei; Heulot, Nicolas; Madhyastha, Tara; Grabowski, Tom; Dragicevic, Pierre

    2016-01-01

    We introduce time curves as a general approach for visualizing patterns of evolution in temporal data. Examples of such patterns include slow and regular progressions, large sudden changes, and reversals to previous states. These patterns can be of interest in a range of domains, such as collaborative document editing, dynamic network analysis, and video analysis. Time curves employ the metaphor of folding a timeline visualization into itself so as to bring similar time points close to each other. This metaphor can be applied to any dataset where a similarity metric between temporal snapshots can be defined, thus it is largely datatype-agnostic. We illustrate how time curves can visually reveal informative patterns in a range of different datasets. PMID:26529718
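
    One way to realize the folding metaphor, assuming classical multidimensional scaling as the projection: compute pairwise dissimilarities between temporal snapshots, embed them in 2D, and connect the points in chronological order so that similar time points land near one another. The data and the distance metric below are illustrative.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.manifold import MDS

rng = np.random.default_rng(3)
# 12 synthetic snapshots: a slow drift whose last state reverts to an earlier one.
snapshots = np.cumsum(rng.normal(scale=0.3, size=(12, 20)), axis=0)
snapshots[-1] = snapshots[2] + rng.normal(scale=0.05, size=20)    # a "reversal"

# Pairwise dissimilarities between snapshots, then a 2D embedding of them.
dist = squareform(pdist(snapshots, metric="euclidean"))
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(dist)

# Connecting the points in time order gives the folded "time curve";
# here we simply print the coordinates in chronological order.
for t, (x, y) in enumerate(coords):
    print(f"t={t:02d}  ({x:+.2f}, {y:+.2f})")
```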

  12. PATTERN REVERSAL VISUAL EVOKED POTENTIALS IN AWAKE RATS

    EPA Science Inventory

    A method for recording pattern reversal evoked potentials (PREPs) from awake restrained rats has been developed. The procedure of Onofrj et al. was modified to eliminate the need for anesthetic, thereby avoiding possible interactions of the anesthetic with other manipulations of ...

  13. The NLP Swish Pattern: An Innovative Visualizing Technique.

    ERIC Educational Resources Information Center

    Masters, Betsy J.; And Others

    1991-01-01

    Describes swish pattern, one of many innovative therapeutic interventions that developers of neurolinguistic programing (NLP) have contributed to counseling profession. Presents brief overview of NLP followed by an explanation of the basic theory and expected outcomes of the swish. Presents description of the intervention process and case studies…

  14. Priming of Visual Search Facilitates Attention Shifts: Evidence From Object-Substitution Masking.

    PubMed

    Kristjánsson, Árni

    2016-03-01

    Priming of visual search strongly affects visual function: it releases items from crowding, and during free choice primed targets are chosen over unprimed ones. Two accounts of priming have been proposed: attentional facilitation of primed features and postperceptual episodic memory retrieval that involves mapping responses to visual events. Here, well-known masking effects were used to assess the two accounts. Object-substitution masking has been considered to reflect attentional processing: It does not occur when a target is precued and is strengthened when distractors are present. Conversely, metacontrast masking has been connected to lower level processing where attention exerts little effect. If priming facilitates attention shifts, it should mitigate object-substitution masking, while lower level masking might not be similarly influenced. Observers searched for an odd-colored target among distractors. Unpredictably (on 20% of trials), object-substitution masks or metacontrast masks appeared around the target. Object-substitution masking was strongly mitigated for primed target colors, while metacontrast masking was mostly unaffected. This argues against episodic retrieval accounts of priming, placing the priming locus firmly within the realm of attentional processing. The results suggest that priming of visual search facilitates attention shifts to the target, which allows better spatiotemporal resolution that overcomes object-substitution masking. PMID:26562865

  15. Incidental Learning Speeds Visual Search by Lowering Response Thresholds, Not by Improving Efficiency: Evidence from Eye Movements

    ERIC Educational Resources Information Center

    Hout, Michael C.; Goldinger, Stephen D.

    2012-01-01

    When observers search for a target object, they incidentally learn the identities and locations of "background" objects in the same display. This learning can facilitate search performance, eliciting faster reaction times for repeated displays. Despite these findings, visual search has been successfully modeled using architectures that maintain no…

  16. On the Role of Consonants and Vowels in Visual-Word Processing: Evidence with a Letter Search Paradigm

    ERIC Educational Resources Information Center

    Acha, Joana; Perea, Manuel

    2010-01-01

    Prior research has shown that the search function in the visual letter search task may reflect the regularities of the orthographic structure of a given script. In the present experiment, we examined whether the search function of letter detection was sensitive to consonant-vowel status of a pre-cued letter. Participants had to detect the…

  18. Toddlers with Autism Spectrum Disorder are more successful at visual search than typically developing toddlers

    PubMed Central

    Kaldy, Zsuzsa; Kraper, Catherine; Carter, Alice S.; Blaser, Erik

    2011-01-01

    Plaisted, O’Riordan and colleagues (Plaisted, O’Riordan & Baron-Cohen, 1998; O’Riordan, 2004) showed that school-age children and adults with Autism Spectrum Disorder (ASD) are faster at finding targets in certain types of visual search tasks than typical controls. Currently though, there is very little known about the visual search skills of very young children (1–3-year-olds) – both typically developing or with ASD. We used an eye-tracker to measure looking behavior, providing fine-grained measures of visual search in 2.5-year-old toddlers with and without ASD (this representing the age by which many children may first receive a diagnosis of ASD). Importantly, our paradigm required no verbal instructions or feedback, making the task appropriate for toddlers who are pre- or nonverbal. We found that toddlers with ASD were more successful at finding the target than typically developing, age-matched controls. Further, our paradigm allowed us to estimate the number of items scrutinized per trial, revealing that for large set size conjunctive search, toddlers with ASD scrutinized as many as twice the number of items as typically developing toddlers, in the same amount of time. PMID:21884314

  19. Task-dependent modulation of word processing mechanisms during modified visual search tasks.

    PubMed

    Dampure, Julien; Benraiss, Abdelrhani; Vibert, Nicolas

    2016-06-01

    During visual search for words, the impact of the visual and semantic features of words varies as a function of the search task. This event-related potential (ERP) study focused on the way these features of words are used to detect similarities between the distractor words that are glanced at and the target word, as well as to then reject the distractor words. The participants had to search for a target word that was either given literally or defined by a semantic clue among words presented sequentially. The distractor words included words that resembled the target and words that were semantically related to the target. The P2a component was the first component to be modulated by the visual and/or semantic similarity of distractors to the target word, and these modulations varied according to the task. The same held true for the later N300 and N400 components, which confirms that, depending on the task, distinct processing pathways were sensitized through attentional modulation. Hence, the process that matches what is perceived with the target acts during the first 200 ms after word presentation, and both early detection and late rejection processes of words depend on the search task and on the representation of the target stored in memory. PMID:26176489

  20. Evaluation of a prototype search and visualization system for exploring scientific communities.

    PubMed

    Bales, Michael E; Kaufman, David R; Johnson, Stephen B

    2009-01-01

    Searches of bibliographic databases generate lists of articles but do little to reveal connections between authors, institutions, and grants. As a result, search results cannot be fully leveraged. To address this problem we developed Sciologer, a prototype search and visualization system. Sciologer presents the results of any PubMed query as an interactive network diagram of the above elements. We conducted a cognitive evaluation with six neuroscience and six obesity researchers. Researchers used the system effectively. They used geographic, color, and shape metaphors to describe community structure and made accurate inferences pertaining to a) collaboration among research groups; b) prominence of individual researchers; and c) differentiation of expertise. The tool confirmed certain beliefs, disconfirmed others, and extended their understanding of their own discipline. The majority indicated the system offered information of value beyond a traditional PubMed search and that they would use the tool if available. PMID:20351816
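
    As a rough sketch of the kind of network Sciologer builds, the code below assembles a co-authorship graph from a handful of made-up author lists (standing in for parsed PubMed results) and reports a simple prominence measure. Sciologer's real data model additionally links institutions and grants.

```python
import itertools
import networkx as nx

# Hypothetical, made-up records standing in for parsed PubMed query results.
articles = [
    {"pmid": "0000001", "authors": ["Smith A", "Jones B", "Lee C"]},
    {"pmid": "0000002", "authors": ["Jones B", "Lee C"]},
    {"pmid": "0000003", "authors": ["Lee C", "Garcia D"]},
]

g = nx.Graph()
for art in articles:
    for a, b in itertools.combinations(art["authors"], 2):
        # Edge weight counts the number of co-authored papers.
        w = g.get_edge_data(a, b, default={"weight": 0})["weight"]
        g.add_edge(a, b, weight=w + 1)

# Degree centrality as a crude proxy for author prominence in the result set.
for author, score in sorted(nx.degree_centrality(g).items(), key=lambda kv: -kv[1]):
    print(f"{author}: {score:.2f}")
```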

  1. Evaluation of a Prototype Search and Visualization System for Exploring Scientific Communities

    PubMed Central

    Bales, Michael E.; Kaufman, David R.; Johnson, Stephen B.

    2009-01-01

    Searches of bibliographic databases generate lists of articles but do little to reveal connections between authors, institutions, and grants. As a result, search results cannot be fully leveraged. To address this problem we developed Sciologer, a prototype search and visualization system. Sciologer presents the results of any PubMed query as an interactive network diagram of the above elements. We conducted a cognitive evaluation with six neuroscience and six obesity researchers. Researchers used the system effectively. They used geographic, color, and shape metaphors to describe community structure and made accurate inferences pertaining to a) collaboration among research groups; b) prominence of individual researchers; and c) differentiation of expertise. The tool confirmed certain beliefs, disconfirmed others, and extended their understanding of their own discipline. The majority indicated the system offered information of value beyond a traditional PubMed search and that they would use the tool if available. PMID:20351816

  2. The role of selective attention during visual search using random dot motion stimuli.

    PubMed

    Bolandnazar, Zeinab; Lennarz, Bianca; Mirpour, Koorosh; Bisley, James

    2015-01-01

    Finding objects among distractors is an essential everyday skill, which is often tested with visual search tasks using static items in the display. Although these kinds of displays are ideal for studying search behavior, the neural encoding of the visual stimuli can occur rapidly, which limits the analysis that can be done on the accumulation of evidence. Searching for a target among multiple random dot motion (RDM) stimuli should allow us to study the effect of attention on the accumulation of information during visual search. We trained an animal to make a saccade to a RDM stimulus with motion in a particular direction (the target). The animal began the task by fixating a central square. After a short delay, it changed to a dotted hollow square and one, two or four RDM stimuli appeared equally spaced in the periphery for 700 ms. The animal was rewarded for looking at the target, if present, or for maintaining fixation if the target was absent from the display. In the spread attention condition, all the dots in the RDM stimuli were the same color. In the focused attention condition, the color of the fixation square and the dotted hollow square matched the color of the dots in one RDM stimulus, which was a 100% valid cue. We varied the coherence of the RDM stimuli for each condition from 65 to 100% (100 ms limited lifetime). At the lower coherences, there were strong effects of set size and attention condition on both performance and reaction time. Our data show that using a RDM visual search task allows us to clearly illustrate the role of attention in the accumulation of perceptual evidence, which increases response accuracy and shortens reaction time. Meeting abstract presented at VSS 2015. PMID:26327054

  3. Color is processed less efficiently than orientation in change detection but more efficiently in visual search.

    PubMed

    Huang, Liqiang

    2015-05-01

    Basic visual features (e.g., color, orientation) are assumed to be processed in the same general way across different visual tasks. Here, a significant deviation from this assumption was predicted on the basis of the analysis of stimulus spatial structure, as characterized by the Boolean-map notion. If a task requires memorizing the orientations of a set of bars, then the map consisting of those bars can be readily used to hold the overall structure in memory and will thus be especially useful. If the task requires visual search for a target, then the map, which contains only an overall structure, will be of little use. Supporting these predictions, the present study demonstrated that in comparison to stimulus colors, bar orientations were processed more efficiently in change-detection tasks but less efficiently in visual search tasks (Cohen's d = 4.24). In addition to offering support for the role of the Boolean map in conscious access, the present work also casts doubt on the generality of visual feature processing across tasks. PMID:25834029

  4. The Dynamics of Visual Experience, an EEG Study of Subjective Pattern Formation

    PubMed Central

    Elliott, Mark A.; Twomey, Deirdre; Glennon, Mark

    2012-01-01

    Background Since the origin of psychological science a number of studies have reported visual pattern formation in the absence of either physiological stimulation or direct visual-spatial references. Subjective patterns range from simple phosphenes to complex patterns but are highly specific and reported reliably across studies. Methodology/Principal Findings Using independent-component analysis (ICA) we report a reduction in amplitude variance consistent with subjective-pattern formation in ventral posterior areas of the electroencephalogram (EEG). The EEG exhibits significantly increased power at delta/theta and gamma-frequencies (point and circle patterns) or a series of high-frequency harmonics of a delta oscillation (spiral patterns). Conclusions/Significance Subjective-pattern formation may be described in a way entirely consistent with identical pattern formation in fluids or granular flows. In this manner, we propose subjective-pattern structure to be represented within a spatio-temporal lattice of harmonic oscillations which bind topographically organized visual-neuronal assemblies by virtue of low frequency modulation. PMID:22292053

  5. Spatial properties of objects predict patterns of neural response in the ventral visual pathway.

    PubMed

    Watson, David M; Young, Andrew W; Andrews, Timothy J

    2016-02-01

    Neuroimaging studies have revealed topographically organised patterns of response to different objects in the ventral visual pathway. These patterns are thought to be based on the form of the object. However, it is not clear what dimensions of object form are important. Here, we determined the extent to which spatial properties (energy across the image) could explain patterns of response in these regions. We compared patterns of fMRI response to images from different object categories presented at different retinal sizes. Although distinct neural patterns were evident to different object categories, changing the size (and thus the spatial properties) of the images had a significant effect on these patterns. Next, we used a computational approach to determine whether more fine-grained differences in the spatial properties can explain the patterns of neural response to different objects. We found that the spatial properties of the image were able to predict patterns of neural response, even when categorical factors were removed from the analysis. We also found that the effect of spatial properties on the patterns of response varies across the ventral visual pathway. These results show how spatial properties can be an important organising principle in the topography of the ventral visual pathway. PMID:26619786

  6. Visualization and analysis of 3D gene expression patterns in zebrafish using web services

    NASA Astrophysics Data System (ADS)

    Potikanond, D.; Verbeek, F. J.

    2012-01-01

    The analysis of gene expression patterns plays an important role in developmental biology and molecular genetics. Visualizing both quantitative and spatio-temporal aspects of gene expression patterns together with referenced anatomical structures of a model organism in 3D can help identify how a group of genes is expressed at a certain location at a particular developmental stage of an organism. In this paper, we present an approach to provide an online visualization of gene expression data in zebrafish (Danio rerio) within a 3D reconstruction model of zebrafish at different developmental stages. We developed web services that provide programmable access to the 3D reconstruction data and spatio-temporal gene expression data maintained in our local repositories. To demonstrate this work, we developed a web application that uses these web services to retrieve data from our local information systems. The web application also retrieves relevant analyses of microarray gene expression data from an external community resource, i.e., the ArrayExpress Atlas. All the relevant gene expression pattern data are subsequently integrated with the reconstruction data of the zebrafish atlas using ontology-based mapping. The resulting visualization provides quantitative and spatial information on patterns of gene expression in a 3D graphical representation of the zebrafish atlas at a certain developmental stage. To deliver the visualization to the user, we developed a Java-based 3D viewer client that can be integrated in a web interface, allowing the user to visualize the integrated information over the Internet.

  7. Adding a Visualization Feature to Web Search Engines: It’s Time

    SciTech Connect

    Wong, Pak C.

    2008-11-11

    Since the first world wide web (WWW) search engine quietly entered our lives in 1994, the “information need” behind web searching has rapidly grown into a multi-billion dollar business that dominates the internet landscape, drives e-commerce traffic, propels global economy, and affects the lives of the whole human race. Today’s search engines are faster, smarter, and more powerful than those released just a few years ago. With the vast investment pouring into research and development by leading web technology providers and the intense emotion behind corporate slogans such as “win the web” or “take back the web,” I can’t help but ask why are we still using the very same “text-only” interface that was used 13 years ago to browse our search engine results pages (SERPs)? Why has the SERP interface technology lagged so far behind in the web evolution when the corresponding search technology has advanced so rapidly? In this article I explore some current SERP interface issues, suggest a simple but practical visual-based interface design approach, and argue why a visual approach can be a strong candidate for tomorrow’s SERP interface.

  8. Effects of Individual Health Topic Familiarity on Activity Patterns During Health Information Searches

    PubMed Central

    Moriyama, Koichi; Fukui, Ken-ichi; Numao, Masayuki

    2015-01-01

    Background Non-medical professionals (consumers) are increasingly using the Internet to support their health information needs. However, the cognitive effort required to perform health information searches is affected by the consumer’s familiarity with health topics. Consumers may have different levels of familiarity with individual health topics. This variation in familiarity may cause misunderstandings because the information presented by search engines may not be understood correctly by the consumers. Objective As a first step toward the improvement of the health information search process, we aimed to examine the effects of health topic familiarity on health information search behaviors by identifying the common search activity patterns exhibited by groups of consumers with different levels of familiarity. Methods Each participant completed a health terminology familiarity questionnaire and health information search tasks. The responses to the familiarity questionnaire were used to grade the familiarity of participants with predefined health topics. The search task data were transcribed into a sequence of search activities using a coding scheme. A computational model was constructed from the sequence data using a Markov chain model to identify the common search patterns in each familiarity group. Results Forty participants were classified into L1 (not familiar), L2 (somewhat familiar), and L3 (familiar) groups based on their questionnaire responses. They had different levels of familiarity with four health topics. The video data obtained from all of the participants were transcribed into 4595 search activities (mean 28.7, SD 23.27 per session). The most frequent search activities and transitions in all the familiarity groups were related to evaluations of the relevancy of selected web pages in the retrieval results. However, the next most frequent transitions differed in each group and a chi-squared test confirmed this finding (P<.001). Next, according to the results of a perplexity evaluation, the health information search patterns were best represented as a 5-gram sequence pattern. The most common patterns in group L1 were frequent query modifications, with relatively low search efficiency, and accessing and evaluating selected results from a health website. Group L2 performed frequent query modifications, but with better search efficiency, and accessed and evaluated selected results from a health website. Finally, the members of group L3 successfully discovered relevant results from the first query submission, performed verification by accessing several health websites after they discovered relevant results, and directly accessed consumer health information websites. Conclusions Familiarity with health topics affects health information search behaviors. Our analysis of state transitions in search activities detected unique behaviors and common search activity patterns in each familiarity group during health information searches. PMID:25783222
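
    The activity-coding and Markov-chain step described above can be illustrated with a small sketch. The snippet below (a minimal illustration, not the authors' code) estimates first-order transition probabilities from coded search-activity sequences; the single-letter activity codes and the toy sessions are hypothetical.

    from collections import defaultdict

    def transition_matrix(sequences):
        """Estimate P(next activity | current activity) from coded search sessions."""
        counts = defaultdict(lambda: defaultdict(int))
        for seq in sequences:
            for cur, nxt in zip(seq, seq[1:]):
                counts[cur][nxt] += 1
        probs = {}
        for cur, nxts in counts.items():
            total = sum(nxts.values())
            probs[cur] = {nxt: c / total for nxt, c in nxts.items()}
        return probs

    # Hypothetical codes: Q = query modification, S = select a result, E = evaluate a page.
    group_sessions = [list("QSEQSEQSE"), list("QSEEQSE"), list("QQSEQSE")]
    for cur, nxts in transition_matrix(group_sessions).items():
        most_likely = max(nxts, key=nxts.get)
        print(f"{cur} -> {most_likely} ({nxts[most_likely]:.2f})")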

  9. Digital Pattern Search and Its Hybridization with Genetic Algorithms for Bound Constrained Global Optimization

    NASA Astrophysics Data System (ADS)

    Kim, Nam-Geun; Park, Youngsu; Kim, Jong-Wook; Kim, Eunsu; Kim, Sang Woo

    In this paper, we present a recently developed pattern search method called the Genetic Pattern Search algorithm (GPSA) for the global optimization of a cost function subject to simple bounds. GPSA is a combined global optimization method using a genetic algorithm (GA) and the Digital Pattern Search (DPS) method, which has a digital structure represented by binary strings and guarantees convergence to stationary points from arbitrary starting points. The performance of GPSA is validated through extensive numerical experiments on a number of well-known functions and on a robot walking application. The optimization results confirm that GPSA is a robust and efficient global optimization method.
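
    As an illustration of the pattern-search half of this hybrid, the sketch below implements a plain coordinate pattern search for bound-constrained minimization; the binary-string (digital) encoding and the GA hybridization that define GPSA are not reproduced, and the test function is a toy example.

    import numpy as np

    def pattern_search(f, x0, lower, upper, step=0.5, tol=1e-6, max_iter=1000):
        """Coordinate pattern search for bound-constrained minimization."""
        x = np.clip(np.asarray(x0, dtype=float), lower, upper)
        fx = f(x)
        for _ in range(max_iter):
            improved = False
            for i in range(len(x)):                    # poll +/- step along each axis
                for delta in (step, -step):
                    trial = x.copy()
                    trial[i] = np.clip(trial[i] + delta, lower[i], upper[i])
                    ft = f(trial)
                    if ft < fx:
                        x, fx, improved = trial, ft, True
            if not improved:
                step *= 0.5                            # contract the pattern
                if step < tol:
                    break
        return x, fx

    sphere = lambda v: float(np.sum(v ** 2))           # toy test function
    x_best, f_best = pattern_search(sphere, [3.0, -2.0],
                                    lower=np.array([-5.0, -5.0]),
                                    upper=np.array([5.0, 5.0]))
    print(x_best, f_best)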

  10. Crowding by a single bar: probing pattern recognition mechanisms in the visual periphery.

    PubMed

    Põder, Endel

    2014-01-01

    Whereas visual crowding does not greatly affect the detection of the presence of simple visual features, it heavily inhibits combining them into recognizable objects. Still, crowding effects have rarely been directly related to general pattern recognition mechanisms. In this study, pattern recognition mechanisms in visual periphery were probed using a single crowding feature. Observers had to identify the orientation of a rotated T presented briefly in a peripheral location. Adjacent to the target, a single bar was presented. The bar was either horizontal or vertical and located in a random direction from the target. It appears that such a crowding bar has very strong and regular effects on the identification of the target orientation. The observer's responses are determined by approximate relative positions of basic visual features; exact image-based similarity to the target is not important. A version of the "standard model" of object recognition with second-order features explains the main regularities of the data. PMID:25378369

  11. Sequential patterns mining and gene sequence visualization to discover novelty from microarray data.

    PubMed

    Sallaberry, A; Pecheur, N; Bringay, S; Roche, M; Teisseire, M

    2011-10-01

    Data mining allows users to discover novelty in huge amounts of data. Frequent pattern methods have proved to be efficient, but the extracted patterns are often too numerous and thus difficult for end users to analyze. In this paper, we focus on sequential pattern mining and propose a new visualization system to help end users analyze the extracted knowledge and to highlight novelty according to databases of referenced biological documents. Our system is based on three visualization techniques: clouds, solar systems, and treemaps. We show that these techniques are very helpful for identifying associations and hierarchical relationships between patterns among related documents. Sequential patterns extracted from gene data using our system were successfully evaluated by two biology laboratories working on Alzheimer's disease and cancer. PMID:21527357
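
    To make the mining step concrete, the sketch below counts frequent contiguous subsequences across document-derived gene sequences as a simple stand-in for sequential pattern mining; the paper's actual mining algorithm and its cloud, solar-system, and treemap visualizations are not reproduced, and the gene lists are toy data.

    from collections import Counter

    def frequent_patterns(sequences, length=2, min_support=2):
        """Count contiguous subsequences of a given length, keeping those with enough support."""
        counts = Counter()
        for seq in sequences:
            seen = set()
            for i in range(len(seq) - length + 1):
                seen.add(tuple(seq[i:i + length]))
            counts.update(seen)            # support = number of sequences containing the pattern
        return {p: s for p, s in counts.items() if s >= min_support}

    # Toy gene sequences extracted from three hypothetical documents.
    docs = [["APP", "PSEN1", "MAPT"], ["APP", "PSEN1", "APOE"], ["MAPT", "APP", "PSEN1"]]
    print(frequent_patterns(docs))         # e.g. {('APP', 'PSEN1'): 3}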

  12. Evaluation of a dichromatic color-appearance simulation by a visual search task

    NASA Astrophysics Data System (ADS)

    Sunaga, Shoji; Ogura, Tomomi; Seno, Takeharu

    2013-03-01

    We used a visual search task to investigate the validity of the dichromatic simulation model proposed by Brettel et al. Although the dichromatic simulation could qualitatively predict reaction times for color-defective observers, the reaction times for color-defective observers tended to be longer than those of the trichromatic observers in Experiment 1. In Experiment 2, we showed that a reduction of the excitation purity of the simulated colors can provide a good prediction. Further, we propose an adaptive dichromatic simulation model based on the color differences between a simulated target color and simulated distractor colors in order to obtain a better quantitative prediction of reaction times in the visual search task for color-defective observers.

  13. Pretraining Cortical Thickness Predicts Subsequent Perceptual Learning Rate in a Visual Search Task.

    PubMed

    Frank, Sebastian M; Reavis, Eric A; Greenlee, Mark W; Tse, Peter U

    2016-03-01

    We report that preexisting individual differences in the cortical thickness of brain areas involved in a perceptual learning task predict the subsequent perceptual learning rate. Participants trained in a motion-discrimination task involving visual search for a "V"-shaped target motion trajectory among inverted "V"-shaped distractor trajectories. Motion-sensitive area MT+ (V5) was functionally identified as critical to the task: after 3 weeks of training, activity increased in MT+ during task performance, as measured by functional magnetic resonance imaging. We computed the cortical thickness of MT+ from anatomical magnetic resonance imaging volumes collected before training started, and found that it significantly predicted subsequent perceptual learning rates in the visual search task. Participants with thicker neocortex in MT+ before training learned faster than those with thinner neocortex in that area. A similar association between cortical thickness and training success was also found in posterior parietal cortex (PPC). PMID:25576537

  14. Visual search for emotional expressions: Effect of stimulus set on anger and happiness superiority.

    PubMed

    Savage, Ruth A; Becker, Stefanie I; Lipp, Ottmar V

    2016-06-01

    Prior reports of preferential detection of emotional expressions in visual search have yielded inconsistent results, even for face stimuli that avoid obvious expression-related perceptual confounds. The current study investigated inconsistent reports of anger and happiness superiority effects using face stimuli drawn from the same database. Experiment 1 excluded procedural differences as a potential factor, replicating a happiness superiority effect in a procedure that previously yielded an anger superiority effect. Experiments 2a and 2b confirmed that image colour or poser gender did not account for prior inconsistent findings. Experiments 3a and 3b identified stimulus set as the critical variable, revealing happiness or anger superiority effects for two partially overlapping sets of face stimuli. The current results highlight the critical role of stimulus selection for the observation of happiness or anger superiority effects in visual search, even for face stimuli that avoid obvious expression-related perceptual confounds and are drawn from a single database. PMID:25861807

  15. Analysis and modeling of fixation point selection for visual search in cluttered backgrounds

    NASA Astrophysics Data System (ADS)

    Snorrason, Magnus; Hoffman, James; Ruda, Harald

    2000-07-01

    Hard-to-see targets are generally only detected by human observers once they have been fixated. Hence, understanding how the human visual system allocates fixation locations is necessary for predicting target detectability. Visual search experiments were conducted in which observers searched for military vehicles in cluttered terrain. Instantaneous eye position measurements were collected using an eye tracker. The resulting data were partitioned into fixations and saccades, and analyzed for correlation with various image properties. The fixation data were used to validate our model for predicting fixation locations. This model generates a saliency map from bottom-up image features, such as local contrast. To account for top-down scene understanding effects, a separate cognitive bias map is generated. The combination of these two maps provides a fixation probability map, from which sequences of fixation points were generated.
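
    A minimal sketch of the map combination described above is given below, assuming local contrast as the bottom-up feature and a multiplicative combination rule; both assumptions are illustrative, and the bias map is a hypothetical central weighting rather than the model's fitted cognitive bias map.

    import numpy as np

    def local_contrast(image, win=5):
        """Local standard deviation as a crude bottom-up saliency map."""
        pad = win // 2
        padded = np.pad(image, pad, mode="edge")
        out = np.empty(image.shape, dtype=float)
        for r in range(image.shape[0]):
            for c in range(image.shape[1]):
                out[r, c] = padded[r:r + win, c:c + win].std()
        return out

    def fixation_probability(image, bias):
        """Combine bottom-up saliency with a top-down bias map (assumed multiplicative)."""
        combined = local_contrast(image) * bias
        return combined / combined.sum()       # normalize to a probability map

    rng = np.random.default_rng(0)
    img = rng.random((32, 32))
    bias = np.ones((32, 32))
    bias[8:24, 8:24] = 2.0                     # hypothetical central cognitive bias
    pmap = fixation_probability(img, bias)
    print(pmap.sum(), pmap.max())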

  16. Search Strategies of Visually Impaired Persons using a Camera Phone Wayfinding System

    PubMed Central

    Manduchi, R.; Coughlan, J.; Ivanchenko, V.

    2016-01-01

    We report new experiments conducted using a camera phone wayfinding system, which is designed to guide a visually impaired user to machine-readable signs (such as barcodes) labeled with special color markers. These experiments specifically investigate search strategies of such users detecting, localizing and touching color markers that have been mounted in various ways in different environments: in a corridor (either flush with the wall or mounted perpendicular to it) or in a large room with obstacles between the user and the markers. The results show that visually impaired users are able to reliably find color markers in all the conditions that we tested, using search strategies that vary depending on the environment in which they are placed. PMID:26949755

  17. The evaluation of display symbology - A chronometric study of visual search. [on cathode ray tubes

    NASA Technical Reports Server (NTRS)

    Remington, R.; Williams, D.

    1984-01-01

    Three single-target visual search tasks were used to evaluate a set of CRT symbols for a helicopter traffic display. The search tasks were representative of the kinds of information extraction required in practice, and reaction time was used to measure the efficiency with which symbols could be located and identified. The results show that familiar numeric symbols were responded to more quickly than graphic symbols. The addition of modifier symbols such as a nearby flashing dot or surrounding square had a greater disruptive effect on the graphic symbols than the alphanumeric characters. The results suggest that a symbol set is like a list that must be learned. Factors that affect the time to respond to items in a list, such as familiarity and visual discriminability, and the division of list items into categories, also affect the time to identify symbols.

  18. Long-term retention of skilled visual search: do young adults retain more than old adults?

    PubMed

    Fisk, A D; Hertzog, C; Lee, M D; Rogers, W A; Anderson-Garlach, M

    1994-06-01

    Young and old Ss received extensive consistent-mapping visual search practice (3,000 trials). The Ss returned to the laboratory following a 16-month retention interval. Retention of skilled visual search was assessed using the trained stimuli (assessment of retention of stimulus-specific learning) and using new stimuli (assessment of retention of task-specific learning). All Ss, regardless of age group, demonstrated impressive retention. However, age-related retention differences favoring the young were observed when retention of stimulus-specific learning was assessed. No age-related retention differences were observed when task-specific learning was assessed. The data suggest that age-related retention capabilities depend on the type of learning assessed. PMID:8054168

  19. On the selection and evaluation of visual display symbology Factors influencing search and identification times

    NASA Technical Reports Server (NTRS)

    Remington, Roger; Williams, Douglas

    1986-01-01

    Three single-target visual search tasks were used to evaluate a set of cathode-ray tube (CRT) symbols for a helicopter situation display. The search tasks were representative of the information extraction required in practice, and reaction time was used to measure the efficiency with which symbols could be located and identified. Familiar numeric symbols were responded to more quickly than graphic symbols. The addition of modifier symbols, such as a nearby flashing dot or surrounding square, had a greater disruptive effect on the graphic symbols than did the numeric characters. The results suggest that a symbol set is, in some respects, like a list that must be learned. Factors that affect the time to identify items in a memory task, such as familiarity and visual discriminability, also affect the time to identify symbols. This analogy has broad implications for the design of symbol sets. An attempt was made to model information access with this class of display.

  20. Dissociated pattern of activity in visual cortices and their projections during human rapid eye movement sleep.

    PubMed

    Braun, A R; Balkin, T J; Wesensten, N J; Gwadry, F; Carson, R E; Varga, M; Baldwin, P; Belenky, G; Herscovitch, P

    1998-01-01

    Positron emission tomography was used to measure cerebral activity and to evaluate regional interrelationships within visual cortices and their projections during rapid eye movement (REM) sleep in human subjects. REM sleep was associated with selective activation of extrastriate visual cortices, particularly within the ventral processing stream, and an unexpected attenuation of activity in the primary visual cortex; increases in regional cerebral blood flow in extrastriate areas were significantly correlated with decreases in the striate cortex. Extrastriate activity was also associated with concomitant activation of limbic and paralimbic regions, but with a marked reduction of activity in frontal association areas including lateral orbital and dorsolateral prefrontal cortices. This pattern suggests a model for brain mechanisms subserving REM sleep where visual association cortices and their paralimbic projections may operate as a closed system dissociated from the regions at either end of the visual hierarchy that mediate interactions with the external world. PMID:9417032

  1. Use of a twin dataset to identify AMD-related visual patterns controlled by genetic factors

    NASA Astrophysics Data System (ADS)

    Quellec, Gwénolé; Abràmoff, Michael D.; Russell, Stephen R.

    2010-03-01

    The mapping of genotype to the phenotype of age-related macular degeneration (AMD) is expected to improve the diagnosis and treatment of the disease in the near future. In this study, we focused on the first step to discover this mapping: we identified visual patterns related to AMD which seem to be controlled by genetic factors, without explicitly relating them to the genes. For this purpose, we used a dataset of eye fundus photographs from 74 twin pairs, either monozygotic twins, who have the same genotype, or dizygotic twins, whose genes responsible for AMD are less likely to be identical. If we are able to differentiate monozygotic twins from dizygotic twins based on a given visual pattern, then this pattern is likely to be controlled by genetic factors. The main visible consequence of AMD is the appearance of drusen between the retinal pigment epithelium and Bruch's membrane. We developed two automated drusen detectors based on the wavelet transform: a shape-based detector for hard drusen, and a texture- and color-based detector for soft drusen. Forty visual features were evaluated at the location of the automatically detected drusen. These features characterize the texture, the shape, the color, the spatial distribution, or the amount of drusen. A distance measure between twin pairs was defined for each visual feature; a smaller distance should be measured between monozygotic twins for visual features controlled by genetic factors. The predictions based on several visual features (75.7% accuracy) are comparable to or better than the predictions of human experts.
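
    The within-pair distance comparison can be sketched as below; the feature values, the midpoint threshold rule, and the accuracy computation are toy assumptions for illustration, not the authors' drusen features or classifier.

    import numpy as np

    def pair_classification_accuracy(mz_pairs, dz_pairs):
        """Label a pair MZ if its within-pair feature distance falls below a midpoint threshold."""
        mz_d = np.array([abs(a - b) for a, b in mz_pairs])
        dz_d = np.array([abs(a - b) for a, b in dz_pairs])
        threshold = (mz_d.mean() + dz_d.mean()) / 2.0
        correct = (mz_d < threshold).sum() + (dz_d >= threshold).sum()
        return correct / (len(mz_d) + len(dz_d))

    rng = np.random.default_rng(1)
    mz = [(x, x + rng.normal(0, 0.05)) for x in rng.random(20)]   # similar feature values
    dz = [(x, x + rng.normal(0, 0.30)) for x in rng.random(20)]   # less similar values
    print(f"pair classification accuracy: {pair_classification_accuracy(mz, dz):.2f}")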

  2. Model of visual contrast gain control and pattern masking

    NASA Technical Reports Server (NTRS)

    Watson, A. B.; Solomon, J. A.

    1997-01-01

    We have implemented a model of contrast gain control in human vision that incorporates a number of key features, including a contrast sensitivity function, multiple oriented bandpass channels, accelerating nonlinearities, and a divisive inhibitory gain control pool. The parameters of this model have been optimized through a fit to recent data that describe masking of a Gabor function by cosine and Gabor masks [J. M. Foley, "Human luminance pattern mechanisms: masking experiments require a new model," J. Opt. Soc. Am. A 11, 1710 (1994)]. The model achieves a good fit to the data. We also demonstrate how the concept of recruitment may accommodate a variant of this model in which excitatory and inhibitory paths have a common accelerating nonlinearity, but which includes multiple channels tuned to different levels of contrast.
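
    A minimal sketch of the divisive gain-control stage named above is shown below: an accelerated excitatory response divided by a semi-saturation constant plus a pooled inhibitory term. The exponents and constants are illustrative, not the parameters fitted to the Foley masking data.

    import numpy as np

    def channel_response(excitation, inhibitory_pool, p=2.4, q=2.0, sigma=0.01):
        """R = E^p / (sigma^q + sum_i I_i^q) -- a generic divisive normalization form."""
        return excitation ** p / (sigma ** q + np.sum(np.asarray(inhibitory_pool) ** q))

    target_contrast = 0.05
    for mask_contrast in (0.0, 0.05, 0.2, 0.5):
        r = channel_response(target_contrast, [target_contrast, mask_contrast])
        print(f"mask contrast {mask_contrast:.2f}: channel response {r:.4f}")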

  3. Visual motion modulates pattern sensitivity ahead, behind, and beside motion.

    PubMed

    Arnold, Derek H; Marinovic, Welber; Whitney, David

    2014-05-01

    Retinal motion can modulate visual sensitivity. For instance, low contrast drifting waveforms (targets) can be easier to detect when abutting the leading edges of movement in adjacent high contrast waveforms (inducers), rather than the trailing edges. This target-inducer interaction is contingent on the adjacent waveforms being consistent with one another - in-phase as opposed to out-of-phase. It has been suggested that this happens because there is a perceptually explicit predictive signal at leading edges of motion that summates with low contrast physical input - a 'predictive summation'. Another possible explanation is a phase sensitive 'spatial summation', a summation of physical inputs spread across the retina (not predictive signals). This should be non-selective in terms of position - it should be evident at leading, adjacent, and at trailing edges of motion. To tease these possibilities apart, we examined target sensitivity at leading, adjacent, and trailing edges of motion. We also examined target sensitivity adjacent to flicker, and for a stimulus that is less susceptible to spatial summation, as it sums to grey across a small retinal expanse. We found evidence for spatial summation in all but the last condition. Finally, we examined sensitivity to an absence of signal at leading and trailing edges of motion, finding greater sensitivity at leading edges. These results are inconsistent with the existence of a perceptually explicit predictive signal in advance of drifting waveforms. Instead, we suggest that phase-contingent target-inducer modulations of sensitivity are explicable in terms of a directionally modulated spatial summation. PMID:24699250

  4. How does lesion conspicuity affect visual search strategy in mammogram reading?

    NASA Astrophysics Data System (ADS)

    Mello-Thoms, Claudia; Hardesty, Lara

    2005-04-01

    In mammography, gaze duration at given locations has been shown to positively correlate with decision outcome in those locations. Furthermore, most locations that contain an unreported malignant lesion attract the eye of experienced radiologists for almost as long as locations that contain correctly reported cancers. This suggests that faulty detection is not the main reason why cancers are missed; rather, failures in the perceptual and decision-making processes at the locations of these findings may be of significance as well. Models of medical image perception advocate that the decision to report or to dismiss a perceived finding depends not only on the finding itself but also on the background areas selected by the observer to compare the finding with, in order to determine its uniqueness. In this paper we studied the visual search strategy of experienced mammographers as they examined a case set containing cancer cases and lesion-free cases. For the cancer cases, two sets of mammograms were used: the ones in which the lesion was reported in the clinical practice, and the most recent prior mammogram. We determined how changes in lesion conspicuity from the prior mammogram to the most recent mammogram affected the visual search strategy of the observers. We represented the changes in visual search using spatial frequency analysis, and determined whether there were any significant differences between the prior and the most recent mammograms.

  5. Influence of being videotaped on the prevalence effect during visual search

    PubMed Central

    Miyazaki, Yuki

    2015-01-01

    Video monitoring modifies the task performance of those who are being monitored. The current study aims to prevent rare target-detection failures during visual search through the use of video monitoring. Targets are sometimes missed when their prevalence during visual search is extremely low (e.g., in airport baggage screenings). Participants performed a visual search in which they were required to discern the presence of a tool in the midst of other objects. The participants were monitored via video cameras as they performed the task in one session (the videotaped condition), and they performed the same task in another session without being monitored (the non-videotaped condition). The results showed that fewer miss errors occurred in the videotaped condition, regardless of target prevalence. It appears that the decrease in misses in the video monitoring condition resulted from a shift in criterion location. Video monitoring is considered useful in inducing accurate scanning. It is possible that the potential for evaluation involved in being observed motivates the participants to perform well and is related to the shift in criterion. PMID:25999895

  6. Spatial ranking strategy and enhanced peripheral vision discrimination optimize performance and efficiency of visual sequential search.

    PubMed

    Veneri, Giacomo; Pretegiani, Elena; Fargnoli, Francesco; Rosini, Francesca; Vinciguerra, Claudia; Federighi, Pamela; Federico, Antonio; Rufa, Alessandra

    2014-09-01

    Visual sequential search might use a peripheral spatial ranking of the scene to put the next target of the sequence in the correct order. This strategy, indeed, might enhance the discriminative capacity of the human peripheral vision and spare neural resources associated with foveation. However, it is not known how exactly the peripheral vision sustains sequential search and whether the sparing of neural resources has a cost in terms of performance. To elucidate these issues, we compared strategy and performance during an alpha-numeric sequential task where peripheral vision was modulated in three different conditions: normal, blurred, or obscured. If spatial ranking is applied to increase the peripheral discrimination, its use as a strategy in visual sequencing should differ according to the degree of discriminative information that can be obtained from the periphery. Moreover, if this strategy spares neural resources without impairing the performance, its use should be associated with better performance. We found that spatial ranking was applied when peripheral vision was fully available, reducing the number and time of explorative fixations. When the periphery was obscured, explorative fixations were numerous and sparse; when the periphery was blurred, explorative fixations were longer and often located close to the items. Performance was significantly improved by this strategy. Our results demonstrated that spatial ranking is an efficient strategy adopted by the brain in visual sequencing to highlight peripheral detection and discrimination; it reduces the neural cost by avoiding unnecessary foveations, and promotes sequential search by facilitating the onset of a new saccade. PMID:24893753

  7. Visual search in natural scenes: a double-dissociation paradigm for comparing observer models.

    PubMed

    Abrams, Jared; Geisler, Wilson

    2015-01-01

    Search is a fundamental and ubiquitous visual behavior. Here, we aim to model fixation search under naturalistic conditions and develop a strong test for comparing observer models. Previous work has identified the entropy limit minimization (ELM) observer as an optimal fixation selection model [1]. The ELM observer selects fixations that maximally reduce uncertainty about the location of the target. However, this rule is optimal only if the detectability of the target falls off in the same way for every possible fixation (e.g., as in a uniform noise field). Most natural scenes do not satisfy this assumption; they are highly non-stationary. By combining empirical measurements of target detectability with a simple mathematical analysis, we arrive at a generalized ELM rule (nELM) that is optimal for non-stationary backgrounds. Then, we used the nELM rule to generate search time predictions for Gaussian blob targets embedded in hundreds of natural images. We also simulated a maximum a posteriori (MAP) observer, which is a common model in the search literature. To examine which model is more similar to human performance, we developed a double-dissociation search paradigm, selecting pairs of target locations where the nELM and the MAP observer made opposite predictions regarding search speed. By comparing the difference in human search times for each pair with the different model predictions, we can determine which model's predictions are more similar to human behavior. Preliminary data from two observers show that human observers behave more like the nELM than the MAP. We conclude that the nELM observer is a useful normative model of fixation search and appears to be a good model of human search in natural scenes. Additionally, the proposed double-dissociation paradigm provides a strong test for comparing competing models. [1] Najemnik, J. & Geisler, W.S. (2009) Vis. Res., 49, 1286-1294. Meeting abstract presented at VSS 2015. PMID:26326443
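
    The ELM selection rule can be sketched as below, following the common approximation that the expected entropy reduction for a candidate fixation is proportional to the posterior-weighted sum of squared detectabilities; the d' fall-off used here is a toy assumption rather than the measured detectability maps, and the nELM generalization to non-stationary backgrounds is not reproduced.

    import numpy as np

    def dprime_map(fix, locs, d0=3.0, falloff=0.15):
        """Toy d' at each candidate target location for a given fixation point."""
        dist = np.linalg.norm(locs - fix, axis=1)
        return d0 * np.exp(-falloff * dist)

    def elm_next_fixation(posterior, locs, candidates):
        """Pick the candidate fixation with the largest expected entropy reduction."""
        gains = [0.5 * np.sum(posterior * dprime_map(c, locs) ** 2) for c in candidates]
        return candidates[int(np.argmax(gains))]

    # One-dimensional grid of possible target locations with a uniform prior.
    locs = np.array([[x, 0.0] for x in np.linspace(0.0, 20.0, 21)])
    posterior = np.ones(len(locs)) / len(locs)
    print(elm_next_fixation(posterior, locs, candidates=locs))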

  8. Modeling the Effect of Selection History on Pop-Out Visual Search

    PubMed Central

    Tseng, Yuan-Chi; Glaser, Joshua I.; Caddigan, Eamon; Lleras, Alejandro

    2014-01-01

    While attentional effects in visual selection tasks have traditionally been assigned “top-down” or “bottom-up” origins, more recently it has been proposed that there are three major factors affecting visual selection: (1) physical salience, (2) current goals and (3) selection history. Here, we look further into selection history by investigating Priming of Pop-out (POP) and the Distractor Preview Effect (DPE), two inter-trial effects that demonstrate the influence of recent history on visual search performance. Using the Ratcliff diffusion model, we model observed saccadic selections from an oddball search experiment that included a mix of both POP and DPE conditions. We find that the Ratcliff diffusion model can effectively model the manner in which selection history affects current attentional control in visual inter-trial effects. The model evidence shows that bias regarding the current trial's most likely target color is the most critical parameter underlying the effect of selection history. Our results are consistent with the view that the 3-item color-oddball task used for POP and DPE experiments is best understood as an attentional decision making task. PMID:24595032
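
    A minimal sketch of the kind of diffusion-model simulation involved is given below: an Euler-discretized drift-diffusion trial with a starting-point bias standing in for the selection-history bias highlighted above. The parameters are illustrative, not the fitted Ratcliff-model estimates from the study.

    import numpy as np

    def simulate_trial(drift=0.3, boundary=1.0, bias=0.2, noise=1.0, dt=0.001, rng=None):
        """Return (choice, reaction time); choice 1 = upper boundary, 0 = lower."""
        rng = np.random.default_rng() if rng is None else rng
        x = bias * boundary                # biased starting point between -boundary and +boundary
        t = 0.0
        while abs(x) < boundary:
            x += drift * dt + noise * np.sqrt(dt) * rng.normal()
            t += dt
        return int(x > 0), t

    rng = np.random.default_rng(1)
    trials = [simulate_trial(rng=rng) for _ in range(200)]
    choices, rts = zip(*trials)
    print(f"P(upper boundary) = {np.mean(choices):.2f}, mean RT = {np.mean(rts):.3f} s")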

  9. Evaluating the human ongoing visual search performance by eye tracking application and sequencing tests.

    PubMed

    Veneri, Giacomo; Pretegiani, Elena; Rosini, Francesca; Federighi, Pamela; Federico, Antonio; Rufa, Alessandra

    2012-09-01

    Human visual search is an everyday activity that enables humans to explore the real world. Given the visual input, during a visual search it is necessary to select some aspects of the input to shift the gaze to the next target. The aim of the study is to develop a mathematical method able to evaluate the visual selection process during the execution of a highly cognitively demanding task such as the Trail Making Test part B (TMT). The TMT is a neuropsychological instrument in which numbers and letters must be connected to each other in numeric and alphabetic order. We adapted the TMT to an eye-tracking version, and we used a vector model, the "eight pointed star" (8PS), to discover how selection (fixations) guides the next exploration (saccades) and how human top-down factors interact with bottom-up saliency. The results reported a trend to move away from the last fixations, correlated with the number of distracters and the execution performance. PMID:21453982
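
    The vector analysis can be approximated by binning each saccade's direction, relative to the preceding fixation, into eight 45-degree sectors, as sketched below; the exact 8PS formulation is not reproduced and the fixation coordinates are toy data.

    import numpy as np

    def sector_histogram(fixations):
        """Count saccade directions in eight 45-degree sectors around each fixation."""
        fixations = np.asarray(fixations, dtype=float)
        vectors = np.diff(fixations, axis=0)       # saccade vectors between consecutive fixations
        angles = np.degrees(np.arctan2(vectors[:, 1], vectors[:, 0])) % 360
        sectors = (angles // 45).astype(int)       # sector index 0..7
        return np.bincount(sectors, minlength=8)

    fix_sequence = [(0, 0), (3, 1), (2, 4), (-1, 3), (-2, -1), (1, -3)]   # toy fixation coordinates
    print(sector_histogram(fix_sequence))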

  10. Speed versus accuracy in visual search: Optimal performance and neural architecture.

    PubMed

    Chen, Bo; Perona, Pietro

    2015-12-01

    Searching for objects among clutter is a key ability of the visual system. Speed and accuracy are the crucial performance criteria. How can the brain trade off these competing quantities for optimal performance in different tasks? Can a network of spiking neurons carry out such computations, and what is its architecture? We propose a new model that takes input from V1-type orientation-selective spiking neurons and detects a target in the shortest time that is compatible with a given acceptable error rate. Subject to the assumption that the output of the primary visual cortex comprises Poisson neurons with known properties, our model is an ideal observer. The model has only five free parameters: the signal-to-noise ratio in a hypercolumn, the costs of false-alarm and false-reject errors versus the cost of time, and two parameters accounting for nonperceptual delays. Our model postulates two gain-control mechanisms-one local to hypercolumns and one global to the visual field-to handle variable scene complexity. Error rate and response time predictions match psychophysics data as we vary stimulus discriminability, scene complexity, and the uncertainty associated with each of these quantities. A five-layer spiking network closely approximates the optimal model, suggesting that known cortical mechanisms are sufficient for implementing visual search efficiently. PMID:26675879
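
    The accumulate-to-threshold computation of such an ideal observer can be sketched with a sequential probability ratio test on Poisson spike counts, as below; the firing rates, error rates, and time step are toy values rather than the model's fitted parameters, and the spatial pooling across hypercolumns is omitted.

    import numpy as np

    def sprt_poisson(rate_target=20.0, rate_distractor=10.0, dt=0.01,
                     alpha=0.05, beta=0.05, true_rate=20.0, rng=None):
        """Decide 'target' vs 'distractor' from one simulated Poisson spike train."""
        rng = np.random.default_rng() if rng is None else rng
        upper = np.log((1 - beta) / alpha)     # accept 'target'
        lower = np.log(beta / (1 - alpha))     # accept 'distractor'
        llr, t = 0.0, 0.0
        while lower < llr < upper:
            spikes = rng.poisson(true_rate * dt)
            # log-likelihood ratio increment for a Poisson count in a bin of width dt
            llr += (spikes * np.log(rate_target / rate_distractor)
                    - (rate_target - rate_distractor) * dt)
            t += dt
        return ("target" if llr >= upper else "distractor"), t

    decision, rt = sprt_poisson(rng=np.random.default_rng(2))
    print(decision, f"{rt:.2f} s")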

  11. Allocation of cognitive resources in comparative visual search--individual and task dependent effects.

    PubMed

    Hardiess, Gregor; Mallot, Hanspeter A

    2015-08-01

    Behaviors recruit multiple, mutually substitutable types of cognitive resources (e.g., data acquisition and memorization in comparative visual search), and the allocation of resources is performed in a cost-optimizing way. If the costs associated with each type of resource are manipulated, e.g., by varying the complexity of the items studied or the visual separation of the arrays to be compared, corresponding adjustments of resource allocation ("trade-offs") have been demonstrated. Using between-subject designs, previous studies showed overall trade-off behavior but neglected inter-individual variability in trade-off behavior. Here, we present a simplified paradigm for comparative visual search in which gaze measurements are replaced by switching of a visual mask covering one stimulus array at a time. This paradigm allows for a full within-subject design. While overall trade-off curves could be reproduced, we found that each subject used a specific trade-off strategy, and these strategies differed substantially between subjects. Still, task-dependent adjustment of resource allocation can be demonstrated but accounts for only a minor part of the overall trade-off range. In addition, we show that the individual trade-offs were adjusted in an unconscious and rather intuitive way, enabling a robust manifestation of the selected strategy space. PMID:26093155

  12. The Development of Visual Search in Infants and Very Young Children.

    ERIC Educational Resources Information Center

    Gerhardstein, Peter; Rovee-Collier, Carolyn

    2002-01-01

    Trained 1- to 3-year-olds to touch a video screen displaying a unique target and appearing among varying numbers of distracters; correct responses triggered a sound and four animated objects on the screen. Found that children's reaction time patterns resembled those from adults in corresponding search tasks, suggesting that basic perceptual…

  13. Visual search and emotion: how children with autism spectrum disorders scan emotional scenes.

    PubMed

    Maccari, Lisa; Pasini, Augusto; Caroli, Emanuela; Rosa, Caterina; Marotta, Andrea; Martella, Diana; Fuentes, Luis J; Casagrande, Maria

    2014-11-01

    This study assessed visual search abilities, tested through the flicker task, in children diagnosed with autism spectrum disorders (ASDs). Twenty-two children diagnosed with ASD and 22 matched typically developing (TD) children were told to detect changes in objects of central interest or objects of marginal interest (MI) embedded in either emotion-laden (positive or negative) or neutral real-world pictures. The results showed that emotion-laden pictures equally interfered with performance of both ASD and TD children, slowing down reaction times compared with neutral pictures. Children with ASD were faster than TD children, particularly in detecting changes in MI objects, the most difficult condition. However, their performance was less accurate than performance of TD children just when the pictures were negative. These findings suggest that children with ASD have better visual search abilities than TD children only when the search is particularly difficult and requires strong serial search strategies. The emotional-social impairment that is usually considered as a typical feature of ASD seems to be limited to processing of negative emotional information. PMID:24898908

  14. How much agreement is there in the visual search strategy of experts reading mammograms?

    NASA Astrophysics Data System (ADS)

    Mello-Thoms, Claudia

    2008-03-01

    Previously we have shown that the eyes of expert breast imagers are attracted to the location of a malignant mass in a mammogram in less than 2 seconds after image onset. Moreover, the longer they take to visually fixate the location of the mass, the less likely it is that they will report it. We conjectured that this behavior was due to the formation of the initial hypothesis about the image (i.e., 'normal' - no lesions to report, or 'abnormal' - possible lesions to report). This initial hypothesis is formed as a result of a difference template between the experts' expectations of the image and the actual image. Hence, when the image is displayed, the expert detects the areas that do not correspond to their 'a priori expectation', and these areas get assigned weights according to the magnitude of the perturbation. The radiologist then uses eye movements to guide the high resolution fovea to each of these locations, in order to resolve each perturbation. To accomplish this task successfully the radiologist uses not only the local features in the area but also lateral comparisons with selected background locations, and this comprises the radiologist's visual search strategy. Eye-position tracking studies seem to suggest that no two radiologists search the breast parenchyma alike, which makes one wonder whether successful search models can be developed. In this study we show that there is more to the experts' search strategy than meets the eye.

  15. Evidence for negative feature guidance in visual search is explained by spatial recoding.

    PubMed

    Beck, Valerie M; Hollingworth, Andrew

    2015-10-01

    Theories of attention and visual search explain how attention is guided toward objects with known target features. But can attention be directed away from objects with a feature known to be associated only with distractors? Most studies have found that the demand to maintain the to-be-avoided feature in visual working memory biases attention toward matching objects rather than away from them. In contrast, Arita, Carlisle, and Woodman (2012) claimed that attention can be configured to selectively avoid objects that match a cued distractor color, and they reported evidence that this type of negative cue generates search benefits. However, the colors of the search array items in Arita et al. (2012) were segregated by hemifield (e.g., blue items on the left, red on the right), which allowed for a strategy of translating the feature-cue information into a simple spatial template (e.g., avoid right, or attend left). In the present study, we replicated the negative cue benefit using the Arita et al. (2012) method (albeit within a subset of participants who reliably used the color cues to guide attention). Then, we eliminated the benefit by using search arrays that could not be grouped by hemifield. Our results suggest that feature-guided avoidance is implemented only indirectly, in this case by translating feature-cue information into a spatial template. PMID:26191616

  16. Training shortens search times in children with visual impairment accompanied by nystagmus

    PubMed Central

    Huurneman, Bianca; Boonstra, F. Nienke

    2014-01-01

    Perceptual learning (PL) can improve near visual acuity (NVA) in 4-9 year old children with visual impairment (VI). However, the mechanisms underlying improved NVA are unknown. The present study compares feature search and oculomotor measures in 4-9 year old children with VI accompanied by nystagmus (VI+nys [n = 33]) and children with normal vision (NV [n = 29]). Children in the VI+nys group were divided into three training groups: an experimental PL group, a control PL group, and a magnifier group. They were seen before (baseline) and after 6 weeks of training. Children with NV were only seen at baseline. The feature search task entailed finding a target E among distractor E's (pointing right), with element spacing varied in four steps: 0.04, 0.5, 1, and 2. At baseline, children with VI+nys showed longer search times, shorter fixation durations, and larger saccade amplitudes than children with NV. After training, all training groups showed shorter search times. Only the experimental PL group showed prolonged fixation duration after training at 0.5 and 2 spacing, p's respectively 0.033 and 0.021. Prolonged fixation duration was associated with reduced crowding and improved crowded NVA. One of the mechanisms underlying improved crowded NVA after PL in children with VI+nys seems to be prolonged fixation duration. PMID:25309473

  17. Visual Search and Attention in Blue Jays (Cyanocitta cristata): Associative Cuing and Sequential Priming

    PubMed Central

    Goto, Kazuhiro; Bond, Alan B.; Burks, Marianna; Kamil, Alan C.

    2014-01-01

    Visual search for complex natural targets requires focal attention, either cued by predictive stimulus associations or primed by a representation of the most recently detected target. Since both processes can focus visual attention, cuing and priming were compared in an operant search task to evaluate their relative impacts on performance and to determine the nature of their interaction in combined treatments. Blue jays were trained to search for pairs of alternative targets among distractors. Informative or ambiguous color cues were provided prior to each trial, and targets were presented either in homogeneous blocked sequences or in constrained random order. Initial task acquisition was facilitated by priming in general, but was significantly retarded when targets were both cued and primed, indicating that the two processes interfered with each other during training. At asymptote, attentional effects were manifested mainly in inhibition, increasing latency in miscued trials and decreasing accuracy on primed trials following an unexpected target switch. A combination of cuing and priming was found to interfere with performance in such unexpected trials, apparently a result of the limited capacity of working memory. Because the ecological factors that promote priming and cuing are rather disparate, it is not clear whether they ever jointly and simultaneously contribute to natural predatory search. PMID:24893217

  18. Visual search and attention in blue jays (Cyanocitta cristata): Associative cuing and sequential priming.

    PubMed

    Goto, Kazuhiro; Bond, Alan B; Burks, Marianna; Kamil, Alan C

    2014-04-01

    Visual search for complex natural targets requires focal attention, either cued by predictive stimulus associations or primed by a representation of the most recently detected target. Because both processes can focus visual attention, cuing and priming were compared in an operant search task to evaluate their relative impacts on performance and to determine the nature of their interaction in combined treatments. Blue jays were trained to search for pairs of alternative targets among distractors. Informative or ambiguous color cues were provided before each trial, and targets were presented either in homogeneous blocked sequences or in constrained random order. Initial task acquisition was facilitated by priming in general, but was significantly retarded when targets were both cued and primed, indicating that the two processes interfered with each other during training. At asymptote, attentional effects were manifested mainly in inhibition, increasing latency in miscued trials and decreasing accuracy on primed trials following an unexpected target switch. A combination of cuing and priming was found to interfere with performance in such unexpected trials, apparently a result of the limited capacity of working memory. Because the ecological factors that promote priming or cuing are rather disparate, it is not clear whether they ever simultaneously contribute to natural predatory search. PMID:24893217

  19. White matter hyperintensities are associated with visual search behavior independent of generalized slowing in aging.

    PubMed

    Lockhart, Samuel N; Roach, Alexandra E; Luck, Steven J; Geng, Joy; Beckett, Laurel; Carmichael, Owen; DeCarli, Charles

    2014-01-01

    A fundamental controversy is whether cognitive decline with advancing age can be entirely explained by decreased processing speed, or whether specific neural changes can elicit cognitive decline, independent of slowing. These hypotheses are anchored by studies of healthy older individuals where age is presumed the sole influence. Unfortunately, advancing age is also associated with asymptomatic brain white matter injury. We hypothesized that differences in white matter injury extent, manifest by MRI white matter hyperintensities (WMH), mediate differences in visual attentional control in healthy aging, beyond processing speed differences. We tested young and cognitively healthy older adults on search tasks indexing speed and attentional control. Increasing age was associated with generally slowed performance. WMH were also associated with slowed search times independent of processing speed differences. Consistent with evidence attributing reduced network connectivity to WMH, these results conclusively demonstrate that clinically silent white matter injury contributes to slower search performance indicative of compromised cognitive control, independent of generalized slowing of processing speed. PMID:24183716

  20. Patterned-String Tasks: Relation between Fine Motor Skills and Visual-Spatial Abilities in Parrots

    PubMed Central

    Krasheninnikova, Anastasia

    2013-01-01

    String-pulling and patterned-string tasks are often used to analyse perceptual and cognitive abilities in animals. In addition, the paradigm can be used to test the interrelation between visual-spatial and motor performance. Two Australian parrot species, the galah (Eolophus roseicapilla) and the cockatiel (Nymphicus hollandicus), forage on the ground, but only the galah uses its feet to manipulate food. I used a set of string pulling and patterned-string tasks to test whether usage of the feet during foraging is a prerequisite for solving the vertical string pulling problem. Indeed, the two species used techniques that clearly differed in the extent of beak-foot coordination but did not differ in terms of their success in solving the string pulling task. However, when the visual-spatial skills of the subjects were tested, the galahs outperformed the cockatiels. This supports the hypothesis that the fine motor skills needed for advanced beak-foot coordination may be interrelated with certain visual-spatial abilities needed for solving patterned-string tasks. This pattern was also found within each of the two species on the individual level: higher motor abilities positively correlated with performance in patterned-string tasks. This is the first evidence of an interrelation between visual-spatial and motor abilities in non-mammalian animals. PMID:24376885

  1. Visualizing Nanoscopic Topography and Patterns in Freely Standing Thin Films

    NASA Astrophysics Data System (ADS)

    Sharma, Vivek; Zhang, Yiran; Yilixiati, Subinuer

    Thin liquid films containing micelles, nanoparticles, polyelectrolyte-surfactant complexes and smectic liquid crystals undergo thinning in a discontinuous, step-wise fashion. The discontinuous jumps in thickness are often characterized by quantifying changes in the intensity of reflected monochromatic light, modulated by thin film interference, from a region of interest. Stratifying thin films exhibit a mosaic pattern in reflected white light microscopy, attributed to the coexistence of domains with various thicknesses, separated by steps. Using the Interferometry Digital Imaging Optical Microscopy (IDIOM) protocols developed in the course of this study, we spatially resolve, for the first time, the landscape of stratifying freely standing thin films. We distinguish nanoscopic rims, mesas and craters, and follow their emergence and growth. In particular, for thin films containing micelles of sodium dodecyl sulfate (SDS), these topological features involve discontinuous thickness transitions with concentration-dependent steps of 5-25 nm. These non-flat features result from oscillatory, periodic, supramolecular structural forces that arise in confined fluids, and their formation reflects a complex coupling of hydrodynamic and thermodynamic effects at the nanoscale.

  2. Searching for Truth: Internet Search Patterns as a Method of Investigating Online Responses to a Russian Illicit Drug Policy Debate

    PubMed Central

    Gillespie, James A; Quinn, Casey

    2012-01-01

    Background This is a methodological study investigating the online responses to a national debate over an important health and social problem in Russia. Russia is the largest Internet market in Europe, exceeding Germany in the absolute number of users. However, Russia is unusual in that the main search provider is not Google, but Yandex. Objective This study had two main objectives. First, to validate Yandex search patterns against those provided by Google, and second, to test this method's adequacy for investigating online interest in a 2010 national debate over Russian illicit drug policy. We hoped to learn what search patterns and specific search terms could reveal about the relative importance and geographic distribution of interest in this debate. Methods A national drug debate, centering on the anti-drug campaigner Egor Bychkov, was one of the main Russian domestic news events of 2010. Public interest in this episode was accompanied by increased Internet search. First, we measured the search patterns for 13 search terms related to the Bychkov episode and concurrent domestic events by extracting data from Google Insights for Search (GIFS) and Yandex WordStat (YaW). We conducted Spearman Rank Correlation of GIFS and YaW search data series. Second, we coded all 420 primary posts from Bychkov's personal blog between March 2010 and March 2012 to identify the main themes. Third, we compared GIFS and Yandex policies concerning the public release of search volume data. Finally, we established the relationship between salient drug issues and the Bychkov episode. Results We found a consistent pattern of strong to moderate positive correlations between Google and Yandex for the terms "Egor Bychkov" (r_s = 0.88, P < .001), "Bychkov" (r_s = 0.78, P < .001) and "Khimki" (r_s = 0.92, P < .001). Peak search volumes for the Bychkov episode were comparable to other prominent domestic political events during 2010. Monthly search counts were 146,689 for "Bychkov" and 48,084 for "Egor Bychkov", compared to 53,403 for "Khimki" in Yandex. We found Google potentially provides timely search results, whereas Yandex provides more accurate geographic localization. The correlation was moderate to strong between search terms representing the Bychkov episode and terms representing salient drug issues in Yandex: "illicit drug treatment" (r_s = 0.90, P < .001), "illicit drugs" (r_s = 0.76, P < .001), and "drug addiction" (r_s = 0.74, P < .001). Google correlations were weaker or absent: "illicit drug treatment" (r_s = 0.12, P = .58), "illicit drugs" (r_s = -0.29, P = .17), and "drug addiction" (r_s = 0.68, P < .001). Conclusions This study contributes to the methodological literature on the analysis of search patterns for public health. This paper investigated the relationship between Google and Yandex, and contributed to the broader methods literature by highlighting both the potential and limitations of these two search providers. We believe that Yandex WordStat is a potentially valuable and underused data source for researchers working on Russian-related illicit drug policy and other public health problems. The Russian Federation, with its large, geographically dispersed, and politically engaged online population presents unique opportunities for studying the evolving influence of the Internet on politics and policy, using low cost methods resilient against potential increases in censorship. PMID:23238600
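
    As a rough illustration of the correlation step described above, the sketch below computes a Spearman rank correlation between two providers' monthly search-volume series in Python. The counts are invented placeholders, not data from the study.

        # Hypothetical monthly search-volume series for one search term from two
        # providers; scipy's spearmanr returns the rank correlation and P value.
        from scipy.stats import spearmanr

        google_counts = [120, 340, 2900, 15800, 4100, 900, 610, 420, 380, 300, 260, 240]
        yandex_counts = [400, 980, 9100, 48000, 12500, 2700, 1900, 1300, 1100, 950, 870, 800]

        rho, p_value = spearmanr(google_counts, yandex_counts)
        print(f"Spearman r_s = {rho:.2f}, P = {p_value:.3f}")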

  3. Job Search Patterns of College Graduates: The Role of Social Capital

    ERIC Educational Resources Information Center

    Coonfield, Emily S.

    2012-01-01

    This dissertation addresses job search patterns of college graduates and the implications of social capital by race and class. The purpose of this study is to explore (1) how the job search transpires for recent college graduates, (2) how potential social networks in a higher educational context, like KU, may make a difference for students with…

  4. Visual illusions in predator-prey interactions: birds find moving patterned prey harder to catch.

    PubMed

    Hämäläinen, Liisa; Valkonen, Janne; Mappes, Johanna; Rojas, Bibiana

    2015-09-01

    Several antipredator strategies are related to prey colouration. Some colour patterns can create visual illusions during movement (such as motion dazzle), making it difficult for a predator to capture moving prey successfully. Experimental evidence about motion dazzle, however, is still very scarce and comes only from studies using human predators capturing moving prey items in computer games. We tested a motion dazzle effect using for the first time natural predators (wild great tits, Parus major). We used artificial prey items bearing three different colour patterns: uniform brown (control), black with elongated yellow pattern and black with interrupted yellow pattern. The last two resembled colour patterns of the aposematic, polymorphic dart-poison frog Dendrobates tinctorius. We specifically tested whether an elongated colour pattern could create visual illusions when combined with straight movement. Our results, however, do not support this hypothesis. We found no differences in the number of successful attacks towards prey items with different patterns (elongated/interrupted) moving linearly. Nevertheless, both prey types were significantly more difficult to catch compared to the uniform brown prey, indicating that both colour patterns could provide some benefit for a moving individual. Surprisingly, no effect of background (complex vs. plain) was found. This is the first experiment with moving prey showing that some colour patterns can affect avian predators' ability to capture moving prey, but the mechanisms lowering the capture rate are still poorly understood. PMID:25947086

  5. Pattern identification or 3D visualization? How best to learn topographic map comprehension

    NASA Astrophysics Data System (ADS)

    Atit, Kinnari

    Science, Technology, Engineering, and Mathematics (STEM) experts employ many representations that novices find hard to use because they require a critical STEM skill, interpreting two-dimensional (2D) diagrams that represent three-dimensional (3D) information. The current research focuses on learning to interpret topographic maps. Understanding topographic maps requires knowledge of how to interpret the conventions of contour lines, and skill in visualizing that information in 3D (e.g. shape of the terrain). Novices find both tasks difficult. The present study compared two interventions designed to facilitate understanding for topographic maps to minimal text-only instruction. The 3D Visualization group received instruction using 3D gestures and models to help visualize three topographic forms. The Pattern Identification group received instruction using pointing and tracing gestures to help identify the contour patterns associated with the three topographic forms. The Text-based Instruction group received only written instruction explaining topographic maps. All participants then completed a measure of topographic map use. The Pattern Identification group performed better on the map use measure than participants in the Text-based Instruction group, but no significant difference was found between the 3D Visualization group and the other two groups. These results suggest that learning to identify meaningful contour patterns is an effective strategy for learning how to comprehend topographic maps. Future research should address if learning strategies for how to interpret the information represented on a diagram (e.g. identify patterns in the contour lines), before trying to visualize the information in 3D (e.g. visualize the 3D structure of the terrain), also facilitates students' comprehension of other similar types of diagrams.

  6. Gender Differences in Patterns of Searching the Web

    ERIC Educational Resources Information Center

    Roy, Marguerite; Chi, Michelene T. H.

    2003-01-01

    There has been a national call for increased use of computers and technology in schools. Currently, however, little is known about how students use and learn from these technologies. This study explores how eighth-grade students use the Web to search for, browse, and find information in response to a specific prompt (how mosquitoes find their…

  7. Optimization of boiling water reactor control rod patterns using linear search

    SciTech Connect

    Kiguchi, T.; Doi, K.; Fikuzaki, T.; Frogner, B.; Lin, C.; Long, A.B.

    1984-10-01

    A computer program for searching for the optimal control rod pattern has been developed. The program is able to find a control rod pattern whose resulting power distribution is optimal in the sense that it is closest to the desired power distribution while satisfying all operational constraints. The search procedure iterates two steps: sensitivity analyses of local power and thermal margins, performed with a three-dimensional reactor simulator to build a simplified prediction model, and a linear search for the optimal control rod pattern using the simplified model. The optimal control rod pattern is found along the direction in which the performance index gradient is steepest. This program has been verified to find the optimal control rod pattern through simulations using operational data from the Oyster Creek Reactor.
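
    The abstract above outlines a gradient-guided linear search. The sketch below is a generic illustration of that idea, not the cited program: finite-difference sensitivities stand in for the sensitivity analysis, a generic predict_power callable stands in for the simplified prediction model, and simple bounds stand in for operational constraints.

        # Illustrative sketch of a steepest-descent linear search over rod positions.
        import numpy as np

        def performance_index(rods, desired_power, predict_power):
            # Distance between the predicted and the desired power distribution.
            return np.sum((predict_power(rods) - desired_power) ** 2)

        def line_search_step(rods, desired_power, predict_power, bounds, steps=20):
            rods = np.asarray(rods, dtype=float)
            eps = 1e-3
            base = performance_index(rods, desired_power, predict_power)
            # Finite-difference sensitivities play the role of the sensitivity analysis.
            grad = np.array([
                (performance_index(rods + eps * e, desired_power, predict_power) - base) / eps
                for e in np.eye(len(rods))
            ])
            direction = -grad / (np.linalg.norm(grad) + 1e-12)   # steepest-descent direction
            best_rods, best_val = rods, base
            for alpha in np.linspace(0.0, 1.0, steps):            # linear search along it
                trial = np.clip(rods + alpha * direction, bounds[0], bounds[1])
                val = performance_index(trial, desired_power, predict_power)
                if val < best_val:
                    best_rods, best_val = trial, val
            return best_rods, best_val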

  8. Visual search in ecological and non-ecological displays: evidence for a non-monotonic effect of complexity on performance.

    PubMed

    Chassy, Philippe; Gobet, Fernand

    2013-01-01

    Considerable research has been carried out on visual search, with single or multiple targets. However, most studies have used artificial stimuli with low ecological validity. In addition, little is known about the effects of target complexity and expertise in visual search. Here, we investigate visual search in three conditions of complexity (detecting a king, detecting a check, and detecting a checkmate) with chess players of two levels of expertise (novices and club players). Results show that the influence of target complexity depends on the level of structure of the visual display. Different functional relationships were found between artificial (random chess positions) and ecologically valid (game positions) stimuli: With artificial, but not with ecologically valid stimuli, a "pop out" effect was present when a target was visually more complex than distractors but could be captured by a memory chunk. This suggests that caution should be exercised when generalising from experiments using artificial stimuli with low ecological validity to real-life stimuli. PMID:23320084

  9. Visual Learning Induces Changes in Resting-State fMRI Multivariate Pattern of Information.

    PubMed

    Guidotti, Roberto; Del Gratta, Cosimo; Baldassarre, Antonello; Romani, Gian Luca; Corbetta, Maurizio

    2015-07-01

    When measured with functional magnetic resonance imaging (fMRI) in the resting state (R-fMRI), spontaneous activity is correlated between brain regions that are anatomically and functionally related. Learning and/or task performance can induce modulation of the resting synchronization between brain regions. Moreover, at the neuronal level spontaneous brain activity can replay patterns evoked by a previously presented stimulus. Here we test whether visual learning/task performance can induce a change in the patterns of coded information in R-fMRI signals consistent with a role of spontaneous activity in representing task-relevant information. Human subjects underwent R-fMRI before and after perceptual learning on a novel visual shape orientation discrimination task. Task-evoked fMRI patterns to trained versus novel stimuli were recorded after learning was completed, and before the second R-fMRI session. Using multivariate pattern analysis on task-evoked signals, we found patterns in several cortical regions, as follows: visual cortex, V3/V3A/V7; within the default mode network, precuneus, and inferior parietal lobule; and, within the dorsal attention network, intraparietal sulcus, which discriminated between trained and novel visual stimuli. The accuracy of classification was strongly correlated with behavioral performance. Next, we measured multivariate patterns in R-fMRI signals before and after learning. The frequency and similarity of resting states representing the task/visual stimuli states increased post-learning in the same cortical regions recruited by the task. These findings support a representational role of spontaneous brain activity. PMID:26156982

  10. On Assisting a Visual-Facial Affect Recognition System with Keyboard-Stroke Pattern Information

    NASA Astrophysics Data System (ADS)

    Stathopoulou, I.-O.; Alepis, E.; Tsihrintzis, G. A.; Virvou, M.

    Towards realizing a multimodal affect recognition system, we are considering the advantages of assisting a visual-facial expression recognition system with keyboard-stroke pattern information. Our work is based on the assumption that the visual-facial and keyboard modalities are complementary to each other and that their combination can significantly improve the accuracy in affective user models. Specifically, we present and discuss the development and evaluation process of two corresponding affect recognition subsystems, with emphasis on the recognition of 6 basic emotional states, namely happiness, sadness, surprise, anger and disgust as well as the emotion-less state which we refer to as neutral. We find that emotion recognition by the visual-facial modality can be aided greatly by keyboard-stroke pattern information and the combination of the two modalities can lead to better results towards building a multimodal affect recognition system.

  11. Patterns of visual attention to faces and objects in autism spectrum disorder

    PubMed Central

    McPartland, James C.; Webb, Sara Jane; Keehn, Brandon; Dawson, Geraldine

    2011-01-01

    This study used eye-tracking to examine visual attention to faces and objects in adolescents with autism spectrum disorder (ASD) and typical peers. Point of gaze was recorded during passive viewing of images of human faces, inverted human faces, monkey faces, three-dimensional curvilinear objects, and two-dimensional geometric patterns. Individuals with ASD obtained lower scores on measures of face recognition and social-emotional functioning but exhibited similar patterns of visual attention. In individuals with ASD, face recognition performance was associated with social adaptive function. Results highlight heterogeneity in manifestation of social deficits in ASD and suggest that naturalistic assessments are important for quantifying atypicalities in visual attention. PMID:20499148

  12. Flexibility and Coordination among Acts of Visualization and Analysis in a Pattern Generalization Activity

    ERIC Educational Resources Information Center

    Nilsson, Per; Juter, Kristina

    2011-01-01

    This study aims at exploring processes of flexibility and coordination among acts of visualization and analysis in students' attempt to reach a general formula for a three-dimensional pattern generalizing task. The investigation draws on a case-study analysis of two 15-year-old girls working together on a task in which they are asked to calculate…

  13. Patterns of Visual Attention to Faces and Objects in Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    McPartland, James C.; Webb, Sara Jane; Keehn, Brandon; Dawson, Geraldine

    2011-01-01

    This study used eye-tracking to examine visual attention to faces and objects in adolescents with autism spectrum disorder (ASD) and typical peers. Point of gaze was recorded during passive viewing of images of human faces, inverted human faces, monkey faces, three-dimensional curvilinear objects, and two-dimensional geometric patterns.

  14. Nurses' Behaviors and Visual Scanning Patterns May Reduce Patient Identification Errors

    ERIC Educational Resources Information Center

    Marquard, Jenna L.; Henneman, Philip L.; He, Ze; Jo, Junghee; Fisher, Donald L.; Henneman, Elizabeth A.

    2011-01-01

    Patient identification (ID) errors occurring during the medication administration process can be fatal. The aim of this study is to determine whether differences in nurses' behaviors and visual scanning patterns during the medication administration process influence their capacities to identify patient ID errors. Nurse participants (n = 20)…

  15. STATIONARY PATTERN ADAPTATION AND THE EARLY COMPONENTS IN HUMAN VISUAL EVOKED POTENTIALS

    EPA Science Inventory

    Pattern-onset visual evoked potentials were elicited from humans by sinusoidal gratings of 0.5., 1, 2 and 4 cpd (cycles/degree) following adaptation to a blank field or one of the gratings. The wave forms recorded after blank field adaptation showed an early positive component, P...

  16. Patterns of Visual Attention to Faces and Objects in Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    McPartland, James C.; Webb, Sara Jane; Keehn, Brandon; Dawson, Geraldine

    2011-01-01

    This study used eye-tracking to examine visual attention to faces and objects in adolescents with autism spectrum disorder (ASD) and typical peers. Point of gaze was recorded during passive viewing of images of human faces, inverted human faces, monkey faces, three-dimensional curvilinear objects, and two-dimensional geometric patterns.…

  17. Learning and Retention of Concepts Formed from Unfamiliar Visual Patterns. Final Report.

    ERIC Educational Resources Information Center

    Lantz, Alma E.

    Two experiments were conducted to investigate the learning and retention of concepts formed from novel visual stimulus materials (wave-form patterns). The purpose of the first experiment was to scale sets of wave forms as a function of difficulty, i.e., subjects were shown a prototype wave form and were asked to give same-different judgments for…

  18. The impact of clinical indications on visual search behaviour in skeletal radiographs

    NASA Astrophysics Data System (ADS)

    Rutledge, A.; McEntee, M. F.; Rainford, L.; O'Grady, M.; McCarthy, K.; Butler, M. L.

    2011-03-01

    The hazards associated with ionizing radiation have been documented in the literature, and justifying the need for X-ray examinations has therefore come to the forefront of the radiation safety debate in recent years. International legislation states that the referrer is responsible for the provision of sufficient clinical information to enable the justification of the medical exposure. Clinical indications are a set of systematically developed statements to assist in accurate diagnosis and appropriate patient management. In this study, the impact of clinical indications upon fracture detection for musculoskeletal radiographs is analyzed. A group of radiographers (n=6) interpreted musculoskeletal radiology cases (n=33) with and without clinical indications. Radiographic images were selected to represent common trauma presentations of extremities and pelvis. Detection of the fracture was measured using ROC methodology. An eye-tracking device was employed to record radiographers' search behavior by analysing distinct fixation points and search patterns, resulting in a greater level of insight and understanding into the influence of clinical indications on observers' interpretation of radiographs. The influence of clinical information on fracture detection and search patterns was assessed. Findings of this study demonstrate that the inclusion of clinical indications results in impressionable search behavior. Differences in eye tracking parameters were also noted. This study also attempts to uncover fundamental observer search strategies and behavior with and without clinical indications, thus providing a greater understanding and insight into the image interpretation process. Results of this study suggest that availability of adequate clinical data should be emphasized for interpreting trauma radiographs.

  19. Neural Correlates of Changes in a Visual Search Task due to Cognitive Training in Seniors

    PubMed Central

    Wild-Wall, Nele; Falkenstein, Michael; Gajewski, Patrick D.

    2012-01-01

    This study aimed to elucidate the underlying neural sources of near transfer after a multidomain cognitive training in older participants in a visual search task. Participants were randomly assigned to a social control, a no-contact control and a training group, receiving a 4-month paper-pencil and PC-based trainer guided cognitive intervention. All participants were tested in a before and after session with a conjunction visual search task. Performance and event-related potentials (ERPs) suggest that the cognitive training improved feature processing of the stimuli which was expressed in an increased rate of target detection compared to the control groups. This was paralleled by enhanced amplitudes of the frontal P2 in the ERP and by higher activation in lingual and parahippocampal brain areas which are discussed to support visual feature processing. Enhanced N1 and N2 potentials in the ERP for nontarget stimuli after cognitive training additionally suggest improved attention and subsequent processing of arrays which were not immediately recognized as targets. Possible test repetition effects were confined to processes of stimulus categorisation as suggested by the P3b potential. The results show neurocognitive plasticity in aging after a broad cognitive training and allow pinpointing the functional loci of effects induced by cognitive training. PMID:23029625

  20. User-assisted visual search and tracking across distributed multi-camera networks

    NASA Astrophysics Data System (ADS)

    Raja, Yogesh; Gong, Shaogang; Xiang, Tao

    2011-11-01

    Human CCTV operators face several challenges in their task which can lead to missed events, people or associations, including: (a) data overload in large distributed multi-camera environments; (b) short attention span; (c) limited knowledge of what to look for; and (d) lack of access to non-visual contextual intelligence to aid search. Developing a system to aid human operators and alleviate such burdens requires addressing the problem of automatic re-identification of people across disjoint camera views, a matching task made difficult by factors such as lighting, viewpoint and pose changes and for which absolute scoring approaches are not best suited. Accordingly, we describe a distributed multi-camera tracking (MCT) system to visually aid human operators in associating people and objects effectively over multiple disjoint camera views in a large public space. The system comprises three key novel components: (1) relative measures of ranking rather than absolute scoring to learn the best features for matching; (2) multi-camera behaviour profiling as higher-level knowledge to reduce the search space and increase the chance of finding correct matches; and (3) human-assisted data mining to interactively guide search and in the process recover missing detections and discover previously unknown associations. We provide an extensive evaluation of the greater effectiveness of the system as compared to existing approaches on industry-standard i-LIDS multi-camera data.

  1. Searching for patterns in remote sensing image databases using neural networks

    NASA Technical Reports Server (NTRS)

    Paola, Justin D.; Schowengerdt, Robert A.

    1995-01-01

    We have investigated a method, based on a successful neural network multispectral image classification system, of searching for single patterns in remote sensing databases. While defining the pattern to search for and the feature to be used for that search (spectral, spatial, temporal, etc.) is challenging, a more difficult task is selecting competing patterns to train against the desired pattern. Schemes for competing pattern selection, including random selection and human interpreted selection, are discussed in the context of an example detection of dense urban areas in Landsat Thematic Mapper imagery. When applying the search to multiple images, a simple normalization method can alleviate the problem of inconsistent image calibration. Another potential problem, that of highly compressed data, was found to have a minimal effect on the ability to detect the desired pattern. The neural network algorithm has been implemented using the PVM (Parallel Virtual Machine) library and nearly-optimal speedups have been obtained that help alleviate the long process of searching through imagery.

  2. An instructive role for patterned spontaneous retinal activity in mouse visual map development

    PubMed Central

    Xu, Hong-ping; Furman, Moran; Mineur, Yann S.; Chen, Hui; King, Sarah L.; Zenisek, David; Zhou, Z. Jimmy; Butts, Daniel A.; Tian, Ning; Picciotto, Marina R.; Crair, Michael C.

    2011-01-01

    SUMMARY Complex neural circuits in the mammalian brain develop through a combination of genetic instruction and activity-dependent refinement. The relative role of these factors and the form of neuronal activity responsible for circuit development is a matter of significant debate. In the mammalian visual system, retinal ganglion cell projections to the brain are mapped with respect to retinotopic location and eye of origin. We manipulated the pattern of spontaneous retinal waves present during development without changing overall activity levels through the transgenic expression of β2-nicotinic acetylcholine receptors in retinal ganglion cells of mice. We used this manipulation to demonstrate that spontaneous retinal activity is not just permissive, but instructive in the emergence of eye-specific segregation and retinotopic refinement in the mouse visual system. This suggests that specific patterns of spontaneous activity throughout the developing brain are essential in the emergence of specific and distinct patterns of neuronal connectivity. PMID:21689598

  3. Child Looking Patterns: A Sudden Change in Visual Information Pick-Up at 32 Month of Age.

    ERIC Educational Resources Information Center

    Jacobsen, Karl; And Others

    This study examined infants' change in visual information pick-up, from an infant-like stimulus-locked visual scanning pattern to an adult-like cognitive control of visual information pick-up. Subjects were 21 children between 25 and 42 months of age. Eye movements were videotaped in a preferential looking situation and later analyzed as still…

  4. Ideal and visual-search observers: accounting for anatomical noise in search tasks with planar nuclear imaging

    NASA Astrophysics Data System (ADS)

    Sen, Anando; Gifford, Howard C.

    2015-03-01

    Model observers have frequently been used for hardware optimization of imaging systems. For model observers to reliably mimic human performance it is important to account for the sources of variations in the images. Detection-localization tasks are complicated by anatomical noise present in the images. Several scanning observers have been proposed for such tasks. The most popular of these, the channelized Hotelling observer (CHO) incorporates anatomical variations through covariance matrices. We propose the visual-search (VS) observer as an alternative to the CHO to account for anatomical noise. The VS observer is a two-step process which first identifies suspicious tumor candidates and then performs a detailed analysis on them. The identification of suspicious candidates (search) implicitly accounts for anatomical noise. In this study we present a comparison of these two observers with human observers. The application considered is collimator optimization for planar nuclear imaging. Both observers show similar trends in performance with the VS observer slightly closer to human performance.

  5. Visual search strategies of baseball batters: eye movements during the preparatory phase of batting.

    PubMed

    Kato, Takaaki; Fukuda, Tadahiko

    2002-04-01

    The aim of this study was to analyze visual search strategies of baseball batters during the viewing period of the pitcher's motion. The 18 subjects were 9 experts and 9 novices. While subjects viewed a videotape which, from a right-handed batter's perspective, showed a pitcher throwing a series of 10 types of pitches, their eye movements were measured and analyzed. Novices moved their eyes faster than experts, and the distribution area of viewing points was also wider than that of the experts. Experts viewed the pitching arm for longer than novices did during the last two pitching phases. These results indicate that experts set their visual pivot on the pitcher's elbow and used peripheral vision properties to evaluate the pitcher's motion and the ball trajectory. PMID:12027326

  6. The importance of being expert: top-down attentional control in visual search with photographs.

    PubMed

    Hershler, Orit; Hochstein, Shaul

    2009-10-01

    Two observers looking at the same picture may not see the same thing. To avoid sensory overload, visual information is actively selected for further processing by bottom-up processes, originating within the visual image, and top-down processes, reflecting the motivation and past experiences of the observer. The latter processes could grant categories of significance to the observer a permanent attentional advantage. Nevertheless, evidence for a generalized top-down advantage for specific categories has been limited. In this study, bird and car experts searched for face, car, or bird photographs in a heterogeneous display of photographs of real objects. Bottom-up influences were ruled out by presenting both groups of experts with identical displays. Faces and targets of expertise had a clear advantage over novice targets, indicating a permanent top-down preference for favored categories. A novel type of analysis of reaction times over the visual field suggests that the advantage for expert objects is achieved by broader detection windows, allowing observers to scan greater parts of the visual field for the presence of favored targets during each fixation. PMID:19801608

  7. The interplay of attention and consciousness in visual search, attentional blink and working memory consolidation

    PubMed Central

    Raffone, Antonino; Srinivasan, Narayanan; van Leeuwen, Cees

    2014-01-01

    Despite the acknowledged relationship between consciousness and attention, theories of the two have mostly been developed separately. Moreover, these theories have independently attempted to explain phenomena in which both are likely to interact, such as the attentional blink (AB) and working memory (WM) consolidation. Here, we make an effort to bridge the gap between, on the one hand, a theory of consciousness based on the notion of global workspace (GW) and, on the other, a synthesis of theories of visual attention. We offer a theory of attention and consciousness (TAC) that provides a unified neurocognitive account of several phenomena associated with visual search, AB and WM consolidation. TAC assumes multiple processing stages between early visual representation and conscious access, and extends the dynamics of the global neuronal workspace model to a visual attentional workspace (VAW). The VAW is controlled by executive routers, higher-order representations of executive operations in the GW, without the need for explicit saliency or priority maps. TAC leads to newly proposed mechanisms for illusory conjunctions, AB, inattentional blindness and WM capacity, and suggests neural correlates of phenomenal consciousness. Finally, the theory reconciles the all-or-none and graded perspectives on conscious representation. PMID:24639586

  8. A little bit of history repeating: Splitting up multiple-target visual searches decreases second-target miss errors.

    PubMed

    Cain, Matthew S; Biggs, Adam T; Darling, Elise F; Mitroff, Stephen R

    2014-06-01

    Visual searches with several targets in a display have been shown to be particularly prone to miss errors in both academic laboratory searches and professional searches such as radiology and baggage screening. Specifically, finding 1 target in a display can reduce the likelihood of detecting additional targets. This phenomenon was originally referred to as "satisfaction of search," but is referred to here as "subsequent search misses" (SSMs). SSM errors have been linked to a variety of causes, and recent evidence supports a working memory deficit wherein finding a target consumes working memory resources that would otherwise aid subsequent search for additional targets (Cain & Mitroff, 2013). The current study demonstrated that dividing 1 multiple-target search into several single-target searches, separated by three to five unrelated trials, effectively freed the working memory resources used by the found target and eliminated SSM errors. This effect was demonstrated with both university community participants and with professional visual searchers from the Transportation Security Administration, suggesting it may be a generally applicable technique for improving multiple-target visual search accuracy. PMID:24708353

  9. Examining wide-arc digital breast tomosynthesis: optimization using a visual-search model observer

    NASA Astrophysics Data System (ADS)

    Das, Mini; Liang, Zhihua; Gifford, Howard C.

    2015-03-01

    Mathematical model observers are expected to assist in preclinical optimization of image acquisition and reconstruction parameters. A clinically realistic and robust model observer platform could help in multiparameter optimizations without requiring frequent human-observer validations. We are developing search-capable visual-search (VS) model observers with this potential. In this work, we present initial results on optimization of DBT scan angle and the number of projection views for low-contrast mass detection. Comparison with human-observer results shows very good agreement. These results point towards the benefits of using relatively wide arcs with a low number of projection views per degree of arc for improved mass detection. These results are particularly interesting considering that FDA-approved DBT systems such as the Hologic Selenia Dimensions use a narrow (15-degree) acquisition arc and one projection per arc degree.

  10. Gaze in Visual Search Is Guided More Efficiently by Positive Cues than by Negative Cues

    PubMed Central

    Kohlbecher, Stefan; Einhäuser, Wolfgang; Schneider, Erich

    2015-01-01

    Visual search can be accelerated when properties of the target are known. Such knowledge allows the searcher to direct attention to items sharing these properties. Recent work indicates that information about properties of non-targets (i.e., negative cues) can also guide search. In the present study, we examine whether negative cues lead to different search behavior compared to positive cues. We asked observers to search for a target defined by a certain shape singleton (broken line among solid lines). Each line was embedded in a colored disk. In “positive cue” blocks, participants were informed about possible colors of the target item. In “negative cue” blocks, the participants were informed about colors that could not contain the target. Search displays were designed such that with both the positive and negative cues, the same number of items could potentially contain the broken line (“relevant items”). Thus, both cues were equally informative. We measured response times and eye movements. Participants exhibited longer response times when provided with negative cues compared to positive cues. Although negative cues did guide the eyes to relevant items, there were marked differences in eye movements. Negative cues resulted in smaller proportions of fixations on relevant items, longer duration of fixations and in higher rates of fixations per item as compared to positive cues. The effectiveness of both cue types, as measured by fixations on relevant items, increased over the course of each search. In sum, a negative color cue can guide attention to relevant items, but it is less efficient than a positive cue of the same informational value. PMID:26717307

  11. Gaze in Visual Search Is Guided More Efficiently by Positive Cues than by Negative Cues.

    PubMed

    Kugler, Günter; 't Hart, Bernard Marius; Kohlbecher, Stefan; Einhäuser, Wolfgang; Schneider, Erich

    2015-01-01

    Visual search can be accelerated when properties of the target are known. Such knowledge allows the searcher to direct attention to items sharing these properties. Recent work indicates that information about properties of non-targets (i.e., negative cues) can also guide search. In the present study, we examine whether negative cues lead to different search behavior compared to positive cues. We asked observers to search for a target defined by a certain shape singleton (broken line among solid lines). Each line was embedded in a colored disk. In "positive cue" blocks, participants were informed about possible colors of the target item. In "negative cue" blocks, the participants were informed about colors that could not contain the target. Search displays were designed such that with both the positive and negative cues, the same number of items could potentially contain the broken line ("relevant items"). Thus, both cues were equally informative. We measured response times and eye movements. Participants exhibited longer response times when provided with negative cues compared to positive cues. Although negative cues did guide the eyes to relevant items, there were marked differences in eye movements. Negative cues resulted in smaller proportions of fixations on relevant items, longer duration of fixations and in higher rates of fixations per item as compared to positive cues. The effectiveness of both cue types, as measured by fixations on relevant items, increased over the course of each search. In sum, a negative color cue can guide attention to relevant items, but it is less efficient than a positive cue of the same informational value. PMID:26717307

  12. Beam angle optimization for intensity-modulated radiation therapy using a guided pattern search method

    NASA Astrophysics Data System (ADS)

    Rocha, Humberto; Dias, Joana M.; Ferreira, Brígida C.; Lopes, Maria C.

    2013-05-01

    Generally, the inverse planning of radiation therapy consists mainly of the fluence optimization. The beam angle optimization (BAO) in intensity-modulated radiation therapy (IMRT) consists of selecting appropriate radiation incidence directions and may influence the quality of the IMRT plans, both to enhance better organ sparing and to improve tumor coverage. However, in clinical practice, most of the time, beam directions continue to be manually selected by the treatment planner without objective and rigorous criteria. The goal of this paper is to introduce a novel approach that uses beam’s-eye-view dose ray tracing metrics within a pattern search method framework in the optimization of the highly non-convex BAO problem. Pattern search methods are derivative-free optimization methods that require a few function evaluations to progress and converge and have the ability to better avoid local entrapment. The pattern search method framework is composed of a search step and a poll step at each iteration. The poll step performs a local search in a mesh neighborhood and ensures the convergence to a local minimizer or stationary point. The search step provides the flexibility for a global search since it allows searches away from the neighborhood of the current iterate. Beam’s-eye-view dose metrics assign a score to each radiation beam direction and can be used within the pattern search framework furnishing a priori knowledge of the problem so that directions with larger dosimetric scores are tested first. A set of clinical cases of head-and-neck tumors treated at the Portuguese Institute of Oncology of Coimbra is used to discuss the potential of this approach in the optimization of the BAO problem.
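
    As a generic illustration of the search/poll structure described above (not the authors' BAO implementation), the sketch below polls the coordinate directions on a mesh and halves the mesh after an unsuccessful poll; a problem-specific search step, such as trying directions ranked by beam's-eye-view dose metrics, would be inserted before the poll.

        # Minimal derivative-free pattern search: poll step plus mesh refinement.
        import numpy as np

        def poll_step(f, x, mesh_size):
            """Evaluate the 2n coordinate directions around x on the current mesh."""
            f_x = f(x)
            for d in np.vstack([np.eye(len(x)), -np.eye(len(x))]):
                trial = x + mesh_size * d
                if f(trial) < f_x:            # successful poll: move, keep the mesh
                    return trial, mesh_size
            return x, mesh_size / 2.0         # unsuccessful poll: refine the mesh

        def pattern_search(f, x0, mesh_size=8.0, min_mesh=0.5):
            x = np.asarray(x0, dtype=float)
            while mesh_size >= min_mesh:
                # A problem-specific search step could be inserted here before polling.
                x, mesh_size = poll_step(f, x, mesh_size)
            return x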

  13. Case study of visualizing global user download patterns using Google Earth and NASA World Wind

    NASA Astrophysics Data System (ADS)

    Zong, Ziliang; Job, Joshua; Zhang, Xuesong; Nijim, Mais; Qin, Xiao

    2012-01-01

    Geo-visualization is significantly changing the way we view spatial data and discover information. On the one hand, large amounts of spatial data are generated every day. On the other hand, these data are not well utilized due to the lack of free and easily used data-visualization tools. This becomes even worse when most of the spatial data remains in the form of plain text such as log files. This paper describes a way of visualizing massive plain-text spatial data at no cost by utilizing Google Earth and NASA World Wind. We illustrate our methods by visualizing over 170,000 global download requests for satellite images maintained by the Earth Resources Observation and Science (EROS) Center of the U.S. Geological Survey (USGS). Our visualization results identify the most popular satellite images around the world and reveal global user download patterns. The benefits of this research are: 1. assisting in improving the satellite image downloading services provided by USGS, and 2. providing a proxy for analyzing the "hot spot" areas of research. Most importantly, our methods demonstrate an easy way to geo-visualize massive textual spatial data, which is highly applicable to mining spatially referenced data and information on a wide variety of research domains (e.g., hydrology, agriculture, atmospheric science, natural hazards, and global climate change).
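
    A minimal sketch of the general approach follows: plain-text log records are turned into a KML overlay that Google Earth can load. The record layout, field names, and output file name are assumptions for illustration, not the USGS/EROS log format.

        # Write hypothetical (name, latitude, longitude) records to a minimal KML file.
        def logs_to_kml(records, path="downloads.kml"):
            placemarks = []
            for name, lat, lon in records:
                placemarks.append(
                    f"  <Placemark><name>{name}</name>"
                    f"<Point><coordinates>{lon},{lat},0</coordinates></Point></Placemark>"
                )
            kml = ('<?xml version="1.0" encoding="UTF-8"?>\n'
                   '<kml xmlns="http://www.opengis.net/kml/2.2">\n<Document>\n'
                   + "\n".join(placemarks) + "\n</Document>\n</kml>\n")
            with open(path, "w", encoding="utf-8") as fh:
                fh.write(kml)

        logs_to_kml([("request-001", 38.9, -77.0), ("request-002", 55.7, 37.6)])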

  14. Mining patterns in persistent surveillance systems with smart query and visual analytics

    NASA Astrophysics Data System (ADS)

    Habibi, Mohammad S.; Shirkhodaie, Amir

    2013-05-01

    In Persistent Surveillance Systems (PSS), the ability to detect and characterize events geospatially helps analysts take pre-emptive steps to counter an adversary's actions. An interactive Visual Analytics (VA) model offers a platform for pattern investigation and reasoning to comprehend and/or predict such occurrences. The need for identifying and offsetting these threats requires collecting information from diverse sources, which brings with it increasingly abstract data. These abstract semantic data have a degree of inherent uncertainty and imprecision, and require a method for their filtration before being processed further. In this paper, we have introduced an approach based on the Vector Space Modeling (VSM) technique for classification of spatiotemporal sequential patterns of group activities. The feature vectors consist of an array of attributes extracted from the sensors' semantically annotated messages. To facilitate proper similarity matching and detection of time-varying spatiotemporal patterns, a Temporal-Dynamic Time Warping (DTW) method with a Gaussian Mixture Model (GMM) fitted by Expectation Maximization (EM) is introduced. DTW is intended for detection of event patterns from neighborhood-proximity semantic frames derived from an established ontology. GMM with EM, on the other hand, is employed as a Bayesian probabilistic model to estimate the probability of events associated with a detected spatiotemporal pattern. In this paper, we present a new visual analytic tool for testing and evaluating group activities detected under this control scheme. Experimental results demonstrate the effectiveness of the proposed approach for discovering and matching subsequences within the sequentially generated pattern space of our experiments.
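
    The sketch below shows the standard dynamic time warping recurrence on which such similarity matching rests, applied to plain numeric sequences standing in for the semantic feature streams described above; it is a textbook DTW distance, not the authors' Temporal-DTW/GMM pipeline.

        # Classic DTW: cumulative-cost table with insertion/deletion/match moves.
        import numpy as np

        def dtw_distance(a, b):
            a, b = np.asarray(a, float), np.asarray(b, float)
            n, m = len(a), len(b)
            cost = np.full((n + 1, m + 1), np.inf)
            cost[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    d = abs(a[i - 1] - b[j - 1])
                    cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
            return cost[n, m]

        print(dtw_distance([0, 1, 2, 3], [0, 0, 1, 2, 2, 3]))  # small warped mismatch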

  15. You can detect the trees as well as the forest when adding the leaves: evidence from visual search tasks containing three-level hierarchical stimuli.

    PubMed

    Krakowski, Claire-Sara; Borst, Grégoire; Pineau, Arlette; Houdé, Olivier; Poirel, Nicolas

    2015-05-01

    The present study investigated how multiple levels of hierarchical stimuli (i.e., global, intermediate and local) are processed during a visual search task. Healthy adults participated in a visual search task in which a target was either present or not at one of the three levels of hierarchical stimuli (global geometrical form made by intermediate forms themselves constituted by local forms). By varying the number of distractors, the results showed that targets presented at global and intermediate levels were detected efficiently (i.e., the detection times did not vary with the number of distractors) whereas local targets were processed less efficiently (i.e., the detection times increased with the number of distractors). Additional experiments confirmed that these results were not due to the size of the target elements or to the spatial proximity among the structural levels. Taken together, these results show that the most local level is always processed less efficiently, suggesting that it is disadvantaged during the competition for attentional resources compared to higher structural levels. The present study thus supports the view that the processing occurring in visual search acts dichotomously rather than continuously. Given that pure structuralist and pure space-based models of attention cannot account for the pattern of our findings, we discuss the implication for perception, attentional selection and executive control of target position on hierarchical stimuli. PMID:25796055

  16. For better or worse: Prior trial accuracy affects current trial accuracy in visual search.

    PubMed

    Winkle, Jonathan; Biggs, Adam; Ericson, Justin; Mitroff, Stephen

    2015-01-01

    Life is not a series of independent events, but rather, each event is influenced by what just happened and what might happen next. However, many research studies treat any given trial as an independent and isolated event. Some research fields explicitly test trial-to-trial influences (e.g., repetition priming, task switching), but many, including visual search, largely ignore potential inter-trial effects. While trial-order effects could wash out with random presentation orders, this does not diminish their potential impact (e.g., would you want your radiologist to be negatively affected by his/her prior success in screening for cancer?). To examine biases related to prior trial performance, data were analyzed from airport security officers and Duke University participants who had completed a visual search task. Participants searched for a target "T" amongst "pseudo-L" distractors with 50% of trials containing a target. Four set sizes were used (8, 16, 24, 32), and participants completed the search task without feedback. Inter-trial analyses revealed that accuracy for the current trial was related to the outcome of the previous trial, with trials following successful searches being approximately 10% more accurate than trials following failed searches. Pairs of target-absent or target-present trials predominantly drove this effect; specifically, accuracy on target-present trials was contingent on a previous hit or miss (i.e., other target-present trials), while accuracy on target-absent trials was contingent on a previous correct rejection or false alarm (i.e., other target-absent trials). Inter-trial effects arose in both population samples and were not driven by individual differences, as assessed by mixed-effects linear modeling. These results have both theoretical and practical implications. Theoretically, it is worth considering how to control for inter-trial variance in statistical models of behavior. Practically, characterizing the conditions that modulate inter-trial effects might help professional searchers perform more accurately, which can have life-saving consequences. Meeting abstract presented at VSS 2015. PMID:26327059

  17. Comparison of visualized turbine endwall secondary flows and measured heat transfer patterns

    NASA Technical Reports Server (NTRS)

    Gaugler, R. E.; Russell, L. M.

    1984-01-01

    Various flow visualization techniques were used to define the secondary flows near the endwall in a large heat transfer data. A comparison of the visualized flow patterns and the measured Stanton number distribution was made for cases where the inlet Reynolds number and exit Mach number were matched. Flows were visualized by using neutrally buoyant helium-filled soap bubbles, by using smoke from oil soaked cigars, and by a few techniques using permanent marker pen ink dots and synthetic wintergreen oil. Details of the horseshoe vortex and secondary flows can be directly compared with heat transfer distribution. Near the cascade entrance there is an obvious correlation between the two sets of data, but well into the passage the effect of secondary flow is not as obvious. Previously announced in STAR as N83-14435

  18. Comparison of visualized turbine endwall secondary flows and measured heat transfer patterns

    NASA Technical Reports Server (NTRS)

    Gaugler, R. E.; Russell, L. M.

    1983-01-01

    Various flow visualization techniques were used to define the secondary flows near the endwall in a large heat transfer data. A comparison of the visualized flow patterns and the measured Stanton number distribution was made for cases where the inlet Reynolds number and exit Mach number were matched. Flows were visualized by using neutrally buoyant helium-filled soap bubbles, by using smoke from oil soaked cigars, and by a few techniques using permanent marker pen ink dots and synthetic wintergreen oil. Details of the horseshoe vortex and secondary flows can be directly compared with heat transfer distribution. Near the cascade entrance there is an obvious correlation between the two sets of data, but well into the passage the effect of secondary flow is not as obvious.

  19. Comparison of visualized turbine endwall secondary flows and measured heat transfer patterns

    NASA Astrophysics Data System (ADS)

    Gaugler, R. E.; Russell, L. M.

    1983-03-01

    Various flow visualization techniques were used to define the secondary flows near the endwall in a large heat transfer data. A comparison of the visualized flow patterns and the measured Stanton number distribution was made for cases where the inlet Reynolds number and exit Mach number were matched. Flows were visualized by using neutrally buoyant helium-filled soap bubbles, by using smoke from oil soaked cigars, and by a few techniques using permanent marker pen ink dots and synthetic wintergreen oil. Details of the horseshoe vortex and secondary flows can be directly compared with heat transfer distribution. Near the cascade entrance there is an obvious correlation between the two sets of data, but well into the passage the effect of secondary flow is not as obvious.

  20. A Convergence Analysis of Unconstrained and Bound Constrained Evolutionary Pattern Search

    SciTech Connect

    Hart, W.E.

    1999-04-22

    The authors present and analyze a class of evolutionary algorithms for unconstrained and bound constrained optimization on R^n: evolutionary pattern search algorithms (EPSAs). EPSAs adaptively modify the step size of the mutation operator in response to the success of previous optimization steps. The design of EPSAs is inspired by recent analyses of pattern search methods. They show that EPSAs can be cast as stochastic pattern search methods, and they use this observation to prove that EPSAs have a probabilistic weak stationary point convergence theory. This work provides the first convergence analysis for a class of evolutionary algorithms that guarantees convergence almost surely to a stationary point of a nonconvex objective function.
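
    The following toy loop illustrates the step-size adaptation idea behind EPSAs in a (1+1)-style form: enlarge the mutation step after a successful trial and shrink it after a failure, within simple bounds. It is a didactic sketch, not the analyzed algorithm class.

        # Success-driven step-size adaptation on a 1-D bound-constrained objective.
        import random

        def epsa_like(f, x0, step=1.0, iters=200, lo=-5.0, hi=5.0):
            x, fx = x0, f(x0)
            for _ in range(iters):
                trial = min(hi, max(lo, x + random.uniform(-step, step)))  # bound constraint
                ft = f(trial)
                if ft < fx:
                    x, fx = trial, ft
                    step *= 2.0        # success: enlarge the mutation step
                else:
                    step *= 0.5        # failure: shrink it, as in pattern search
            return x, fx

        print(epsa_like(lambda v: (v - 1.7) ** 2, x0=4.0))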

  1. Pattern drilling exploration: Optimum pattern types and hole spacings when searching for elliptical shaped targets

    USGS Publications Warehouse

    Drew, L.J.

    1979-01-01

    In this study the selection of the optimum type of drilling pattern to be used when exploring for elliptically shaped targets is examined. The rhombic pattern is optimal when the targets are known to have a preferred orientation. Situations can also be found where a rectangular pattern is as efficient as the rhombic pattern. A triangular or square drilling pattern should be used when the orientations of the targets are unknown. The way in which the optimum hole spacing varies as a function of (1) the cost of drilling, (2) the value of the targets, (3) the shape of the targets, and (4) the target occurrence probabilities was determined for several examples. Bayes' rule was used to show how target occurrence probabilities can be revised within a multistage pattern drilling scheme. © 1979 Plenum Publishing Corporation.
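
    A hedged numerical sketch of the Bayes'-rule revision mentioned above: the probability that a cell contains a target is updated after each dry hole, given an assumed probability that a hole drilled in a target-bearing cell actually hits it. The prior and hit probability are illustrative numbers only.

        # P(target | dry hole) = P(dry | target) * P(target) / P(dry)
        def revise_after_dry_hole(prior, p_hit_given_target):
            p_dry = (1.0 - p_hit_given_target) * prior + 1.0 * (1.0 - prior)
            return (1.0 - p_hit_given_target) * prior / p_dry

        p = 0.30                               # assumed prior chance the cell holds a target
        for stage in range(3):                 # multistage scheme: repeated dry holes
            p = revise_after_dry_hole(p, p_hit_given_target=0.6)
            print(f"after stage {stage + 1}: P(target) = {p:.3f}")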

  2. Incidental learning speeds visual search by lowering response thresholds, not by improving efficiency: Evidence from eye movements

    PubMed Central

    Hout, Michael C.; Goldinger, Stephen D.

    2011-01-01

    When observers search for a target object, they incidentally learn the identities and locations of background objects in the same display. This learning can facilitate search performance, eliciting faster reaction times for repeated displays (Hout & Goldinger, 2010). Despite these findings, visual search has been successfully modeled using architectures that maintain no history of attentional deployments; they are amnesic (e.g., Guided Search Theory; Wolfe, 2007). In the current study, we asked two questions: 1) under what conditions does such incidental learning occur? And 2) what does viewing behavior reveal about the efficiency of attentional deployments over time? In two experiments, we tracked eye movements during repeated visual search, and we tested incidental memory for repeated non-target objects. Across conditions, the consistency of search sets and spatial layouts were manipulated to assess their respective contributions to learning. Using viewing behavior, we contrasted three potential accounts for faster searching with experience. The results indicate that learning does not result in faster object identification or greater search efficiency. Instead, familiar search arrays appear to allow faster resolution of search decisions, whether targets are present or absent. PMID:21574743

  3. Beneficial effects of the NMDA antagonist ketamine on decision processes in visual search.

    PubMed

    Shen, Kelly; Kalwarowsky, Sarah; Clarence, Wendy; Brunamonti, Emiliano; Paré, Martin

    2010-07-21

    The ability of sensory-motor circuits to integrate sensory evidence over time is thought to underlie the process of decision-making in perceptual discrimination. Recent work has suggested that the NMDA receptor contributes to mediating neural activity integration. To test this hypothesis, we trained three female rhesus monkeys (Macaca mulatta) to perform a visual search task, in which they had to make a saccadic eye movement to the location of a target stimulus presented among distracter stimuli of lower luminance. We manipulated NMDA-receptor function by administering an intramuscular injection of the noncompetitive NMDA antagonist ketamine and assessed visual search performance before and after manipulation. Ketamine was found to lengthen response latency in a dose-dependent fashion. Surprisingly, it was also observed that response accuracy was significantly improved when lower doses were administered. These findings suggest that NMDA receptors play a crucial role in the process of decision-making in perceptual discrimination. They also further support the idea that multiple neural representations compete with one another through mutual inhibition, which may explain the speed-accuracy trade-off rule that shapes discrimination behavior: lengthening integration time helps resolve small differences between choice alternatives, thereby improving accuracy. PMID:20660277

  4. Multimodal neuroimaging evidence linking memory and attention systems during visual search cued by context.

    PubMed

    Kasper, Ryan W; Grafton, Scott T; Eckstein, Miguel P; Giesbrecht, Barry

    2015-03-01

    Visual search can be facilitated by the learning of spatial configurations that predict the location of a target among distractors. Neuropsychological and functional magnetic resonance imaging (fMRI) evidence implicates the medial temporal lobe (MTL) memory system in this contextual cueing effect, and electroencephalography (EEG) studies have identified the involvement of visual cortical regions related to attention. This work investigated two questions: (1) how memory and attention systems are related in contextual cueing; and (2) how these systems are involved in both short- and long-term contextual learning. In one session, EEG and fMRI data were acquired simultaneously in a contextual cueing task. In a second session conducted 1 week later, EEG data were recorded in isolation. The fMRI results revealed MTL contextual modulations that were correlated with short- and long-term behavioral context enhancements and attention-related effects measured with EEG. An fMRI-seeded EEG source analysis revealed that the MTL contributed the most variance to the variability in the attention enhancements measured with EEG. These results support the notion that memory and attention systems interact to facilitate search when spatial context is implicitly learned. PMID:25586959

  5. Visual Circuit Development Requires Patterned Activity Mediated by Retinal Acetylcholine Receptors

    PubMed Central

    Burbridge, Timothy J.; Xu, Hong-Ping; Ackman, James B.; Ge, Xinxin; Zhang, Yueyi; Ye, Mei-Jun; Zhou, Z. Jimmy; Xu, Jian; Contractor, Anis; Crair, Michael C.

    2014-01-01

    The elaboration of nascent synaptic connections into highly ordered neural circuits is an integral feature of the developing vertebrate nervous system. In sensory systems, patterned spontaneous activity before the onset of sensation is thought to influence this process, but this conclusion remains controversial largely due to the inherent difficulty of recording neural activity in early development. Here, we describe novel genetic and pharmacological manipulations of spontaneous retinal activity, assayed in vivo, that demonstrate a causal link between retinal waves and visual circuit refinement. We also report a de-coupling of downstream activity in retinorecipient regions of the developing brain after retinal wave disruption. Significantly, we show that the spatiotemporal characteristics of retinal waves affect the development of specific visual circuits. These results conclusively establish retinal waves as necessary and instructive for circuit refinement in the developing nervous system and reveal how neural circuits adjust to altered patterns of activity prior to experience. PMID:25466916

  6. Gene Expression Browser: large-scale and cross-experiment microarray data integration, management, search & visualization

    PubMed Central

    2010-01-01

    Background In the last decade, a large amount of microarray gene expression data has been accumulated in public repositories. Integrating and analyzing high-throughput gene expression data have become key activities for exploring gene functions, gene networks and biological pathways. Effectively utilizing these invaluable microarray data remains challenging due to a lack of powerful tools to integrate large-scale gene-expression information across diverse experiments and to search and visualize a large number of gene-expression data points. Results Gene Expression Browser is a microarray data integration, management and processing system with web-based search and visualization functions. An innovative method has been developed to define a treatment over a control for every microarray experiment to standardize and make microarray data from different experiments homogeneous. In the browser, data are pre-processed offline and the resulting data points are visualized online with a 2-layer dynamic web display. Users can view all treatments over control that affect the expression of a selected gene via Gene View, and view all genes that change in a selected treatment over control via treatment over control View. Users can also check the changes of expression profiles of a set of either the treatments over control or genes via Slide View. In addition, the relationships between genes and treatments over control are computed according to gene expression ratio and are shown as co-responsive genes and co-regulation treatments over control. Conclusion Gene Expression Browser is composed of a set of software tools, including a data extraction tool, a microarray data-management system, a data-annotation tool, a microarray data-processing pipeline, and a data search & visualization tool. The browser is deployed as a free public web service (http://www.ExpressionBrowser.com) that integrates 301 ATH1 gene microarray experiments from public data repositories (viz. the Gene Expression Omnibus repository at the National Center for Biotechnology Information and Nottingham Arabidopsis Stock Center). The set of Gene Expression Browser software tools can be easily applied to the large-scale expression data generated by other platforms and in other species. PMID:20727159
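
    The treatment-over-control standardization described above lends itself to a simple illustration. The sketch below (with hypothetical probe values and a hypothetical pseudocount; the browser's actual pipeline is not specified in the abstract) computes a log2 treatment-over-control ratio per gene, which is the kind of homogenized quantity such a browser could display across experiments.

```python
import numpy as np

def treatment_over_control(treatment, control, pseudocount=1.0):
    """Log2 ratio of treatment vs. control expression for each gene.

    `treatment` and `control` are 1-D arrays of raw expression values
    aligned by gene; the pseudocount guards against division by zero.
    """
    treatment = np.asarray(treatment, dtype=float)
    control = np.asarray(control, dtype=float)
    return np.log2((treatment + pseudocount) / (control + pseudocount))

# Hypothetical probe values from one ATH1 experiment (gene order is shared).
ratios = treatment_over_control([230.0, 15.0, 980.0], [120.0, 60.0, 1010.0])
print(ratios)  # positive = up-regulated, negative = down-regulated
```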

  7. Autism spectrum disorder, but not amygdala lesions, impairs social attention in visual search

    PubMed Central

    Wang, Shuo; Xu, Juan; Jiang, Ming; Zhao, Qi; Hurlemann, Rene; Adolphs, Ralph

    2015-01-01

    People with autism spectrum disorders (ASD) have pervasive impairments in social interactions, a diagnostic component that may have its roots in atypical social motivation and attention. One of the brain structures implicated in the social abnormalities seen in ASD is the amygdala. To further characterize the impairment of people with ASD in social attention, and to explore the possible role of the amygdala, we employed a series of visual search tasks with both social (faces and people with different postures, emotions, ages, and genders) and non-social stimuli (e.g., electronics, food, and utensils). We first conducted trial-wise analyses of fixation properties and elucidated visual search mechanisms. We found that an attentional mechanism of initial orientation could explain the detection advantage of non-social targets. We then zoomed into fixation-wise analyses. We defined target-relevant effects as the difference in the percentage of fixations that fell on target-congruent vs. target-incongruent items in the array. In Experiment 1, we tested 8 high-functioning adults with ASD, 3 adults with focal bilateral amygdala lesions, and 19 controls. Controls rapidly oriented to target-congruent items and showed a strong and sustained preference for fixating them. Strikingly, people with ASD oriented significantly less and more slowly to target-congruent items, an attentional deficit especially with social targets. By contrast, patients with amygdala lesions performed indistinguishably from controls. In Experiment 2, we recruited a different sample of 13 people with ASD and 8 healthy controls, and tested them on the same search arrays but with all array items equalized for low-level saliency. The results replicated those of Experiment 1. In Experiment 3, we recruited 13 people with ASD, 8 healthy controls, 3 amygdala lesion patients and another group of 11 controls and tested them on a simpler array. Here our group effect for ASD strongly diminished and all four subject groups showed similar target-relevant effects. These findings argue for an attentional deficit in ASD that is disproportionate for social stimuli, cannot be explained by low-level visual properties of the stimuli, and is more severe with high-load top-down task demands. Furthermore, this deficit appears to be independent of the amygdala, and not evident from general social bias independent of the target-directed search. PMID:25218953

  8. Giant honeybees ( Apis dorsata) mob wasps away from the nest by directed visual patterns

    NASA Astrophysics Data System (ADS)

    Kastberger, Gerald; Weihmann, Frank; Zierler, Martina; Hötzl, Thomas

    2014-11-01

    The open nesting behaviour of giant honeybees ( Apis dorsata) accounts for the evolution of a series of defence strategies to protect the colonies from predation. In particular, the concerted action of shimmering behaviour is known to effectively confuse and repel predators. In shimmering, bees on the nest surface flip their abdomens in a highly coordinated manner to generate Mexican wave-like patterns. The paper documents a further-going capacity of this kind of collective defence: the visual patterns of shimmering waves align regarding their directional characteristics with the projected flight manoeuvres of the wasps when preying in front of the bees' nest. The honeybees take here advantage of a threefold asymmetry intrinsic to the prey-predator interaction: (a) the visual patterns of shimmering turn faster than the wasps on their flight path, (b) they "follow" the wasps more persistently (up to 100 ms) than the wasps "follow" the shimmering patterns (up to 40 ms) and (c) the shimmering patterns align with the wasps' flight in all directions at the same strength, whereas the wasps have some preference for horizontal correspondence. The findings give evidence that shimmering honeybees utilize directional alignment to enforce their repelling power against preying wasps. This phenomenon can be identified as predator driving which is generally associated with mobbing behaviour (particularly known in selfish herds of vertebrate species), which is, until now, not reported in insects.

  9. Giant honeybees (Apis dorsata) mob wasps away from the nest by directed visual patterns

    NASA Astrophysics Data System (ADS)

    Kastberger, Gerald; Weihmann, Frank; Zierler, Martina; Hötzl, Thomas

    2014-08-01

    The open nesting behaviour of giant honeybees (Apis dorsata) accounts for the evolution of a series of defence strategies to protect the colonies from predation. In particular, the concerted action of shimmering behaviour is known to effectively confuse and repel predators. In shimmering, bees on the nest surface flip their abdomens in a highly coordinated manner to generate Mexican wave-like patterns. The paper documents a further-going capacity of this kind of collective defence: the visual patterns of shimmering waves align regarding their directional characteristics with the projected flight manoeuvres of the wasps when preying in front of the bees' nest. The honeybees take here advantage of a threefold asymmetry intrinsic to the prey-predator interaction: (a) the visual patterns of shimmering turn faster than the wasps on their flight path, (b) they "follow" the wasps more persistently (up to 100 ms) than the wasps "follow" the shimmering patterns (up to 40 ms) and (c) the shimmering patterns align with the wasps' flight in all directions at the same strength, whereas the wasps have some preference for horizontal correspondence. The findings give evidence that shimmering honeybees utilize directional alignment to enforce their repelling power against preying wasps. This phenomenon can be identified as predator driving which is generally associated with mobbing behaviour (particularly known in selfish herds of vertebrate species), which is, until now, not reported in insects.

  10. Giant honeybees (Apis dorsata) mob wasps away from the nest by directed visual patterns.

    PubMed

    Kastberger, Gerald; Weihmann, Frank; Zierler, Martina; Hötzl, Thomas

    2014-11-01

    The open nesting behaviour of giant honeybees (Apis dorsata) accounts for the evolution of a series of defence strategies to protect the colonies from predation. In particular, the concerted action of shimmering behaviour is known to effectively confuse and repel predators. In shimmering, bees on the nest surface flip their abdomens in a highly coordinated manner to generate Mexican wave-like patterns. The paper documents a further-going capacity of this kind of collective defence: the visual patterns of shimmering waves align regarding their directional characteristics with the projected flight manoeuvres of the wasps when preying in front of the bees' nest. The honeybees take here advantage of a threefold asymmetry intrinsic to the prey-predator interaction: (a) the visual patterns of shimmering turn faster than the wasps on their flight path, (b) they "follow" the wasps more persistently (up to 100 ms) than the wasps "follow" the shimmering patterns (up to 40 ms) and (c) the shimmering patterns align with the wasps' flight in all directions at the same strength, whereas the wasps have some preference for horizontal correspondence. The findings give evidence that shimmering honeybees utilize directional alignment to enforce their repelling power against preying wasps. This phenomenon can be identified as predator driving which is generally associated with mobbing behaviour (particularly known in selfish herds of vertebrate species), which is, until now, not reported in insects. PMID:25169944

  11. Linking pattern completion in the hippocampus to predictive coding in visual cortex.

    PubMed

    Hindy, Nicholas C; Ng, Felicia Y; Turk-Browne, Nicholas B

    2016-05-01

    Models of predictive coding frame perception as a generative process in which expectations constrain sensory representations. These models account for expectations about how a stimulus will move or change from moment to moment, but do not address expectations about what other, distinct stimuli are likely to appear based on prior experience. We show that such memory-based expectations in human visual cortex are related to the hippocampal mechanism of pattern completion. PMID:27065363

  12. iPixel: a visual content-based and semantic search engine for retrieving digitized mammograms by using collective intelligence.

    PubMed

    Alor-Hernández, Giner; Pérez-Gallardo, Yuliana; Posada-Gómez, Rubén; Cortes-Robles, Guillermo; Rodríguez-González, Alejandro; Aguilar-Laserre, Alberto A

    2012-09-01

    Nowadays, traditional search engines such as Google, Yahoo and Bing facilitate the retrieval of information in the format of images, but the results are not always useful for the users. This is mainly due to two problems: (1) the semantic keywords are not taken into consideration and (2) it is not always possible to establish a query using the image features. This issue has been covered in different domains in order to develop content-based image retrieval (CBIR) systems. The expert community has focussed their attention on the healthcare domain, where a lot of visual information for medical analysis is available. This paper provides a solution called iPixel Visual Search Engine, which involves semantics and content issues in order to search for digitized mammograms. iPixel offers the possibility of retrieving mammogram features using collective intelligence and implementing a CBIR algorithm. Our proposal compares not only features with similar semantic meaning, but also visual features. In this sense, the comparisons are made in different ways: by the number of regions per image, by maximum and minimum size of regions per image and by average intensity level of each region. iPixel Visual Search Engine supports the medical community in differential diagnoses related to the diseases of the breast. The iPixel Visual Search Engine has been validated by experts in the healthcare domain, such as radiologists, in addition to experts in digital image analysis. PMID:22656866
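
    The abstract lists the kinds of per-image region statistics iPixel compares: number of regions, maximum and minimum region size, and average intensity per region. The sketch below is a minimal, hypothetical illustration of comparing two images by such a feature vector; the feature layout, scaling, and distance measure are assumptions, not iPixel's published algorithm.

```python
import numpy as np

def region_feature_vector(region_sizes, region_intensities):
    """Summarise segmented regions of one image with the kinds of features
    the abstract mentions: region count, max/min region size, and the
    mean of the per-region average intensities."""
    sizes = np.asarray(region_sizes, dtype=float)
    intens = np.asarray(region_intensities, dtype=float)
    return np.array([len(sizes), sizes.max(), sizes.min(), intens.mean()])

def feature_distance(query, candidate):
    """Euclidean distance after per-feature scaling, so that region counts
    and pixel areas contribute on comparable scales."""
    q, c = np.asarray(query), np.asarray(candidate)
    scale = np.maximum(np.abs(q), np.abs(c))
    scale[scale == 0] = 1.0
    return float(np.linalg.norm((q - c) / scale))

# Hypothetical region statistics for a query mammogram and one stored image.
query = region_feature_vector([150, 42, 318], [0.61, 0.34, 0.72])
stored = region_feature_vector([140, 55, 290, 12], [0.58, 0.30, 0.70, 0.25])
print(feature_distance(query, stored))  # smaller = more similar
```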

  13. Category-selective patterns of neural response to scrambled images in the ventral visual pathway.

    PubMed

    Coggan, David; Liu, Wanling; Baker, Daniel; Andrews, Timothy

    2015-01-01

    Neuroimaging studies have found reliable patterns of response to different object categories in the ventral visual pathway. This has been interpreted as evidence for a categorical representation of objects in this region. However, in addition to their semantic content, categories also differ in terms of their image properties. The aim of this study was to determine if image properties could explain category-selective patterns of neural response in the ventral visual pathway. We hypothesized that, if patterns of response in this region are tuned to low-level image properties, similar patterns of activity should also be evident for scrambled images that contain the same low-level properties, but are not perceived as objects. To address this issue, we generated phase-scrambled versions of intact objects in two ways: 1) globally-scrambled - applied to the whole image; 2) locally-scrambled - dividing each image into an 8x8 grid and scrambling the contents of each window independently. A behavioral study revealed that both scrambling processes rendered images unrecognizable. We then used fMRI to measure patterns of ventral response to five object categories (bottles, chairs, faces, houses and shoes) with three image conditions (intact, globally-scrambled, locally-scrambled). Using multivariate pattern analysis, we found distinct and reliable patterns for all five categories in intact and locally-scrambled image types. In contrast, the globally-scrambled images only showed reliable patterns for faces and houses. In addition, we found that the similarity matrices for the intact and locally-scrambled images were significantly correlated (r=0.79, p< 0.001). However, the similarity matrices from the intact and locally-scrambled images were not correlated with the globally-scrambled images. These results suggest that similar patterns of response are elicited by intact and locally scrambled images. Taken together, these data suggest that category-selective patterns of response in the ventral visual pathway can be explained by image properties typical of different object categories. Meeting abstract presented at VSS 2015. PMID:26326310
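
    Phase scrambling of the kind described (global, and local within an 8x8 grid of windows) can be sketched with a Fourier transform: keep each window's amplitude spectrum and replace its phase with random values. Taking the real part of the inverse transform is a common approximation; the study's exact scrambling procedure is not given, so treat the following as an illustrative sketch that assumes a grayscale image whose sides divide evenly by the grid size.

```python
import numpy as np

def phase_scramble(img, rng):
    """Randomise Fourier phase while keeping the amplitude spectrum."""
    spectrum = np.fft.fft2(img)
    random_phase = np.exp(1j * rng.uniform(0, 2 * np.pi, img.shape))
    scrambled = np.fft.ifft2(np.abs(spectrum) * random_phase)
    return np.real(scrambled)

def local_phase_scramble(img, rng, grid=8):
    """Scramble each cell of a grid x grid partition independently."""
    out = np.empty_like(img)
    h, w = img.shape[0] // grid, img.shape[1] // grid
    for i in range(grid):
        for j in range(grid):
            win = img[i * h:(i + 1) * h, j * w:(j + 1) * w]
            out[i * h:(i + 1) * h, j * w:(j + 1) * w] = phase_scramble(win, rng)
    return out

rng = np.random.default_rng(0)
image = rng.random((256, 256))            # stand-in for an object photograph
globally_scrambled = phase_scramble(image, rng)
locally_scrambled = local_phase_scramble(image, rng)
```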

  14. Modeling peripheral visual acuity enables discovery of gaze strategies at multiple time scales during natural scene search.

    PubMed

    Ramkumar, Pavan; Fernandes, Hugo; Kording, Konrad; Segraves, Mark

    2015-01-01

    Like humans, monkeys make saccades nearly three times a second. To understand the factors guiding this frequent decision, computational models of vision attempt to predict fixation locations using bottom-up visual features and top-down goals. How do the relative influences of these factors evolve over multiple time scales? Here we analyzed visual features at fixations using a retinal transform that provides realistic visual acuity by suitably degrading visual information in the periphery. In a task in which monkeys searched for a Gabor target in natural scenes, we characterized the relative importance of bottom-up and task-relevant influences by decoding fixated from nonfixated image patches based on visual features. At fast time scales, we found that search strategies can vary over the course of a single trial, with locations of higher saliency, target-similarity, edge-energy, and orientedness looked at later on in the trial. At slow time scales, we found that search strategies can be refined over several weeks of practice, and the influence of target orientation was significant only in the latter of two search tasks. Critically, these results were not observed without applying the retinal transform. Our results suggest that saccade-guidance strategies become apparent only when models take into account degraded visual representation in the periphery. PMID:25814545

  15. Modeling peripheral visual acuity enables discovery of gaze strategies at multiple time scales during natural scene search

    PubMed Central

    Ramkumar, Pavan; Fernandes, Hugo; Kording, Konrad; Segraves, Mark

    2015-01-01

    Like humans, monkeys make saccades nearly three times a second. To understand the factors guiding this frequent decision, computational models of vision attempt to predict fixation locations using bottom-up visual features and top-down goals. How do the relative influences of these factors evolve over multiple time scales? Here we analyzed visual features at fixations using a retinal transform that provides realistic visual acuity by suitably degrading visual information in the periphery. In a task in which monkeys searched for a Gabor target in natural scenes, we characterized the relative importance of bottom-up and task-relevant influences by decoding fixated from nonfixated image patches based on visual features. At fast time scales, we found that search strategies can vary over the course of a single trial, with locations of higher saliency, target-similarity, edge–energy, and orientedness looked at later on in the trial. At slow time scales, we found that search strategies can be refined over several weeks of practice, and the influence of target orientation was significant only in the latter of two search tasks. Critically, these results were not observed without applying the retinal transform. Our results suggest that saccade-guidance strategies become apparent only when models take into account degraded visual representation in the periphery. PMID:25814545
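
    The "retinal transform" used in this study degrades visual information with eccentricity from the fixation point. The sketch below is only a crude stand-in for such a transform: it picks, per pixel, one level from a small pyramid of Gaussian blurs according to normalized eccentricity. The blur schedule and parameters are assumptions, not the authors' transform.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def retinal_transform(img, fix_row, fix_col, max_sigma=8.0):
    """Crude peripheral-degradation sketch: blur increases with eccentricity
    from the fixation point (fix_row, fix_col)."""
    rows, cols = np.indices(img.shape)
    ecc = np.hypot(rows - fix_row, cols - fix_col)
    ecc /= ecc.max()                              # 0 at fixation, 1 at the far corner
    levels = np.linspace(0.0, max_sigma, 6)       # a small pyramid of blur levels
    blurred = [gaussian_filter(img.astype(float), s) for s in levels]
    out = np.empty_like(blurred[0])
    idx = np.minimum((ecc * (len(levels) - 1)).astype(int), len(levels) - 1)
    for k in range(len(levels)):                  # pick the blur level per pixel
        out[idx == k] = blurred[k][idx == k]
    return out

scene = np.random.default_rng(1).random((128, 128))   # stand-in natural scene
degraded = retinal_transform(scene, fix_row=64, fix_col=64)
```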

  16. Visual Search Strategies of Soccer Players Executing a Power vs. Placement Penalty Kick

    PubMed Central

    Timmis, Matthew A.; Turner, Kieran; van Paridon, Kjell N.

    2014-01-01

    Introduction When taking a soccer penalty kick, there are two distinct kicking techniques that can be adopted: a ‘power’ penalty or a ‘placement’ penalty. The current study investigated how the type of penalty kick being taken affected the kicker’s visual search strategy and where the ball hit the goal (end ball location). Method Wearing a portable eye tracker, 12 university footballers executed 2 power and placement penalty kicks, indoors, both with and without the presence of a goalkeeper. Video cameras were used to determine initial ball velocity and end ball location. Results When taking the power penalty, the football was kicked significantly harder and more centrally in the goal compared to the placement penalty. During the power penalty, players fixated longer on the football and more often at the goalkeeper (and by implication the middle of the goal), whereas in the placement penalty, they fixated longer at the goal, specifically its edges. Findings remained consistent irrespective of goalkeeper presence. Discussion/conclusion Findings indicate differences in visual search strategy and end ball location as a function of type of penalty kick. When taking the placement penalty, players fixated and kicked the football to the edges of the goal in an attempt to direct the ball to an area that the goalkeeper would have difficulty reaching and saving. Fixating significantly longer on the football when taking the power compared to placement penalty indicates a greater importance of obtaining visual information from the football. This can be attributed to ensuring accurate foot-to-ball contact and subsequent generation of ball velocity. Aligning gaze and kicking the football centrally in the goal when executing the power compared to placement penalty may have been a strategy to reduce the risk of kicking wide of the goal altogether. PMID:25517405

  17. Micro and regular saccades across the lifespan during a visual search of "Where's Waldo" puzzles.

    PubMed

    Port, Nicholas L; Trimberger, Jane; Hitzeman, Steve; Redick, Bryan; Beckerman, Stephen

    2016-01-01

    Despite the fact that different aspects of visual-motor control mature at different rates and aging is associated with declines in both sensory and motor function, little is known about the relationship between microsaccades and either development or aging. Using a sample of 343 individuals ranging in age from 4 to 66 and a task that has been shown to elicit a high frequency of microsaccades (solving Where's Waldo puzzles), we explored microsaccade frequency and kinematics (main sequence curves) as a function of age. Taking advantage of the large size of our dataset (183,893 saccades), we also address (a) the saccade amplitude limit at which video eye trackers are able to accurately measure microsaccades and (b) the degree and consistency of saccade kinematics at varying amplitudes and directions. Using a modification of the Engbert-Mergenthaler saccade detector, we found that even the smallest amplitude movements (0.25-0.5°) demonstrate basic saccade kinematics. With regard to development and aging, both microsaccade and regular saccade frequency exhibited a very small increase across the life span. Visual search ability, as per many other aspects of visual performance, exhibited a U-shaped function over the lifespan. Finally, both large horizontal and moderate vertical directional biases were detected for all saccade sizes. PMID:26049037
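
    The study relies on a modification of the Engbert-Mergenthaler saccade detector. Below is a minimal, monocular sketch in the spirit of the Engbert & Kliegl (2003) velocity-threshold approach; the threshold multiplier, minimum duration, and the absence of binocular agreement checks are simplifications rather than the authors' modification.

```python
import numpy as np

def detect_saccades(x, y, fs, lam=6.0, min_samples=3):
    """Velocity-threshold (micro)saccade detector, simplified monocular sketch.

    x, y  : gaze position in degrees, sampled at fs Hz
    lam   : multiplier on the median-based velocity spread
    """
    vx = np.gradient(np.asarray(x, dtype=float)) * fs
    vy = np.gradient(np.asarray(y, dtype=float)) * fs
    # Median-based estimate of the velocity spread on each axis.
    sx = max(np.sqrt(np.median(vx ** 2) - np.median(vx) ** 2), 1e-9)
    sy = max(np.sqrt(np.median(vy ** 2) - np.median(vy) ** 2), 1e-9)
    above = (vx / (lam * sx)) ** 2 + (vy / (lam * sy)) ** 2 > 1.0
    # Collect runs of consecutive supra-threshold samples.
    events, start = [], None
    for i, flag in enumerate(above):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if i - start >= min_samples:
                events.append((start, i - 1))
            start = None
    if start is not None and len(above) - start >= min_samples:
        events.append((start, len(above) - 1))
    return events  # list of (onset_index, offset_index) pairs

# Hypothetical 500 Hz trace with a small 0.4 degree saccade injected at sample 200.
t = np.arange(0, 1, 1 / 500.0)
x = 0.05 * np.sin(2 * np.pi * 4 * t)
y = 0.05 * np.cos(2 * np.pi * 4 * t)
x[200:206] += np.linspace(0, 0.4, 6)
x[206:] += 0.4
print(detect_saccades(x, y, fs=500.0))
```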

  18. Spatial memory relative to the 3D environment guides body orientation in visual search.

    PubMed

    Aivar, M Pilar; Li, Chia-Ling; Kit, Dmitry; Tong, Matthew; Hayhoe, Mary

    2015-01-01

    Measurement of eye movements has revealed rapid development of memory for object locations in 3D immersive environments. To examine the nature of that representation, and to see if memory is coded with respect to the 3D coordinates of the room, head position was recorded while participants performed a visual search task in an immersive virtual reality apartment. The apartment had two rooms, connected by a corridor. Participants searched the apartment for a series of geometric target objects. Some target objects were always placed at the same location (stable objects), while others appeared at a new location in each trial (random objects). We analyzed whether body movements showed changes that reflected memory for target location. In each trial we calculated how far the participant's trajectory deviated from a straight path to the target object. Changes in head orientation from the moment the room was entered to the moment the target was reached were also computed. We found that the average deviation from the straight path was larger and more variable for random target objects (0.47 vs 0.31 meters). Also the point of maximum deviation from the straight path occurred earlier for random objects than for stable objects (at 42% vs 52% of the total trajectory). On room entry lateral head deviation from the room center was already bigger for stable objects than for random objects (18° vs. 10°). Thus for random objects participants move to the center of the room until the target is located, while for stable objects subjects are more likely to follow a straight trajectory from first entry. We conclude that memory for target location is coded with respect to room coordinates and is revealed by body orientation at first entry. The visually guided component of search seems to be relatively unimportant or occurs very quickly upon entry. Meeting abstract presented at VSS 2015. PMID:26326635
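
    The two trajectory measures reported above (deviation from the straight entry-to-target path, and the fraction of the path at which the maximum deviation occurs) are straightforward to compute from position samples. The sketch below assumes a 2-D trajectory in metres with hypothetical coordinates; it illustrates the measures, not the authors' analysis code.

```python
import numpy as np

def path_deviation(trajectory, target):
    """Perpendicular deviation of a 2-D trajectory (N x 2, metres) from the
    straight line joining the entry point to the target location."""
    traj = np.asarray(trajectory, dtype=float)
    start, goal = traj[0], np.asarray(target, dtype=float)
    line = goal - start
    line /= np.linalg.norm(line)
    rel = traj - start
    # Distance of each point from the straight path (2-D cross product magnitude).
    dev = np.abs(rel[:, 0] * line[1] - rel[:, 1] * line[0])
    peak = int(np.argmax(dev))
    along = np.r_[0.0, np.cumsum(np.linalg.norm(np.diff(traj, axis=0), axis=1))]
    return dev.mean(), dev[peak], along[peak] / along[-1]  # mean, max, fraction of path

# Hypothetical trajectory that bows toward the room centre before reaching the target.
traj = [(0, 0), (0.5, 0.3), (1.0, 0.45), (1.5, 0.3), (2.0, 0.0)]
print(path_deviation(traj, target=(2.0, 0.0)))
```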

  19. Self-Organization of Spatio-Temporal Hierarchy via Learning of Dynamic Visual Image Patterns on Action Sequences

    PubMed Central

    Jung, Minju; Hwang, Jungsik; Tani, Jun

    2015-01-01

    It is well known that the visual cortex efficiently processes high-dimensional spatial information by using a hierarchical structure. Recently, computational models that were inspired by the spatial hierarchy of the visual cortex have shown remarkable performance in image recognition. Up to now, however, most biological and computational modeling studies have mainly focused on the spatial domain and do not discuss temporal domain processing of the visual cortex. Several studies on the visual cortex and other brain areas associated with motor control support that the brain also uses its hierarchical structure as a processing mechanism for temporal information. Based on the success of previous computational models using spatial hierarchy and temporal hierarchy observed in the brain, the current report introduces a novel neural network model for the recognition of dynamic visual image patterns based solely on the learning of exemplars. This model is characterized by the application of both spatial and temporal constraints on local neural activities, resulting in the self-organization of a spatio-temporal hierarchy necessary for the recognition of complex dynamic visual image patterns. The evaluation with the Weizmann dataset in recognition of a set of prototypical human movement patterns showed that the proposed model is significantly robust in recognizing dynamically occluded visual patterns compared to other baseline models. Furthermore, an evaluation test for the recognition of concatenated sequences of those prototypical movement patterns indicated that the model is endowed with a remarkable capability for the contextual recognition of long-range dynamic visual image patterns. PMID:26147887

  20. Noun representation in AAC grid displays: visual attention patterns of people with traumatic brain injury.

    PubMed

    Brown, Jessica; Thiessen, Amber; Beukelman, David; Hux, Karen

    2015-03-01

    Clinicians supporting the communication of people with traumatic brain injury (TBI) must determine an efficient message representation method for augmentative and alternative communication (AAC) systems. Due to the frequency with which visual deficits occur following brain injury, some adults with TBI may have difficulty locating items on AAC displays. The purpose of this study was to identify aspects of graphic supports that increase efficiency of target-specific visual searches. Nine adults with severe TBI and nine individuals without neurological conditions located targets on static grids displaying one of three message representation methods. Data collected through eye tracking technology revealed significantly more efficient target location for icon-only grids than for text-only or icon-plus-text grids for both participant groups; no significant differences emerged between participant groups. PMID:25685881

  1. Case study of visualizing global user download patterns using Google Earth and NASA World Wind

    SciTech Connect

    Zong, Ziliang; Job, Joshua; Zhang, Xuesong; Nijim, Mais; Qin, Xiao

    2012-10-09

    Geo-visualization is significantly changing the way we view spatial data and discover information. On the one hand, large amounts of spatial data are generated every day. On the other hand, these data are not well utilized due to the lack of free and easily used data-visualization tools. This becomes even worse when most of the spatial data remains in the form of plain text such as log files. This paper describes a way of visualizing massive plain-text spatial data at no cost by utilizing Google Earth and NASA World Wind. We illustrate our methods by visualizing over 170,000 global download requests for satellite images maintained by the Earth Resources Observation and Science (EROS) Center of the U.S. Geological Survey (USGS). Our visualization results identify the most popular satellite images around the world and discover the global user download patterns. The benefits of this research are: 1. assisting in improving the satellite image downloading services provided by USGS, and 2. providing a proxy for analyzing the hot spot areas of research. Most importantly, our methods demonstrate an easy way to geovisualize massive textual spatial data, which is highly applicable to mining spatially referenced data and information on a wide variety of research domains (e.g., hydrology, agriculture, atmospheric science, natural hazards, and global climate change).
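
    One low-cost way to realize the kind of geo-visualization described here is to convert each plain-text log record into a KML placemark that Google Earth can load directly. The sketch below assumes a hypothetical log layout (image ID, latitude, longitude, download count); the paper's actual log format and rendering choices are not specified in the abstract.

```python
# Minimal sketch: turn plain-text download records into a KML file that
# Google Earth (or any KML-capable viewer) can display.
# The record layout below (image_id, latitude, longitude, count) is hypothetical.

records = [
    ("LE70160392008318EDC00", 38.9, -77.0, 412),
    ("LT50290302005233PAC01", 35.2, -101.8, 87),
]

placemarks = []
for image_id, lat, lon, count in records:
    placemarks.append(
        f"  <Placemark>\n"
        f"    <name>{image_id}</name>\n"
        f"    <description>{count} downloads</description>\n"
        f"    <Point><coordinates>{lon},{lat},0</coordinates></Point>\n"
        f"  </Placemark>"
    )

kml = (
    '<?xml version="1.0" encoding="UTF-8"?>\n'
    '<kml xmlns="http://www.opengis.net/kml/2.2">\n<Document>\n'
    + "\n".join(placemarks)
    + "\n</Document>\n</kml>\n"
)

with open("downloads.kml", "w") as fh:
    fh.write(kml)  # KML expects longitude before latitude in <coordinates>
```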

  2. Differential Expression Patterns of occ1-Related Genes in Adult Monkey Visual Cortex

    PubMed Central

    Takahata, Toru; Komatsu, Yusuke; Watakabe, Akiya; Hashikawa, Tsutomu; Tochitani, Shiro

    2009-01-01

    We have previously revealed that occ1 is preferentially expressed in the primary visual area (V1) of the monkey neocortex. In our attempt to identify more area-selective genes in the macaque neocortex, we found that testican-1, an occ1-related gene, and its family members also exhibit characteristic expression patterns along the visual pathway. The expression levels of testican-1 and testican-2 mRNAs as well as that of occ1 mRNA start off high in V1, progressively decrease along the ventral visual pathway, and end up low in the temporal areas. Complementary to them, the neuronal expression of SPARC mRNA is abundant in the association areas and scarce in V1. Whereas occ1, testican-1, and testican-2 mRNAs are preferentially distributed in thalamorecipient layers including “blobs,” SPARC mRNA expression avoids these layers. Neither SC1 nor testican-3 mRNA expression is selective to particular areas, but SC1 mRNA is abundantly observed in blobs. The expressions of occ1, testican-1, testican-2, and SC1 mRNA were downregulated after monocular tetrodotoxin injection. These results resonate with previous works on chemical and functional gradients along the primate occipitotemporal visual pathway and raise the possibility that these gradients and functional architecture may be related to the visual activity–dependent expression of these extracellular matrix glycoproteins. PMID:19073625

  3. Are summary statistics enough? Evidence for the importance of shape in guiding visual search

    PubMed Central

    Alexander, Robert G.; Schmidt, Joseph; Zelinsky, Gregory J.

    2015-01-01

    Peripheral vision outside the focus of attention may rely on summary statistics. We used a gaze-contingent paradigm to directly test this assumption by asking whether search performance differed between targets and statistically-matched visualizations of the same targets. Four-object search displays included one statistically-matched object that was replaced by an unaltered version of the object during the first eye movement. Targets were designated by previews, which were never altered. Two types of statistically-matched objects were tested: One that maintained global shape and one that did not. Differences in guidance were found between targets and statistically-matched objects when shape was not preserved, suggesting that they were not informationally equivalent. Responses were also slower after target fixation when shape was not preserved, suggesting an extrafoveal processing of the target that again used shape information. We conclude that summary statistics must include some global shape information to approximate the peripheral information used during search. PMID:26180505

  4. SVM-based visual-search model observers for PET tumor detection

    NASA Astrophysics Data System (ADS)

    Gifford, Howard C.; Sen, Anando; Azencott, Robert

    2015-03-01

    Many search-capable model observers follow task paradigms that specify clinically unrealistic prior knowledge about the anatomical backgrounds in study images. Visual-search (VS) observers, which implement distinct, feature-based candidate search and analysis stages, may provide a means of avoiding such paradigms. However, VS observers that conduct single-feature analysis have not been reliable in the absence of any background information. We investigated whether a VS observer based on multifeature analysis can overcome this background dependence. The testbed was a localization ROC (LROC) study with simulated whole-body PET images. Four target-dependent morphological features were defined in terms of 2D cross-correlations involving a known tumor profile and the test image. The feature values at the candidate locations in a set of training images were fed to a support-vector machine (SVM) to compute a linear discriminant that classified locations as tumor-present or tumor-absent. The LROC performance of this SVM-based VS observer was compared against the performances of human observers and a pair of existing model observers.
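
    A multifeature VS observer of the kind described can be approximated by extracting cross-correlation statistics at candidate locations and training a linear SVM on them to obtain a linear discriminant. The sketch below uses illustrative features and synthetic data; the paper's four morphological features, training images, and LROC evaluation are not reproduced here.

```python
import numpy as np
from scipy.signal import correlate2d
from sklearn.svm import LinearSVC

def candidate_features(image, profile, row, col, half=8):
    """Illustrative features at a candidate location: local cross-correlation
    statistics between an image patch and a known tumor profile."""
    patch = image[row - half:row + half, col - half:col + half]
    xc = correlate2d(patch, profile, mode="same")
    return np.array([xc.max(), xc.mean(), xc.std(), patch.mean()])

rng = np.random.default_rng(0)
profile = np.exp(-((np.indices((7, 7)) - 3) ** 2).sum(0) / 4.0)  # blob-like tumor profile

# Hypothetical training data: features at tumor-present and tumor-absent locations.
X, y = [], []
for label in (1, 0):
    for _ in range(50):
        img = rng.normal(size=(64, 64))
        if label:
            img[28:35, 28:35] += 2.0 * profile   # embed a faint tumor
        X.append(candidate_features(img, profile, 32, 32))
        y.append(label)

clf = LinearSVC(max_iter=10000).fit(np.array(X), np.array(y))
print(clf.coef_)   # the learned linear discriminant over the features
```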

  5. Visual recognition based on temporal cortex cells: viewer-centred processing of pattern configuration.

    PubMed

    Perrett, D I; Oram, M W

    1998-01-01

    A model of recognition is described based on cell properties in the ventral cortical stream of visual processing in the primate brain. At a critical intermediate stage in this system, 'Elaborate' feature sensitive cells respond selectively to visual features in a way that depends on size (+/- 1 octave), orientation (+/- 45 degrees) but does not depend on position within central vision (+/- 5 degrees). These features are simple conjunctions of 2-D elements (e.g. a horizontal dark area above a dark smoothly convex area). They can arise either as elements of an object's surface pattern or as a 3-D component bounded by an object's external contour. By requiring a combination of several such features without regard to their position within the central region of the visual image, 'Pattern' sensitive cells at higher levels can exhibit selectivity for complex configurations that typify objects seen under particular viewing conditions. Given that input features to such Pattern sensitive cells are specified in approximate size and orientation, initial cellular 'representations' of the visual appearance of object type (or object example) are also selective for orientation and size. At this level, sensitivity to object view (+/- 60 degrees) arises because visual features disappear as objects are rotated in perspective. Processing is thus viewer-centred and the neurones only respond to objects seen from particular viewing conditions or 'object instances'. Combined sensitivity to multiple features (conjunctions of elements) independent of their position, establishes selectivity for the configurations of object parts (from one view) because rearranged configurations of the same parts yield images lacking some of the 2-D visual features present in the normal configuration. Different neural populations appear to be selectively tuned to particular components of the same biological object (e.g. face, eyes, hands, legs), perhaps because the independent articulation of these components gives rise to correlated activity in different sets of input visual features. Generalisation over viewing conditions for a given object can be established by hierarchically pooling outputs of view-condition specific cells with pooling operations dependent on the continuity in experience across viewing conditions. Different object parts are seen together and different views are seen in succession when the observer walks around the object. The view specific coding that characterises the selectivity of cells in the temporal lobe can be seen as a natural consequence of selective experience of objects from particular vantage points. View specific coding for the face and body also has great utility in understanding complex social signals, a property that may not be feasible with object-centred processing. PMID:9755511

  6. Flexible Feature-Based Inhibition in Visual Search Mediates Magnified Impairments of Selection: Evidence from Carry-Over Effects under Dynamic Preview-Search Conditions

    ERIC Educational Resources Information Center

    Andrews, Lucy S.; Watson, Derrick G.; Humphreys, Glyn W.; Braithwaite, Jason J.

    2011-01-01

    Evidence for inhibitory processes in visual search comes from studies using preview conditions, where responses to new targets are delayed if they carry a featural attribute belonging to the old distractor items that are currently being ignored--the negative carry-over effect (Braithwaite, Humphreys, & Hodsoll, 2003). We examined whether

  7. Flexible Feature-Based Inhibition in Visual Search Mediates Magnified Impairments of Selection: Evidence from Carry-Over Effects under Dynamic Preview-Search Conditions

    ERIC Educational Resources Information Center

    Andrews, Lucy S.; Watson, Derrick G.; Humphreys, Glyn W.; Braithwaite, Jason J.

    2011-01-01

    Evidence for inhibitory processes in visual search comes from studies using preview conditions, where responses to new targets are delayed if they carry a featural attribute belonging to the old distractor items that are currently being ignored--the negative carry-over effect (Braithwaite, Humphreys, & Hodsoll, 2003). We examined whether…

  8. Age and distraction are determinants of performance on a novel visual search task in aged Beagle dogs.

    PubMed

    Snigdha, Shikha; Christie, Lori-Ann; De Rivera, Christina; Araujo, Joseph A; Milgram, Norton W; Cotman, Carl W

    2012-02-01

    Aging has been shown to disrupt performance on tasks that require intact visual search and discrimination abilities in human studies. The goal of the present study was to determine if canines show age-related decline in their ability to perform a novel simultaneous visual search task. Three groups of canines were included: a young group (N = 10; 3 to 4.5 years), an old group (N = 10; 8 to 9.5 years), and a senior group (N = 8; 11 to 15.3 years). Subjects were first tested for their ability to learn a simple two-choice discrimination task, followed by the visual search task. Attentional demands in the task were manipulated by varying the number of distracter items; dogs received an equal number of trials with either zero, one, two, or three distracters. Performance on the two-choice discrimination task varied with age, with senior canines making significantly more errors than the young. Performance accuracy on the visual search task also varied with age; senior animals were significantly impaired compared to both the young and old, and old canines were intermediate in performance between young and senior. Accuracy decreased significantly with added distracters in all age groups. These results suggest that aging impairs the ability of canines to discriminate between task-relevant and -irrelevant stimuli. This is likely to be derived from impairments in cognitive domains such as visual memory and learning and selective attention. PMID:21336566

  9. iRaster: a novel information visualization tool to explore spatiotemporal patterns in multiple spike trains.

    PubMed

    Somerville, J; Stuart, L; Sernagor, E; Borisyuk, R

    2010-12-15

    Over the last few years, simultaneous recordings of multiple spike trains have become widely used by neuroscientists. Therefore, it is important to develop new tools for analysing multiple spike trains in order to gain new insight into the function of neural systems. This paper describes how techniques from the field of visual analytics can be used to reveal specific patterns of neural activity. An interactive raster plot called iRaster has been developed. This software incorporates a selection of statistical procedures for visualization and flexible manipulations with multiple spike trains. For example, there are several procedures for the re-ordering of spike trains which can be used to unmask activity propagation, spiking synchronization, and many other important features of multiple spike train activity. Additionally, iRaster includes a rate representation of neural activity, a combined representation of rate and spikes, spike train removal and time interval removal. Furthermore, it provides multiple coordinated views, time and spike train zooming windows, a fisheye lens distortion, and dissemination facilities. iRaster is a user friendly, interactive, flexible tool which supports a broad range of visual representations. This tool has been successfully used to analyse both synthetic and experimentally recorded datasets. In this paper, the main features of iRaster are described and its performance and effectiveness are demonstrated using various types of data including experimental multi-electrode array recordings from the ganglion cell layer in mouse retina. iRaster is part of an ongoing research project called VISA (Visualization of Inter-Spike Associations) at the Visualization Lab in the University of Plymouth. The overall aim of the VISA project is to provide neuroscientists with the ability to freely explore and analyse their data. The software is freely available from the Visualization Lab website (see www.plymouth.ac.uk/infovis). PMID:20875457
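
    A minimal version of iRaster's core display, a raster plot with re-orderable spike trains, can be sketched with matplotlib. The ordering rule below (sort by first spike time) and the synthetic spike trains are illustrative assumptions; iRaster itself offers several re-ordering procedures plus rate views and interactive manipulation.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(2)

# Hypothetical recording: 20 spike trains, each a sorted array of spike times (s).
trains = [np.sort(rng.uniform(0, 10, rng.integers(20, 60))) for _ in range(20)]

# Re-order trains by their first spike time, one simple ordering that can help
# unmask activity propagation across channels.
order = np.argsort([t[0] for t in trains])

fig, ax = plt.subplots(figsize=(8, 4))
ax.eventplot([trains[i] for i in order], colors="k", linewidths=0.8)
ax.set_xlabel("Time (s)")
ax.set_ylabel("Spike train (reordered)")
plt.show()
```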

  10. Neural structures involved in visual search guidance by reward-enhanced contextual cueing of the target location.

    PubMed

    Pollmann, Stefan; Eštočinová, Jana; Sommer, Susanne; Chelazzi, Leonardo; Zinke, Wolf

    2016-01-01

    Spatial contextual cueing reflects an incidental form of learning that occurs when spatial distractor configurations are repeated in visual search displays. Recently, it was reported that the efficiency of contextual cueing can be modulated by reward. We replicated this behavioral finding and investigated its neural basis with fMRI. Reward value was associated with repeated displays in a learning session. The effect of reward value on context-guided visual search was assessed in a subsequent fMRI session without reward. Structures known to support explicit reward valuation, such as ventral frontomedial cortex and posterior cingulate cortex, were modulated by incidental reward learning. Contextual cueing, leading to more efficient search, went along with decreased activation in the visual search network. Retrosplenial cortex played a special role in that it showed both a main effect of reward and a reward×configuration interaction and may thereby be a central structure for the reward modulation of context-guided visual search. PMID:26427645

  11. Colour and pattern change against visually heterogeneous backgrounds in the tree frog Hyla japonica.

    PubMed

    Kang, Changku; Kim, Ye Eun; Jang, Yikweon

    2016-01-01

    Colour change in animals can be adaptive phenotypic plasticity in heterogeneous environments. Camouflage through background colour matching has been considered a primary force that drives the evolution of colour changing ability. However, the mechanism by which animals change their colour and patterns under visually heterogeneous backgrounds (i.e. consisting of more than one colour) has only been identified in limited taxa. Here, we investigated the colour change process of the Japanese tree frog (Hyla japonica) against patterned backgrounds and elucidated how the expression of dorsal patterns changes against various achromatic/chromatic backgrounds with/without patterns. Our main findings are that i) frogs primarily responded to the achromatic differences in background, ii) their contrasting dorsal patterns were conditionally expressed dependent on the brightness of backgrounds, iii) against a mixed coloured background, frogs adopted intermediate forms between two colours. Using predator (avian and snake) vision models, we determined that colour differences against different backgrounds yielded perceptible changes in dorsal colours. We also found substantial individual variation in colour changing ability and the levels of dorsal pattern expression between individuals. We discuss the possibility of correlational selection on colour changing ability and resting behaviour that maintains the high variation in colour changing ability within a population. PMID:26932675

  12. Colour and pattern change against visually heterogeneous backgrounds in the tree frog Hyla japonica

    PubMed Central

    Kang, Changku; Kim, Ye Eun; Jang, Yikweon

    2016-01-01

    Colour change in animals can be adaptive phenotypic plasticity in heterogeneous environments. Camouflage through background colour matching has been considered a primary force that drives the evolution of colour changing ability. However, the mechanism by which animals change their colour and patterns under visually heterogeneous backgrounds (i.e. consisting of more than one colour) has only been identified in limited taxa. Here, we investigated the colour change process of the Japanese tree frog (Hyla japonica) against patterned backgrounds and elucidated how the expression of dorsal patterns changes against various achromatic/chromatic backgrounds with/without patterns. Our main findings are that i) frogs primarily responded to the achromatic differences in background, ii) their contrasting dorsal patterns were conditionally expressed dependent on the brightness of backgrounds, iii) against a mixed coloured background, frogs adopted intermediate forms between two colours. Using predator (avian and snake) vision models, we determined that colour differences against different backgrounds yielded perceptible changes in dorsal colours. We also found substantial individual variation in colour changing ability and the levels of dorsal pattern expression between individuals. We discuss the possibility of correlational selection on colour changing ability and resting behaviour that maintains the high variation in colour changing ability within a population. PMID:26932675

  13. NABIC: A New Access Portal to Search, Visualize, and Share Agricultural Genomics Data.

    PubMed

    Seol, Young-Joo; Lee, Tae-Ho; Park, Dong-Suk; Kim, Chang-Kug

    2016-01-01

    The National Agricultural Biotechnology Information Center developed an access portal to search, visualize, and share agricultural genomics data with a focus on South Korean information and resources. The portal features an agricultural biotechnology database containing a wide range of omics data from public and proprietary sources. We collected 28.4 TB of data from 162 agricultural organisms, with 10 types of omics data comprising next-generation sequencing sequence read archive, genome, gene, nucleotide, DNA chip, expressed sequence tag, interactome, protein structure, molecular marker, and single-nucleotide polymorphism datasets. Our genomic resources contain information on five animals, seven plants, and one fungus, which is accessed through a genome browser. We also developed a data submission and analysis system as a web service, with easy-to-use functions and cutting-edge algorithms, including those for handling next-generation sequencing data. PMID:26848255

  14. HSI-Find: A Visualization and Search Service for Terascale Spectral Image Catalogs

    NASA Astrophysics Data System (ADS)

    Thompson, D. R.; Smith, A. T.; Castano, R.; Palmer, E. E.; Xing, Z.

    2013-12-01

    Imaging spectrometers are remote sensing instruments commonly deployed on aircraft and spacecraft. They provide surface reflectance in hundreds of wavelength channels, creating data cubes known as hyperspectral images. These data provide rich compositional information, making them powerful tools for planetary and terrestrial science. These data products can be challenging to interpret because they contain datapoints numbering in the thousands (Dawn VIR) or millions (AVIRIS-C). Cross-image studies or exploratory searches involving more than one scene are rare; data volumes are often tens of GB per image and typical consumer-grade computers cannot store more than a handful of images in RAM. Visualizing the information in a single scene is challenging since the human eye can only distinguish three color channels out of the hundreds available. To date, analysis has been performed mostly on single images using purpose-built software tools that require extensive training and commercial licenses. The HSIFind software suite provides a scalable distributed solution to the problem of visualizing and searching large catalogs of spectral image data. It consists of a RESTful web service that communicates with a JavaScript-based browser client. The software provides basic visualization through an intuitive visual interface, allowing users with minimal training to explore the images or view selected spectra. Users can accumulate a library of spectra from one or more images and use these to search for similar materials. The result appears as an intensity map showing the extent of a spectral feature in a scene. Continuum removal can isolate diagnostic absorption features. The server-side mapping uses an efficient matched-filter algorithm that can process a megapixel image cube in just a few seconds. This enables real-time interaction, leading to a new way of interacting with the data: the user can launch a search with a single mouse click and see the resulting map in seconds. This allows the user to quickly explore each image, ascertain the main units of surface material, localize outliers, and develop an understanding of the various materials' spectral characteristics. The HSIFind software suite is currently in beta testing at the Planetary Science Institute and a process is underway to release it under an open source license to the broader community. We believe it will benefit instrument operations during remote planetary exploration, where tactical mission decisions demand rapid analysis of each new dataset. The approach also holds potential for public spectral catalogs where its shallow learning curve and portability can make these datasets accessible to a much wider range of researchers. Acknowledgements: The HSIFind project acknowledges the NASA Advanced MultiMission Operating System (AMMOS) and the Multimission Ground Support Services (MGSS). E. Palmer is with the Planetary Science Institute, Tucson, AZ. Other authors are with the Jet Propulsion Laboratory, Pasadena, CA. This work was carried out at the Jet Propulsion Laboratory, California Institute of Technology under a contract with the National Aeronautics and Space Administration. Copyright 2013, California Institute of Technology.
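
    The server-side mapping step can be illustrated with the classical matched filter: estimate the background mean and covariance from the cube, then score every pixel by its whitened correlation with the selected spectrum. The sketch below uses a synthetic cube and a hypothetical regularization constant; HSIFind's actual implementation details are not given in the abstract.

```python
import numpy as np

def matched_filter(cube, target, eps=1e-3):
    """Classical matched filter over a hyperspectral cube.

    cube   : (rows, cols, bands) reflectance array
    target : (bands,) library or user-selected spectrum
    Returns a (rows, cols) detection map; higher = more target-like.
    """
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands).astype(float)
    mu = X.mean(axis=0)
    Xc = X - mu
    cov = Xc.T @ Xc / (X.shape[0] - 1) + eps * np.eye(bands)  # regularised covariance
    w = np.linalg.solve(cov, target - mu)
    w /= (target - mu) @ w                                    # unit response to the target
    return (Xc @ w).reshape(rows, cols)

# Hypothetical 64 x 64 scene with 50 bands and a weak embedded target signature.
rng = np.random.default_rng(3)
cube = rng.normal(size=(64, 64, 50))
target = np.linspace(0.2, 0.8, 50)
cube[10:14, 10:14, :] += 0.5 * target
score_map = matched_filter(cube, target + rng.normal(scale=0.01, size=50))
```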

  15. NABIC: A New Access Portal to Search, Visualize, and Share Agricultural Genomics Data

    PubMed Central

    Seol, Young-Joo; Lee, Tae-Ho; Park, Dong-Suk; Kim, Chang-Kug

    2016-01-01

    The National Agricultural Biotechnology Information Center developed an access portal to search, visualize, and share agricultural genomics data with a focus on South Korean information and resources. The portal features an agricultural biotechnology database containing a wide range of omics data from public and proprietary sources. We collected 28.4 TB of data from 162 agricultural organisms, with 10 types of omics data comprising next-generation sequencing sequence read archive, genome, gene, nucleotide, DNA chip, expressed sequence tag, interactome, protein structure, molecular marker, and single-nucleotide polymorphism datasets. Our genomic resources contain information on five animals, seven plants, and one fungus, which is accessed through a genome browser. We also developed a data submission and analysis system as a web service, with easy-to-use functions and cutting-edge algorithms, including those for handling next-generation sequencing data. PMID:26848255

  16. The Autism-Spectrum Quotient and Visual Search: Shallow and Deep Autistic Endophenotypes.

    PubMed

    Gregory, B L; Plaisted-Grant, K C

    2016-05-01

    A high Autism-Spectrum Quotient (AQ) score (Baron-Cohen et al. in J Autism Dev Disord 31(1):5-17, 2001) is increasingly used as a proxy in empirical studies of perceptual mechanisms in autism. Several investigations have assessed perception in non-autistic people measured for AQ, claiming the same relationship exists between performance on perceptual tasks in high-AQ individuals as observed in autism. We question whether the similarity in performance by high-AQ individuals and autistics reflects the same underlying perceptual cause in the context of two visual search tasks administered to a large sample of typical individuals assessed for AQ. Our results indicate otherwise and that deploying the AQ as a proxy for autism introduces unsubstantiated assumptions about high-AQ individuals, the endophenotypes they express, and their relationship to Autistic Spectrum Conditions (ASC) individuals. PMID:24077740

  17. A reference web architecture and patterns for real-time visual analytics on large streaming data

    NASA Astrophysics Data System (ADS)

    Kandogan, Eser; Soroker, Danny; Rohall, Steven; Bak, Peter; van Ham, Frank; Lu, Jie; Ship, Harold-Jeffrey; Wang, Chun-Fu; Lai, Jennifer

    2013-12-01

    Monitoring and analysis of streaming data, such as social media, sensors, and news feeds, has become increasingly important for business and government. The volume and velocity of incoming data are key challenges. To effectively support monitoring and analysis, statistical and visual analytics techniques need to be seamlessly integrated; analytic techniques for a variety of data types (e.g., text, numerical) and scope (e.g., incremental, rolling-window, global) must be properly accommodated; interaction, collaboration, and coordination among several visualizations must be supported in an efficient manner; and the system should support the use of different analytics techniques in a pluggable manner. Especially in web-based environments, these requirements pose restrictions on the basic visual analytics architecture for streaming data. In this paper we report on our experience of building a reference web architecture for real-time visual analytics of streaming data, identify and discuss architectural patterns that address these challenges, and report on applying the reference architecture for real-time Twitter monitoring and analysis.
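
    One of the requirements above, accommodating analytics of different scope (incremental, rolling-window, global) in a pluggable manner, can be sketched as a small plug-in interface through which each incoming item is pushed and per-analytic results are emitted for the view layer. The class and function names below are illustrative assumptions, not the paper's reference architecture.

```python
from collections import deque

class IncrementalMean:
    """Incremental-scope analytic: updates a running mean one item at a time."""
    def __init__(self):
        self.n, self.mean = 0, 0.0
    def update(self, value):
        self.n += 1
        self.mean += (value - self.mean) / self.n
        return self.mean

class RollingWindowMax:
    """Rolling-window analytic: maximum over the last `size` items."""
    def __init__(self, size=100):
        self.window = deque(maxlen=size)
    def update(self, value):
        self.window.append(value)
        return max(self.window)

def run_pipeline(stream, analytics):
    """Push each incoming item through every registered analytic and emit a
    dictionary of analytic name -> current result, ready for a view layer."""
    for value in stream:
        yield {name: a.update(value) for name, a in analytics.items()}

analytics = {"mean": IncrementalMean(), "max_100": RollingWindowMax(100)}
for snapshot in run_pipeline([3.0, 7.5, 1.2, 9.9], analytics):
    print(snapshot)
```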

  18. Mouse V1 population correlates of visual detection rely on heterogeneity within neuronal response patterns

    PubMed Central

    Montijn, Jorrit S; Goltstein, Pieter M; Pennartz, Cyriel MA

    2015-01-01

    Previous studies have demonstrated the importance of the primary sensory cortex for the detection, discrimination, and awareness of visual stimuli, but it is unknown how neuronal populations in this area process detected and undetected stimuli differently. Critical differences may reside in the mean strength of responses to visual stimuli, as reflected in bulk signals detectable in functional magnetic resonance imaging, electro-encephalogram, or magnetoencephalography studies, or may be more subtly composed of differentiated activity of individual sensory neurons. Quantifying single-cell Ca2+ responses to visual stimuli recorded with in vivo two-photon imaging, we found that visual detection correlates more strongly with population response heterogeneity rather than overall response strength. Moreover, neuronal populations showed consistencies in activation patterns across temporally spaced trials in association with hit responses, but not during nondetections. Contrary to models relying on temporally stable networks or bulk signaling, these results suggest that detection depends on transient differentiation in neuronal activity within cortical populations. DOI: http://dx.doi.org/10.7554/eLife.10163.001 PMID:26646184

  19. Simultaneous tDCS-fMRI Identifies Resting State Networks Correlated with Visual Search Enhancement

    PubMed Central

    Callan, Daniel E.; Falcone, Brian; Wada, Atsushi; Parasuraman, Raja

    2016-01-01

    This study uses simultaneous transcranial direct current stimulation (tDCS) and functional MRI (fMRI) to investigate tDCS modulation of resting state activity and connectivity that underlies enhancement in behavioral performance. The experiment consisted of three sessions within the fMRI scanner in which participants conducted a visual search task: Session 1: Pre-training (no performance feedback), Session 2: Training (performance feedback given), Session 3: Post-training (no performance feedback). Resting state activity was recorded during the last 5 min of each session. During the 2nd session one group of participants underwent 1 mA tDCS stimulation and another underwent sham stimulation over the right posterior parietal cortex. Resting state spontaneous activity, as measured by fractional amplitude of low frequency fluctuations (fALFF), for session 2 showed significant differences between the tDCS stim and sham groups in the precuneus. Resting state functional connectivity from the precuneus to the substantia nigra, a subcortical dopaminergic region, was found to correlate with future improvement in visual search task performance for the stim over the sham group during active stimulation in session 2. The after-effect of stimulation on resting state functional connectivity was measured following a post-training experimental session (session 3). The left cerebellum Lobule VIIa Crus I showed performance-related enhancement in resting state functional connectivity for the tDCS stim over the sham group. The ability to relate the strength of an individual's resting state functional connectivity during tDCS to future enhancement in behavioral performance has wide-ranging implications for neuroergonomic as well as therapeutic and rehabilitative applications. PMID:27014014
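
    For reference, fALFF is conventionally computed as the spectral amplitude of a time series within the low-frequency band divided by the amplitude over the whole spectrum; the sketch below assumes the common 0.01-0.08 Hz band, since the abstract does not state band limits.

    ```python
    import numpy as np

    def falff(timeseries, tr, low=0.01, high=0.08):
        """Fractional amplitude of low-frequency fluctuations for one voxel/ROI
        time series sampled every `tr` seconds: band-limited amplitude divided
        by total amplitude (DC bin excluded). Band limits are conventional
        defaults, not values taken from the study."""
        ts = np.asarray(timeseries, dtype=float)
        amp = np.abs(np.fft.rfft(ts - ts.mean()))
        freqs = np.fft.rfftfreq(ts.size, d=tr)
        band = (freqs >= low) & (freqs <= high)
        return amp[band].sum() / amp[1:].sum()
    ```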

  20. Simultaneous tDCS-fMRI Identifies Resting State Networks Correlated with Visual Search Enhancement.

    PubMed

    Callan, Daniel E; Falcone, Brian; Wada, Atsushi; Parasuraman, Raja

    2016-01-01

    This study uses simultaneous transcranial direct current stimulation (tDCS) and functional MRI (fMRI) to investigate tDCS modulation of resting state activity and connectivity that underlies enhancement in behavioral performance. The experiment consisted of three sessions within the fMRI scanner in which participants conducted a visual search task: Session 1: Pre-training (no performance feedback), Session 2: Training (performance feedback given), Session 3: Post-training (no performance feedback). Resting state activity was recorded during the last 5 min of each session. During the 2nd session one group of participants underwent 1 mA tDCS stimulation and another underwent sham stimulation over the right posterior parietal cortex. Resting state spontaneous activity, as measured by fractional amplitude of low frequency fluctuations (fALFF), for session 2 showed significant differences between the tDCS stim and sham groups in the precuneus. Resting state functional connectivity from the precuneus to the substantia nigra, a subcortical dopaminergic region, was found to correlate with future improvement in visual search task performance for the stim over the sham group during active stimulation in session 2. The after-effect of stimulation on resting state functional connectivity was measured following a post-training experimental session (session 3). The left cerebellum Lobule VIIa Crus I showed performance-related enhancement in resting state functional connectivity for the tDCS stim over the sham group. The ability to relate the strength of an individual's resting state functional connectivity during tDCS to future enhancement in behavioral performance has wide-ranging implications for neuroergonomic as well as therapeutic and rehabilitative applications. PMID:27014014

  1. Repetitive transcranial magnetic stimulation over the left parietal cortex facilitates visual search for a letter among its mirror images.

    PubMed

    Mangano, Giuseppa Renata; Oliveri, Massimiliano; Turriziani, Patrizia; Smirni, Daniela; Zhaoping, Li; Cipolotti, Lisa

    2015-04-01

    Interference from task-irrelevant information is seen in visual search paradigms using letters. Thus, it is harder to find the letter 'N' among its mirror reversals 'И' than vice versa. This observation, termed the reversed letter effect, involves both a linguistic association and interference from task-irrelevant information: the shape of 'N' or 'И' is irrelevant, since the search merely requires distinguishing the tilts of oblique bars. We adapted the repetitive transcranial magnetic stimulation (rTMS) methods that we previously used, and conducted three rTMS experiments in healthy subjects. The first experiment investigated the effects of rTMS over the left and right posterior parietal cortex (PPC) on search performance. The second experiment focused on the role of the left PPC. The third experiment explored whether another left posterior region, known to be involved in word reading (ventral occipito-temporal cortex, vOTC), plays a role. We found that rTMS over the right PPC and the left vOTC had no effect on the speed or accuracy of the visual search, regardless of whether the target was 'N' or its mirror reversal. In contrast, rTMS over the left PPC speeded up the search for the target 'N' among its mirror images. We suggest that the left PPC is involved in letter recognition, and that rTMS over the left PPC facilitated our visual search task by reducing the interference triggered by task-irrelevant letter recognition. PMID:25744867

  2. Search and retrieval of plasma wave forms: Structural pattern recognition approach

    NASA Astrophysics Data System (ADS)

    Dormido-Canto, S.; Farias, G.; Vega, J.; Dormido, R.; Sánchez, J.; Duro, N.; Santos, M.; Martin, J. A.; Pajares, G.

    2006-10-01

    Databases for fusion experiments are designed to store several million wave forms. Temporal evolution signals show the same patterns under the same plasma conditions and, therefore, pattern recognition techniques can allow identification of similar plasma behaviors. Further developments in this area must be focused on four aspects: large databases, feature extraction, similarity function, and search/retrieval efficiency. This article describes an approach for pattern searching within wave forms. The technique is performed in three stages. Firstly, the signals are filtered. Secondly, signals are encoded according to a discrete set of values (code alphabet). Finally, pattern recognition is carried out via string comparisons. The definition of code alphabets enables the description of wave forms as strings, instead of representing the signals in terms of multidimensional data vectors. An alphabet of just five letters can be enough to describe any signal. In this way, signals can be stored as a sequence of characters in a relational database, thereby allowing the use of powerful structured query languages to search for patterns and also ensuring quick data access.
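
    A minimal sketch of the encode-then-search idea described above (the five-letter alphabet follows the abstract; the amplitude binning and the function names are assumptions):

    ```python
    import numpy as np

    def encode(signal, alphabet="ABCDE"):
        """Quantize a filtered waveform into len(alphabet) amplitude bins and
        return it as a string, one letter per sample."""
        edges = np.linspace(signal.min(), signal.max(), len(alphabet) + 1)[1:-1]
        return "".join(alphabet[i] for i in np.digitize(signal, edges))

    def find_pattern(library, query):
        """Exact substring search over encoded signals; a relational database
        would express the same search in SQL (e.g., LIKE '%<pattern>%')."""
        return [name for name, code in library.items() if query in code]

    # Example (bin edges should be shared across signals in a real system):
    # library = {"shot_1201": encode(sig1), "shot_1202": encode(sig2)}
    # hits = find_pattern(library, encode(reference_segment))
    ```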

  3. Pattern recognition-assisted infrared library searching of automotive clear coats.

    PubMed

    Fasasi, Ayuba; Mirjankar, Nikhil; Stoian, Razvan-Ionut; White, Collin; Allen, Matthew; Sandercock, Mark P; Lavine, Barry K

    2015-01-01

    Pattern recognition techniques have been developed to search the infrared (IR) spectral libraries of the paint data query (PDQ) database to differentiate between similar but nonidentical IR clear coat paint spectra. The library search system consists of two separate but interrelated components: search prefilters to reduce the size of the IR library to a specific assembly plant or plants corresponding to the unknown paint sample and a cross-correlation searching algorithm to identify IR spectra most similar to the unknown in the subset of spectra identified by the prefilters. To develop search prefilters with the necessary degree of accuracy, IR spectra from the PDQ database were preprocessed using wavelets to enhance subtle but significant features in the data. Wavelet coefficients characteristic of the assembly plant of the vehicle were identified using a genetic algorithm for pattern recognition and feature selection. A search algorithm was then used to cross-correlate the unknown with each IR spectrum in the subset of library spectra identified by the search prefilters. Each cross-correlated IR spectrum was simultaneously compared to an autocorrelated IR spectrum of the unknown using several spectral windows that span different regions of the cross-correlated and autocorrelated data from the midpoint. The top five hits identified in each search window are compiled, and a histogram is computed that summarizes the frequency of occurrence for each selected library sample. The five library samples with the highest frequency of occurrence are selected as potential hits. Even in challenging trials where the clear coat paint samples evaluated were all the same make (e.g., General Motors) within a limited production year range, the model of the automobile from which the unknown paint sample was obtained could be identified from its IR spectrum. PMID:25506887
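
    A much-simplified sketch of the second stage (window-wise comparison of the cross-correlated and autocorrelated spectra, followed by histogram voting); the window layout, similarity measure, and names are assumptions rather than the published algorithm.

    ```python
    import numpy as np
    from collections import Counter

    def window_similarity(unknown, library_spectrum, n_windows=5):
        """Compare, window by window, the cross-correlation of the unknown with
        a library spectrum against the unknown's autocorrelation."""
        cross = np.correlate(unknown, library_spectrum, mode="same")
        auto = np.correlate(unknown, unknown, mode="same")
        windows = np.array_split(np.arange(unknown.size), n_windows)
        return [np.corrcoef(cross[w], auto[w])[0, 1] for w in windows]

    def vote(unknown, library, top=5):
        """In each window keep the `top` most similar library entries, then rank
        entries by how often they were selected across windows."""
        scores = {name: window_similarity(unknown, s) for name, s in library.items()}
        votes = Counter()
        for w in range(len(next(iter(scores.values())))):
            ranked = sorted(scores, key=lambda name: scores[name][w], reverse=True)
            votes.update(ranked[:top])
        return votes.most_common(top)
    ```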

  4. Visualization of flow patterns induced by an impinging jet issuing from a circular planform

    NASA Astrophysics Data System (ADS)

    Saripalli, K. R.

    1983-12-01

    A four-jet impingement flow with application to high-performance VTOL aircraft is investigated. Flow visualization studies were conducted with water as the working medium. Photographs of different cross sections of the flow are presented to describe the properties of the fountain upwash and the stagnation-line patterns. The visualization technique involves the introduction of fluorescein-sodium, a fluorescent dye, into the jet flow and illumination by a sheet of light obtained by spreading a laser beam. Streak-line photographs were also taken using air bubbles as tracer particles. The strength and orientation of the fountain(s) were observed for different heights of the nozzle configuration above the ground and inclination angles of the forward nozzles.

  5. The development of visual search in infancy: Attention to faces versus salience.

    PubMed

    Kwon, Mee-Kyoung; Setoodehnia, Mielle; Baek, Jongsoo; Luck, Steven J; Oakes, Lisa M

    2016-04-01

    Four experiments examined how faces compete with physically salient stimuli for the control of attention in 4-, 6-, and 8-month-old infants (N = 117 total). Three computational models were used to quantify physical salience. We presented infants with visual search arrays containing a face and familiar object(s), such as shoes and flowers. Six- and 8-month-old infants looked first and longest at faces; their looking was not strongly influenced by physical salience. In contrast, 4-month-old infants showed a visual preference for the face only when the arrays contained 2 items and the competitor was relatively low in salience. When the arrays contained many items or the only competitor was relatively high in salience, 4-month-old infants' looks were more often directed at the most salient item. Thus, over ages of 4 to 8 months, physical salience has a decreasing influence and faces have an increasing influence on where and how long infants look. PMID:26866728

  6. Why Do We Move Our Eyes while Trying to Remember? The Relationship between Non-Visual Gaze Patterns and Memory

    ERIC Educational Resources Information Center

    Micic, Dragana; Ehrlichman, Howard; Chen, Rebecca

    2010-01-01

    Non-visual gaze patterns (NVGPs) involve saccades and fixations that spontaneously occur in cognitive activities that are not ostensibly visual. While reasons for their appearance remain obscure, convergent empirical evidence suggests that NVGPs change according to processing requirements of tasks. We examined NVGPs in tasks with long-term memory…

  7. Children with Fetal Alcohol Syndrome and Fetal Alcohol Effects: Patterns of Performance on IQ and Visual Motor Ability.

    ERIC Educational Resources Information Center

    Kopera-Frye, Karen; Zielinski, Sharon

    This study explored relationships between intelligence and visual motor ability and patterns of impairment of visual motor ability in children prenatally affected by alcohol. Fourteen children (mean age 8.2 years) diagnosed with fetal alcohol syndrome (FAS) and 50 children with possible fetal alcohol effects (FAE) were assessed with the Bender…

  8. Differential Roles of the Fan-Shaped Body and the Ellipsoid Body in "Drosophila" Visual Pattern Memory

    ERIC Educational Resources Information Center

    Pan, Yufeng; Zhou, Yanqiong; Guo, Chao; Gong, Haiyun; Gong, Zhefeng; Liu, Li

    2009-01-01

    The central complex is a prominent structure in the "Drosophila" brain. Visual learning experiments in the flight simulator, with flies with genetically altered brains, revealed that two groups of horizontal neurons in one of its substructures, the fan-shaped body, were required for "Drosophila" visual pattern memory. However, little is known…

  9. The effect of stimulus duration and motor response in hemispatial neglect during a visual search task.

    PubMed

    Jelsone-Swain, Laura M; Smith, David V; Baylis, Gordon C

    2012-01-01

    Patients with hemispatial neglect exhibit a myriad of profound deficits. A hallmark of this syndrome is the patients' absence of awareness of items located in their contralesional space. Many studies, however, have demonstrated that neglect patients exhibit some level of processing of these neglected items. It has been suggested that unconscious processing of neglected information may manifest as a fast denial. This theory of fast denial proposes that neglected stimuli are detected in the same way as non-neglected stimuli, but without overt awareness. We evaluated the fast denial theory by conducting two separate visual search task experiments, each differing by the duration of stimulus presentation. Specifically, in Experiment 1 each stimulus remained in the participants' visual field until a response was made. In Experiment 2 each stimulus was presented for only a brief duration. We further evaluated the fast denial theory by comparing verbal to motor task responses in each experiment. Overall, our results from both experiments and tasks showed no evidence for the presence of implicit knowledge of neglected stimuli. Instead, patients with neglect responded the same when they neglected stimuli as when they correctly reported stimulus absence. These findings thus cast doubt on the concept of the fast denial theory and its consequent implications for non-conscious processing. Importantly, our study demonstrated that the only behavior affected was during conscious detection of ipsilesional stimuli. Specifically, patients were slower to detect stimuli in Experiment 1 compared to Experiment 2, suggesting a duration effect occurred during conscious processing of information. Additionally, reaction time and accuracy were similar when reporting verbally versus motorically. These results provide new insights into the perceptual deficits associated with neglect and further support other work that falsifies the fast denial account of non-conscious processing in hemispatial visual neglect. PMID:22662149

  10. Adaptive methods of two-scale edge detection in post-enhancement visual pattern processing

    NASA Astrophysics Data System (ADS)

    Rahman, Zia-ur; Jobson, Daniel J.; Woodell, Glenn A.

    2008-04-01

    Adaptive methods are defined and experimentally studied for a two-scale edge detection process that mimics human visual perception of edges and is inspired by the parvo-cellular (P) and magno-cellular (M) physiological subsystems of natural vision. This two-channel processing consists of a high spatial acuity/coarse contrast channel (P) and a coarse acuity/fine contrast channel (M). We perform edge detection after a very strong non-linear image enhancement that uses smart Retinex image processing. Two conditions that arise from this enhancement demand adaptiveness in edge detection. These conditions are the presence of random noise further exacerbated by the enhancement process, and the equally random occurrence of dense textural visual information. We examine how best to deal with both phenomena with an automatic adaptive computation that treats both high noise and dense textures as too much information, and gracefully shifts from small-scale to medium-scale edge pattern priorities. This shift is accomplished by using different edge-enhancement schemes that correspond to the (P) and (M) channels of the human visual system. We also examine the case of adapting to a third image condition, namely too little visual information, and automatically adjust edge detection sensitivities when sparse feature information is encountered. When this methodology is applied to a sequence of images of the same scene but with varying exposures and lighting conditions, this edge-detection process produces pattern constancy that is very useful for several imaging applications that rely on image classification in variable imaging conditions.
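
    A crude illustration of the adaptive shift from small-scale to coarser edge priorities when the local information density is high; the scales, the "busyness" measure, and the threshold are assumptions, not the paper's parameters.

    ```python
    import numpy as np
    from scipy import ndimage

    def adaptive_edges(image, fine_sigma=1.0, coarse_sigma=3.0, busy_threshold=0.2):
        """Two-scale edge detection with a simple adaptive switch: where local
        gradient activity is high (noise or dense texture, i.e. 'too much
        information'), fall back to the coarser scale."""
        def grad_mag(sigma):
            smoothed = ndimage.gaussian_filter(image.astype(float), sigma)
            gx = ndimage.sobel(smoothed, axis=0)
            gy = ndimage.sobel(smoothed, axis=1)
            return np.hypot(gx, gy)

        fine, coarse = grad_mag(fine_sigma), grad_mag(coarse_sigma)
        # Local 'busyness': mean normalized fine-scale gradient in a neighborhood.
        busy = ndimage.uniform_filter(fine / (fine.max() + 1e-12), size=15)
        return np.where(busy > busy_threshold, coarse, fine)
    ```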

  11. Model-based analysis of pattern motion processing in mouse primary visual cortex.

    PubMed

    Muir, Dylan R; Roth, Morgane M; Helmchen, Fritjof; Kampa, Björn M

    2015-01-01

    Neurons in sensory areas of neocortex exhibit responses tuned to specific features of the environment. In visual cortex, information about features such as edges or textures with particular orientations must be integrated to recognize a visual scene or object. Connectivity studies in rodent cortex have revealed that neurons make specific connections within sub-networks sharing common input tuning. In principle, this sub-network architecture enables local cortical circuits to integrate sensory information. However, whether feature integration indeed occurs locally in rodent primary sensory areas has not been examined directly. We studied local integration of sensory features in primary visual cortex (V1) of the mouse by presenting drifting grating and plaid stimuli, while recording the activity of neuronal populations with two-photon calcium imaging. Using a Bayesian model-based analysis framework, we classified single-cell responses as being selective for either individual grating components or for moving plaid patterns. Rather than relying on trial-averaged responses, our model-based framework takes into account single-trial responses and can easily be extended to consider any number of arbitrary predictive models. Our analysis method was able to successfully classify significantly more responses than traditional partial correlation (PC) analysis, and provides a rigorous statistical framework to rank any number of models and reject poorly performing models. We also found a large proportion of cells that respond strongly to only one stimulus class. In addition, a quarter of selectively responding neurons had more complex responses that could not be explained by any simple integration model. Our results show that a broad range of pattern integration processes already take place at the level of V1. This diversity of integration is consistent with processing of visual inputs by local sub-networks within V1 that are tuned to combinations of sensory features. PMID:26300738
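
    For comparison, the traditional partial correlation (PC) analysis that the authors benchmark against can be sketched as follows (standard formulation; details may differ from the paper's implementation). Here `pattern_pred` and `component_pred` are the plaid tuning curves predicted from each cell's grating tuning.

    ```python
    import numpy as np

    def partial_correlations(responses, pattern_pred, component_pred):
        """Classic pattern/component partial correlations for a measured plaid
        tuning curve and its two predictions."""
        r_p = np.corrcoef(responses, pattern_pred)[0, 1]
        r_c = np.corrcoef(responses, component_pred)[0, 1]
        r_pc = np.corrcoef(pattern_pred, component_pred)[0, 1]
        R_p = (r_p - r_c * r_pc) / np.sqrt((1 - r_c**2) * (1 - r_pc**2))
        R_c = (r_c - r_p * r_pc) / np.sqrt((1 - r_p**2) * (1 - r_pc**2))
        return R_p, R_c  # cells are typically classified by the larger value
    ```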

  12. Adaptation in the Visual Cortex: Influence of Membrane Trajectory and Neuronal Firing Pattern on Slow Afterpotentials

    PubMed Central

    Descalzo, Vanessa F.; Gallego, Roberto; Sanchez-Vives, Maria V.

    2014-01-01

    The input/output relationship in primary visual cortex neurons is influenced by the history of the preceding activity. To understand the impact that membrane potential trajectory and firing pattern have on the activation of slow conductances in cortical neurons, we compared the afterpotentials that followed responses to different stimuli evoking similar numbers of action potentials. In particular, we compared afterpotentials following the intracellular injection of either square or sinusoidal currents lasting 20 seconds. Both stimuli were intracellular surrogates of different neuronal responses to prolonged visual stimulation. Recordings from 99 neurons in slices of visual cortex revealed that for stimuli evoking an equivalent number of spikes, sinusoidal current injection activated a slow afterhyperpolarization of significantly larger amplitude (8.5±3.3 mV) and duration (33±17 s) than that evoked by a square pulse (6.4±3.7 mV, 28±17 s; p<0.05). Spike frequency adaptation had a faster time course and was larger during plateau (square pulse) than during intermittent (sinusoidal) depolarizations. Similar results were obtained in 17 neurons recorded intracellularly from the visual cortex in vivo. The differences in the afterpotentials evoked with the two protocols were abolished by removing calcium from the extracellular medium or by applying the L-type calcium channel blocker nifedipine, suggesting that activation of a calcium-dependent current underlies this difference in afterpotentials. These findings suggest that not only the spikes, but also the membrane potential values and firing patterns evoked by a particular stimulation protocol, determine the responses to any subsequent incoming input in a time window that spans tens of seconds to minutes. PMID:25380063

  13. Model-based analysis of pattern motion processing in mouse primary visual cortex

    PubMed Central

    Muir, Dylan R.; Roth, Morgane M.; Helmchen, Fritjof; Kampa, Björn M.

    2015-01-01

    Neurons in sensory areas of neocortex exhibit responses tuned to specific features of the environment. In visual cortex, information about features such as edges or textures with particular orientations must be integrated to recognize a visual scene or object. Connectivity studies in rodent cortex have revealed that neurons make specific connections within sub-networks sharing common input tuning. In principle, this sub-network architecture enables local cortical circuits to integrate sensory information. However, whether feature integration indeed occurs locally in rodent primary sensory areas has not been examined directly. We studied local integration of sensory features in primary visual cortex (V1) of the mouse by presenting drifting grating and plaid stimuli, while recording the activity of neuronal populations with two-photon calcium imaging. Using a Bayesian model-based analysis framework, we classified single-cell responses as being selective for either individual grating components or for moving plaid patterns. Rather than relying on trial-averaged responses, our model-based framework takes into account single-trial responses and can easily be extended to consider any number of arbitrary predictive models. Our analysis method was able to successfully classify significantly more responses than traditional partial correlation (PC) analysis, and provides a rigorous statistical framework to rank any number of models and reject poorly performing models. We also found a large proportion of cells that respond strongly to only one stimulus class. In addition, a quarter of selectively responding neurons had more complex responses that could not be explained by any simple integration model. Our results show that a broad range of pattern integration processes already take place at the level of V1. This diversity of integration is consistent with processing of visual inputs by local sub-networks within V1 that are tuned to combinations of sensory features. PMID:26300738

  14. Pattern electroretinogram (PERG) and pattern visual evoked potential (PVEP) in the early stages of Alzheimer’s disease

    PubMed Central

    Lubiński, Wojciech; Potemkowski, Andrzej; Honczarenko, Krystyna

    2010-01-01

    Alzheimer’s disease (AD) is one of the most common causes of dementia in the world. Patients with AD frequently complain of vision disturbances that do not manifest as changes in routine ophthalmological examination findings. The main causes of these disturbances are neuropathological changes in the visual cortex, although abnormalities in the retina and optic nerve cannot be excluded. Pattern electroretinogram (PERG) and pattern visual evoked potential (PVEP) tests are commonly used in ophthalmology to estimate the bioelectrical function of the retina and optic nerve. The aim of this study was to determine whether retinal and optic nerve function, measured by PERG and PVEP tests, is changed in individuals in the early stages of AD with normal routine ophthalmological examination results. Standard PERG and PVEP tests were performed in 30 eyes of 30 patients in the early stages of AD. The results were compared to 30 eyes of 30 normal healthy controls. PERG and PVEP tests were recorded in accordance with the International Society for Clinical Electrophysiology of Vision (ISCEV) standards. Additionally, neural conduction was measured using retinocortical time (RCT), defined as the difference between the P100-wave latency in PVEP and the P50-wave implicit time in PERG. Statistically significant changes were detected in the PERG test, the PVEP test, and RCT. In the PERG examination, increased implicit time of the P50-wave (P < 0.03) and amplitude reductions of the P50- and N95-waves (P < 0.0001) were observed. In the PVEP examination, increased latency of the P100-wave (P < 0.0001) was found. A significant increase in RCT (P < 0.0001) was observed. The most prevalent features were amplitude reduction of the N95-wave and increased latency of the P100-wave, which were seen in 56.7% (17/30) of the AD eyes. In patients in the early stages of AD with normal routine ophthalmological examination results, dysfunction of the retinal ganglion cells as well as of the optic nerve is present, as detected by PERG and PVEP tests. These dysfunctions, at least partially, explain the cause of the visual disturbances observed in patients in the early stages of AD. PMID:20549299
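
    The retinocortical time used above is a simple difference of two measured quantities:

    ```python
    def retinocortical_time(p100_latency_ms, p50_implicit_time_ms):
        """RCT as defined in the abstract: PVEP P100-wave latency minus PERG
        P50-wave implicit time, both in milliseconds."""
        return p100_latency_ms - p50_implicit_time_ms

    # Hypothetical values: retinocortical_time(112.0, 54.5) -> 57.5 ms
    ```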

  15. Previously seen and expected stimuli elicit surprise in the context of visual search.

    PubMed

    Retell, James D; Becker, Stefanie I; Remington, Roger W

    2016-04-01

    In the context of visual search, surprise is the phenomenon by which a previously unseen and unexpected stimulus exogenously attracts spatial attention. Capture by such a stimulus occurs, by definition, independent of task goals and is thought to depend on the extent to which the stimulus deviates from expectations. However, the relative contributions of prior exposure and explicit knowledge of an unexpected event to the surprise response have not yet been systematically investigated. Here observers searched for a specific color while ignoring irrelevant cues of different colors presented prior to the target display. After a brief familiarization period, we presented an irrelevant motion cue to elicit surprise. Across conditions we varied prior exposure to the motion stimulus (seen versus unseen) and top-down expectations of its occurrence (expected versus unexpected) to assess the extent to which each of these factors contributes to surprise. We found no attenuation of the surprise response when observers were pre-exposed to the motion cue and/or had explicit knowledge of its occurrence. Our results show that it is neither sufficient nor necessary for a stimulus to be new and unannounced to elicit surprise, and suggest that the expectations that determine the surprise response are highly context specific. PMID:26742498

  16. Tests of a 3D visual-search model observer for SPECT

    NASA Astrophysics Data System (ADS)

    Gifford, Howard C.

    2013-03-01

    Observer studies with single 2D images can bias assessments of diagnostic technologies, as physicians usually have access to an entire image volume presented as multiple slices in multiple views. Previously, we introduced a scanning model observer for detection-localization tasks with multislice-multiview (or volumetric) display, but this observer did not compare well against human-observer data. The current work continues our investigation with tests of a 3D visual-search (VS) model observer. The VS framework amounts to an initial holistic search that identifies suspicious locations for analysis by a statistical observer. Our VS model uses a scanning observer for the analysis. The VS model was evaluated against the scanning and human observers in a localization ROC study of mass detection in SPECT lung imaging. The study compared two iterative reconstruction strategies that applied different combinations of corrections for attenuation, scatter, and distance-dependent system resolution. In our earlier work, the scanning and human observers ranked the strategies in opposite order of performance. The ranking from the VS observer matched that of the humans.
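
    A toy sketch of the two-stage visual-search idea (a holistic front end that flags suspicious locations, followed by candidate-wise scoring); the filters, neighborhood sizes, and matched-filter back end are assumptions standing in for the paper's scanning observer.

    ```python
    import numpy as np
    from scipy import ndimage

    def vs_observer(volume, kernel, n_candidates=20):
        """Return the best (score, location) pair for a 3D image volume."""
        # Holistic search: blob-enhancing smoothing, then keep local maxima.
        enhanced = ndimage.gaussian_filter(volume.astype(float), sigma=2.0)
        local_max = enhanced == ndimage.maximum_filter(enhanced, size=7)
        coords = np.argwhere(local_max)
        coords = coords[np.argsort(enhanced[local_max])[::-1][:n_candidates]]

        # Candidate analysis: matched-filter score at each suspicious location.
        half = np.array(kernel.shape) // 2
        scores = []
        for c in coords:
            lo, hi = c - half, c - half + np.array(kernel.shape)
            if (lo < 0).any() or (hi > np.array(volume.shape)).any():
                continue  # skip candidates too close to the volume border
            patch = volume[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
            scores.append((float((patch * kernel).sum()), tuple(int(x) for x in c)))
        return max(scores) if scores else None
    ```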

  17. Visual search, movement behaviour and boat control during the windward mark rounding in sailing.

    PubMed

    Pluijms, Joost P; Cañal-Bruland, Rouwen; Hoozemans, Marco J M; Savelsbergh, Geert J P

    2015-01-01

    In search of key-performance predictors in sailing, we examined to what degree visual search, movement behaviour and boat control contribute to skilled performance while rounding the windward mark. To this end, we analysed 62 windward mark roundings sailed without opponents and 40 windward mark roundings sailed with opponents while competing in small regattas. Across conditions, results revealed that better performances were related to gazing more to the tangent point during the actual rounding. More specifically, in the condition without opponents, skilled performance was associated with gazing more outside the dinghy during the actual rounding, while in the condition with opponents, superior performance was related to gazing less outside the dinghy. With respect to movement behaviour, superior performance was associated with the release of the trimming lines close to rounding the mark. In addition, better performances were related to approaching the mark with little heel, yet heeling the boat more to the windward side when being close to the mark. Potential implications for practice are suggested for each phase of the windward mark rounding. PMID:25105956

  18. The role of pattern recognition in creative problem solving: a case study in search of new mathematics for biology.

    PubMed

    Hong, Felix T

    2013-09-01

    Rosen classified sciences into two categories: formalizable and unformalizable. Whereas formalizable sciences expressed in terms of mathematical theories were highly valued by Rutherford, Hutchins pointed out that unformalizable parts of soft sciences are of genuine interest and importance. Attempts to build mathematical theories for biology in the past century were met with modest and sporadic successes, and only in simple systems. In this article, a qualitative model of humans' high creativity is presented as a starting point to consider whether the gap between soft and hard sciences is bridgeable. Simonton's chance-configuration theory, which mimics the process of evolution, was modified and improved. By treating problem solving as a process of pattern recognition, the known dichotomy of visual thinking vs. verbal thinking can be recast in terms of analog pattern recognition (a non-algorithmic process) and digital pattern recognition (an algorithmic process), respectively. Additional concepts commonly encountered in computer science, operations research and artificial intelligence were also invoked: heuristic searching, parallel and sequential processing. The refurbished chance-configuration model is now capable of explaining several long-standing puzzles in human cognition: a) why novel discoveries often came without prior warning, b) why some creators had no idea about the source of inspiration even after the fact, c) why some creators were consistently luckier than others, and, last but not least, d) why it was so difficult to explain what intuition, inspiration, insight, hunch, serendipity, etc. are all about. The predictive power of the present model was tested by resolving Zeno's paradox of Achilles and the Tortoise after deliberately invoking visual thinking. Additional evidence of its predictive power must await future large-scale field studies. The analysis was further generalized to the construction of scientific theories in general. This approach is in line with Campbell's evolutionary epistemology. Instead of treating science as a set of immutable Natural Laws that already existed and were just waiting to be discovered, scientific theories are regarded as humans' mental constructs, which must be invented to reconcile with observed natural phenomena. In this way, the pursuit of science is shifted from diligent and systematic (or random) searching for existing Natural Laws to firing up humans' imagination to comprehend Nature's behavioral pattern. The insights gained in understanding human creativity indicate that new mathematics capable of effectively handling parallel processing and human subjectivity is sorely needed. The past classification of formalizability vs. non-formalizability was made in reference to contemporary mathematics. Rosen's conclusion did not preclude future inventions of new biology-friendly mathematics. PMID:23597605

  19. Gaze patterns predicting successful collision avoidance in patients with homonymous visual field defects.

    PubMed

    Papageorgiou, Eleni; Hardiess, Gregor; Mallot, Hanspeter A; Schiefer, Ulrich

    2012-07-15

    The aim of the present study was to identify efficient compensatory gaze patterns applied by patients with homonymous visual field defects (HVFDs) under virtual reality (VR) conditions in a dynamic collision avoidance task. Thirty patients with HVFDs due to vascular brain lesions and 30 normal subjects performed a collision avoidance task with moving objects at an intersection under two difficulty levels. Based on their performance (i.e. the number of collisions), patients were assigned to either an "adequate" (HVFD(A)) or "inadequate" (HVFD(I)) subgroup by the median split method. Eye and head tracking data were available for 14 patients and 19 normal subjects. Saccades, fixations, mean number of gaze shifts, scanpath length and mean gaze eccentricity were compared between HVFD(A) patients, HVFD(I) patients and normal subjects. For both difficulty levels, the gaze patterns of HVFD(A) patients (N=5) compared to HVFD(I) patients (N=9) were characterized by longer saccadic amplitudes towards both the affected and the intact side, larger mean gaze eccentricity, more gaze shifts, longer scanpaths and more fixations on vehicles but fewer fixations on the intersection. Both patient groups displayed more fixations in the affected compared to the intact hemifield. Fixation number, fixation duration, scanpath length, and number of gaze shifts were similar between HVFD(A) patients and normal subjects. Patients with HVFDs who adapt successfully to their visual deficit display distinct gaze patterns characterized by increased exploratory eye and head movements, particularly towards moving objects of interest on their blind side. In the context of a dynamic environment, efficient compensation in patients with HVFDs is possible by means of gaze scanning. This strategy allows continuous updating of the moving objects' spatial location and selection of the task-relevant ones, which will be represented in visual working memory. PMID:22721638

  20. Retinotopic Patterns of Correlated Fluctuations in Visual Cortex Reflect the Dynamics of Spontaneous Perceptual Suppression

    PubMed Central

    Donner, Tobias H.; Sagi, Dov; Bonneh, Yoram S.; Heeger, David J.

    2013-01-01

    While viewing certain stimuli, perception changes spontaneously in the face of constant input. For example, during “motion induced blindness” (MIB), a small salient target spontaneously disappears and reappears when surrounded by a moving mask. Models of such bistable perceptual phenomena posit spontaneous fluctuations in neuronal activity throughout multiple stages of the visual cortical hierarchy. We used fMRI in humans to link correlated activity fluctuations across human visual cortical areas V1 through V4 to the dynamics (rate and duration) of MIB target disappearance. We computed the correlations between the time series of fMRI activity in multiple retinotopic sub-regions corresponding to MIB target and mask. Linear decomposition of the matrix of temporal correlations revealed spatial patterns of activity fluctuations, irrespective of whether or not these were time-locked to behavioral reports of target disappearance. The spatial pattern that dominated the activity fluctuations during MIB was spatially non-specific, shared by all sub-regions, but did not reflect the dynamics of perception. By contrast, the fluctuations associated with the rate of MIB disappearance were retinotopically-specific for the target sub-region in V4, and the fluctuations associated with the duration of MIB disappearance states were target-specific in V1. Target-specific fluctuations in V1 have not previously been identified by averaging activity time-locked to behavioral reports of MIB disappearance. Our results suggest that different levels of the visual cortical hierarchy shape the dynamics of perception via distinct mechanisms, which are evident in distinct spatial patterns of spontaneous cortical activity fluctuations. PMID:23365254
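
    The "linear decomposition of the matrix of temporal correlations" can be read as an eigendecomposition of the inter-region correlation matrix; a minimal sketch under that assumption:

    ```python
    import numpy as np

    def spatial_patterns(roi_timeseries):
        """roi_timeseries: array (n_subregions, n_timepoints) of fMRI activity in
        retinotopic sub-regions (e.g., target and mask regions of V1-V4).
        Returns the eigenvectors (spatial patterns) and eigenvalues of the
        correlation matrix, sorted by explained variance."""
        corr = np.corrcoef(roi_timeseries)
        evals, evecs = np.linalg.eigh(corr)  # symmetric matrix
        order = np.argsort(evals)[::-1]
        return evecs[:, order], evals[order]
    ```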