Sample records for rapid visual information

  1. A Notation for Rapid Specification of Information Visualization

    ERIC Educational Resources Information Center

    Lee, Sang Yun

    2013-01-01

    This thesis describes a notation for rapid specification of information visualization, which can be used as a theoretical framework of integrating various types of information visualization, and its applications at a conceptual level. The notation is devised to codify the major characteristics of data/visual structures in conventionally-used…

  2. Sensitivity to timing and order in human visual cortex

    PubMed Central

    Singer, Jedediah M.; Madsen, Joseph R.; Anderson, William S.

    2014-01-01

    Visual recognition takes a small fraction of a second and relies on the cascade of signals along the ventral visual stream. Given the rapid path through multiple processing steps between photoreceptors and higher visual areas, information must progress from stage to stage very quickly. This rapid progression of information suggests that fine temporal details of the neural response may be important to the brain's encoding of visual signals. We investigated how changes in the relative timing of incoming visual stimulation affect the representation of object information by recording intracranial field potentials along the human ventral visual stream while subjects recognized objects whose parts were presented with varying asynchrony. Visual responses along the ventral stream were sensitive to timing differences as small as 17 ms between parts. In particular, there was a strong dependency on the temporal order of stimulus presentation, even at short asynchronies. From these observations we infer that the neural representation of complex information in visual cortex can be modulated by rapid dynamics on scales of tens of milliseconds. PMID:25429116

  3. Sensitivity to timing and order in human visual cortex.

    PubMed

    Singer, Jedediah M; Madsen, Joseph R; Anderson, William S; Kreiman, Gabriel

    2015-03-01

    Visual recognition takes a small fraction of a second and relies on the cascade of signals along the ventral visual stream. Given the rapid path through multiple processing steps between photoreceptors and higher visual areas, information must progress from stage to stage very quickly. This rapid progression of information suggests that fine temporal details of the neural response may be important to the brain's encoding of visual signals. We investigated how changes in the relative timing of incoming visual stimulation affect the representation of object information by recording intracranial field potentials along the human ventral visual stream while subjects recognized objects whose parts were presented with varying asynchrony. Visual responses along the ventral stream were sensitive to timing differences as small as 17 ms between parts. In particular, there was a strong dependency on the temporal order of stimulus presentation, even at short asynchronies. From these observations we infer that the neural representation of complex information in visual cortex can be modulated by rapid dynamics on scales of tens of milliseconds. Copyright © 2015 the American Physiological Society.

  4. Rapid Extraction of Lexical Tone Phonology in Chinese Characters: A Visual Mismatch Negativity Study

    PubMed Central

    Wang, Xiao-Dong; Liu, A-Ping; Wu, Yin-Yuan; Wang, Peng

    2013-01-01

    Background: In alphabetic languages, emerging evidence from behavioral and neuroimaging studies shows the rapid and automatic activation of phonological information in visual word recognition. Unlike most alphabetic languages, in which there is a natural correspondence between the visual and phonological forms, in logographic Chinese the mapping between visual and phonological forms is rather arbitrary and depends on learning and experience. Whether the brain rapidly and automatically extracts phonological information from Chinese characters has not yet been thoroughly addressed. Methodology/Principal Findings: We continuously presented Chinese characters differing in orthography and meaning to adult native Mandarin Chinese speakers to construct a constantly varying visual stream. In the stream, most stimuli were homophones of Chinese characters: the phonological features embedded in these visual characters were the same, including consonants, vowels and the lexical tone. Occasionally, the rule of phonology was randomly violated by characters whose phonological features differed in the lexical tone. Conclusions/Significance: We showed that the violation of the lexical tone phonology evoked an early, robust visual response, as revealed by whole-head electrical recordings of the visual mismatch negativity (vMMN), indicating the rapid extraction of phonological information embedded in Chinese characters. Source analysis revealed that the vMMN involved neural activations of the visual cortex, suggesting that visual sensory memory is sensitive to phonological information embedded in visual words at an early processing stage. PMID:23437235

  5. Information visualization: Beyond traditional engineering

    NASA Technical Reports Server (NTRS)

    Thomas, James J.

    1995-01-01

    This presentation addresses a different aspect of the human-computer interface: specifically, the human-information interface. This interface will be dominated by an emerging technology called Information Visualization (IV). IV goes beyond traditional computer graphics and CAD, enabling new approaches to engineering. IV must visualize text, documents, sound, images, and video in such a way that humans can rapidly interact with and understand the content and structure of information entities. IV is the interactive visual interface between humans and their information resources.

  6. People-oriented Information Visualization Design

    NASA Astrophysics Data System (ADS)

    Chen, Zhiyong; Zhang, Bolun

    2018-04-01

    In the rapidly developing 21st century, as science and technology continue to advance, human society has entered the information era and the era of big data, and lifestyles and aesthetic systems have changed accordingly, so the emerging field of information visualization is increasingly popular. Information visualization design is the process of visualizing all kinds of complex information and data so that people can absorb information quickly and save time. Along with the development of information visualization, information design has also attracted growing attention, and emotional, people-oriented design has become an indispensable part of it. This paper probes information visualization design through an emotional analysis of information design, based on the social context of people-oriented experience and approached from the perspective of art design. The discussion is organized around the three levels of emotional information design: the instinct level, the behavior level, and the reflective level.

  7. Applications of aerospace technology in industry: A technology transfer profile. Visual display systems

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The growth of common as well as emerging visual display technologies is surveyed. The major inference is that contemporary society is rapidly growing ever more reliant on visual displays for a variety of purposes. Because of its unique mission requirements, the National Aeronautics and Space Administration has contributed in an important and specific way to the growth of visual display technology. These contributions are characterized by the use of computer-driven visual displays to provide an enormous amount of information concisely, rapidly, and accurately.

  8. Innovative Didactic Designs: Visual Analytics and Visual Literacy in School

    ERIC Educational Resources Information Center

    Stenliden, Linnéa; Nissen, Jörgen; Bodén, Ulrika

    2017-01-01

    In a world of massively mediated information and communication, students must learn to handle rapidly growing information volumes inside and outside school. Pedagogy attuned to processing this growing production and communication of information is needed. However, ordinary educational models often fail to support students, trialing neither…

  9. Saccadic Eye Movements Impose a Natural Bottleneck on Visual Short-Term Memory

    ERIC Educational Resources Information Center

    Ohl, Sven; Rolfs, Martin

    2017-01-01

    Visual short-term memory (VSTM) is a crucial repository of information when events unfold rapidly before our eyes, yet it maintains only a fraction of the sensory information encoded by the visual system. Here, we tested the hypothesis that saccadic eye movements provide a natural bottleneck for the transition of fragile content in sensory memory…

  10. The Characteristics and Limits of Rapid Visual Categorization

    PubMed Central

    Fabre-Thorpe, Michèle

    2011-01-01

    Visual categorization appears both effortless and virtually instantaneous. The study by Thorpe et al. (1996) was the first to estimate the processing time necessary to perform fast visual categorization of animals in briefly flashed (20 ms) natural photographs. They observed a large differential EEG activity between target and distracter correct trials that developed from 150 ms after stimulus onset, a value that was later shown to be even shorter in monkeys! With such strong processing time constraints, it was difficult to escape the conclusion that rapid visual categorization relied on massively parallel, essentially feed-forward processing of visual information. Since 1996, we have conducted a large number of studies to determine the characteristics and limits of fast visual categorization. The present chapter will review some of the main results obtained. I will argue that rapid object categorizations in natural scenes can be done without focused attention and are most likely based on coarse and unconscious visual representations activated with the first available (magnocellular) visual information. Fast visual processing proved efficient for the categorization of large superordinate object or scene categories, but shows its limits when more detailed basic representations are required. The representations for basic objects (dogs, cars) or scenes (mountain or sea landscapes) need additional processing time to be activated. This finding is at odds with the widely accepted idea that such basic representations are at the entry level of the system. Interestingly, focused attention is still not required to perform these time-consuming basic categorizations. Finally, we will show that object and context processing can interact very early in an ascending wave of visual information processing. We will discuss how such data could result from our experience with a highly structured and predictable surrounding world that shaped neuronal visual selectivity. PMID:22007180

  11. The Role of Prediction In Perception: Evidence From Interrupted Visual Search

    PubMed Central

    Mereu, Stefania; Zacks, Jeffrey M.; Kurby, Christopher A.; Lleras, Alejandro

    2014-01-01

    Recent studies of rapid resumption—an observer’s ability to quickly resume a visual search after an interruption—suggest that predictions underlie visual perception. Previous studies showed that when the search display changes unpredictably after the interruption, rapid resumption disappears. This conclusion is at odds with our everyday experience, in which the visual system seems quite efficient despite continuous changes in the visual scene; in the real world, however, changes can typically be anticipated based on previous knowledge. The present study aimed to evaluate whether changes to the visual display can be incorporated into perceptual hypotheses if observers are allowed to anticipate such changes. The results strongly suggest that an interrupted visual search can be rapidly resumed even when information in the display has changed after the interruption, so long as participants can not only anticipate the changes but are also aware that such changes might occur. PMID:24820440

  12. Numbers, Pictures, and Politics: Teaching Research Methods through Data Visualizations

    ERIC Educational Resources Information Center

    Rom, Mark Carl

    2015-01-01

    Data visualization is the term used to describe the methods and technologies used to allow the exploration and communication of quantitative information graphically. Data visualization is a rapidly growing and evolving discipline, and visualizations are widely used to cover politics. Yet, while popular and scholarly publications widely use…

  13. Anomalous visual experiences, negative symptoms, perceptual organization and the magnocellular pathway in schizophrenia: a shared construct?

    PubMed

    Kéri, Szabolcs; Kiss, Imre; Kelemen, Oguz; Benedek, György; Janka, Zoltán

    2005-10-01

    Schizophrenia is associated with impaired visual information processing. The aim of this study was to investigate the relationship between anomalous perceptual experiences, positive and negative symptoms, perceptual organization, rapid categorization of natural images, and magnocellular (M) and parvocellular (P) visual pathway functioning. Thirty-five unmedicated patients with schizophrenia and 20 matched healthy control volunteers participated. Anomalous perceptual experiences were assessed with the Bonn Scale for the Assessment of Basic Symptoms (BSABS). General intellectual functions were evaluated with the revised version of the Wechsler Adult Intelligence Scale. The 1-9 version of the Continuous Performance Test (CPT) was used to investigate sustained attention. The following psychophysical tests were used: detection of Gabor patches with collinear and orthogonal flankers (perceptual organization), categorization of briefly presented natural scenes (rapid visual processing), low-contrast and frequency-doubling vernier threshold (M pathway functioning), and isoluminant colour vernier threshold and high spatial frequency discrimination (P pathway functioning). The patients with schizophrenia were impaired on tests of perceptual organization, rapid visual processing and M pathway functioning. There was a significant correlation between BSABS scores, negative symptoms, perceptual organization, rapid visual processing and M pathway functioning. Positive symptoms, IQ, CPT and P pathway measures did not correlate with these parameters. The best predictor of the BSABS score was the perceptual organization deficit. These results raise the possibility that multiple facets of visual information processing deficits can be explained by M pathway dysfunction in schizophrenia, resulting in impaired attentional modulation of perceptual organization and of natural image categorization.

  14. A Visual Profile of Queensland Indigenous Children.

    PubMed

    Hopkins, Shelley; Sampson, Geoff P; Hendicott, Peter L; Wood, Joanne M

    2016-03-01

    Little is known about the prevalence of refractive error, binocular vision, and other visual conditions in Australian Indigenous children. This is important given the association of these visual conditions with reduced reading performance in the wider population, which may also contribute to the suboptimal reading performance reported in this population. The aim of this study was to develop a visual profile of Queensland Indigenous children. Vision testing was performed on 595 primary schoolchildren in Queensland, Australia. Vision parameters measured included visual acuity, refractive error, color vision, nearpoint of convergence, horizontal heterophoria, fusional vergence range, accommodative facility, AC/A ratio, visual motor integration, and rapid automatized naming. Near heterophoria, nearpoint of convergence, and near fusional vergence range were used to classify convergence insufficiency (CI). Although refractive error (Indigenous, 10%; non-Indigenous, 16%; p = 0.04) and strabismus (Indigenous, 0%; non-Indigenous, 3%; p = 0.03) were significantly less common in Indigenous children, CI was twice as prevalent (Indigenous, 10%; non-Indigenous, 5%; p = 0.04). Reduced visual information processing skills were more common in Indigenous children (reduced visual motor integration [Indigenous, 28%; non-Indigenous, 16%; p < 0.01] and slower rapid automatized naming [Indigenous, 67%; non-Indigenous, 59%; p = 0.04]). The prevalence of visual impairment (reduced visual acuity) and color vision deficiency was similar between groups. Indigenous children have less refractive error and strabismus than their non-Indigenous peers. However, CI and reduced visual information processing skills were more common in this group. Given that vision screenings primarily target visual acuity assessment and strabismus detection, this is an important finding, as many Indigenous children with CI and reduced visual information processing may be missed. Emphasis should be placed on identifying children with CI and reduced visual information processing given the potential effect of these conditions on school performance.

  15. Real-time lexical comprehension in young children learning American Sign Language.

    PubMed

    MacDonald, Kyle; LaMarr, Todd; Corina, David; Marchman, Virginia A; Fernald, Anne

    2018-04-16

    When children interpret spoken language in real time, linguistic information drives rapid shifts in visual attention to objects in the visual world. This language-vision interaction can provide insights into children's developing efficiency in language comprehension. But how does language influence visual attention when the linguistic signal and the visual world are both processed via the visual channel? Here, we measured eye movements during real-time comprehension of a visual-manual language, American Sign Language (ASL), by 29 native ASL-learning children (16-53 mos, 16 deaf, 13 hearing) and 16 fluent deaf adult signers. All signers showed evidence of rapid, incremental language comprehension, tending to initiate an eye movement before sign offset. Deaf and hearing ASL-learners showed similar gaze patterns, suggesting that the in-the-moment dynamics of eye movements during ASL processing are shaped by the constraints of processing a visual language in real time and not by differential access to auditory information in day-to-day life. Finally, variation in children's ASL processing was positively correlated with age and vocabulary size. Thus, despite competition for attention within a single modality, the timing and accuracy of visual fixations during ASL comprehension reflect information processing skills that are important for language acquisition regardless of language modality. © 2018 John Wiley & Sons Ltd.

  16. Adaptation in human visual cortex as a mechanism for rapid discrimination of aversive stimuli.

    PubMed

    Keil, Andreas; Stolarova, Margarita; Moratti, Stephan; Ray, William J

    2007-06-01

    The ability to react rapidly and efficiently to adverse stimuli is crucial for survival. Neuroscience and behavioral studies have converged to show that visual information associated with aversive content is processed quickly and accurately and is associated with rapid amplification of the neural responses. In particular, unpleasant visual information has repeatedly been shown to evoke increased cortical activity during early visual processing between 60 and 120 ms following the onset of a stimulus. However, the nature of these early responses is not well understood. Using neutral versus unpleasant colored pictures, the current report examines the time course of short-term changes in the human visual cortex when a subject is repeatedly exposed to simple grating stimuli in a classical conditioning paradigm. We analyzed changes in amplitude and synchrony of large-scale oscillatory activity across 2 days of testing, which included baseline measurements, 2 conditioning sessions, and a final extinction session. We found a gradual increase in amplitude and synchrony of very early cortical oscillations in the 20-35 Hz range across conditioning sessions, specifically for conditioned stimuli predicting aversive visual events. This increase for conditioned stimuli affected stimulus-locked cortical oscillations at a latency of around 60-90 ms and disappeared during extinction. Our findings suggest that reorganization of neural connectivity on the level of the visual cortex acts to optimize early perception of specific features indicative of emotional relevance.

  17. Ultra-Rapid serial visual presentation reveals dynamics of feedforward and feedback processes in the ventral visual pathway.

    PubMed

    Mohsenzadeh, Yalda; Qin, Sheng; Cichy, Radoslaw M; Pantazis, Dimitrios

    2018-06-21

    Human visual recognition activates a dense network of overlapping feedforward and recurrent neuronal processes, making it hard to disentangle processing in the feedforward from the feedback direction. Here, we used ultra-rapid serial visual presentation to suppress sustained activity that blurs the boundaries of processing steps, enabling us to resolve two distinct stages of processing with MEG multivariate pattern classification. The first processing stage was the rapid activation cascade of the bottom-up sweep, which terminated early as visual stimuli were presented at progressively faster rates. The second stage was the emergence of categorical information with peak latency that shifted later in time with progressively faster stimulus presentations, indexing time-consuming recurrent processing. Using MEG-fMRI fusion with representational similarity, we localized recurrent signals in early visual cortex. Together, our findings segregated an initial bottom-up sweep from subsequent feedback processing, and revealed the neural signature of increased recurrent processing demands for challenging viewing conditions. © 2018, Mohsenzadeh et al.

  18. Short-term memory for figure-ground organization in the visual cortex.

    PubMed

    O'Herron, Philip; von der Heydt, Rüdiger

    2009-03-12

    Whether the visual system uses a buffer to store image information and the duration of that storage have been debated intensely in recent psychophysical studies. The long phases of stable perception of reversible figures suggest a memory that persists for seconds. But persistence of similar duration has not been found in signals of the visual cortex. Here, we show that figure-ground signals in the visual cortex can persist for a second or more after the removal of the figure-ground cues. When new figure-ground information is presented, the signals adjust rapidly, but when a figure display is changed to an ambiguous edge display, the signals decay slowly--a behavior that is characteristic of memory devices. Figure-ground signals represent the layout of objects in a scene, and we propose that a short-term memory for object layout is important in providing continuity of perception in the rapid stream of images flooding our eyes.

  19. Location cue validity affects inhibition of return of visual processing.

    PubMed

    Wright, R D; Richard, C M

    2000-01-01

    Inhibition-of-return is the process by which visual search for an object positioned among others is biased toward novel rather than previously inspected items. It is thought to occur automatically and to increase search efficiency. We examined this phenomenon by studying the facilitative and inhibitory effects of location cueing on target-detection response times in a search task. The results indicated that facilitation was a reflexive consequence of cueing whereas inhibition appeared to depend on cue informativeness. More specifically, the inhibition-of-return effect occurred only when the cue provided no information about the impending target's location. We suggest that the results are consistent with the notion of two levels of visual processing. The first involves rapid and reflexive operations that underlie the facilitative effects of location cueing on target detection. The second involves a rapid but goal-driven inhibition procedure that the perceiver can invoke if doing so will enhance visual search performance.

  20. Graphic Design for the Computer Age; Visual Communication for all Media.

    ERIC Educational Resources Information Center

    Hamilton, Edward A.

    Because of the rapid pace of today's world, graphic designs which communicate at a glance are needed in all information areas. The essays in this book deal with various aspects of graphic design. These brief essays, each illustrated with graphics, concern the following topics: a short history of visual communication, information design, the merits…

  21. Fast visual prediction and slow optimization of preferred walking speed.

    PubMed

    O'Connor, Shawn M; Donelan, J Maxwell

    2012-05-01

    People prefer walking speeds that minimize energetic cost. This may be accomplished by directly sensing metabolic rate and adapting gait to minimize it, but only slowly, due to the compounded effects of sensing delays and iterative convergence. Visual and other sensory information is available more rapidly and could help predict which gait changes reduce energetic cost, but only approximately, because it relies on prior experience and an indirect means to achieve economy. We used virtual reality to manipulate visually presented speed while 10 healthy subjects freely walked on a self-paced treadmill to test whether the nervous system beneficially combines these two mechanisms. Rather than manipulating the speed of visual flow directly, we coupled it to the walking speed selected by the subject and then manipulated the ratio between these two speeds. We then quantified the dynamics of walking speed adjustments in response to perturbations of the visual speed. For step changes in visual speed, subjects responded with rapid speed adjustments (lasting <2 s) in a direction opposite to the perturbation and consistent with returning the visually presented speed toward their preferred walking speed: when visual speed was suddenly twice (one-half) the walking speed, subjects decreased (increased) their speed. Subjects did not maintain the new speed but instead gradually returned toward the speed preferred before the perturbation (lasting >300 s). The timing and direction of these responses strongly indicate that a rapid predictive process informed by visual feedback helps select preferred speed, perhaps to complement a slower optimization process that seeks to minimize energetic cost.

  22. Asymmetries in the Control of Saccadic Eye Movements to Bifurcating Targets.

    ERIC Educational Resources Information Center

    Zeevi, Yehoshua Y.; And Others

    The examination of saccadic eye movements--rapid shifts in gaze from one visual area of interest to another--is useful in studying pilot's visual learning in flight simulator training. Saccadic eye movements are the basic oculomotor response associated with the acquisition of visual information and provide an objective measure of higher perceptual…

  23. Robust selectivity to two-object images in human visual cortex

    PubMed Central

    Agam, Yigal; Liu, Hesheng; Papanastassiou, Alexander; Buia, Calin; Golby, Alexandra J.; Madsen, Joseph R.; Kreiman, Gabriel

    2010-01-01

    We can recognize objects in a fraction of a second in spite of the presence of other objects [1–3]. The responses in macaque areas V4 and inferior temporal cortex [4–15] to a neuron’s preferred stimuli are typically suppressed by the addition of a second object within the receptive field (see however [16, 17]). How can this suppression be reconciled with rapid visual recognition in complex scenes? One option is that certain “special categories” are unaffected by other objects [18], but this leaves the problem unsolved for other categories. Another possibility is that serial attentional shifts help ameliorate the problem of distractor objects [19–21]. Yet psychophysical studies [1–3], scalp recordings [1] and neurophysiological recordings [14, 16, 22–24] suggest that the initial sweep of visual processing contains a significant amount of information. We recorded intracranial field potentials in human visual cortex during presentation of flashes of two-object images. Visual selectivity from temporal cortex during the initial ~200 ms was largely robust to the presence of other objects. We could train linear decoders on the responses to isolated objects and decode information in two-object images. These observations are compatible with parallel, hierarchical and feed-forward theories of rapid visual recognition [25] and may provide a neural substrate to begin to unravel rapid recognition in natural scenes. PMID:20417105
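
    The decoding approach mentioned in this record can be sketched roughly as follows. This is a hypothetical illustration on simulated data: the random response patterns, the additive two-object mixture, and the least-squares linear readout are all assumptions for the sketch, not the authors' actual recordings or analysis pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
n_electrodes, n_trials = 20, 100

# Simulated field-potential response patterns for two object categories
# (assumption: each category evokes a distinct mean pattern plus noise).
pattern_a = rng.normal(0, 1, n_electrodes)
pattern_b = rng.normal(0, 1, n_electrodes)

def trials(pattern, n):
    """Draw n noisy trials around a mean response pattern."""
    return pattern + rng.normal(0, 0.5, (n, len(pattern)))

# Train a linear decoder on responses to isolated objects.
X_train = np.vstack([trials(pattern_a, n_trials), trials(pattern_b, n_trials)])
y_train = np.array([0] * n_trials + [1] * n_trials)

# Least-squares linear readout: w solves X w ~ (2y - 1).
w, *_ = np.linalg.lstsq(X_train, 2 * y_train - 1, rcond=None)

# Test on two-object responses: the preferred pattern plus a distractor,
# modeled here (another assumption) as a partial additive mixture.
distractor = rng.normal(0, 1, n_electrodes)
X_test = np.vstack([trials(pattern_a + 0.5 * distractor, n_trials),
                    trials(pattern_b + 0.5 * distractor, n_trials)])
y_test = np.array([0] * n_trials + [1] * n_trials)

# If selectivity is robust to the added object, the decoder trained on
# isolated objects should still classify the mixtures well above chance.
accuracy = np.mean((X_test @ w > 0) == (y_test == 1))
print(f"decoding accuracy on two-object trials: {accuracy:.2f}")
```

    In this toy setup the distractor shifts both categories by the same amount, so the category difference that the linear readout relies on survives the mixture, which is the intuition behind training on isolated objects and testing on two-object displays.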

  24. Episodic Memory Retrieval Functionally Relies on Very Rapid Reactivation of Sensory Information.

    PubMed

    Waldhauser, Gerd T; Braun, Verena; Hanslmayr, Simon

    2016-01-06

    Episodic memory retrieval is assumed to rely on the rapid reactivation of sensory information that was present during encoding, a process termed "ecphory." We investigated the functional relevance of this scarcely understood process in two experiments in human participants. We presented stimuli to the left or right of fixation at encoding, followed by an episodic memory test with centrally presented retrieval cues. This allowed us to track the reactivation of lateralized sensory memory traces during retrieval. Successful episodic retrieval led to a very early (∼100-200 ms) reactivation of lateralized alpha/beta (10-25 Hz) electroencephalographic (EEG) power decreases in the visual cortex contralateral to the visual field at encoding. Applying rhythmic transcranial magnetic stimulation to interfere with early retrieval processing in the visual cortex led to decreased episodic memory performance specifically for items encoded in the visual field contralateral to the site of stimulation. These results demonstrate, for the first time, that episodic memory functionally relies on very rapid reactivation of sensory information. Remembering personal experiences requires a "mental time travel" to revisit sensory information perceived in the past. This process is typically described as a controlled, relatively slow process. However, by using electroencephalography to measure neural activity with a high time resolution, we show that such episodic retrieval entails a very rapid reactivation of sensory brain areas. Using transcranial magnetic stimulation to alter brain function during retrieval revealed that this early sensory reactivation is causally relevant for conscious remembering. These results give first neural evidence for a functional, preconscious component of episodic remembering. This provides new insight into the nature of human memory and may help in the understanding of psychiatric conditions that involve the automatic intrusion of unwanted memories. Copyright © 2016 the authors 0270-6474/16/360251-10$15.00/0.

  25. Temporal dynamics of encoding, storage and reallocation of visual working memory

    PubMed Central

    Bays, Paul M; Gorgoraptis, Nikos; Wee, Natalie; Marshall, Louise; Husain, Masud

    2012-01-01

    The process of encoding a visual scene into working memory has previously been studied using binary measures of recall. Here we examine the temporal evolution of memory resolution, based on observers’ ability to reproduce the orientations of objects presented in brief, masked displays. Recall precision was accurately described by the interaction of two independent constraints: an encoding limit that determines the maximum rate at which information can be transferred into memory, and a separate storage limit that determines the maximum fidelity with which information can be maintained. Recall variability decreased incrementally with time, consistent with a parallel encoding process in which visual information from multiple objects accumulates simultaneously in working memory. No evidence was observed for a limit on the number of items stored. Cueing one display item with a brief flash led to rapid development of a recall advantage for that item. This advantage was short-lived if the cue was simply a salient visual event, but was maintained if it indicated an object of particular relevance to the task. These cueing effects were observed even for items that had already been encoded into memory, indicating that limited memory resources can be rapidly reallocated to prioritize salient or goal-relevant information. PMID:21911739

  6. Temporal dynamics of encoding, storage, and reallocation of visual working memory.

    PubMed

    Bays, Paul M; Gorgoraptis, Nikos; Wee, Natalie; Marshall, Louise; Husain, Masud

    2011-09-12

    The process of encoding a visual scene into working memory has previously been studied using binary measures of recall. Here, we examine the temporal evolution of memory resolution, based on observers' ability to reproduce the orientations of objects presented in brief, masked displays. Recall precision was accurately described by the interaction of two independent constraints: an encoding limit that determines the maximum rate at which information can be transferred into memory and a separate storage limit that determines the maximum fidelity with which information can be maintained. Recall variability decreased incrementally with time, consistent with a parallel encoding process in which visual information from multiple objects accumulates simultaneously in working memory. No evidence was observed for a limit on the number of items stored. Cuing one display item with a brief flash led to rapid development of a recall advantage for that item. This advantage was short-lived if the cue was simply a salient visual event but was maintained if it indicated an object of particular relevance to the task. These cuing effects were observed even for items that had already been encoded into memory, indicating that limited memory resources can be rapidly reallocated to prioritize salient or goal-relevant information.
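
    The two-constraint account above (an encoding rate limit and a separate storage fidelity limit) can be illustrated with a toy model. This is a hedged sketch, not the authors' fitted model: the function `predicted_precision` and its parameter values are invented for illustration, capturing only the qualitative idea that per-item precision grows with exposure time until a shared storage ceiling binds.

```python
# Toy illustration (NOT the authors' fitted model) of recall precision
# shaped by two independent constraints shared across N display items:
# an encoding rate limit and a storage fidelity limit.

def predicted_precision(t, n_items, encoding_rate=2.0, storage_limit=0.5):
    """Hypothetical per-item precision after t seconds of exposure.

    Precision accumulates at encoding_rate / n_items per second until it
    reaches the storage ceiling storage_limit / n_items; observed
    precision is set by whichever constraint binds first.
    """
    encoded = (encoding_rate / n_items) * t  # encoding-limited growth
    ceiling = storage_limit / n_items        # storage-limited maximum
    return min(encoded, ceiling)
```

    Under this sketch, brief displays leave recall encoding-limited (precision rises with time), while long displays leave it storage-limited (precision plateaus and falls with set size), mirroring the pattern the abstract describes.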

  7. Visual Analytics in Public Safety: Example Capabilities for Example Government Agencies

    DTIC Science & Technology

    2011-10-01

    is not limited to: the Police Records Information Management Environment for British Columbia (PRIME-BC), the Police Reporting and Occurrence System...and filtering for rapid identification of relevant documents - Graphical environment for visual evidence marshaling - Interactive linking and...analytical reasoning facilitated by interactive visual interfaces and integration with computational analytics. Indeed, a wide variety of technologies

  8. Is cross-modal integration of emotional expressions independent of attentional resources?

    PubMed

    Vroomen, J; Driver, J; de Gelder, B

    2001-12-01

    In this study, we examined whether integration of visual and auditory information about emotions requires limited attentional resources. Subjects judged whether a voice expressed happiness or fear, while trying to ignore a concurrently presented static facial expression. As an additional task, the subjects had to add two numbers together rapidly (Experiment 1), count the occurrences of a target digit in a rapid serial visual presentation (Experiment 2), or judge the pitch of a tone as high or low (Experiment 3). The visible face had an impact on judgments of the emotion of the heard voice in all the experiments. This cross-modal effect was independent of whether or not the subjects performed a demanding additional task. This suggests that integration of visual and auditory information about emotions may be a mandatory process, unconstrained by attentional resources.

  9. Short-Term Memory for Figure-Ground Organization in the Visual Cortex

    PubMed Central

    O’Herron, Philip; von der Heydt, Rüdiger

    2009-01-01

    Summary Whether the visual system uses a buffer to store image information and the duration of that storage have been debated intensely in recent psychophysical studies. The long phases of stable perception of reversible figures suggest a memory that persists for seconds. But persistence of similar duration has not been found in signals of the visual cortex. Here we show that figure-ground signals in the visual cortex can persist for a second or more after the removal of the figure-ground cues. When new figure-ground information is presented, the signals adjust rapidly, but when a figure display is changed to an ambiguous edge display, the signals decay slowly – a behavior that is characteristic of memory devices. Figure-ground signals represent the layout of objects in a scene, and we propose that a short-term memory for object layout is important in providing continuity of perception in the rapid stream of images flooding our eyes. PMID:19285475

  10. Four types of ensemble coding in data visualizations.

    PubMed

    Szafir, Danielle Albers; Haroz, Steve; Gleicher, Michael; Franconeri, Steven

    2016-01-01

    Ensemble coding supports rapid extraction of visual statistics about distributed visual information. Researchers typically study this ability with the goal of drawing conclusions about how such coding extracts information from natural scenes. Here we argue that a second domain can serve as another strong inspiration for understanding ensemble coding: graphs, maps, and other visual presentations of data. Data visualizations allow observers to leverage their ability to perform visual ensemble statistics on distributions of spatial or featural visual information to estimate actual statistics on data. We survey the types of visual statistical tasks that occur within data visualizations across everyday examples, such as scatterplots, and more specialized images, such as weather maps or depictions of patterns in text. We divide these tasks into four categories: identification of sets of values, summarization across those values, segmentation of collections, and estimation of structure. We point to unanswered questions for each category and give examples of such cross-pollination in the current literature. Increased collaboration between the data visualization and perceptual psychology research communities can inspire new solutions to challenges in visualization while simultaneously exposing unsolved problems in perception research.
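
    The four task categories named above (identification, summarization, segmentation, and estimation of structure) map naturally onto simple statistics over plotted values. The following sketch is illustrative only; the function names are invented, and it shows three of the categories computed over a toy scatterplot.

```python
# Illustrative sketch (hypothetical helper names) of ensemble statistics
# a viewer might extract from a scatterplot of (x, y) points.

from statistics import mean

def summarize(points):
    """Mean x and mean y over a set of points (summarization)."""
    xs, ys = zip(*points)
    return mean(xs), mean(ys)

def segment(points, labels):
    """Group points by category label (segmentation of collections)."""
    groups = {}
    for p, lab in zip(points, labels):
        groups.setdefault(lab, []).append(p)
    return groups

def trend_sign(points):
    """Sign of the x-y covariance (a crude estimate of structure)."""
    mx, my = summarize(points)
    cov = sum((x - mx) * (y - my) for x, y in points)
    return (cov > 0) - (cov < 0)
```

    The point of the paper's framing is that observers approximate exactly these kinds of quantities perceptually, without computing them explicitly.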

  11. The Electronic Classroom.

    ERIC Educational Resources Information Center

    Mueller, Richard J.

    Current computerized electronic technology is making possible, not only the broad and rapid distribution of information, but also its manipulation, analysis, synthesis, and recombination. The shift from print to a combination of visual and oral expression is being propelled by the mass media, and visual literacy is both a concept and an…

  12. A Rapid Subcortical Amygdala Route for Faces Irrespective of Spatial Frequency and Emotion.

    PubMed

    McFadyen, Jessica; Mermillod, Martial; Mattingley, Jason B; Halász, Veronika; Garrido, Marta I

    2017-04-05

    There is significant controversy over the existence and function of a direct subcortical visual pathway to the amygdala. It is thought that this pathway rapidly transmits low spatial frequency information to the amygdala independently of the cortex, and yet the directionality of this function has never been determined. We used magnetoencephalography to measure neural activity while human participants discriminated the gender of neutral and fearful faces filtered for low or high spatial frequencies. We applied dynamic causal modeling to demonstrate that the most likely underlying neural network consisted of a pulvinar-amygdala connection that was uninfluenced by spatial frequency or emotion, and a cortical-amygdala connection that conveyed high spatial frequencies. Crucially, data-driven neural simulations revealed a clear temporal advantage of the subcortical connection over the cortical connection in influencing amygdala activity. Thus, our findings support the existence of a rapid subcortical pathway that is nonselective in terms of the spatial frequency or emotional content of faces. We propose that the "coarseness" of the subcortical route may be better reframed as "generalized." SIGNIFICANCE STATEMENT The human amygdala coordinates how we respond to biologically relevant stimuli, such as threat or reward. It has been postulated that the amygdala first receives visual input via a rapid subcortical route that conveys "coarse" information, namely, low spatial frequencies. For the first time, the present paper provides direction-specific evidence from computational modeling that the subcortical route plays a generalized role in visual processing by rapidly transmitting raw, unfiltered information directly to the amygdala. This calls into question a widely held assumption across human and animal research that fear responses are produced faster by low spatial frequencies.
    Our proposed mechanism suggests that organisms quickly generate fear responses to a wide range of visual properties, with implications for future research on anxiety-prevention strategies. Copyright © 2017 the authors 0270-6474/17/373864-11$15.00/0.

  13. Processing reafferent and exafferent visual information for action and perception.

    PubMed

    Reichenbach, Alexandra; Diedrichsen, Jörn

    2015-01-01

    A recent study suggests that reafferent hand-related visual information utilizes a privileged, attention-independent processing channel for motor control. This process was termed visuomotor binding to reflect its proposed function: linking visual reafferences to the corresponding motor control centers. Here, we ask whether the advantage of processing reafferent over exafferent visual information is a specific feature of the motor processing stream or whether the improved processing also benefits the perceptual processing stream. Human participants performed a bimanual reaching task in a cluttered visual display, and one of the visual hand cursors could be displaced laterally during the movement. We measured the rapid feedback responses of the motor system as well as matched perceptual judgments of which cursor was displaced. Perceptual judgments were either made by watching the visual scene without moving or made simultaneously to the reaching tasks, such that the perceptual processing stream could also profit from the specialized processing of reafferent information in the latter case. Our results demonstrate that perceptual judgments in the heavily cluttered visual environment were improved when performed based on reafferent information. Even in this case, however, the filtering capability of the perceptual processing stream suffered more from the increasing complexity of the visual scene than the motor processing stream. These findings suggest partly shared and partly segregated processing of reafferent information for vision for motor control versus vision for perception.

  14. Explore the virtual side of earth science

    USGS Publications Warehouse

    ,

    1998-01-01

    Scientists have always struggled to find an appropriate technology that could represent three-dimensional (3-D) data, facilitate dynamic analysis, and encourage on-the-fly interactivity. In the recent past, scientific visualization has increased the scientist's ability to visualize information, but it has not provided the interactive environment necessary for rapidly changing the model or for viewing the model in ways not predetermined by the visualization specialist. Virtual Reality Modeling Language (VRML 2.0) is a new environment for visualizing 3-D information spaces and is accessible through the Internet with current browser technologies. Researchers from the U.S. Geological Survey (USGS) are using VRML as a scientific visualization tool to help convey complex scientific concepts to various audiences. Kevin W. Laurent, computer scientist, and Maura J. Hogan, technical information specialist, have created a collection of VRML models available through the Internet at Virtual Earth Science (virtual.er.usgs.gov).

  15. Using Visualization in Cockpit Decision Support Systems

    NASA Technical Reports Server (NTRS)

    Aragon, Cecilia R.

    2005-01-01

    In order to safely operate their aircraft, pilots must make rapid decisions based on integrating and processing large amounts of heterogeneous information. Visual displays are often the most efficient method of presenting safety-critical data to pilots in real time. However, care must be taken to ensure the pilot is provided with the appropriate amount of information to make effective decisions and not become cognitively overloaded. The results of two usability studies of a prototype airflow hazard visualization cockpit decision support system are summarized. The studies demonstrate that such a system significantly improves the performance of helicopter pilots landing under turbulent conditions. Based on these results, design principles and implications for cockpit decision support systems using visualization are presented.

  16. Visual Processing of Verbal and Nonverbal Stimuli in Adolescents with Reading Disabilities.

    ERIC Educational Resources Information Center

    Boden, Catherine; Brodeur, Darlene A.

    1999-01-01

    A study investigated whether 32 adolescents with reading disabilities (RD) were slower at processing visual information compared to children of comparable age and reading level, or whether their deficit was specific to the written word. Adolescents with RD demonstrated difficulties in processing rapidly presented verbal and nonverbal visual…

  17. Primary Visual Cortex Represents the Difference Between Past and Present

    PubMed Central

    Nortmann, Nora; Rekauzke, Sascha; Onat, Selim; König, Peter; Jancke, Dirk

    2015-01-01

    The visual system is confronted with rapidly changing stimuli in everyday life. It is not well understood how information in such a stream of input is updated within the brain. We performed voltage-sensitive dye imaging across the primary visual cortex (V1) to capture responses to sequences of natural scene contours. We presented vertically and horizontally filtered natural images, and their superpositions, at 10 or 33 Hz. At low frequency, the encoding was found to represent not the currently presented images, but differences in orientation between consecutive images. This was in sharp contrast to more rapid sequences for which we found an ongoing representation of current input, consistent with earlier studies. Our finding that for slower image sequences, V1 does no longer report actual features but represents their relative difference in time counteracts the view that the first cortical processing stage must always transfer complete information. Instead, we show its capacities for change detection with a new emphasis on the role of automatic computation evolving in the 100-ms range, inevitably affecting information transmission further downstream. PMID:24343889
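
    The contrast the study reports, current-input coding at rapid presentation rates versus change coding at slower rates, can be caricatured in a few lines. This is a toy illustration, not the study's analysis; the function name, the rate threshold, and the scalar "orientation" stand-in are all invented for the sketch.

```python
# Toy caricature (NOT the study's analysis) of the reported V1 behavior:
# fast image sequences are represented by the current input, slower
# sequences by the orientation difference between consecutive images.

def v1_signal(orientations, rate_hz, threshold_hz=20):
    """Hypothetical per-frame 'represented' value for a sequence of
    scalar orientation values presented at rate_hz."""
    out = []
    prev = orientations[0]
    for cur in orientations[1:]:
        if rate_hz >= threshold_hz:
            out.append(cur)              # fast: track the current image
        else:
            out.append(abs(cur - prev))  # slow: report the change
        prev = cur
    return out
```

    In this caricature a repeated image yields a sustained response at 33 Hz but a null "no change" signal at 10 Hz, which is the qualitative pattern the abstract describes.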

  18. 77 FR 69899 - Public Conference on Geographic Information Systems (GIS) in Transportation Safety

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-11-21

    ... NATIONAL TRANSPORTATION SAFETY BOARD Public Conference on Geographic Information Systems (GIS) in... Geographic Information Systems (GIS) in transportation safety on December 4-5, 2012. GIS is a rapidly... visualization of data. The meeting will bring researchers and practitioners in transportation safety and GIS...

  19. Asynchronous Visualization of Spatiotemporal Information for Multiple Moving Targets

    ERIC Educational Resources Information Center

    Wang, Huadong

    2013-01-01

    In the modern information age, the quantity and complexity of spatiotemporal data is increasing both rapidly and continuously. Sensor systems with multiple feeds that gather multidimensional spatiotemporal data will result in information clusters and overload, as well as a high cognitive load for users of these systems. To meet future…

  20. Rapid Simultaneous Enhancement of Visual Sensitivity and Perceived Contrast during Saccade Preparation

    PubMed Central

    Rolfs, Martin; Carrasco, Marisa

    2012-01-01

    Humans and other animals with foveate vision make saccadic eye movements to prioritize the visual analysis of behaviorally relevant information. Even before movement onset, visual processing is selectively enhanced at the target of a saccade, presumably gated by brain areas controlling eye movements. Here we assess concurrent changes in visual performance and perceived contrast before saccades, and show that saccade preparation enhances perception rapidly, altering early visual processing in a manner akin to increasing the physical contrast of the visual input. Observers compared orientation and contrast of a test stimulus, appearing briefly before a saccade, to a standard stimulus, presented previously during a fixation period. We found simultaneous progressive enhancement in both orientation discrimination performance and perceived contrast as time approached saccade onset. These effects were robust as early as 60 ms after the eye movement was cued, much faster than the voluntary deployment of covert attention (without eye movements), which takes ~300 ms. Our results link the dynamics of saccade preparation, visual performance, and subjective experience and show that upcoming eye movements alter visual processing by increasing the signal strength. PMID:23035086

  1. Perceiving groups: The people perception of diversity and hierarchy.

    PubMed

    Phillips, L Taylor; Slepian, Michael L; Hughes, Brent L

    2018-05-01

    The visual perception of individuals has received considerable attention (visual person perception), but little social psychological work has examined the processes underlying the visual perception of groups of people (visual people perception). Ensemble-coding is a visual mechanism that automatically extracts summary statistics (e.g., average size) of lower-level sets of stimuli (e.g., geometric figures), and also extends to the visual perception of groups of faces. Here, we consider whether ensemble-coding supports people perception, allowing individuals to form rapid, accurate impressions about groups of people. Across nine studies, we demonstrate that people visually extract high-level properties (e.g., diversity, hierarchy) that are unique to social groups, as opposed to individual persons. Observers rapidly and accurately perceived group diversity and hierarchy, or variance across race, gender, and dominance (Studies 1-3). Further, results persist when observers are given very short display times, backward pattern masks, color- and contrast-controlled stimuli, and absolute versus relative response options (Studies 4a-7b), suggesting robust effects supported specifically by ensemble-coding mechanisms. Together, we show that humans can rapidly and accurately perceive not only individual persons, but also emergent social information unique to groups of people. These people perception findings demonstrate the importance of visual processes for enabling people to perceive social groups and behave effectively in group-based social interactions. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  2. Visual Stimuli Evoked Action Potentials Trigger Rapidly Propagating Dendritic Calcium Transients in the Frog Optic Tectum Layer 6 Neurons.

    PubMed

    Svirskis, Gytis; Baranauskas, Gytis; Svirskiene, Natasa; Tkatch, Tatiana

    2015-01-01

    The superior colliculus in mammals, or the optic tectum in amphibians, is a major visual information processing center responsible for the generation of orienting responses such as saccades in monkeys or prey-catching and avoidance behavior in frogs. The conserved structure and function of the superior colliculus and the optic tectum across distant species such as frogs, birds, and monkeys permit rather general conclusions to be drawn after studying a single species. We chose the frog optic tectum because we are able to perform whole-cell voltage-clamp recordings and fluorescence imaging of tectal neurons while they respond to a visual stimulus. In the optic tectum of amphibians most visual information is processed by pear-shaped neurons possessing long dendritic branches, which receive the majority of synapses originating from the retinal ganglion cells. Since the first step of retinal input integration is performed on these dendrites, it is important to know whether this integration is enhanced by active dendritic properties. We demonstrate that rapid calcium transients coinciding with visual stimulus-evoked action potentials in the somatic recordings can be readily detected up to the fine branches of these dendrites. These transients were blocked by the calcium channel blockers nifedipine and CdCl2, indicating that calcium entered dendrites via voltage-activated L-type calcium channels. The high speed of calcium transient propagation, >300 μm in <10 ms, is consistent with the notion that action potentials, actively propagating along dendrites, open voltage-gated L-type calcium channels causing rapid calcium concentration transients in the dendrites. We conclude that such activation by somatic action potentials of the dendritic voltage-gated calcium channels in the close vicinity of the synapses formed by axons of the retinal ganglion cells may facilitate visual information processing in the principal neurons of the frog optic tectum.

  3. Visual short-term memory guides infants' visual attention.

    PubMed

    Mitsven, Samantha G; Cantrell, Lisa M; Luck, Steven J; Oakes, Lisa M

    2018-08-01

    Adults' visual attention is guided by the contents of visual short-term memory (VSTM). Here we asked whether 10-month-old infants' (N = 41) visual attention is also guided by the information stored in VSTM. In two experiments, we modified the one-shot change detection task (Oakes, Baumgartner, Barrett, Messenger, & Luck, 2013) to create a simplified cued visual search task to ask how information stored in VSTM influences where infants look. A single sample item (e.g., a colored circle) was presented at fixation for 500 ms, followed by a brief (300 ms) retention interval and then a test array consisting of two items, one on each side of fixation. One item in the test array matched the sample stimulus and the other did not. Infants were more likely to look at the non-matching item than at the matching item, demonstrating that the information stored rapidly in VSTM guided subsequent looking behavior. Copyright © 2018 Elsevier B.V. All rights reserved.

  4. The development of individuation in autism

    PubMed Central

    O'Hearn, Kirsten; Franconeri, Steven; Wright, Catherine; Minshew, Nancy; Luna, Beatriz

    2012-01-01

    Evidence suggests that people with autism use holistic information differently than typical adults. The current studies examine this possibility by investigating how core visual processes that contribute to holistic processing – individuation and element grouping – develop in participants with autism and typically developing (TD) participants matched for age, IQ and gender. Individuation refers to the ability to "see" up to 4 elements simultaneously; grouping these elements can change the number of elements that are rapidly apprehended. We examined these core processes using two well-established paradigms, rapid enumeration and multiple object tracking (MOT). In both tasks, a performance limit of about 4 elements in adulthood is thought to reflect individuation capacity. Participants with autism had a smaller individuation capacity than TD controls, regardless of whether they were enumerating static elements or tracking moving ones. To manipulate holistic information and individuation performance, we grouped the elements into a design or had elements move together. Participants with autism were affected to a similar degree as TD participants by the holistic information, whether the manipulation helped or hurt performance, consistent with evidence that some types of gestalt/grouping information are processed typically in autism. There was substantial development in autism from childhood to adolescence, but not from adolescence to adulthood, a pattern distinct from TD participants. These results provide important information about core visual processes in autism, as well as insight into the architecture of vision (e.g., individuation appears distinct from visual strengths in autism, such as visual search, despite similarities). PMID:22963232

  5. Dynamic information processing states revealed through neurocognitive models of object semantics

    PubMed Central

    Clarke, Alex

    2015-01-01

    Recognising objects relies on highly dynamic, interactive brain networks to process multiple aspects of object information. To fully understand how different forms of information about objects are represented and processed in the brain requires a neurocognitive account of visual object recognition that combines a detailed cognitive model of semantic knowledge with a neurobiological model of visual object processing. Here we ask how specific cognitive factors are instantiated in our mental processes and how they dynamically evolve over time. We suggest that coarse semantic information, based on generic shared semantic knowledge, is rapidly extracted from visual inputs and is sufficient to drive rapid category decisions. Subsequent recurrent neural activity between the anterior temporal lobe and posterior fusiform supports the formation of object-specific semantic representations – a conjunctive process primarily driven by the perirhinal cortex. These object-specific representations require the integration of shared and distinguishing object properties and support the unique recognition of objects. We conclude that a valuable way of understanding the cognitive activity of the brain is through testing the relationship between specific cognitive measures and dynamic neural activity. This kind of approach allows us to move towards uncovering the information processing states of the brain and how they evolve over time. PMID:25745632

  6. Enhancing astronaut performance using sensorimotor adaptability training

    PubMed Central

    Bloomberg, Jacob J.; Peters, Brian T.; Cohen, Helen S.; Mulavara, Ajitkumar P.

    2015-01-01

    Astronauts experience disturbances in balance and gait function when they return to Earth. The highly plastic human brain enables individuals to modify their behavior to match the prevailing environment. Subjects participating in specially designed variable sensory challenge training programs can enhance their ability to rapidly adapt to novel sensory situations. This is useful in our application because we aim to train astronauts to rapidly formulate effective strategies to cope with the balance and locomotor challenges associated with new gravitational environments—enhancing their ability to “learn to learn.” We do this by coupling various combinations of sensorimotor challenges with treadmill walking. A unique training system has been developed that is comprised of a treadmill mounted on a motion base to produce movement of the support surface during walking. This system provides challenges to gait stability. Additional sensory variation and challenge are imposed with a virtual visual scene that presents subjects with various combinations of discordant visual information during treadmill walking. This experience allows them to practice resolving challenging and conflicting novel sensory information to improve their ability to adapt rapidly. Information obtained from this work will inform the design of the next generation of sensorimotor countermeasures for astronauts. PMID:26441561

  7. Enhancing astronaut performance using sensorimotor adaptability training.

    PubMed

    Bloomberg, Jacob J; Peters, Brian T; Cohen, Helen S; Mulavara, Ajitkumar P

    2015-01-01

    Astronauts experience disturbances in balance and gait function when they return to Earth. The highly plastic human brain enables individuals to modify their behavior to match the prevailing environment. Subjects participating in specially designed variable sensory challenge training programs can enhance their ability to rapidly adapt to novel sensory situations. This is useful in our application because we aim to train astronauts to rapidly formulate effective strategies to cope with the balance and locomotor challenges associated with new gravitational environments-enhancing their ability to "learn to learn." We do this by coupling various combinations of sensorimotor challenges with treadmill walking. A unique training system has been developed that is comprised of a treadmill mounted on a motion base to produce movement of the support surface during walking. This system provides challenges to gait stability. Additional sensory variation and challenge are imposed with a virtual visual scene that presents subjects with various combinations of discordant visual information during treadmill walking. This experience allows them to practice resolving challenging and conflicting novel sensory information to improve their ability to adapt rapidly. Information obtained from this work will inform the design of the next generation of sensorimotor countermeasures for astronauts.

  8. Correspondence of presaccadic activity in the monkey primary visual cortex with saccadic eye movements

    PubMed Central

    Supèr, Hans; van der Togt, Chris; Spekreijse, Henk; Lamme, Victor A. F.

    2004-01-01

    We continuously scan the visual world via rapid or saccadic eye movements. Such eye movements are guided by visual information, and thus the oculomotor structures that determine when and where to look need visual information to control the eye movements. To know whether visual areas contain activity that may contribute to the control of eye movements, we recorded neural responses in the visual cortex of monkeys engaged in a delayed figure-ground detection task and analyzed the activity during the period of oculomotor preparation. We show that ≈100 ms before the onset of visually and memory-guided saccades, neural activity in V1 becomes stronger, with the strongest presaccadic responses found at the location of the saccade target. In addition, in memory-guided saccades the strength of presaccadic activity shows a correlation with the onset of the saccade. These findings indicate that the primary visual cortex contains saccade-related responses and participates in visually guided oculomotor behavior. PMID:14970334

  9. Modulation of Attentional Blink with Emotional Faces in Typical Development and in Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Yerys, Benjamin E.; Ruiz, Ericka; Strang, John; Sokoloff, Jennifer; Kenworthy, Lauren; Vaidya, Chandan J.

    2013-01-01

    Background: The attentional blink (AB) phenomenon was used to assess the effect of emotional information on early visual attention in typically developing (TD) children and children with autism spectrum disorders (ASD). The AB effect is the momentary perceptual unawareness that follows target identification in a rapid serial visual processing…

  10. Development of a Disaster Information Visualization Dashboard: A Case Study of Three Typhoons in Taiwan in 2016

    NASA Astrophysics Data System (ADS)

    Su, Wen-Ray; Tsai, Yuan-Fan; Huang, Kuei-Chin; Hsieh, Ching-En

    2017-04-01

    To facilitate disaster response and enhance the effectiveness of disaster prevention and relief, people and emergency response personnel should be able to rapidly acquire and understand information when disasters occur. However, in existing disaster platforms information is typically presented in text tables, static charts, and maps with points. These formats do not make it easy for users to understand the overall situation. Therefore, this study converts data into human-readable charts by using data visualization techniques, and builds a disaster information dashboard that is concise, attractive and flexible. This information dashboard integrates temporally and spatially correlated data, disaster statistics according to category and county, lists of disasters, and any other relevant information. The graphs are animated and interactive. The dashboard allows users to filter the data according to their needs and thus to assimilate the information more rapidly. In this study, we applied the information dashboard to the analysis of landslides during three typhoon events in 2016: Typhoon Nepartak, Typhoon Meranti and Typhoon Megi. According to the statistical results in the dashboard, the order of frequency of the disaster categories in all three events combined was rock fall, roadbed loss, slope slump, road blockage and debris flow. Disasters occurred mainly in the areas that received the most rainfall. Typhoons Nepartak and Meranti mainly affected Taitung, and Typhoon Megi mainly affected Kaohsiung. The towns Xiulin, Fengbin, Fenglin and Guangfu in Hualian County were all issued with debris flow warnings in all three typhoon events. The disaster information dashboard developed in this study allows the user to rapidly assess the overall disaster situation. It clearly and concisely reveals interactions between time, space and disaster type, and also provides comprehensive details about the disaster. 
The dashboard provides a foundation for future disaster visualization, since it can combine and present real-time information of various types; as such it will strengthen decision making in disaster prevention management.
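    The category-frequency statistic shown in the dashboard (rock fall as the most frequent category, and so on) amounts to a simple aggregation over disaster reports. A minimal Python sketch; the record fields below are illustrative assumptions, not the platform's actual schema:

```python
from collections import Counter

# Hypothetical disaster report records; field names are illustrative only.
reports = [
    {"event": "Nepartak", "county": "Taitung", "category": "rock fall"},
    {"event": "Nepartak", "county": "Taitung", "category": "roadbed loss"},
    {"event": "Meranti", "county": "Taitung", "category": "rock fall"},
    {"event": "Megi", "county": "Kaohsiung", "category": "slope slump"},
    {"event": "Megi", "county": "Kaohsiung", "category": "rock fall"},
]

# Frequency of each disaster category across all events, most frequent first.
by_category = Counter(r["category"] for r in reports)
print(by_category.most_common())
```

    The same grouping, keyed on county or event instead of category, would drive the dashboard's other filtered views.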

  11. Impact of experience when using the Rapid Upper Limb Assessment to assess postural risk in children using information and communication technologies.

    PubMed

    Chen, Janice D; Falkmer, Torbjörn; Parsons, Richard; Buzzard, Jennifer; Ciccarelli, Marina

    2014-05-01

    The Rapid Upper Limb Assessment (RULA) is an observation-based screening tool that has been used to assess postural risks of children in school settings. Studies using eye-tracking technology suggest that visual search strategies are influenced by experience in the task performed. This study investigated whether experience in postural risk assessments contributed to differences in RULA outcome scores and in the visual search strategies used. While wearing an eye-tracker, 16 student occupational therapists and 16 experienced occupational therapists used the RULA to assess 11 video scenarios of a child using different mobile information and communication technologies (ICT) in the home environment. No significant differences in RULA outcome scores and no conclusive differences in visual search strategies were found between the groups. RULA can thus be used as a screening tool for postural risks following a short training session, regardless of the assessor's experience in postural risk assessments. Copyright © 2013 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  12. [Allocation of attentional resource and monitoring processes under rapid serial visual presentation].

    PubMed

    Nishiura, K

    1998-08-01

    With the use of rapid serial visual presentation (RSVP), the present study investigated the cause of target intrusion errors and the functioning of monitoring processes. Eighteen students participated in Experiment 1, and 24 in Experiment 2. In Experiment 1, different target intrusion errors were found depending on the kind of letters: romaji, hiragana, and kanji. In Experiment 2, stimulus set size and context information were manipulated in an attempt to explore the cause of post-target intrusion errors. Results showed that as stimulus set size increased, post-target intrusion errors also increased, but contextual information did not affect the errors. Results concerning mean report probability indicated that increased allocation of attentional resources to the response-defining dimension was the cause of the errors. In addition, results concerning confidence ratings showed that monitoring of temporal and contextual information was extremely accurate, but monitoring of stimulus information was not. These results suggest that attentional resources are distinct from monitoring resources.

  13. Surfing a spike wave down the ventral stream.

    PubMed

    VanRullen, Rufin; Thorpe, Simon J

    2002-10-01

    Numerous theories of neural processing, often motivated by experimental observations, have explored the computational properties of neural codes based on the absolute or relative timing of spikes in spike trains. Spiking neuron models and theories, however, as well as their experimental counterparts, have generally been limited to the simulation or observation of isolated neurons, isolated spike trains, or reduced neural populations. Such theories would therefore seem inappropriate to capture the properties of a neural code relying on temporal spike patterns distributed across large neuronal populations. Here we report a range of computer simulations and theoretical considerations that were designed to explore the possibilities of one such code and its relevance for visual processing. In a unified framework where the relation between stimulus saliency and relative spike timing plays the central role, we describe how the ventral stream of the visual system could process natural input scenes and extract meaningful information, both rapidly and reliably. The first wave of spikes generated in the retina in response to visual stimulation carries information explicitly in its spatio-temporal structure: the most salient information is represented by the first spikes over the population. This spike wave, propagating through a hierarchy of visual areas, is regenerated at each processing stage, where its temporal structure can be modified by (i) the selectivity of the cortical neurons, (ii) lateral interactions, and (iii) top-down attentional influences from higher-order cortical areas. The resulting model could account for the remarkable efficiency and rapidity of processing observed in the primate visual system.
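    The saliency-to-latency principle described here, that the most salient inputs fire first, can be sketched directly; the linear latency mapping and the population size are illustrative assumptions, not the authors' retinal model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "retinal" input: each value is the local stimulus saliency (contrast).
saliency = rng.random(8)

# Latency code: more salient inputs spike earlier. The linear mapping
# latency = t_max * (1 - saliency) is an illustrative choice.
t_max = 50.0  # ms
latencies = t_max * (1.0 - saliency)

# The first spikes across the population carry the most salient information,
# so the order of first spikes alone already ranks inputs by saliency.
first_spike_order = np.argsort(latencies)
print(first_spike_order)
```

    Downstream stages that read out only this spike order receive the most informative inputs first, which is what makes the scheme fast.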

  14. Immunological multimetal deposition for rapid visualization of sweat fingerprints.

    PubMed

    He, Yayun; Xu, Linru; Zhu, Yu; Wei, Qianhui; Zhang, Meiqin; Su, Bin

    2014-11-10

    A simple method termed immunological multimetal deposition (iMMD) was developed for rapid visualization of sweat fingerprints with the naked eye, by combining conventional MMD with the immunoassay technique. In this approach, antibody-conjugated gold nanoparticles (AuNPs) were used to specifically interact with the corresponding antigens in the fingerprint residue. The AuNPs serve as nucleation sites for autometallographic deposition of silver particles from the silver staining solution, generating a dark ridge pattern for visual detection. Using fingerprints inked with human immunoglobulin G (hIgG), we obtained the optimal formulation of iMMD, which was then successfully applied to visualize sweat fingerprints through the detection of two secreted polypeptides, epidermal growth factor and lysozyme. In comparison with conventional MMD, iMMD is faster and provides additional information beyond identification alone. Moreover, iMMD is facile and does not need expensive instruments. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  15. Rapid Forgetting Results from Competition over Time between Items in Visual Working Memory

    ERIC Educational Resources Information Center

    Pertzov, Yoni; Manohar, Sanjay; Husain, Masud

    2017-01-01

    Working memory is now established as a fundamental cognitive process across a range of species. Loss of information held in working memory has the potential to disrupt many aspects of cognitive function. However, despite its significance, the mechanisms underlying rapid forgetting remain unclear, with intense recent debate as to whether it is…

  16. The Representation of Information about Faces in the Temporal and Frontal Lobes

    ERIC Educational Resources Information Center

    Rolls, Edmund T.

    2007-01-01

    Neurophysiological evidence is described showing that some neurons in the macaque inferior temporal visual cortex have responses that are invariant with respect to the position, size and view of faces and objects, and that these neurons show rapid processing and rapid learning. Which face or object is present is encoded using a distributed…

  17. Prediction and constraint in audiovisual speech perception

    PubMed Central

    Peelle, Jonathan E.; Sommers, Mitchell S.

    2015-01-01

    During face-to-face conversational speech listeners must efficiently process a rapid and complex stream of multisensory information. Visual speech can serve as a critical complement to auditory information because it provides cues to both the timing of the incoming acoustic signal (the amplitude envelope, influencing attention and perceptual sensitivity) and its content (place and manner of articulation, constraining lexical selection). Here we review behavioral and neurophysiological evidence regarding listeners' use of visual speech information. Multisensory integration of audiovisual speech cues improves recognition accuracy, particularly for speech in noise. Even when speech is intelligible based solely on auditory information, adding visual information may reduce the cognitive demands placed on listeners through increasing the precision of prediction. Electrophysiological studies demonstrate that oscillatory cortical entrainment to speech in auditory cortex is enhanced when visual speech is present, increasing sensitivity to important acoustic cues. Neuroimaging studies also suggest increased activity in auditory cortex when congruent visual information is available, but additionally emphasize the involvement of heteromodal regions of posterior superior temporal sulcus as playing a role in integrative processing. We interpret these findings in a framework of temporally-focused lexical competition in which visual speech information affects auditory processing to increase sensitivity to auditory information through an early integration mechanism, and a late integration stage that incorporates specific information about a speaker's articulators to constrain the number of possible candidates in a spoken utterance. Ultimately it is words compatible with both auditory and visual information that most strongly determine successful speech perception during everyday listening.
Thus, audiovisual speech perception is accomplished through multiple stages of integration, supported by distinct neuroanatomical mechanisms. PMID:25890390

  18. Speed of feedforward and recurrent processing in multilayer networks of integrate-and-fire neurons.

    PubMed

    Panzeri, S; Rolls, E T; Battaglia, F; Lavis, R

    2001-11-01

    The speed of processing in the visual cortical areas can be fast, with for example the latency of neuronal responses increasing by only approximately 10 ms per area in the ventral visual system sequence V1 to V2 to V4 to inferior temporal visual cortex. This has led to the suggestion that rapid visual processing can only be based on the feedforward connections between cortical areas. To test this idea, we investigated the dynamics of information retrieval in multiple-layer networks using a four-stage feedforward network of integrate-and-fire neurons modelled with continuous dynamics, with associative synaptic connections between stages and a synaptic time constant of 10 ms. Through the implementation of continuous dynamics, we found latency differences in information retrieval of only 5 ms per layer when local excitation was absent and processing was purely feedforward. However, information latency differences increased significantly when non-associative local excitation was included. We also found that local recurrent excitation through associatively modified synapses can contribute significantly to processing in as little as 15 ms per layer, including the feedforward and local feedback processing. Moreover, and in contrast to purely feedforward processing, the contribution of local recurrent feedback was useful and approximately this rapid even when retrieval was made difficult by noise. These findings suggest that cortical information processing can benefit from recurrent circuits when the allowed processing time per cortical area is at least 15 ms long.
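    A leaky integrate-and-fire layer of the kind used in such simulations can be sketched as follows; the time step, time constant, and threshold are illustrative values, not the parameters of this study:

```python
import numpy as np

def lif_layer(input_current, dt=0.1, tau=10.0, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire layer, Euler-integrated.

    input_current has shape (time_steps, n_neurons); all parameter values
    are illustrative, not those of the modelled network.
    """
    steps, n = input_current.shape
    v = np.zeros(n)                          # membrane potentials
    spikes = np.zeros((steps, n), dtype=bool)
    for t in range(steps):
        v = v + dt / tau * (-v + input_current[t])  # leaky integration
        fired = v >= v_thresh
        spikes[t] = fired
        v[fired] = v_reset                   # reset after a spike
    return spikes

# Constant suprathreshold drive: the neuron fires once it has integrated
# enough input, illustrating per-stage response latency.
drive = np.full((500, 1), 2.0)
spk = lif_layer(drive)
first = int(np.argmax(spk[:, 0]))
print(f"first spike at {first * 0.1:.1f} ms")
```

    With this constant drive the first spike falls near tau * ln 2, about 6.9 ms, showing how quickly even a single leaky stage can begin responding.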

  19. Temporal windows in visual processing: "prestimulus brain state" and "poststimulus phase reset" segregate visual transients on different temporal scales.

    PubMed

    Wutz, Andreas; Weisz, Nathan; Braun, Christoph; Melcher, David

    2014-01-22

    Dynamic vision requires both stability of the current perceptual representation and sensitivity to the accumulation of sensory evidence over time. Here we study the electrophysiological signatures of this intricate balance between temporal segregation and integration in vision. Within a forward masking paradigm with short and long stimulus onset asynchronies (SOA), we manipulated the temporal overlap of the visual persistence of two successive transients. Human observers enumerated the items presented in the second target display as a measure of the informational capacity read-out from this partly temporally integrated visual percept. We observed higher β-power immediately before mask display onset in incorrect trials, in which enumeration failed due to stronger integration of mask and target visual information. This effect was timescale specific, distinguishing between segregation and integration of visual transients that were distant in time (long SOA). Conversely, for short SOA trials, mask onset evoked a stronger visual response when mask and targets were correctly segregated in time. Examination of the target-related response profile revealed the importance of an evoked α-phase reset for the segregation of those rapid visual transients. Investigating this precise mapping of the temporal relationships of visual signals onto electrophysiological responses highlights how the stream of visual information is carved up into discrete temporal windows that mediate between segregated and integrated percepts. Fragmenting the stream of visual information provides a means to stabilize perceptual events within one instant in time.
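    Band-limited power of the kind analyzed here (e.g., prestimulus β-power) is commonly estimated from the periodogram. A minimal sketch on a synthetic signal; the sampling rate and band edges are conventional choices, not those of the study:

```python
import numpy as np

fs = 500.0  # Hz, illustrative sampling rate
t = np.arange(0, 1.0, 1 / fs)
rng = np.random.default_rng(1)

# Synthetic "prestimulus" trace: a 20 Hz (beta-band) oscillation plus noise.
signal = np.sin(2 * np.pi * 20 * t) + 0.5 * rng.standard_normal(t.size)

# Periodogram: squared magnitude of the real FFT, one bin per Hz here.
freqs = np.fft.rfftfreq(t.size, 1 / fs)
power = np.abs(np.fft.rfft(signal)) ** 2

def band_power(lo, hi):
    """Total periodogram power in the band [lo, hi) Hz."""
    mask = (freqs >= lo) & (freqs < hi)
    return power[mask].sum()

beta = band_power(13, 30)   # conventional beta band
alpha = band_power(8, 13)   # conventional alpha band
print(beta > alpha)         # the 20 Hz component dominates
```

    Trial-by-trial comparisons of such band power values (e.g., correct versus incorrect enumeration trials) are the kind of contrast reported in the abstract.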

  20. Survey on the Sources of Information in Science, Technology and Commerce in the State of Penang, Malaysia

    ERIC Educational Resources Information Center

    Tee, Lim Huck; Fong, Tang Wan

    1973-01-01

    Penang, Malaysia is undergoing rapid industrialization to stimulate its economy. A survey was conducted to determine what technical, scientific, and commercial information sources were available. Areas covered in the survey were library facilities, journals, commercial reference works and audio-visual materials. (DH)

  1. Consequences of cognitive impairments following traumatic brain injury: Pilot study on visual exploration while driving.

    PubMed

    Milleville-Pennel, Isabelle; Pothier, Johanna; Hoc, Jean-Michel; Mathé, Jean-François

    2010-01-01

    The aim was to assess the visual exploration of a person suffering from traumatic brain injury (TBI). It was hypothesized that visual exploration could be modified as a result of attentional or executive function deficits that are often observed following brain injury. This study compared an analysis of eyes movements while driving with data from neuropsychological tests. Five participants suffering from TBI and six control participants took part in this study. All had good driving experience. They were invited to drive on a fixed-base driving simulator. Eye fixations were recorded using an eye tracker. Neuropsychological tests were used to assess attention, working memory, rapidity of information processing and executive functions. Participants with TBI showed a reduction in the variety of the visual zones explored and a reduction of the distance of exploration. Moreover, neuropsychological evaluation indicates that there were difficulties in terms of divided attention, anticipation and planning. There is a complementarity of the information obtained. Tests give information about cognitive deficiencies but not about their translation into a dynamic situation. Conversely, visual exploration provides information about the dynamic with which information is picked up in the environment but not about the cognitive processes involved.

  2. Ultrafast scene detection and recognition with limited visual information

    PubMed Central

    Hagmann, Carl Erick; Potter, Mary C.

    2016-01-01

    Humans can detect target color pictures of scenes depicting concepts like picnic or harbor in sequences of six or twelve pictures presented as briefly as 13 ms, even when the target is named after the sequence (Potter, Wyble, Hagmann, & McCourt, 2014). Such rapid detection suggests that feedforward processing alone enabled detection without recurrent cortical feedback. There is debate about whether coarse, global, low spatial frequencies (LSFs) provide predictive information to high cortical levels through the rapid magnocellular (M) projection of the visual path, enabling top-down prediction of possible object identities. To test the “Fast M” hypothesis, we compared detection of a named target across five stimulus conditions: unaltered color, blurred color, grayscale, thresholded monochrome, and LSF pictures. The pictures were presented for 13–80 ms in six-picture rapid serial visual presentation (RSVP) sequences. Blurred, monochrome, and LSF pictures were detected less accurately than normal color or grayscale pictures. When the target was named before the sequence, all picture types except LSF resulted in above-chance detection at all durations. Crucially, when the name was given only after the sequence, performance dropped and the monochrome and LSF pictures (but not the blurred pictures) were at or near chance. Thus, without advance information, monochrome and LSF pictures were rarely understood. The results offer only limited support for the Fast M hypothesis, suggesting instead that feedforward processing is able to activate conceptual representations without complementary reentrant processing. PMID:28255263
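    A low-spatial-frequency (LSF) version of an image, of the kind used as a stimulus condition here, can be produced by low-pass filtering in the Fourier domain. A crude sketch with an ideal filter; the cutoff value and filter shape are illustrative assumptions, not the authors' procedure:

```python
import numpy as np

def low_spatial_frequencies(img, cutoff=0.05):
    """Keep only spatial frequencies below `cutoff` cycles/pixel.

    An ideal low-pass filter in the Fourier domain; the cutoff and filter
    shape are illustrative, not those used to build the LSF stimuli.
    """
    fy = np.fft.fftfreq(img.shape[0])[:, None]
    fx = np.fft.fftfreq(img.shape[1])[None, :]
    keep = np.sqrt(fx**2 + fy**2) <= cutoff
    return np.real(np.fft.ifft2(np.fft.fft2(img) * keep))

# A sharp vertical edge: low-pass filtering smooths it into a gradual ramp,
# reducing overall contrast while preserving the mean luminance.
img = np.zeros((64, 64))
img[:, 32:] = 1.0
lsf = low_spatial_frequencies(img)
print(img.std(), lsf.std())
```

    The "Fast M" hypothesis concerns exactly this coarse content: whether the information surviving such a filter suffices for rapid top-down prediction.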

  3. The functional architecture of the ventral temporal cortex and its role in categorization

    PubMed Central

    Grill-Spector, Kalanit; Weiner, Kevin S.

    2014-01-01

    Visual categorization is thought to occur in the human ventral temporal cortex (VTC), but how this categorization is achieved is still largely unknown. In this Review, we consider the computations and representations that are necessary for categorization and examine how the microanatomical and macroanatomical layout of the VTC might optimize them to achieve rapid and flexible visual categorization. We propose that efficient categorization is achieved by organizing representations in a nested spatial hierarchy in the VTC. This spatial hierarchy serves as a neural infrastructure for the representational hierarchy of visual information in the VTC and thereby enables flexible access to category information at several levels of abstraction. PMID:24962370

  4. Internal state of monkey primary visual cortex (V1) predicts figure-ground perception.

    PubMed

    Supèr, Hans; van der Togt, Chris; Spekreijse, Henk; Lamme, Victor A F

    2003-04-15

    When stimulus information enters the visual cortex, it is rapidly processed for identification. However, sometimes the processing of the stimulus is inadequate and the subject fails to notice the stimulus. Human psychophysical studies show that this occurs during states of inattention or absent-mindedness. At a neurophysiological level, it remains unclear what these states are. To study the role of cortical state in perception, we analyzed neural activity in the monkey primary visual cortex before the appearance of a stimulus. We show that, before the appearance of a reported stimulus, neural activity was stronger and more correlated than for a not-reported stimulus. This indicates that the strength of neural activity and the functional connectivity between neurons in the primary visual cortex participate in the perceptual processing of stimulus information. Thus, to detect a stimulus, the visual cortex needs to be in an appropriate state.

  5. Spatially Pooled Contrast Responses Predict Neural and Perceptual Similarity of Naturalistic Image Categories

    PubMed Central

    Groen, Iris I. A.; Ghebreab, Sennay; Lamme, Victor A. F.; Scholte, H. Steven

    2012-01-01

    The visual world is complex and continuously changing. Yet, our brain transforms patterns of light falling on our retina into a coherent percept within a few hundred milliseconds. Possibly, low-level neural responses already carry substantial information to facilitate rapid characterization of the visual input. Here, we computationally estimated low-level contrast responses to computer-generated naturalistic images, and tested whether spatial pooling of these responses could predict image similarity at the neural and behavioral level. Using EEG, we show that statistics derived from pooled responses explain a large amount of variance between single-image evoked potentials (ERPs) in individual subjects. Dissimilarity analysis on multi-electrode ERPs demonstrated that large differences between images in pooled response statistics are predictive of more dissimilar patterns of evoked activity, whereas images with little difference in statistics give rise to highly similar evoked activity patterns. In a separate behavioral experiment, images with large differences in statistics were judged as different categories, whereas images with little differences were confused. These findings suggest that statistics derived from low-level contrast responses can be extracted in early visual processing and can be relevant for rapid judgment of visual similarity. We compared our results with two other, well-known contrast statistics: Fourier power spectra and higher-order properties of contrast distributions (skewness and kurtosis). Interestingly, whereas these statistics allow for accurate image categorization, they do not predict ERP response patterns or behavioral categorization confusions. These converging computational, neural and behavioral results suggest that statistics of pooled contrast responses contain information that corresponds with perceived visual similarity in a rapid, low-level categorization task. PMID:23093921
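    The benchmark statistics mentioned, skewness and kurtosis of the contrast distribution and the Fourier power spectrum, are straightforward to compute. A minimal sketch; the contrast definition below (deviation from the global mean) is a simplification of the paper's pooled-response model:

```python
import numpy as np

rng = np.random.default_rng(2)
image = rng.random((64, 64))  # stand-in for a naturalistic image

# Simplified contrast signal: deviation from the mean luminance. The paper's
# contrast model is more elaborate (spatially pooled local responses).
contrast = image - image.mean()

# Higher-order properties of the contrast distribution.
skewness = np.mean(contrast**3) / np.std(contrast) ** 3
kurtosis = np.mean(contrast**4) / np.std(contrast) ** 4 - 3.0  # excess

# The other benchmark statistic: the Fourier power spectrum.
power = np.abs(np.fft.fft2(image)) ** 2

print(round(skewness, 3), round(kurtosis, 3))
```

    For this uniform-noise stand-in, skewness is near 0 and excess kurtosis near -1.2, the textbook values for a uniform distribution; natural images typically show positive skew and heavy tails.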

  6. Rank Order Coding: a Retinal Information Decoding Strategy Revealed by Large-Scale Multielectrode Array Retinal Recordings.

    PubMed

    Portelli, Geoffrey; Barrett, John M; Hilgen, Gerrit; Masquelier, Timothée; Maccione, Alessandro; Di Marco, Stefano; Berdondini, Luca; Kornprobst, Pierre; Sernagor, Evelyne

    2016-01-01

    How a population of retinal ganglion cells (RGCs) encodes the visual scene remains an open question. Going beyond individual RGC coding strategies, results in salamander suggest that the relative latencies of a RGC pair encode spatial information. Thus, a population code based on this concerted spiking could be a powerful mechanism to transmit visual information rapidly and efficiently. Here, we tested this hypothesis in mouse by recording simultaneous light-evoked responses from hundreds of RGCs, at the pan-retinal level, using a new generation of large-scale, high-density multielectrode array consisting of 4096 electrodes. Interestingly, we did not find any RGCs exhibiting a clear latency tuning to the stimuli, suggesting that in mouse, individual RGC pairs may not provide sufficient information. We show that a significant amount of information is encoded synergistically in the concerted spiking of large RGC populations. Thus, the RGC population response described with relative activities, or ranks, provides more relevant information than classical independent spike count- or latency-based codes. In particular, we report for the first time that when considering the relative activities across the whole population, the wave of first stimulus-evoked spikes is an accurate indicator of stimulus content. We show that this coding strategy coexists with classical neural codes, and that it is more efficient and faster. Overall, these novel observations suggest that already at the level of the retina, concerted spiking provides a reliable and fast strategy to rapidly transmit new visual scenes.
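    A rank order code of the kind proposed can be sketched by discarding absolute spike times and keeping only the firing order, which is invariant to uniform latency shifts across the population; the latency values below are illustrative:

```python
import numpy as np

def rank_code(latencies):
    """Rank order code sketch: represent a population response by the
    relative order in which cells fire, discarding absolute spike times."""
    order = np.argsort(latencies)          # cells sorted by first-spike time
    ranks = np.empty_like(order)
    ranks[order] = np.arange(len(latencies))
    return ranks

# Two presentations of the "same" stimulus: absolute latencies shift
# globally (e.g., with luminance), but the firing order is preserved,
# so the rank code is identical.
trial_a = np.array([12.0, 5.0, 30.0, 8.0])  # ms, illustrative values
trial_b = trial_a + 4.0                      # uniform latency shift
print(rank_code(trial_a), rank_code(trial_b))
```

    This invariance to global shifts is one reason a relative-activity code can outperform independent latency readouts, which would confound a stimulus change with an overall timing change.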

  7. Visual Cortical Representation of Whole Words and Hemifield-split Word Parts.

    PubMed

    Strother, Lars; Coros, Alexandra M; Vilis, Tutis

    2016-02-01

    Reading requires the neural integration of visual word form information that is split between our retinal hemifields. We examined multiple visual cortical areas involved in this process by measuring fMRI responses while observers viewed words that changed or repeated in one or both hemifields. We were specifically interested in identifying brain areas that exhibit decreased fMRI responses as a result of repeated versus changing visual word form information in each visual hemifield. Our method yielded highly significant effects of word repetition in a previously reported visual word form area (VWFA) in occipitotemporal cortex, which represents hemifield-split words as whole units. We also identified a more posterior occipital word form area (OWFA), which represents word form information in the right and left hemifields independently and is thus both functionally and anatomically distinct from the VWFA. Both the VWFA and the OWFA were left-lateralized in our study and strikingly symmetric in anatomical location relative to known face-selective visual cortical areas in the right hemisphere. Our findings are consistent with the observation that category-selective visual areas come in pairs and support the view that neural mechanisms in left visual cortex, especially those that evolved to support the visual processing of faces, are developmentally malleable and become incorporated into a left-lateralized visual word form network that supports rapid word recognition and reading.

  8. Prediction and constraint in audiovisual speech perception.

    PubMed

    Peelle, Jonathan E; Sommers, Mitchell S

    2015-07-01

    During face-to-face conversational speech listeners must efficiently process a rapid and complex stream of multisensory information. Visual speech can serve as a critical complement to auditory information because it provides cues to both the timing of the incoming acoustic signal (the amplitude envelope, influencing attention and perceptual sensitivity) and its content (place and manner of articulation, constraining lexical selection). Here we review behavioral and neurophysiological evidence regarding listeners' use of visual speech information. Multisensory integration of audiovisual speech cues improves recognition accuracy, particularly for speech in noise. Even when speech is intelligible based solely on auditory information, adding visual information may reduce the cognitive demands placed on listeners through increasing the precision of prediction. Electrophysiological studies demonstrate that oscillatory cortical entrainment to speech in auditory cortex is enhanced when visual speech is present, increasing sensitivity to important acoustic cues. Neuroimaging studies also suggest increased activity in auditory cortex when congruent visual information is available, but additionally emphasize the involvement of heteromodal regions of posterior superior temporal sulcus as playing a role in integrative processing. We interpret these findings in a framework of temporally-focused lexical competition in which visual speech information affects auditory processing to increase sensitivity to acoustic information through an early integration mechanism, and a late integration stage that incorporates specific information about a speaker's articulators to constrain the number of possible candidates in a spoken utterance. Ultimately it is words compatible with both auditory and visual information that most strongly determine successful speech perception during everyday listening. 
Thus, audiovisual speech perception is accomplished through multiple stages of integration, supported by distinct neuroanatomical mechanisms. Copyright © 2015 Elsevier Ltd. All rights reserved.

  9. Evaluation and Verification of the Global Rapid Identification of Threats System for Infectious Diseases in Textual Data Sources.

    PubMed

    Huff, Andrew G; Breit, Nathan; Allen, Toph; Whiting, Karissa; Kiley, Christopher

    2016-01-01

    The Global Rapid Identification of Threats System (GRITS) is a biosurveillance application that enables infectious disease analysts to monitor nontraditional information sources (e.g., social media, online news outlets, ProMED-mail reports, and blogs) for infectious disease threats. GRITS analyzes these textual data sources by identifying, extracting, and succinctly visualizing epidemiologic information and suggests potentially associated infectious diseases. This manuscript evaluates and verifies the diagnoses that GRITS performs and discusses novel aspects of the software package. Via GRITS' web interface, infectious disease analysts can examine dynamic visualizations of GRITS' analyses and explore historical infectious disease emergence events. The GRITS API can be used to continuously analyze information feeds, and the API enables GRITS technology to be easily incorporated into other biosurveillance systems. GRITS is a flexible tool that can be modified to conduct sophisticated medical report triaging, expanded to include customized alert systems, and tailored to address other biosurveillance needs.

  10. Evaluation and Verification of the Global Rapid Identification of Threats System for Infectious Diseases in Textual Data Sources

    PubMed Central

    Breit, Nathan

    2016-01-01

    The Global Rapid Identification of Threats System (GRITS) is a biosurveillance application that enables infectious disease analysts to monitor nontraditional information sources (e.g., social media, online news outlets, ProMED-mail reports, and blogs) for infectious disease threats. GRITS analyzes these textual data sources by identifying, extracting, and succinctly visualizing epidemiologic information and suggests potentially associated infectious diseases. This manuscript evaluates and verifies the diagnoses that GRITS performs and discusses novel aspects of the software package. Via GRITS' web interface, infectious disease analysts can examine dynamic visualizations of GRITS' analyses and explore historical infectious disease emergence events. The GRITS API can be used to continuously analyze information feeds, and the API enables GRITS technology to be easily incorporated into other biosurveillance systems. GRITS is a flexible tool that can be modified to conduct sophisticated medical report triaging, expanded to include customized alert systems, and tailored to address other biosurveillance needs. PMID:27698665

  11. Rapid assessment of visual impairment (RAVI) in marine fishing communities in South India - study protocol and main findings

    PubMed Central

    2011-01-01

    Background: Reliable data are a pre-requisite for planning eye care services. Though conventional cross-sectional studies provide reliable information, they are resource intensive. A novel rapid assessment method was used to investigate the prevalence and causes of visual impairment and presbyopia in subjects aged 40 years and older. This paper describes the detailed methodology and study procedures of the Rapid Assessment of Visual Impairment (RAVI) project. Methods: A population-based cross-sectional study was conducted using cluster random sampling in the coastal region of Prakasam district of Andhra Pradesh in India, predominantly inhabited by fishing communities. Unaided, aided and pinhole visual acuity (VA) was assessed using a Snellen chart at a distance of 6 meters. The VA was re-assessed using a pinhole if VA was < 6/12 in either eye. Near vision was assessed binocularly using an N notation chart. Visual impairment was defined as presenting VA < 6/18 in the better eye. Presbyopia was defined as binocular near vision worse than N8 in subjects with binocular distance VA of 6/18 or better. Results: The data collection was completed in <12 weeks using two teams, each consisting of one paramedical ophthalmic personnel and two community eye health workers. The prevalence of visual impairment was 30% (95% CI, 27.6-32.2). This included 111 (7.1%; 95% CI, 5.8-8.4) individuals with blindness. Cataract was the leading cause of visual impairment, followed by uncorrected refractive errors. The prevalence of blindness according to the WHO definition (presenting VA < 3/60 in the better eye) was 2.7% (95% CI, 1.9-3.5). Conclusion: There is a high prevalence of visual impairment in marine fishing communities in Prakasam district in India. The data from this rapid assessment survey can now be used as a baseline to start eye care services in this region. The rapid assessment methodology (RAVI) reported in this paper is robust, quick and has the potential to be replicated in other areas.
PMID:21929802
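    The reported blindness estimate (7.1%, 95% CI 5.8-8.4) is consistent with a normal-approximation (Wald) interval on a sample of roughly 1563 participants, a size inferred from the 111 cases rather than stated in this abstract; the survey's cluster design would ordinarily widen the interval somewhat:

```python
import math

# n is inferred from 111 cases / 0.071, not stated in this abstract.
cases, n = 111, 1563
p = cases / n
se = math.sqrt(p * (1 - p) / n)        # standard error of a proportion
lower, upper = p - 1.96 * se, p + 1.96 * se
print(f"{100*p:.1f}% (95% CI {100*lower:.1f}-{100*upper:.1f})")
# prints "7.1% (95% CI 5.8-8.4)"
```

    The same calculation applied to the visual impairment figure (30%, CI 27.6-32.2) implies a comparable sample size, a useful sanity check when reading rapid assessment surveys.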

  12. Human factors guidelines for applications of 3D perspectives: a literature review

    NASA Astrophysics Data System (ADS)

    Dixon, Sharon; Fitzhugh, Elisabeth; Aleva, Denise

    2009-05-01

    Once considered too processing-intense for general utility, application of the third dimension to convey complex information is facilitated by the recent proliferation of technological advancements in computer processing, 3D displays, and 3D perspective (2.5D) renderings within a 2D medium. The profusion of complex and rapidly-changing dynamic information being conveyed in operational environments has elevated interest in possible military applications of 3D technologies. 3D can be a powerful mechanism for clearer information portrayal, facilitating rapid and accurate identification of key elements essential to mission performance and operator safety. However, implementation of 3D within legacy systems can be costly, making integration prohibitive. Therefore, identifying which tasks may benefit from 3D or 2.5D versus simple 2D visualizations is critical. Unfortunately, there is no "bible" of human factors guidelines for usability optimization of 2D, 2.5D, or 3D visualizations nor for determining which display best serves a particular application. Establishing such guidelines would provide an invaluable tool for designers and operators. Defining issues common to each will enhance design effectiveness. This paper presents the results of an extensive review of open source literature addressing 3D information displays, with particular emphasis on comparison of true 3D with 2D and 2.5D representations and their utility for military tasks. Seventy-five papers are summarized, highlighting militarily relevant applications of 3D visualizations and 2.5D perspective renderings. Based on these findings, human factors guidelines for when and how to use these visualizations, along with recommendations for further research are discussed.

  13. Guiding Principles for a Pediatric Neurology ICU (neuroPICU) Bedside Multimodal Monitor

    PubMed Central

    Eldar, Yonina C.; Gopher, Daniel; Gottlieb, Amihai; Lammfromm, Rotem; Mangat, Halinder S; Peleg, Nimrod; Pon, Steven; Rozenberg, Igal; Schiff, Nicholas D; Stark, David E; Yan, Peter; Pratt, Hillel; Kosofsky, Barry E

    2016-01-01

    Summary Background Physicians caring for children with serious acute neurologic disease must process overwhelming amounts of physiological and medical information. Strategies to optimize real time display of this information are understudied. Objectives Our goal was to engage clinical and engineering experts to develop guiding principles for creating a pediatric neurology intensive care unit (neuroPICU) monitor that integrates and displays data from multiple sources in an intuitive and informative manner. Methods To accomplish this goal, an international group of physicians and engineers communicated regularly for one year. We integrated findings from clinical observations, interviews, a survey, signal processing, and visualization exercises to develop a concept for a neuroPICU display. Results Key conclusions from our efforts include: (1) A neuroPICU display should support (a) rapid review of retrospective time series (i.e. cardiac, pulmonary, and neurologic physiology data), (b) rapidly modifiable formats for viewing that data according to the specialty of the reviewer, and (c) communication of the degree of risk of clinical decline. (2) Specialized visualizations of physiologic parameters can highlight abnormalities in multivariable temporal data. Examples include 3-D stacked spider plots and color coded time series plots. (3) Visual summaries of EEG with spectral tools (i.e. hemispheric asymmetry and median power) can highlight seizures via patient-specific “fingerprints.” (4) Intuitive displays should emphasize subsets of physiology and processed EEG data to provide a rapid gestalt of the current status and medical stability of a patient. Conclusions A well-designed neuroPICU display must present multiple datasets in dynamic, flexible, and informative views to accommodate clinicians from multiple disciplines in a variety of clinical scenarios. PMID:27437048

  14. Guiding Principles for a Pediatric Neurology ICU (neuroPICU) Bedside Multimodal Monitor: Findings from an International Working Group.

    PubMed

    Grinspan, Zachary M; Eldar, Yonina C; Gopher, Daniel; Gottlieb, Amihai; Lammfromm, Rotem; Mangat, Halinder S; Peleg, Nimrod; Pon, Steven; Rozenberg, Igal; Schiff, Nicholas D; Stark, David E; Yan, Peter; Pratt, Hillel; Kosofsky, Barry E

    2016-01-01

    Physicians caring for children with serious acute neurologic disease must process overwhelming amounts of physiological and medical information. Strategies to optimize real time display of this information are understudied. Our goal was to engage clinical and engineering experts to develop guiding principles for creating a pediatric neurology intensive care unit (neuroPICU) monitor that integrates and displays data from multiple sources in an intuitive and informative manner. To accomplish this goal, an international group of physicians and engineers communicated regularly for one year. We integrated findings from clinical observations, interviews, a survey, signal processing, and visualization exercises to develop a concept for a neuroPICU display. Key conclusions from our efforts include: (1) A neuroPICU display should support (a) rapid review of retrospective time series (i.e. cardiac, pulmonary, and neurologic physiology data), (b) rapidly modifiable formats for viewing that data according to the specialty of the reviewer, and (c) communication of the degree of risk of clinical decline. (2) Specialized visualizations of physiologic parameters can highlight abnormalities in multivariable temporal data. Examples include 3-D stacked spider plots and color coded time series plots. (3) Visual summaries of EEG with spectral tools (i.e. hemispheric asymmetry and median power) can highlight seizures via patient-specific "fingerprints." (4) Intuitive displays should emphasize subsets of physiology and processed EEG data to provide a rapid gestalt of the current status and medical stability of a patient. A well-designed neuroPICU display must present multiple datasets in dynamic, flexible, and informative views to accommodate clinicians from multiple disciplines in a variety of clinical scenarios.

  15. Realistic terrain visualization based on 3D virtual world technology

    NASA Astrophysics Data System (ADS)

    Huang, Fengru; Lin, Hui; Chen, Bin; Xiao, Cai

    2009-09-01

    The rapid advances in information technologies, e.g., networking, graphics processing, and virtual worlds, have provided challenges and opportunities for new capabilities in information systems, Internet applications, and virtual geographic environments, especially geographic visualization and collaboration. In order to achieve meaningful geographic capabilities, we need to explore and understand how these technologies can be used to construct virtual geographic environments that support geographic research. The generation of three-dimensional (3D) terrain plays an important part in geographical visualization, computer simulation, and virtual geographic environment applications. The paper introduces concepts and technologies of virtual worlds and virtual geographic environments, and explores the integration of realistic terrain with other geographic objects and phenomena of the natural geographic environment based on SL/OpenSim virtual world technologies. Realistic 3D terrain visualization is a foundation for constructing a mirror world or a sandbox model of the Earth's landscape and geographic environment. The capabilities of interaction and collaboration on geographic information are discussed as well. Further virtual geographic applications can be developed on this foundation of realistic terrain visualization in virtual environments.

  16. Realistic terrain visualization based on 3D virtual world technology

    NASA Astrophysics Data System (ADS)

    Huang, Fengru; Lin, Hui; Chen, Bin; Xiao, Cai

    2010-11-01

    The rapid advances in information technologies, e.g., networking, graphics processing, and virtual worlds, have provided challenges and opportunities for new capabilities in information systems, Internet applications, and virtual geographic environments, especially geographic visualization and collaboration. In order to achieve meaningful geographic capabilities, we need to explore and understand how these technologies can be used to construct virtual geographic environments that support geographic research. The generation of three-dimensional (3D) terrain plays an important part in geographical visualization, computer simulation, and virtual geographic environment applications. The paper introduces concepts and technologies of virtual worlds and virtual geographic environments, and explores the integration of realistic terrain with other geographic objects and phenomena of the natural geographic environment based on SL/OpenSim virtual world technologies. Realistic 3D terrain visualization is a foundation for constructing a mirror world or a sandbox model of the Earth's landscape and geographic environment. The capabilities of interaction and collaboration on geographic information are discussed as well. Further virtual geographic applications can be developed on this foundation of realistic terrain visualization in virtual environments.

  17. Research for the design of visual fatigue based on the computer visual communication

    NASA Astrophysics Data System (ADS)

    Deng, Hu-Bin; Ding, Bao-min

    2013-03-01

    With the rapid development of computer networks, the role of network communication in social, economic, and political life has become increasingly important and has taken on a special role. Computer network communication, through modern media and by way of visual communication, affects the emotional, spiritual, career, and other aspects of the public's life. Its rapid growth has also brought problems: in conveying messages to the public, many designs lack a sufficiently refined form of expression for the information. This not only leads to erroneous messages being conveyed, but also causes physical and psychological fatigue in audiences, known as visual fatigue. In order to reduce this fatigue and let audiences obtain the most useful information in a short time when using computers, this article gives a detailed account of the causes of visual fatigue, proposes effective solutions, explains them through specific examples, and considers the future prospects of visual communication design in computer applications.

  18. Visual Communications and Image Processing

    NASA Astrophysics Data System (ADS)

    Hsing, T. Russell

    1987-07-01

    This special issue of Optical Engineering is concerned with visual communications and image processing. The increase in communication of visual information over the past several decades has resulted in many new image processing and visual communication systems being put into service. The growth of this field has been rapid in both commercial and military applications. The objective of this special issue is to intermix advent technology in visual communications and image processing with ideas generated from industry, universities, and users through both invited and contributed papers. The 15 papers of this issue are organized into four different categories: image compression and transmission, image enhancement, image analysis and pattern recognition, and image processing in medical applications.

  19. Rapid visual grouping and figure-ground processing using temporally structured displays.

    PubMed

    Cheadle, Samuel; Usher, Marius; Müller, Hermann J

    2010-08-23

    We examine the time course of visual grouping and figure-ground processing. Figure (contour) and ground (random-texture) elements were flickered with different phases (i.e., contour and background are alternated), requiring the observer to group information within a pre-specified time window. It was found that this grouping has a high temporal resolution: less than 20 ms for smooth contours, and less than 50 ms for line conjunctions with sharp angles. Furthermore, the grouping process takes place without explicit knowledge of the phase of the elements, and it requires a cumulative build-up of information. The results are discussed in relation to the neural mechanisms for visual grouping and figure-ground segregation. Copyright 2010 Elsevier Ltd. All rights reserved.

  20. Contributions of Low and High Spatial Frequency Processing to Impaired Object Recognition Circuitry in Schizophrenia

    PubMed Central

    Calderone, Daniel J.; Hoptman, Matthew J.; Martínez, Antígona; Nair-Collins, Sangeeta; Mauro, Cristina J.; Bar, Moshe; Javitt, Daniel C.; Butler, Pamela D.

    2013-01-01

    Patients with schizophrenia exhibit cognitive and sensory impairment, and object recognition deficits have been linked to sensory deficits. The “frame and fill” model of object recognition posits that low spatial frequency (LSF) information rapidly reaches the prefrontal cortex (PFC) and creates a general shape of an object that feeds back to the ventral temporal cortex to assist object recognition. Visual dysfunction findings in schizophrenia suggest a preferential loss of LSF information. This study used functional magnetic resonance imaging (fMRI) and resting state functional connectivity (RSFC) to investigate the contribution of visual deficits to impaired object “framing” circuitry in schizophrenia. Participants were shown object stimuli that were intact or contained only LSF or high spatial frequency (HSF) information. For controls, fMRI revealed preferential activation to LSF information in precuneus, superior temporal, and medial and dorsolateral PFC areas, whereas patients showed a preference for HSF information or no preference. RSFC revealed a lack of connectivity between early visual areas and PFC for patients. These results demonstrate impaired processing of LSF information during object recognition in schizophrenia, with patients instead displaying increased processing of HSF information. This is consistent with findings of a preference for local over global visual information in schizophrenia. PMID:22735157
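The LSF/HSF stimulus manipulation described above is typically implemented by filtering images in the Fourier domain. A hedged sketch of one way to split an image into low and high spatial-frequency bands; the cutoff of 1 cycle/degree and the pixels-per-degree value are illustrative assumptions, not the study's parameters:

```python
import numpy as np

def split_spatial_frequencies(img, cutoff_cpd, px_per_deg):
    """Split a grayscale image into LSF and HSF components using a hard
    radial cutoff (in cycles per degree) in the Fourier domain."""
    h, w = img.shape
    fy = np.fft.fftfreq(h)[:, None] * px_per_deg  # vertical freq, cycles/degree
    fx = np.fft.fftfreq(w)[None, :] * px_per_deg  # horizontal freq, cycles/degree
    radius = np.hypot(fy, fx)
    spectrum = np.fft.fft2(img)
    low_mask = radius <= cutoff_cpd
    lsf = np.real(np.fft.ifft2(spectrum * low_mask))
    hsf = np.real(np.fft.ifft2(spectrum * ~low_mask))
    return lsf, hsf

rng = np.random.default_rng(0)
image = rng.random((64, 64))
lsf, hsf = split_spatial_frequencies(image, cutoff_cpd=1.0, px_per_deg=32.0)
# The two masks partition the spectrum, so the bands sum back to the image.
```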

  1. Learning to recognize face shapes through serial exploration.

    PubMed

    Wallraven, Christian; Whittingstall, Lisa; Bülthoff, Heinrich H

    2013-05-01

    Human observers are experts at visual face recognition due to specialized visual mechanisms for face processing that evolve with perceptual expertise. Such expertise has long been attributed to the use of configural processing, enabled by fast, parallel encoding of the visual information in the face. Here we tested whether participants can learn to efficiently recognize faces that are serially encoded, that is, when only partial visual information about the face is available at any given time. For this, ten participants were trained in gaze-restricted face recognition, in which face masks were viewed through a small aperture controlled by the participant. Tests comparing trained with untrained performance revealed (1) a marked improvement in terms of speed and accuracy, (2) a gradual development of configural processing strategies, and (3) participants' ability to rapidly learn and accurately recognize novel exemplars. This performance pattern demonstrates that participants were able to learn new strategies to compensate for the serial nature of information encoding. The results are discussed in terms of expertise acquisition and relevance for other sensory modalities relying on serial encoding.

  2. Perceptual training yields rapid improvements in visually impaired youth.

    PubMed

    Nyquist, Jeffrey B; Lappin, Joseph S; Zhang, Ruyuan; Tadin, Duje

    2016-11-30

    Visual function demands coordinated responses to information over a wide field of view, involving both central and peripheral vision. Visually impaired individuals often seem to underutilize peripheral vision, even in the absence of obvious peripheral deficits. Motivated by perceptual training studies with typically sighted adults, we examined the effectiveness of perceptual training in improving peripheral perception in visually impaired youth. Here, we evaluated the effectiveness of three training regimens: (1) an action video game, (2) a psychophysical task that combined attentional tracking with a spatially and temporally unpredictable motion discrimination task, and (3) a control video game. Training with both the action video game and modified attentional tracking yielded improvements in visual performance. Training effects were generally larger in the far periphery and appear to be stable 12 months after training. These results indicate that peripheral perception might be underutilized by visually impaired youth and that this underutilization can be remedied with only ~8 hours of perceptual training. Moreover, the similarity of improvements following attentional tracking and action video-game training suggests that the well-documented effects of action video-game training might be due to the sustained deployment of attention to multiple dynamic targets while concurrently requiring rapid attending and perception of unpredictable events.

  3. Decoding the time-course of object recognition in the human brain: From visual features to categorical decisions.

    PubMed

    Contini, Erika W; Wardle, Susan G; Carlson, Thomas A

    2017-10-01

    Visual object recognition is a complex, dynamic process. Multivariate pattern analysis methods, such as decoding, have begun to reveal how the brain processes complex visual information. Recently, temporal decoding methods for EEG and MEG have offered the potential to evaluate the temporal dynamics of object recognition. Here we review the contribution of M/EEG time-series decoding methods to understanding visual object recognition in the human brain. Consistent with the current understanding of the visual processing hierarchy, low-level visual features dominate decodable object representations early in the time-course, with more abstract representations related to object category emerging later. A key finding is that the time-course of object processing is highly dynamic and rapidly evolving, with limited temporal generalisation of decodable information. Several studies have examined the emergence of object category structure, and we consider to what degree category decoding can be explained by sensitivity to low-level visual features. Finally, we evaluate recent work attempting to link human behaviour to the neural time-course of object processing. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. Rapid Presentation of Emotional Expressions Reveals New Emotional Impairments in Tourette’s Syndrome

    PubMed Central

    Mermillod, Martial; Devaux, Damien; Derost, Philippe; Rieu, Isabelle; Chambres, Patrick; Auxiette, Catherine; Legrand, Guillaume; Galland, Fabienne; Dalens, Hélène; Coulangeon, Louise Marie; Broussolle, Emmanuel; Durif, Franck; Jalenques, Isabelle

    2013-01-01

    Objective: Based on a variety of empirical evidence obtained within the theoretical framework of embodiment theory, we considered it likely that motor disorders in Tourette’s syndrome (TS) would have emotional consequences for TS patients. However, previous research using emotional facial categorization tasks suggests that these consequences are limited to TS patients with obsessive-compulsive behaviors (OCB). Method: These studies used long stimulus presentations which allowed the participants to categorize the different emotional facial expressions (EFEs) on the basis of a perceptual analysis that might potentially hide a lack of emotional feeling for certain emotions. In order to reduce this perceptual bias, we used a rapid visual presentation procedure. Results: Using this new experimental method, we revealed different and surprising impairments on several EFEs in TS patients compared to matched healthy control participants. Moreover, a spatial frequency analysis of the visual signal processed by the patients suggests that these impairments may be located at a cortical level. Conclusion: The current study indicates that the rapid visual presentation paradigm makes it possible to identify various potential emotional disorders that were not revealed by the standard visual presentation procedures previously reported in the literature. Moreover, the spatial frequency analysis performed in our study suggests that emotional deficit in TS might lie at the level of temporal cortical areas dedicated to the processing of HSF visual information. PMID:23630481

  5. A Crime Analysis Decision Support System for Crime Report Classification and Visualization

    ERIC Educational Resources Information Center

    Ku, Chih-Hao

    2012-01-01

    Today's Internet-based crime reporting systems make timely and anonymous crime reporting possible. However, these reports also result in a rapidly growing set of unstructured text files. Complicating the problem is that the information has not been filtered or guided in a detective-led interview resulting in much irrelevant information. To…

  6. Character Decomposition and Transposition Processes of Chinese Compound Words in Rapid Serial Visual Presentation.

    PubMed

    Cao, Hong-Wen; Yang, Ke-Yu; Yan, Hong-Mei

    2017-01-01

    Character order information is encoded at the initial stage of Chinese word processing, however, its time course remains underspecified. In this study, we assess the exact time course of the character decomposition and transposition processes of two-character Chinese compound words (canonical, transposed, or reversible words) compared with pseudowords using dual-target rapid serial visual presentation (RSVP) of stimuli appearing at 30 ms per character with no inter-stimulus interval. The results indicate that Chinese readers can identify words with character transpositions in rapid succession; however, a transposition cost is involved in identifying transposed words compared to canonical words. In RSVP reading, character order of words is more likely to be reversed during the period from 30 to 180 ms for canonical and reversible words, but the period from 30 to 240 ms for transposed words. Taken together, the findings demonstrate that the holistic representation of the base word is activated, however, the order of the two constituent characters is not strictly processed during the very early stage of visual word processing.
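The dual-target RSVP procedure above amounts to a fixed stimulus-onset-asynchrony schedule with no inter-stimulus interval, plus a character-transposition manipulation. A small illustrative sketch (Latin letters stand in for Chinese characters, and the function names are hypothetical, not from the study):

```python
def rsvp_schedule(chars, soa_ms=30):
    """Per-item (character, onset_ms, offset_ms) tuples for an RSVP stream
    with no inter-stimulus interval: each offset is the next item's onset."""
    return [(c, i * soa_ms, (i + 1) * soa_ms) for i, c in enumerate(chars)]

def transpose(word):
    """Swap the two constituent characters of a two-character compound."""
    return word[1] + word[0]

schedule = rsvp_schedule(["X", "A", "B", "Y"])  # target word "AB" among fillers
# [('X', 0, 30), ('A', 30, 60), ('B', 60, 90), ('Y', 90, 120)]
```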

  7. Priming and the guidance by visual and categorical templates in visual search.

    PubMed

    Wilschut, Anna; Theeuwes, Jan; Olivers, Christian N L

    2014-01-01

    Visual search is thought to be guided by top-down templates that are held in visual working memory. Previous studies have shown that a search-guiding template can be rapidly and strongly implemented from a visual cue, whereas templates are less effective when based on categorical cues. Direct visual priming from cue to target may underlie this difference. In two experiments we first asked observers to remember two possible target colors. A postcue then indicated which of the two would be the relevant color. The task was to locate a briefly presented and masked target of the cued color among irrelevant distractor items. Experiment 1 showed that overall search accuracy improved more rapidly on the basis of a direct visual postcue that carried the target color, compared to a neutral postcue that pointed to the memorized color. However, selectivity toward the target feature, i.e., the extent to which observers searched selectively among items of the cued vs. uncued color, was found to be relatively unaffected by the presence of the visual signal. In Experiment 2 we compared search that was based on either visual or categorical information, but now controlled for direct visual priming. This resulted in no differences in either overall performance or selectivity. Altogether the results suggest that perceptual processing of visual search targets is facilitated by priming from visual cues, whereas attentional selectivity is enhanced by a working memory template that can be formed from both visual and categorical input. Furthermore, when priming is controlled for, categorical- and visual-based templates enhance search guidance similarly.

  8. The Vividness of Happiness in Dynamic Facial Displays of Emotion

    PubMed Central

    Becker, D. Vaughn; Neel, Rebecca; Srinivasan, Narayanan; Neufeld, Samantha; Kumar, Devpriya; Fouse, Shannon

    2012-01-01

    Rapid identification of facial expressions can profoundly affect social interactions, yet most research to date has focused on static rather than dynamic expressions. In four experiments, we show that when a non-expressive face becomes expressive, happiness is detected more rapidly than anger. When the change occurs peripheral to the focus of attention, however, dynamic anger is better detected when it appears in the left visual field (LVF), whereas dynamic happiness is better detected in the right visual field (RVF), consistent with hemispheric differences in the processing of approach- and avoidance-relevant stimuli. The central advantage for happiness is nevertheless the more robust effect, persisting even when information of either high or low spatial frequency is eliminated. Indeed, a survey of past research on the visual search for emotional expressions finds better support for a happiness detection advantage, and the explanation may lie in the coevolution of the signal and the receiver. PMID:22247755

  9. Short temporal asynchrony disrupts visual object recognition

    PubMed Central

    Singer, Jedediah M.; Kreiman, Gabriel

    2014-01-01

    Humans can recognize objects and scenes in a small fraction of a second. The cascade of signals underlying rapid recognition might be disrupted by temporally jittering different parts of complex objects. Here we investigated the time course over which shape information can be integrated to allow for recognition of complex objects. We presented fragments of object images in an asynchronous fashion and behaviorally evaluated categorization performance. We observed that visual recognition was significantly disrupted by asynchronies of approximately 30 ms, suggesting that spatiotemporal integration begins to break down with even small deviations from simultaneity. However, moderate temporal asynchrony did not completely obliterate recognition; in fact, integration of visual shape information persisted even with an asynchrony of 100 ms. We describe the data with a concise model based on the dynamic reduction of uncertainty about what image was presented. These results emphasize the importance of timing in visual processing and provide strong constraints for the development of dynamical models of visual shape recognition. PMID:24819738

  10. Adult Visual Cortical Plasticity

    PubMed Central

    Gilbert, Charles D.; Li, Wu

    2012-01-01

    The visual cortex has the capacity for experience dependent change, or cortical plasticity, that is retained throughout life. Plasticity is invoked for encoding information during perceptual learning, by internally representing the regularities of the visual environment, which is useful for facilitating intermediate level vision - contour integration and surface segmentation. The same mechanisms have adaptive value for functional recovery after CNS damage, such as that associated with stroke or neurodegenerative disease. A common feature to plasticity in primary visual cortex (V1) is an association field that links contour elements across the visual field. The circuitry underlying the association field includes a plexus of long range horizontal connections formed by cortical pyramidal cells. These connections undergo rapid and exuberant sprouting and pruning in response to removal of sensory input, which can account for the topographic reorganization following retinal lesions. Similar alterations in cortical circuitry may be involved in perceptual learning, and the changes observed in V1 may be representative of how learned information is encoded throughout the cerebral cortex. PMID:22841310

  11. Visual Perceptual Echo Reflects Learning of Regularities in Rapid Luminance Sequences.

    PubMed

    Chang, Acer Y-C; Schwartzman, David J; VanRullen, Rufin; Kanai, Ryota; Seth, Anil K

    2017-08-30

    A novel neural signature of active visual processing has recently been described in the form of the "perceptual echo", in which the cross-correlation between a sequence of randomly fluctuating luminance values and occipital electrophysiological signals exhibits a long-lasting periodic (∼100 ms cycle) reverberation of the input stimulus (VanRullen and Macdonald, 2012). As yet, however, the mechanisms underlying the perceptual echo and its function remain unknown. Reasoning that natural visual signals often contain temporally predictable, though nonperiodic features, we hypothesized that the perceptual echo may reflect a periodic process associated with regularity learning. To test this hypothesis, we presented subjects with successive repetitions of a rapid nonperiodic luminance sequence, and examined the effects on the perceptual echo, finding that echo amplitude linearly increased with the number of presentations of a given luminance sequence. These data suggest that the perceptual echo reflects a neural signature of regularity learning. Furthermore, when a set of repeated sequences was followed by a sequence with inverted luminance polarities, the echo amplitude decreased to the same level evoked by a novel stimulus sequence. Crucially, when the original stimulus sequence was re-presented, the echo amplitude returned to a level consistent with the number of presentations of this sequence, indicating that the visual system retained sequence-specific information for many seconds, even in the presence of intervening visual input. Altogether, our results reveal a previously undiscovered regularity learning mechanism within the human visual system, reflected by the perceptual echo. SIGNIFICANCE STATEMENT How the brain encodes and learns fast-changing but nonperiodic visual input remains unknown, even though such visual input characterizes natural scenes. We investigated whether the phenomenon of "perceptual echo" might index such learning.
The perceptual echo is a long-lasting reverberation between a rapidly changing visual input and evoked neural activity, apparent in cross-correlations between occipital EEG and stimulus sequences, peaking in the alpha (∼10 Hz) range. We indeed found that the perceptual echo is enhanced by repeatedly presenting the same visual sequence, indicating that the human visual system can rapidly and automatically learn regularities embedded within fast-changing dynamic sequences. These results point to a previously undiscovered regularity learning mechanism, operating at a rate defined by the alpha frequency. Copyright © 2017 the authors.
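The stimulus-to-EEG cross-correlation behind the perceptual echo can be sketched in a few lines. This is a simplified single-trial illustration on synthetic data (real analyses z-score per trial and average across many trials and subjects); here the simulated EEG simply echoes the stimulus 100 ms later, so the cross-correlation peaks at a lag of 0.1 s:

```python
import numpy as np

def perceptual_echo(luminance, eeg, fs, max_lag_s=1.0):
    """Cross-correlate a random luminance sequence with an EEG trace at
    positive lags (stimulus leading the brain response)."""
    lum = (luminance - luminance.mean()) / luminance.std()  # z-score stimulus
    sig = (eeg - eeg.mean()) / eeg.std()                    # z-score EEG
    lags = np.arange(int(max_lag_s * fs))
    xcorr = np.array([np.mean(lum[: len(lum) - k] * sig[k:]) for k in lags])
    return lags / fs, xcorr

fs = 500.0  # sampling rate, Hz
rng = np.random.default_rng(1)
lum = rng.standard_normal(5000)                    # random luminance sequence
eeg = np.roll(lum, 50) + 0.5 * rng.standard_normal(5000)  # echo at 50 samples
lag_s, xc = perceptual_echo(lum, eeg, fs)
# np.argmax(xc) falls at lag 0.1 s, the simulated echo latency
```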

  12. Knowledge Domain and Emerging Trends in Organic Photovoltaic Technology: A Scientometric Review Based on CiteSpace Analysis.

    PubMed

    Xiao, Fengjun; Li, Chengzhi; Sun, Jiangman; Zhang, Lianjie

    2017-01-01

    To study the rapid growth of research on organic photovoltaic (OPV) technology, development trends in the relevant research are analyzed using the CiteSpace software for text mining and visualization of scientific literature. With this analytical method, author output and cooperation, hot research topics, key references, and the development trend of OPV are identified and visualized. Unlike traditional review articles written by OPV experts, this work provides a new method for quantitatively visualizing the development of OPV technology research over the past decade.

  13. Knowledge Domain and Emerging trends in Organic Photovoltaic Technology: A Scientometric Review Based on CiteSpace Analysis

    NASA Astrophysics Data System (ADS)

    Xiao, Fengjun; Li, Chengzhi; Sun, Jiangman; Zhang, Lianjie

    2017-09-01

    To study the rapid growth of research on organic photovoltaic (OPV) technology, development trends in the field are analyzed with CiteSpace, a text-mining and visualization tool for scientific literature. This method identifies and visualizes author output and collaboration, hot research topics, key references, and the overall development trend of OPV. Unlike traditional expert-written reviews of OPV, this work provides a quantitative, visual account of how OPV research has developed over the past decade.

  14. Wired Widgets: Agile Visualization for Space Situational Awareness

    NASA Astrophysics Data System (ADS)

    Gerschefske, K.; Witmer, J.

    2012-09-01

    Continued advancements in sensors and analysis techniques have resulted in a wealth of Space Situational Awareness (SSA) data, made available via tools and Service Oriented Architectures (SOA) such as those in the Joint Space Operations Center Mission Systems (JMS) environment. Current visualization software cannot quickly adapt to rapidly changing missions and data, preventing operators and analysts from performing their jobs effectively. The value of this wealth of SSA data is not fully realized, as the operators' existing software is not built with the flexibility to consume new or changing sources of data or to rapidly customize their visualization as the mission evolves. While tools like the JMS user-defined operational picture (UDOP) have begun to fill this gap, this paper presents a further evolution, leveraging Web 2.0 technologies for maximum agility. We demonstrate a flexible Web widget framework with inter-widget data sharing, publish-subscribe eventing, and an API providing the basis for consumption of new data sources and adaptable visualization. Wired Widgets offers cross-portal widgets along with a widget communication framework and development toolkit for rapid new widget development, giving operators the ability to answer relevant questions as the mission evolves. Wired Widgets has been applied in a number of dynamic mission domains including disaster response, combat operations, and noncombatant evacuation scenarios. This variety of applications demonstrates that Wired Widgets provides a flexible, data-driven solution for visualization in changing environments. In this paper, we show how, deployed in the Ozone Widget Framework portal environment, Wired Widgets can provide agile, web-based visualization to support the SSA mission. Furthermore, we discuss how the tenets of agile visualization can be applied to the SSA problem space more generally to provide operators flexibility, potentially informing future acquisition and system development.

  15. Hearing Shapes: Event-related Potentials Reveal the Time Course of Auditory-Visual Sensory Substitution.

    PubMed

    Graulty, Christian; Papaioannou, Orestis; Bauer, Phoebe; Pitts, Michael A; Canseco-Gonzalez, Enriqueta

    2018-04-01

    In auditory-visual sensory substitution, visual information (e.g., shape) can be extracted through strictly auditory input (e.g., soundscapes). Previous studies have shown that image-to-sound conversions that follow simple rules [such as the Meijer algorithm; Meijer, P. B. L. An experimental system for auditory image representation. Transactions on Biomedical Engineering, 39, 111-121, 1992] are highly intuitive and rapidly learned by both blind and sighted individuals. A number of recent fMRI studies have begun to explore the neuroplastic changes that result from sensory substitution training. However, the time course of cross-sensory information transfer in sensory substitution is largely unexplored and may offer insights into the underlying neural mechanisms. In this study, we recorded ERPs to soundscapes before and after sighted participants were trained with the Meijer algorithm. We compared these posttraining versus pretraining ERP differences with those of a control group who received the same set of 80 auditory/visual stimuli but with arbitrary pairings during training. Our behavioral results confirmed the rapid acquisition of cross-sensory mappings, and the group trained with the Meijer algorithm was able to generalize their learning to novel soundscapes at impressive levels of accuracy. The ERP results revealed an early cross-sensory learning effect (150-210 msec) that was significantly enhanced in the algorithm-trained group compared with the control group as well as a later difference (420-480 msec) that was unique to the algorithm-trained group. These ERP modulations are consistent with previous fMRI results and provide additional insight into the time course of cross-sensory information transfer in sensory substitution.
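The Meijer-style image-to-sound mapping referenced above scans an image column by column over time, maps row position to sinusoid frequency (top rows high), and maps pixel brightness to amplitude. A minimal sketch of that scheme follows; the frequency range, duration, and sampling rate are illustrative choices, not Meijer's published parameters.

```python
import numpy as np

def image_to_soundscape(image, fs=8000, duration=1.0, f_lo=500.0, f_hi=3000.0):
    """Convert a 2D image to a soundscape: columns play left to right in
    time, row index sets sinusoid frequency (top = high, exponentially
    spaced), and pixel brightness sets amplitude."""
    n_rows, n_cols = image.shape
    samples_per_col = int(fs * duration / n_cols)
    # Exponentially spaced frequencies, highest at the top row.
    freqs = f_hi * (f_lo / f_hi) ** (np.arange(n_rows) / (n_rows - 1))
    sound = np.zeros(n_cols * samples_per_col)
    t = np.arange(samples_per_col) / fs
    for c in range(n_cols):
        col = np.zeros(samples_per_col)
        for r in range(n_rows):
            if image[r, c] > 0:
                col += image[r, c] * np.sin(2 * np.pi * freqs[r] * t)
        sound[c * samples_per_col : (c + 1) * samples_per_col] = col
    return sound

# A diagonal line renders as a frequency sweep from high to low.
img = np.eye(8)
soundscape = image_to_soundscape(img)
```

Inspecting the spectrum of the first and last time segments confirms the sweep: the opening segment is dominated by the top-row frequency and the closing segment by the bottom-row frequency.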

  16. A stable biologically motivated learning mechanism for visual feature extraction to handle facial categorization.

    PubMed

    Rajaei, Karim; Khaligh-Razavi, Seyed-Mahdi; Ghodrati, Masoud; Ebrahimpour, Reza; Shiri Ahmad Abadi, Mohammad Ebrahim

    2012-01-01

    The brain mechanism of extracting visual features for recognizing various objects has consistently been a controversial issue in computational models of object recognition. To extract visual features, we introduce a new, biologically motivated model for facial categorization, which is an extension of the Hubel and Wiesel simple-to-complex cell hierarchy. To address the synaptic stability versus plasticity dilemma, we apply the Adaptive Resonance Theory (ART) for extracting informative intermediate level visual features during the learning process, which also makes this model stable against the destruction of previously learned information while learning new information. Such a mechanism has been suggested to be embedded within known laminar microcircuits of the cerebral cortex. To reveal the strength of the proposed visual feature learning mechanism, we show that when we use this mechanism in the training process of a well-known biologically motivated object recognition model (the HMAX model), it performs better than the HMAX model in face/non-face classification tasks. Furthermore, we demonstrate that our proposed mechanism is capable of following similar trends in performance as humans in a psychophysical experiment using a face versus non-face rapid categorization task.

  17. Multi-brain fusion and applications to intelligence analysis

    NASA Astrophysics Data System (ADS)

    Stoica, A.; Matran-Fernandez, A.; Andreou, D.; Poli, R.; Cinel, C.; Iwashita, Y.; Padgett, C.

    2013-05-01

    In a rapid serial visual presentation (RSVP), images are shown at an extremely rapid pace. Yet, the images can still be parsed by the visual system to some extent. In fact, the detection of specific targets in a stream of pictures triggers a characteristic electroencephalography (EEG) response that can be recognized by a brain-computer interface (BCI) and exploited for automatic target detection. Research funded by DARPA's Neurotechnology for Intelligence Analysts program has achieved speed-ups in sifting through satellite images when adopting this approach. This paper extends the use of BCI technology from individual analysts to collaborative BCIs. We show that the integration of information in EEGs collected from multiple operators results in performance improvements compared to the single-operator case.
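Why fusing EEG evidence across operators helps can be shown with a toy simulation: if each operator's single-trial classifier output is the same target evidence corrupted by independent EEG noise, averaging outputs across operators shrinks the noise and raises detection accuracy. All numbers below (noise level, trial count, operator count) are illustrative assumptions, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials = 4000
targets = rng.random(n_trials) < 0.5   # target present on half the trials
evidence = targets.astype(float)       # idealized ERP evidence: 1 if target

def operator_scores(n_ops, noise_sd=2.0):
    """Simulated per-operator classifier outputs: shared target evidence
    plus independent EEG noise for each operator."""
    return evidence + noise_sd * rng.standard_normal((n_ops, n_trials))

def accuracy(scores):
    """Threshold the (averaged) scores midway between the two classes."""
    return np.mean((scores > 0.5) == targets)

single = accuracy(operator_scores(1).mean(axis=0))
group = accuracy(operator_scores(7).mean(axis=0))
```

Averaging across 7 simulated operators cuts the noise standard deviation by a factor of √7, so `group` accuracy reliably exceeds `single` accuracy, the collaborative-BCI effect the abstract reports.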

  18. Cortical Neuroprosthesis Merges Visible and Invisible Light Without Impairing Native Sensory Function

    PubMed Central

    Thomson, Eric E.; Zea, Ivan; França, Wendy

    2017-01-01

    Abstract Adult rats equipped with a sensory prosthesis, which transduced infrared (IR) signals into electrical signals delivered to somatosensory cortex (S1), took approximately 4 d to learn a four-choice IR discrimination task. Here, we show that when such IR signals are projected to the primary visual cortex (V1), rats that are pretrained in a visual-discrimination task typically learn the same IR discrimination task on their first day of training. However, without prior training on a visual discrimination task, the learning rates for S1- and V1-implanted animals converged, suggesting there is no intrinsic difference in learning rate between the two areas. We also discovered that animals were able to integrate IR information into the ongoing visual processing stream in V1, performing a visual-IR integration task in which they had to combine IR and visual information. Furthermore, when the IR prosthesis was implanted in S1, rats showed no impairment in their ability to use their whiskers to perform a tactile discrimination task. Instead, in some rats, this ability was actually enhanced. Cumulatively, these findings suggest that cortical sensory neuroprostheses can rapidly augment the representational scope of primary sensory areas, integrating novel sources of information into ongoing processing while incurring minimal loss of native function. PMID:29279860

  19. Tracking the first two seconds: three stages of visual information processing?

    PubMed

    Jacob, Jane; Breitmeyer, Bruno G; Treviño, Melissa

    2013-12-01

    We compared visual priming and comparison tasks to assess information processing of a stimulus during the first 2 s after its onset. In both tasks, a 13-ms prime was followed at varying SOAs by a 40-ms probe. In the priming task, observers identified the probe as rapidly and accurately as possible; in the comparison task, observers determined as rapidly and accurately as possible whether or not the probe and prime were identical. Priming effects attained a maximum at an SOA of 133 ms and then declined monotonically to zero by 700 ms, indicating reliance on relatively brief visuosensory (iconic) memory. In contrast, the comparison effects yielded a multiphasic function, showing a maximum at 0 ms followed by a minimum at 133 ms, followed in turn by a maximum at 240 ms and another minimum at 720 ms, and finally a third maximum at 1,200 ms before declining thereafter. The results indicate three stages of prime processing that we take to correspond to iconic visible persistence, iconic informational persistence, and visual working memory, with the first two used in the priming task and all three in the comparison task. These stages are related to stages presumed to underlie stimulus processing in other tasks, such as those giving rise to the attentional blink.

  20. Perceptual training yields rapid improvements in visually impaired youth

    PubMed Central

    Nyquist, Jeffrey B.; Lappin, Joseph S.; Zhang, Ruyuan; Tadin, Duje

    2016-01-01

    Visual function demands coordinated responses to information over a wide field of view, involving both central and peripheral vision. Visually impaired individuals often seem to underutilize peripheral vision, even in the absence of obvious peripheral deficits. Motivated by perceptual training studies with typically sighted adults, we examined the effectiveness of perceptual training in improving peripheral perception of visually impaired youth. Here, we evaluated the effectiveness of three training regimens: (1) an action video game, (2) a psychophysical task that combined attentional tracking with a spatially and temporally unpredictable motion discrimination task, and (3) a control video game. Training with both the action video game and modified attentional tracking yielded improvements in visual performance. Training effects were generally larger in the far periphery and appear to be stable 12 months after training. These results indicate that peripheral perception might be underutilized by visually impaired youth and that this underutilization can be improved with only ~8 hours of perceptual training. Moreover, the similarity of improvements following attentional tracking and action video-game training suggests that well-documented effects of action video-game training might be due to the sustained deployment of attention to multiple dynamic targets while concurrently requiring rapid attending and perception of unpredictable events. PMID:27901026

  1. Maximizing Impact: Pairing interactive web visualizations with traditional print media

    NASA Astrophysics Data System (ADS)

    Read, E. K.; Appling, A.; Carr, L.; De Cicco, L.; Read, J. S.; Walker, J. I.; Winslow, L. A.

    2016-12-01

    Our Nation's rapidly growing store of environmental data makes new demands on researchers: to take on increasingly broad-scale, societally relevant analyses and to rapidly communicate findings to the public. Interactive web-based data visualizations now commonly supplement or even constitute journalism, and science journalism has followed suit. To maximize the impact of US Geological Survey (USGS) science, the USGS Office of Water Information Data Science team builds tools and products that combine traditional static research products (e.g., print journal articles) with web-based, interactive data visualizations that target non-scientific audiences. We developed a lightweight, open-source framework for web visualizations to reduce time to production. The framework provides templates for a data visualization workflow and the packaging of text, interactive figures, and images into an appealing web interface with a standardized look and feel, usage tracking, and responsiveness. By partnering with subject matter experts to focus on timely, societally relevant issues, we use these tools to produce appealing visual stories targeting specific audiences, including managers, the general public, and scientists, on diverse topics including drought, microplastic pollution, and fisheries response to climate change. We will describe the collaborative and technical methodologies used, present examples of their application, and discuss challenges and opportunities for the future.

  2. Supporting Clinical Cognition: A Human-Centered Approach to a Novel ICU Information Visualization Dashboard.

    PubMed

    Faiola, Anthony; Srinivas, Preethi; Duke, Jon

    2015-01-01

    Advances in intensive care unit bedside displays/interfaces and electronic medical record (EMR) technology have not adequately addressed the visual clarity of patient data/information needed to further reduce cognitive load during clinical decision-making. We responded to these challenges with a human-centered approach to designing and testing a decision-support tool: MIVA 2.0 (Medical Information Visualization Assistant, v.2). Envisioned as an EMR visualization dashboard to support rapid analysis of real-time clinical data trends, our primary goal originated from a clinical requirement to reduce cognitive overload. In the study, we recruited a convenience sample of 12 participants and used quantitative and qualitative measures to compare MIVA 2.0 with ICU paper medical charts, using time-on-task, post-test questionnaires, and interviews. Findings demonstrated a significant difference in speed and accuracy with the use of MIVA 2.0. Qualitative outcomes concurred, with participants acknowledging the potential impact of MIVA 2.0 for reducing cognitive load and enabling quicker, more accurate decision-making.

  3. Behavioral assessment of emotional and motivational appraisal during visual processing of emotional scenes depending on spatial frequencies.

    PubMed

    Fradcourt, B; Peyrin, C; Baciu, M; Campagne, A

    2013-10-01

    Previous studies of visual processing of emotional stimuli have revealed a preference for a specific type of visual spatial frequency (high spatial frequency, HSF; low spatial frequency, LSF) according to task demands. The majority of studies used face stimuli and focused on the appraisal of the emotional state of others. The present behavioral study investigates the relative role of spatial frequencies in processing emotional natural scenes during two explicit cognitive appraisal tasks: one emotional, based on the self-emotional experience, and one motivational, based on the tendency to action. Our results suggest that HSF information was the most relevant for rapidly identifying the self-emotional experience (unpleasant, pleasant, and neutral) while LSF was required for rapidly identifying the tendency to action (avoidance, approach, and no action). The tendency to action based on LSF analysis showed a priority for unpleasant stimuli whereas the identification of emotional experience based on HSF analysis showed a priority for pleasant stimuli. The present study confirms the interest of considering both emotional and motivational characteristics of visual stimuli. Copyright © 2013 Elsevier Inc. All rights reserved.
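The HSF/LSF manipulation used in such studies amounts to band-splitting an image in the spatial-frequency (Fourier) domain. Below is a minimal sketch with an ideal hard-cutoff filter on a synthetic image; real studies typically use smoother Gaussian or Butterworth filters with cutoffs specified in cycles per degree of visual angle.

```python
import numpy as np

def spatial_frequency_split(image, cutoff):
    """Split an image into low- and high-spatial-frequency components
    using an ideal circular filter in the Fourier domain.
    `cutoff` is in cycles per image."""
    F = np.fft.fft2(image)
    fy = np.fft.fftfreq(image.shape[0]) * image.shape[0]
    fx = np.fft.fftfreq(image.shape[1]) * image.shape[1]
    radius = np.hypot(*np.meshgrid(fy, fx, indexing="ij"))
    low = np.real(np.fft.ifft2(np.where(radius <= cutoff, F, 0)))
    return low, image - low  # (LSF version, HSF version)

# Synthetic "scene": a coarse 2-cycle gradient (LSF content) plus a fine
# 20-cycle texture (HSF content); a cutoff of 8 cycles/image separates them.
y = np.arange(64)
coarse = np.sin(2 * np.pi * 2 * y / 64)[:, None] * np.ones(64)
fine = np.sin(2 * np.pi * 20 * y / 64)[:, None] * np.ones(64)
lsf, hsf = spatial_frequency_split(coarse + fine, cutoff=8)
```

By construction the two components sum back to the original image, and with these pure-harmonic inputs the split recovers each component exactly.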

  4. SensorDB: a virtual laboratory for the integration, visualization and analysis of varied biological sensor data.

    PubMed

    Salehi, Ali; Jimenez-Berni, Jose; Deery, David M; Palmer, Doug; Holland, Edward; Rozas-Larraondo, Pablo; Chapman, Scott C; Georgakopoulos, Dimitrios; Furbank, Robert T

    2015-01-01

    To our knowledge, there is no software or database solution that supports large volumes of biological time series sensor data efficiently and enables data visualization and analysis in real time. Existing solutions for managing data typically use unstructured file systems or relational databases. These systems are not designed to provide instantaneous response to user queries. Furthermore, they do not support rapid data analysis and visualization to enable interactive experiments. In large scale experiments, this behaviour slows research discovery, discourages the widespread sharing and reuse of data that could otherwise inform critical decisions in a timely manner and encourage effective collaboration between groups. In this paper we present SensorDB, a web based virtual laboratory that can manage large volumes of biological time series sensor data while supporting rapid data queries and real-time user interaction. SensorDB is sensor agnostic and uses web-based, state-of-the-art cloud and storage technologies to efficiently gather, analyse and visualize data. Collaboration and data sharing between different agencies and groups is thereby facilitated. SensorDB is available online at http://sensordb.csiro.au.

  5. Top-down contextual knowledge guides visual attention in infancy.

    PubMed

    Tummeltshammer, Kristen; Amso, Dima

    2017-10-26

    The visual context in which an object or face resides can provide useful top-down information for guiding attention orienting, object recognition, and visual search. Although infants have demonstrated sensitivity to covariation in spatial arrays, it is presently unclear whether they can use rapidly acquired contextual knowledge to guide attention during visual search. In this eye-tracking experiment, 6- and 10-month-old infants searched for a target face hidden among colorful distracter shapes. Targets appeared in Old or New visual contexts, depending on whether the visual search arrays (defined by the spatial configuration, shape and color of component items in the search display) were repeated or newly generated throughout the experiment. Targets in Old contexts appeared in the same location within the same configuration, such that context covaried with target location. Both 6- and 10-month-olds successfully distinguished between Old and New contexts, exhibiting faster search times, fewer looks at distracters, and more anticipation of targets when contexts repeated. This initial demonstration of contextual cueing effects in infants indicates that they can use top-down information to facilitate orienting during memory-guided visual search. © 2017 John Wiley & Sons Ltd.

  6. How virtual reality works: illusions of vision in "real" and virtual environments

    NASA Astrophysics Data System (ADS)

    Stark, Lawrence W.

    1995-04-01

    Visual illusions abound in normal vision--illusions of clarity and completeness, of continuity in time and space, of presence and vivacity--and are part and parcel of the visual world in which we live. These illusions are discussed in terms of the human visual system, with its high-resolution fovea, moved from point to point in the visual scene by rapid saccadic eye movements (EMs). This sampling of visual information is supplemented by a low-resolution, wide peripheral field of view, especially sensitive to motion. Cognitive-spatial models controlling perception, imagery, and 'seeing' also control the EMs that shift the fovea in the Scanpath mode. These illusions provide for presence, the sense of being within an environment. They equally well lead to 'telepresence,' the sense of being within a virtual display, especially if the operator is intensely interacting within an eye-hand and head-eye human-machine interface that provides congruent visual and motor frames of reference. Interaction, immersion, and interest compel telepresence; intuitive functioning and engineered information flows can optimize human adaptation to the artificial new world of virtual reality, as virtual reality expands into entertainment, simulation, telerobotics, scientific visualization, and other professional work.

  7. Visual Reliance for Balance Control in Older Adults Persists When Visual Information Is Disrupted by Artificial Feedback Delays

    PubMed Central

    Balasubramaniam, Ramesh

    2014-01-01

    Sensory information from our eyes, skin and muscles helps guide and correct balance. Less appreciated, however, is that delays in the transmission of sensory information between our eyes, limbs and central nervous system can exceed several tens of milliseconds. Investigating how these time-delayed sensory signals influence balance control is central to understanding the postural system. Here, we investigate how delayed visual feedback and cognitive performance influence postural control in healthy young and older adults. The task required that participants position their center of pressure (COP) in a fixed target as accurately as possible without visual feedback about their COP location (eyes-open balance), or with artificial time delays imposed on visual COP feedback. On selected trials, the participants also performed a silent arithmetic task (cognitive dual task). We separated COP time series into distinct frequency components using low- and high-pass filtering routines. Visual feedback delays affected low frequency postural corrections in young and older adults, with larger increases in postural sway noted for the group of older adults. In comparison, cognitive performance reduced the variability of rapid center of pressure displacements in young adults, but did not alter postural sway in the group of older adults. Our results demonstrate that older adults prioritize vision to control posture. This visual reliance persists even when feedback about the task is delayed by several hundreds of milliseconds. PMID:24614576

  8. Perceptual learning increases the strength of the earliest signals in visual cortex.

    PubMed

    Bao, Min; Yang, Lin; Rios, Cristina; He, Bin; Engel, Stephen A

    2010-11-10

    Training improves performance on most visual tasks. Such perceptual learning can modify how information is read out from, and represented in, later visual areas, but effects on early visual cortex are controversial. In particular, it remains unknown whether learning can reshape neural response properties in early visual areas independent from feedback arising in later cortical areas. Here, we tested whether learning can modify feedforward signals in early visual cortex as measured by the human electroencephalogram. Fourteen subjects were trained for >24 d to detect a diagonal grating pattern in one quadrant of the visual field. Training improved performance, reducing the contrast needed for reliable detection, and also reliably increased the amplitude of the earliest component of the visual evoked potential, the C1. Control orientations and locations showed smaller effects of training. Because the C1 arises rapidly and has a source in early visual cortex, our results suggest that learning can increase early visual area response through local receptive field changes without feedback from later areas.

  9. Advanced Visualization and Interactive Display Rapid Innovation and Discovery Evaluation Research (VISRIDER) Program Task 6: Point Cloud Visualization Techniques for Desktop and Web Platforms

    DTIC Science & Technology

    2017-04-01

    Reporting period: October 2013 – September 2014. This task evaluated various point cloud visualization techniques for viewing large-scale LiDAR datasets, including their potential use on thick-client desktop platforms and web platforms.

  10. Adaptation to Laterally Displacing Prisms in Anisometropic Amblyopia.

    PubMed

    Sklar, Jaime C; Goltz, Herbert C; Gane, Luke; Wong, Agnes M F

    2015-06-01

    Using visual feedback to modify sensorimotor output in response to changes in the external environment is essential for daily function. Prism adaptation is a well-established experimental paradigm to quantify sensorimotor adaptation; that is, how the sensorimotor system adapts to an optically-altered visuospatial environment. Amblyopia is a neurodevelopmental disorder characterized by spatiotemporal deficits in vision that impacts manual and oculomotor function. This study explored the effects of anisometropic amblyopia on prism adaptation. Eight participants with anisometropic amblyopia and 11 visually-normal adults, all right-handed, were tested. Participants pointed to visual targets and were presented with feedback of hand position near the terminus of limb movement in three blocks: baseline, adaptation, and deadaptation. Adaptation was induced by viewing with binocular 11.4° (20 prism diopter [PD]) left-shifting prisms. All tasks were performed during binocular viewing. Participants with anisometropic amblyopia required significantly more trials (i.e., an increased time constant) to adapt to prismatic optical displacement than visually-normal controls. During the rapid error correction phase of adaptation, people with anisometropic amblyopia also exhibited greater variance in motor output than visually-normal controls. Amblyopia impairs the ability to adapt the sensorimotor system to an optically-displaced visual environment. The increased time constant and greater variance in motor output during the rapid error correction phase of adaptation may indicate deficits in the processing of visual information as a result of degraded spatiotemporal vision in amblyopia.

  11. Spatially generalizable representations of facial expressions: Decoding across partial face samples.

    PubMed

    Greening, Steven G; Mitchell, Derek G V; Smith, Fraser W

    2018-04-01

    A network of cortical and sub-cortical regions is known to be important in the processing of facial expression. However, to date no study has investigated whether representations of facial expressions present in this network permit generalization across independent samples of face information (e.g., eye region vs mouth region). We presented participants with partial face samples of five expression categories in a rapid event-related fMRI experiment. We reveal a network of face-sensitive regions that contain information about facial expression categories regardless of which part of the face is presented. We further reveal that the neural information present in a subset of these regions: dorsal prefrontal cortex (dPFC), superior temporal sulcus (STS), lateral occipital and ventral temporal cortex, and even early visual cortex, enables reliable generalization across independent visual inputs (faces depicting the 'eyes only' vs 'eyes removed'). Furthermore, classification performance was correlated with behavioral performance in STS and dPFC. Our results demonstrate that both higher (e.g., STS, dPFC) and lower level cortical regions contain information useful for facial expression decoding that goes beyond the visual information presented, and implicate a key role for contextual mechanisms such as cortical feedback in facial expression perception under challenging conditions of visual occlusion. Copyright © 2017 Elsevier Ltd. All rights reserved.

  12. Rapid fusion of 2D X-ray fluoroscopy with 3D multislice CT for image-guided electrophysiology procedures

    NASA Astrophysics Data System (ADS)

    Zagorchev, Lyubomir; Manzke, Robert; Cury, Ricardo; Reddy, Vivek Y.; Chan, Raymond C.

    2007-03-01

    Interventional cardiac electrophysiology (EP) procedures are typically performed under X-ray fluoroscopy for visualizing catheters and EP devices relative to other highly-attenuating structures such as the thoracic spine and ribs. These projections do not however contain information about soft-tissue anatomy and there is a recognized need for fusion of conventional fluoroscopy with pre-operatively acquired cardiac multislice computed tomography (MSCT) volumes. Rapid 2D-3D integration in this application would allow for real-time visualization of all catheters present within the thorax in relation to the cardiovascular anatomy visible in MSCT. We present a method for rapid fusion of 2D X-ray fluoroscopy with 3D MSCT that can facilitate EP mapping and interventional procedures by reducing the need for intra-operative contrast injections to visualize heart chambers and specialized systems to track catheters within the cardiovascular anatomy. We use hardware-accelerated ray-casting to compute digitally reconstructed radiographs (DRRs) from the MSCT volume and iteratively optimize the rigid-body pose of the volumetric data to maximize the similarity between the MSCT-derived DRR and the intra-operative X-ray projection data.
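The registration loop this abstract describes (project the volume, compare to the X-ray, update the pose) can be caricatured in a few lines. This toy version makes several simplifying assumptions: a parallel projection stands in for the ray-cast DRR, normalized cross-correlation is used as the similarity measure, and an exhaustive search over integer in-plane translations replaces iterative optimization of a full rigid-body pose.

```python
import numpy as np

def drr(volume):
    """Toy digitally reconstructed radiograph: a parallel projection that
    sums attenuation along one axis (real systems ray-cast a cone beam)."""
    return volume.sum(axis=0)

def ncc(a, b):
    """Normalized cross-correlation between two images."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return np.mean(a * b)

def register_translation(volume, xray, search=5):
    """Search integer in-plane shifts of the volume, keeping the pose whose
    DRR best matches the X-ray (highest NCC)."""
    best = (None, -np.inf)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = np.roll(volume, (dy, dx), axis=(1, 2))
            score = ncc(drr(cand), xray)
            if score > best[1]:
                best = ((dy, dx), score)
    return best

# Synthetic example: the intra-operative "X-ray" is the DRR of the volume
# displaced by a known offset; registration should recover that offset.
vol = np.zeros((16, 32, 32))
vol[4:12, 10:18, 8:20] = 1.0
xray = drr(np.roll(vol, (3, -2), axis=(1, 2)))
shift, score = register_translation(vol, xray)
```

At the correct pose the candidate DRR matches the X-ray exactly, so the search recovers the (3, -2) displacement with a similarity score of essentially 1.0; a practical implementation would instead optimize six rigid-body parameters with a gradient-free optimizer, as the abstract indicates.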

  13. Mixing apples with oranges: Visual attention deficits in schizophrenia.

    PubMed

    Caprile, Claudia; Cuevas-Esteban, Jorge; Ochoa, Susana; Usall, Judith; Navarra, Jordi

    2015-09-01

    Patients with schizophrenia usually present cognitive deficits. We investigated possible anomalies at filtering out irrelevant visual information in this psychiatric disorder. Associations between these anomalies and positive and/or negative symptomatology were also addressed. A group of individuals with schizophrenia and a control group of healthy adults performed a Garner task. In Experiment 1, participants had to rapidly classify visual stimuli according to their colour while ignoring their shape. These two perceptual dimensions are reported to be "separable" by visual selective attention. In Experiment 2, participants classified the width of other visual stimuli while trying to ignore their height. These two visual dimensions are considered as being "integral" and cannot be attended separately. While healthy perceivers were, in Experiment 1, able to exclusively respond to colour, an irrelevant variation in shape increased colour-based reaction times (RTs) in the group of patients. In Experiment 2, RTs when classifying width increased in both groups as a consequence of perceiving a variation in the irrelevant dimension (height). However, this interfering effect was larger in the group of patients with schizophrenia than in the control group. Further analyses revealed that these alterations in filtering out irrelevant visual information correlated with positive symptoms on the PANSS scale. A possible limitation of the study is the relatively small sample. Our findings suggest the presence of attention deficits in filtering out irrelevant visual information in schizophrenia that could be related to positive symptomatology. Copyright © 2015 Elsevier Ltd. All rights reserved.

  14. Visual adaptation and novelty responses in the superior colliculus

    PubMed Central

    Boehnke, Susan E.; Berg, David J.; Marino, Robert M.; Baldi, Pierre F.; Itti, Laurent; Munoz, Douglas P.

    2011-01-01

    The brain's ability to ignore repeating, often redundant, information while enhancing novel information processing is paramount to survival. When stimuli are repeatedly presented, the response of visually-sensitive neurons decreases in magnitude, i.e., neurons adapt or habituate, although the mechanism is not yet known. We monitored activity of visual neurons in the superior colliculus (SC) of rhesus monkeys that actively fixated while repeated visual events were presented. We dissociated adaptation from habituation as mechanisms of the response decrement by using a Bayesian model of adaptation, and by employing a paradigm with rare trials containing an oddball stimulus that was either brighter or dimmer. If the mechanism is adaptation, response recovery should be seen only for the brighter stimulus; if habituation, response recovery ('dishabituation') should be seen for both the brighter and dimmer stimulus. We observed a reduction in the magnitude of the initial transient response and an increase in response onset latency with stimulus repetition for all visually responsive neurons in the SC. Response decrement was successfully captured by the adaptation model which also predicted the effects of presentation rate and rare luminance changes. However, in a subset of neurons with sustained activity to visual stimuli, a novelty signal akin to dishabituation was observed late in the visual response profile to both brighter and dimmer stimuli and was not captured by the model. This suggests that SC neurons integrate both rapidly discounted information about repeating stimuli and novelty information about oddball events, to support efficient selection in a cluttered dynamic world. PMID:21864319
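The logic of dissociating adaptation from habituation with luminance oddballs can be illustrated with a toy divisive-adaptation model (an assumption of this sketch, not the Bayesian model used in the study): because the response depends on luminance relative to the adaptation state, a brighter oddball recovers the response while a dimmer one does not, whereas habituation (a decrement tied to the repeated stimulus itself) would predict recovery for both.

```python
def adaptation_responses(luminances, tau=0.5):
    """Divisive adaptation: response = L / (L + a), where the adaptation
    state a drifts toward recently seen luminance. The decrement therefore
    scales with luminance relative to a, not with stimulus identity."""
    a, responses = 0.0, []
    for L in luminances:
        responses.append(L / (L + a))
        a += tau * (L - a)          # adaptation state tracks the stimulus
    return responses

standards = [1.0] * 6
r_bright = adaptation_responses(standards + [2.0])  # brighter oddball last
r_dim = adaptation_responses(standards + [0.5])     # dimmer oddball last
r_std = adaptation_responses(standards + [1.0])     # no oddball
# Adaptation predicts recovery only for the brighter oddball:
print(r_bright[-1] > r_std[-1] > r_dim[-1])  # True
```

Under this model the response also decrements across repetitions of the standard, matching the reduced transient the authors report; the oddball trials are what separate the two candidate mechanisms.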

  15. Review of fluorescence guided surgery visualization and overlay techniques

    PubMed Central

    Elliott, Jonathan T.; Dsouza, Alisha V.; Davis, Scott C.; Olson, Jonathan D.; Paulsen, Keith D.; Roberts, David W.; Pogue, Brian W.

    2015-01-01

    In fluorescence guided surgery, data visualization represents a critical step between signal capture and display needed for clinical decisions informed by that signal. The diversity of methods for displaying surgical images is reviewed, and a particular focus is placed on electronically detected and visualized signals, as required for near-infrared or low concentration tracers. Factors driving the choices such as human perception, the need for rapid decision making in a surgical environment, and biases induced by display choices are outlined. Five practical suggestions are outlined for optimal display orientation, color map, transparency/alpha function, dynamic range compression, and color perception check. PMID:26504628

  16. Read-out of emotional information from iconic memory: the longevity of threatening stimuli.

    PubMed

    Kuhbandner, Christof; Spitzer, Bernhard; Pekrun, Reinhard

    2011-05-01

    Previous research has shown that emotional stimuli are more likely than neutral stimuli to be selected by attention, indicating that the processing of emotional information is prioritized. In this study, we examined whether the emotional significance of stimuli influences visual processing already at the level of transient storage of incoming information in iconic memory, before attentional selection takes place. We used a typical iconic memory task in which the delay of a poststimulus cue, indicating which of several visual stimuli has to be reported, was varied. Performance decreased rapidly with increasing cue delay, reflecting the fast decay of information stored in iconic memory. However, although neutral stimulus information and emotional stimulus information were initially equally likely to enter iconic memory, the subsequent decay of the initially stored information was slowed for threatening stimuli, a result indicating that fear-relevant information has prolonged availability for read-out from iconic memory. This finding provides the first evidence that emotional significance already facilitates stimulus processing at the stage of iconic memory.

  17. A New Definition for Ground Control

    NASA Technical Reports Server (NTRS)

    2002-01-01

    LandForm(R) VisualFlight(R) blends the power of a geographic information system with the speed of a flight simulator to transform a user's desktop computer into a "virtual cockpit." The software product, which is fully compatible with all Microsoft(R) Windows(R) operating systems, provides distributed, real-time three-dimensional flight visualization over a host of networks. From a desktop, a user can immediately obtain a cockpit view, a chase-plane view, or an airborne tracker view. A customizable display also allows the user to overlay various flight parameters, including latitude, longitude, altitude, pitch, roll, and heading information. Rapid Imaging Software sought assistance from NASA, and the VisualFlight technology came to fruition under a Phase II SBIR contract with Johnson Space Center in 1998. Three years later, on December 13, 2001, Ken Ham successfully flew NASA's X-38 spacecraft from a remote, ground-based cockpit using LandForm VisualFlight as part of his primary situation awareness display in a flight test at Edwards Air Force Base, California.

  18. Visual Feedback of Tongue Movement for Novel Speech Sound Learning

    PubMed Central

    Katz, William F.; Mehta, Sonya

    2015-01-01

    Pronunciation training studies have yielded important information concerning the processing of audiovisual (AV) information. Second language (L2) learners show increased reliance on bottom-up, multimodal input for speech perception (compared to monolingual individuals). However, little is known about the role of viewing one's own speech articulation processes during speech training. The current study investigated whether real-time, visual feedback for tongue movement can improve a speaker's learning of non-native speech sounds. An interactive 3D tongue visualization system based on electromagnetic articulography (EMA) was used in a speech training experiment. Native speakers of American English produced a novel speech sound (/ɖ/; a voiced, coronal, palatal stop) before, during, and after trials in which they viewed their own speech movements using the 3D model. Talkers' productions were evaluated using kinematic (tongue-tip spatial positioning) and acoustic (burst spectra) measures. The results indicated a rapid gain in accuracy associated with visual feedback training. The findings are discussed with respect to neural models for multimodal speech processing. PMID:26635571

  19. Solid object visualization of 3D ultrasound data

    NASA Astrophysics Data System (ADS)

    Nelson, Thomas R.; Bailey, Michael J.

    2000-04-01

    Visualization of volumetric medical data is challenging. Rapid-prototyping (RP) equipment producing solid object prototype models of computer generated structures is directly applicable to visualization of medical anatomic data. The purpose of this study was to develop methods for transferring 3D Ultrasound (3DUS) data to RP equipment for visualization of patient anatomy. 3DUS data were acquired using research and clinical scanning systems. Scaling information was preserved and the data were segmented using threshold and local operators to extract features of interest, converted from voxel raster coordinate format to a set of polygons representing an iso-surface and transferred to the RP machine to create a solid 3D object. Fabrication required 30 to 60 minutes depending on object size and complexity. After creation, the model could be touched and viewed. A '3D visualization hardcopy device' has advantages for conveying spatial relations compared to visualization using computer display systems. The hardcopy model may be used for teaching or therapy planning. Objects may be produced at the exact dimension of the original object or scaled up (or down) to facilitate matching the viewer's reference frame more optimally. RP models represent a useful means of communicating important information in a tangible fashion to patients and physicians.
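The data-preparation steps described here (threshold segmentation, then conversion of the voxel raster to a surface while preserving physical scale) can be sketched as follows. This is a minimal illustration: a global threshold and 6-neighbor boundary detection stand in for the local operators and iso-surface polygonization (e.g., marching cubes) used in a real 3DUS-to-RP pipeline, and the voxel spacing is a made-up value.

```python
import numpy as np

def segment(volume, threshold):
    """Simple global-threshold segmentation of a 3D ultrasound volume."""
    return volume >= threshold

def surface_voxels(mask):
    """Voxels on the object boundary: foreground voxels with at least one
    background 6-neighbor (a crude stand-in for iso-surface extraction)."""
    p = np.pad(mask, 1, constant_values=False)
    core = p[1:-1, 1:-1, 1:-1]
    interior = (p[:-2, 1:-1, 1:-1] & p[2:, 1:-1, 1:-1] &
                p[1:-1, :-2, 1:-1] & p[1:-1, 2:, 1:-1] &
                p[1:-1, 1:-1, :-2] & p[1:-1, 1:-1, 2:])
    return core & ~interior

def to_mm(indices, spacing):
    """Preserve scaling: voxel indices -> physical coordinates in mm."""
    return np.asarray(indices, float) * np.asarray(spacing)

# Synthetic 6x6x6 volume containing a 4x4x4 bright cube.
vol = np.zeros((6, 6, 6))
vol[1:5, 1:5, 1:5] = 100.0
mask = segment(vol, 50.0)
surf = surface_voxels(mask)
pts_mm = to_mm(np.argwhere(surf), spacing=(0.5, 0.5, 0.5))
print(mask.sum(), surf.sum())   # 64 foreground voxels, 56 on the surface
```

A production pipeline would triangulate the boundary into polygons (e.g., an STL mesh) before sending it to the RP machine, but the threshold-then-surface-then-scale order is the same.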

  20. Neuronal responses to face-like stimuli in the monkey pulvinar.

    PubMed

    Nguyen, Minh Nui; Hori, Etsuro; Matsumoto, Jumpei; Tran, Anh Hai; Ono, Taketoshi; Nishijo, Hisao

    2013-01-01

    The pulvinar nuclei appear to function as the subcortical visual pathway that bypasses the striate cortex, rapidly processing coarse facial information. We investigated responses from monkey pulvinar neurons during a delayed non-matching-to-sample task, in which monkeys were required to discriminate five categories of visual stimuli [photos of faces with different gaze directions, line drawings of faces, face-like patterns (three dark blobs on a bright oval), eye-like patterns and simple geometric patterns]. Of 401 neurons recorded, 165 neurons responded differentially to the visual stimuli. These visual responses were suppressed by scrambling the images. Although these neurons exhibited a broad response latency distribution, face-like patterns elicited responses with the shortest latencies (approximately 50 ms). Multidimensional scaling analysis indicated that the pulvinar neurons could specifically encode face-like patterns during the first 50-ms period after stimulus onset and classify the stimuli into one of the five different categories during the next 50-ms period. The amount of stimulus information conveyed by the pulvinar neurons and the number of stimulus-differentiating neurons were consistently higher during the second 50-ms period than during the first 50-ms period. These results suggest that responsiveness to face-like patterns during the first 50-ms period might be attributed to ascending inputs from the superior colliculus or the retina, while responsiveness to the five different stimulus categories during the second 50-ms period might be mediated by descending inputs from cortical regions. These findings provide neurophysiological evidence for pulvinar involvement in social cognition and, specifically, rapid coarse facial information processing. © 2012 Federation of European Neuroscience Societies and Blackwell Publishing Ltd.

  1. Seeing the Errors You Feel Enhances Locomotor Performance but Not Learning.

    PubMed

    Roemmich, Ryan T; Long, Andrew W; Bastian, Amy J

    2016-10-24

    In human motor learning, it is thought that the more information we have about our errors, the faster we learn. Here, we show that additional error information can lead to improved motor performance without any concomitant improvement in learning. We studied split-belt treadmill walking, which drives people to learn a new gait pattern using sensory prediction errors detected by proprioceptive feedback. When we also provided visual error feedback, participants acquired the new walking pattern far more rapidly and showed accelerated restoration of the normal walking pattern during washout. However, when the visual error feedback was removed during either learning or washout, errors reappeared, with performance immediately returning to the level expected based on proprioceptive learning alone. These findings support a model with two mechanisms: a dual-rate adaptation process that learns invariantly from sensory prediction error detected by proprioception and a visual-feedback-dependent process that monitors learning and corrects residual errors but shows no learning itself. We show that our voluntary correction model accurately predicted behavior in multiple situations where visual feedback was used to change acquisition of new walking patterns while the underlying learning was unaffected. The computational and behavioral framework proposed here suggests that parallel learning and error correction systems allow us to rapidly satisfy task demands without necessarily committing to learning, as the relative permanence of learning may be inappropriate or inefficient when facing environments that are liable to change. Copyright © 2016 Elsevier Ltd. All rights reserved.
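The two-mechanism model proposed here can be sketched as a standard two-state (fast/slow) adaptation process plus a visual correction that cancels residual error on the current trial but contributes nothing to learning. The parameter values below are illustrative defaults, not fits to the study's data; the key property the sketch shows is that the learned state evolves identically with and without the visual correction.

```python
import numpy as np

def dual_rate(perturbation, n_trials, vision=False,
              A=(0.59, 0.992), B=(0.21, 0.02)):
    """Fast/slow two-state adaptation driven by proprioceptive prediction
    error, plus an optional visual correction of the residual error that
    itself shows no learning. A/B retention/learning rates are illustrative."""
    xf = xs = 0.0
    outputs = []
    for _ in range(n_trials):
        x = xf + xs                        # total learned adaptation
        residual = perturbation - x        # error left after learning
        outputs.append(x + (residual if vision else 0.0))
        xf = A[0] * xf + B[0] * residual   # fast: learns and forgets quickly
        xs = A[1] * xs + B[1] * residual   # slow: learns slowly, retains
    return np.array(outputs), xf + xs

with_vision, learned_v = dual_rate(1.0, 50, vision=True)
no_vision, learned_nv = dual_rate(1.0, 50, vision=False)
print(learned_v == learned_nv)  # True: vision changes performance, not learning
```

Removing the correction mid-experiment reproduces the paper's signature result: output instantly drops back to the level of the underlying learned state.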

  2. Temporal allocation of attention toward threat in individuals with posttraumatic stress symptoms.

    PubMed

    Amir, Nader; Taylor, Charles T; Bomyea, Jessica A; Badour, Christal L

    2009-12-01

    Research suggests that individuals with posttraumatic stress disorder (PTSD) selectively attend to threat-relevant information. However, little is known about how initial detection of threat influences the processing of subsequently encountered stimuli. To address this issue, we used a rapid serial visual presentation paradigm (RSVP; Raymond, J. E., Shapiro, K. L., & Arnell, K. M. (1992). Temporary suppression of visual processing in an RSVP task: An attentional blink? Journal of Experimental Psychology: Human Perception and Performance, 18, 849-860) to examine temporal allocation of attention to threat-related and neutral stimuli in individuals with PTSD symptoms (PTS), traumatized individuals without PTSD symptoms (TC), and non-anxious controls (NAC). Participants were asked to identify one or two targets in an RSVP stream. Typically, processing of the first target decreases accuracy in identifying the second target as a function of the temporal lag between targets. Results revealed that the PTS group was significantly more accurate in detecting a neutral target when it was presented 300 or 500 ms after threat-related stimuli compared to when the target followed neutral stimuli. These results suggest that individuals with PTSD may process trauma-relevant information more rapidly and efficiently than benign information.

  3. Spatial updating in area LIP is independent of saccade direction.

    PubMed

    Heiser, Laura M; Colby, Carol L

    2006-05-01

    We explore the world around us by making rapid eye movements to objects of interest. Remarkably, these eye movements go unnoticed, and we perceive the world as stable. Spatial updating is one of the neural mechanisms that contributes to this perception of spatial constancy. Previous studies in macaque lateral intraparietal cortex (area LIP) have shown that individual neurons update, or "remap," the locations of salient visual stimuli at the time of an eye movement. The existence of remapping implies that neurons have access to visual information from regions far beyond the classically defined receptive field. We hypothesized that neurons have access to information located anywhere in the visual field. We tested this by recording the activity of LIP neurons while systematically varying the direction in which a stimulus location must be updated. Our primary finding is that individual neurons remap stimulus traces in multiple directions, indicating that LIP neurons have access to information throughout the visual field. At the population level, stimulus traces are updated in conjunction with all saccade directions, even when we consider direction as a function of receptive field location. These results show that spatial updating in LIP is effectively independent of saccade direction. Our findings support the hypothesis that the activity of LIP neurons contributes to the maintenance of spatial constancy throughout the visual field.

  4. Enabling Real-time Water Decision Support Services Using Model as a Service

    NASA Astrophysics Data System (ADS)

    Zhao, T.; Minsker, B. S.; Lee, J. S.; Salas, F. R.; Maidment, D. R.; David, C. H.

    2014-12-01

    Through application of computational methods and an integrated information system, data and river modeling services can help researchers and decision makers more rapidly understand river conditions under alternative scenarios. To enable this capability, workflows (i.e., analysis and model steps) are created and published as Web services delivered through an internet browser, including model inputs, a published workflow service, and visualized outputs. The RAPID model, a river routing model developed at the University of Texas at Austin for parallel computation of river discharge, has been implemented as a workflow and published as a Web application. This allows non-technical users to remotely execute the model and visualize results as a service through a simple Web interface. The model service and Web application have been prototyped in the San Antonio and Guadalupe River Basin in Texas, with input from university and agency partners. In the future, optimization model workflows will be developed to link with the RAPID model workflow to provide real-time water allocation decision support services.

  5. Rapid development of medical imaging tools with open-source libraries.

    PubMed

    Caban, Jesus J; Joshi, Alark; Nagy, Paul

    2007-11-01

    Rapid prototyping is an important element in researching new imaging analysis techniques and developing custom medical applications. In the last ten years, the open source community and the number of open source libraries and freely available frameworks for biomedical research have grown significantly. Much of what they offer is now considered standard in medical image analysis, computer-aided diagnosis, and medical visualization. A cursory review of the peer-reviewed literature in imaging informatics (indeed, in almost any information technology-dependent scientific discipline) indicates the current reliance on open source libraries to accelerate development and validation of processes and techniques. In this survey paper, we review and compare a few of the most successful open source libraries and frameworks for medical application development. Our dual intentions are to provide evidence that these approaches already constitute a vital and essential part of medical image analysis, diagnosis, and visualization and to motivate the reader to use open source libraries and software for rapid prototyping of medical applications and tools.

  6. Development of High-speed Visualization System of Hypocenter Data Using CUDA-based GPU computing

    NASA Astrophysics Data System (ADS)

    Kumagai, T.; Okubo, K.; Uchida, N.; Matsuzawa, T.; Kawada, N.; Takeuchi, N.

    2014-12-01

    After the Great East Japan Earthquake on March 11, 2011, intelligent visualization of seismic information has become important for understanding earthquake phenomena. At the same time, the quantity of seismic data has become enormous with the progress of high-accuracy observation networks; we need to treat many parameters (e.g., positional information, origin time, magnitude, etc.) to display the seismic information efficiently. Therefore, high-speed processing of data and image information is necessary to handle such large amounts of seismic data. Recently, the GPU (Graphics Processing Unit) has been used as an acceleration tool for data processing and calculation in various fields of study, a movement called GPGPU (General-Purpose computing on GPUs). In the last few years, GPU performance has improved rapidly, giving us a high-performance computing environment at lower cost than before. Moreover, GPU computing has an advantage for visualization of processed data, because the GPU was originally designed as an architecture for graphics processing. In GPU computing, the processed data are always stored in video memory; therefore, we can write drawing information directly to the VRAM on the video card by combining CUDA with a graphics API. In this study, we employ CUDA together with OpenGL and/or DirectX to realize a full-GPU implementation. This method makes it possible to write drawing information to the VRAM on the video card without PCIe-bus data transfer, enabling high-speed processing of seismic data. The present study examines GPU computing-based high-speed visualization and its feasibility for a high-speed visualization system of hypocenter data.

  7. Disaster Emergency Rapid Assessment Based on Remote Sensing and Background Data

    NASA Astrophysics Data System (ADS)

    Han, X.; Wu, J.

    2018-04-01

    The period from disaster onset to stabilized conditions is an important stage of disaster development. In addition to collecting and reporting information on disaster situations, remote sensing images from satellites and drones and monitoring results from disaster-stricken areas should be obtained. Fusing multi-source background data, such as population, geography and topography, with remote sensing monitoring information in geographic information system analysis makes it possible to assess disaster information quickly and objectively. According to the characteristics of different hazards, models and methods driven by the requirements of rapid assessment missions are tested and screened. Based on remote sensing images, the features of exposed elements are used to quickly determine disaster-affected areas and intensity levels, to extract key information about affected hospitals and schools as well as cultivated land and crops, and to support emergency-response decisions with visual assessment results.

  8. Adaptation to sensory input tunes visual cortex to criticality

    NASA Astrophysics Data System (ADS)

    Shew, Woodrow L.; Clawson, Wesley P.; Pobst, Jeff; Karimipanah, Yahya; Wright, Nathaniel C.; Wessel, Ralf

    2015-08-01

    A long-standing hypothesis at the interface of physics and neuroscience is that neural networks self-organize to the critical point of a phase transition, thereby optimizing aspects of sensory information processing. This idea is partially supported by strong evidence for critical dynamics observed in the cerebral cortex, but the impact of sensory input on these dynamics is largely unknown. Thus, the foundations of this hypothesis--the self-organization process and how it manifests during strong sensory input--remain unstudied experimentally. Here we show in visual cortex and in a computational model that strong sensory input initially elicits cortical network dynamics that are not critical, but adaptive changes in the network rapidly tune the system to criticality. This conclusion is based on observations of multifaceted scaling laws predicted to occur at criticality. Our findings establish sensory adaptation as a self-organizing mechanism that maintains criticality in visual cortex during sensory information processing.

  9. Research on the Design of Visually Impaired Interactive Accessibility in Large Urban Public Transport System

    NASA Astrophysics Data System (ADS)

    Zhang, Weiru

    2017-12-01

    In medieval times, owing to people's reliance on belief, the public space of Christianity came into being. With the rise of secularization, religion gradually turned into private belief, and public space accordingly returned to private space. In the 21st century, owing to people's reliance on intelligent devices, information-interactive public space has emerged; and as information interaction increasingly constrains the visually impaired, public space has regressed into the exclusive space of a limited group of people [1]. Modernity is marked by technical rationality, but an ensuing basic problem lies in the separation between human action, ethics and public space. When technology fails to overcome obstacles for a particular group, the gap between burgeoning intelligent phenomena and the growing proportion of visually impaired people also expands, ultimately resulting in a growing number of "blind spots" in information-interactive space. Technological innovation promotes not only the development of the information industry but also the rapid development of the transportation industry. Traffic patterns are diversifying nowadays, but this is a heavy blow for people with visual disabilities, because they can still experience only the most traditional modes of transportation, and sometimes cannot go out at all. How to guarantee their right to interactive accessibility in large urban public transport systems is currently an important research direction.

  10. Conscious Vision Proceeds from Global to Local Content in Goal-Directed Tasks and Spontaneous Vision.

    PubMed

    Campana, Florence; Rebollo, Ignacio; Urai, Anne; Wyart, Valentin; Tallon-Baudry, Catherine

    2016-05-11

    The reverse hierarchy theory (Hochstein and Ahissar, 2002) makes strong, but so far untested, predictions on conscious vision. In this theory, local details encoded in lower-order visual areas are unconsciously processed before being automatically and rapidly combined into global information in higher-order visual areas, where conscious percepts emerge. Contingent on current goals, local details can afterward be consciously retrieved. This model therefore predicts that (1) global information is perceived faster than local details, (2) global information is computed regardless of task demands during early visual processing, and (3) spontaneous vision is dominated by global percepts. We designed novel textured stimuli that are, as opposed to the classic Navon's letters, truly hierarchical (i.e., where global information is solely defined by local information but where local and global orientations can still be manipulated separately). In line with the predictions, observers were systematically faster reporting global than local properties of those stimuli. Second, global information could be decoded from magneto-encephalographic data during early visual processing regardless of task demands. Last, spontaneous subjective reports were dominated by global information and the frequency and speed of spontaneous global perception correlated with the accuracy and speed in the global task. No such correlation was observed for local information. We therefore show that information at different levels of the visual hierarchy is not equally likely to become conscious; rather, conscious percepts emerge preferentially at a global level. We further show that spontaneous reports can be reliable and are tightly linked to objective performance at the global level. Is information encoded at different levels of the visual system (local details in low-level areas vs global shapes in high-level areas) equally likely to become conscious? 
We designed new hierarchical stimuli and provide the first empirical evidence based on behavioral and MEG data that global information encoded at high levels of the visual hierarchy dominates perception. This result held both in the presence and in the absence of task demands. The preferential emergence of percepts at high levels can account for two properties of conscious vision, namely, the dominance of global percepts and the feeling of visual richness reported independently of the perception of local details. Copyright © 2016 the authors 0270-6474/16/365200-14$15.00/0.

  11. Statistics of natural scenes and cortical color processing.

    PubMed

    Cecchi, Guillermo A; Rao, A Ravishankar; Xiao, Youping; Kaplan, Ehud

    2010-09-01

    We investigate the spatial correlations of orientation and color information in natural images. We find that the correlation of orientation information falls off rapidly with increasing distance, while color information is more highly correlated over longer distances. We show that orientation and color information are statistically independent in natural images and that the spatial correlation of jointly encoded orientation and color information decays faster than that of color alone. Our findings suggest that: (a) orientation and color information should be processed in separate channels and (b) the organization of cortical color and orientation selectivity at low spatial frequencies is a reflection of the cortical adaptation to the statistical structure of the visual world. These findings are in agreement with biological observations, as form and color are thought to be represented by different classes of neurons in the primary visual cortex, and the receptive fields of color-selective neurons are larger than those of orientation-selective neurons. The agreement between our findings and biological observations supports the ecological theory of perception.
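The contrast between short-range and long-range spatial correlations can be illustrated with synthetic 1D signals standing in for orientation and color channels (an assumption of this sketch; the study measured real natural images). Smoothing white noise over a wider window produces correlations that persist over longer distances, mirroring the slower falloff the authors report for color.

```python
import numpy as np

rng = np.random.default_rng(0)

def corr_at_lag(x, lag):
    """Pearson correlation between a signal and itself shifted by `lag`."""
    return np.corrcoef(x[:-lag], x[lag:])[0, 1]

def smooth(x, width):
    """Moving-average smoothing; wider windows create longer-range structure."""
    return np.convolve(x, np.ones(width) / width, mode='same')

noise = rng.standard_normal(20000)
orientation_like = smooth(noise, 3)    # short-range correlations
color_like = smooth(noise, 31)         # long-range correlations

lag = 10
print(corr_at_lag(orientation_like, lag), corr_at_lag(color_like, lag))
```

At a lag of 10 samples the narrowly smoothed signal is essentially decorrelated while the widely smoothed one remains strongly correlated, which is the qualitative pattern the paper links to the larger receptive fields of color-selective neurons.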

  12. Vision can recalibrate the vestibular reafference signal used to re-establish postural equilibrium following a platform perturbation.

    PubMed

    Toth, Adam J; Harris, Laurence R; Zettel, John; Bent, Leah R

    2017-02-01

    Visuo-vestibular recalibration, in which visual information is used to alter the interpretation of vestibular signals, has been shown to influence both oculomotor control and navigation. Here we investigate whether vision can recalibrate the vestibular feedback used during the re-establishment of equilibrium following a perturbation. The perturbation recovery responses of nine participants were examined following exposure to a period of 11 s of galvanic vestibular stimulation (GVS). During GVS in VISION trials, occlusion spectacles provided 4 s of visual information that enabled participants to correct for the GVS-induced tilt and associate this asymmetric vestibular signal with a visually provided 'upright'. NoVISION trials had no such visual experience. Participants used the visual information to assist in realigning their posture compared to when visual information was not provided (p < 0.01). The initial recovery response to a platform perturbation was not impacted by whether vision had been provided during the preceding GVS, as determined by peak centre of mass and pressure deviations (p = 0.09). However, after using vision to reinterpret the vestibular signal during GVS, final centre of mass and pressure equilibrium positions were significantly shifted compared to trials in which vision was not available (p < 0.01). These findings support previous work identifying a prominent role of vestibular input for re-establishing postural equilibrium following a perturbation. Our work is the first to highlight the capacity for visual feedback to recalibrate the vertical interpretation of vestibular reafference for re-establishing equilibrium following a perturbation. This demonstrates the rapid adaptability of the vestibular reafference signal for postural control.

  13. Attentional Blink in Young People with High-Functioning Autism and Asperger's Disorder

    ERIC Educational Resources Information Center

    Rinehart, Nicole; Tonge, Bruce; Brereton, Avril; Bradshaw, John

    2010-01-01

    The aim of the study was to examine the temporal characteristics of information processing in individuals with high-functioning autism and Asperger's disorder using a rapid serial visual presentation paradigm. The results clearly showed that such people demonstrate an attentional blink of similar magnitude to comparison groups. This supports the…

  14. Now You See It, Now You Don't: Repetition Blindness for Nonwords

    ERIC Educational Resources Information Center

    Morris, Alison L.; Still, Mary L.

    2008-01-01

    Repetition blindness (RB) for nonwords has been found in some studies, but not in others. The authors propose that the discrepancy in results is fueled by participant strategy; specifically, when rapid serial visual presentation lists are short and participants are explicitly informed that some trials will contain repetitions, participants are…

  15. Speed Limits: Orientation and Semantic Context Interactions Constrain Natural Scene Discrimination Dynamics

    ERIC Educational Resources Information Center

    Rieger, Jochem W.; Kochy, Nick; Schalk, Franziska; Gruschow, Marcus; Heinze, Hans-Jochen

    2008-01-01

    The visual system rapidly extracts information about objects from the cluttered natural environment. In 5 experiments, the authors quantified the influence of orientation and semantics on the classification speed of objects in natural scenes, particularly with regard to object-context interactions. Natural scene photographs were presented in an…

  16. Feature extraction from high resolution satellite imagery as an input to the development and rapid update of a METRANS geographic information system (GIS).

    DOT National Transportation Integrated Search

    2011-06-01

    This report describes an accuracy assessment of extracted features derived from three : subsets of Quickbird pan-sharpened high resolution satellite image for the area of the : Port of Los Angeles, CA. Visual Learning Systems Feature Analyst and D...

  17. Rapid Motion Adaptation Reveals the Temporal Dynamics of Spatiotemporal Correlation between ON and OFF Pathways

    PubMed Central

    Oluk, Can; Pavan, Andrea; Kafaligonul, Hulusi

    2016-01-01

    At the early stages of visual processing, information is processed by two major thalamic pathways encoding brightness increments (ON) and decrements (OFF). Accumulating evidence suggests that these pathways interact and merge as early as in primary visual cortex. Using regular and reverse-phi motion in a rapid adaptation paradigm, we investigated the temporal dynamics of within- and across-pathway mechanisms for motion processing. When the adaptation duration was short (188 ms), reverse-phi and regular motion led to similar adaptation effects, suggesting that the information from the two pathways is combined efficiently at early stages of motion processing. However, as the adaptation duration was increased to 752 ms, reverse-phi and regular motion showed distinct adaptation effects depending on the test pattern used, engaging spatiotemporal correlation between either the same or opposite contrast polarities. Overall, these findings indicate that spatiotemporal correlation within and across ON-OFF pathways for motion processing can be selectively adapted, and support those models that integrate within- and across-pathway mechanisms for motion processing. PMID:27667401

  18. Rebalancing Spatial Attention: Endogenous Orienting May Partially Overcome the Left Visual Field Bias in Rapid Serial Visual Presentation.

    PubMed

    Śmigasiewicz, Kamila; Hasan, Gabriel Sami; Verleger, Rolf

    2017-01-01

    In dynamically changing environments, spatial attention is not equally distributed across the visual field. For instance, when two streams of stimuli are presented left and right, the second target (T2) is better identified in the left visual field (LVF) than in the right visual field (RVF). Recently, it has been shown that this bias is related to weaker stimulus-driven orienting of attention toward the RVF: the RVF disadvantage was reduced with salient task-irrelevant valid cues and increased with invalid cues. Here we studied whether endogenous orienting of attention may also compensate for this unequal distribution of stimulus-driven attention. Explicit information was provided about the location of T1 and T2. Effectiveness of the cue manipulation was confirmed by EEG measures: decreasing alpha power before stream onset with informative cues, earlier latencies of potentials evoked by T1-preceding distractors at the right than at the left hemisphere when T1 was cued left, and decreasing T1- and T2-evoked N2pc amplitudes with informative cues. Importantly, informative cues reduced (though did not completely abolish) the LVF advantage, indicated by improved identification of right T2, and reflected by earlier N2pc latency evoked by right T2 and a larger decrease in alpha power after cues indicating right T2. Overall, these results suggest that endogenously driven attention facilitates stimulus-driven orienting of attention toward the RVF, thereby partially overcoming the basic LVF bias in spatial attention.

  19. Perception of biological motion from size-invariant body representations.

    PubMed

    Lappe, Markus; Wittinghofer, Karin; de Lussanet, Marc H E

    2015-01-01

    The visual recognition of action is one of the socially most important and computationally demanding capacities of the human visual system. It combines visual shape recognition with complex non-rigid motion perception. Action presented as a point-light animation is a striking visual experience for anyone who sees it for the first time. Information about the shape and posture of the human body is sparse in point-light animations, but it is essential for action recognition. In the posturo-temporal filter model of biological motion perception, posture information is picked up by visual neurons tuned to the form of the human body before body motion is calculated. We tested whether point-light stimuli are processed through posture recognition of the human body form by using a typical feature of form recognition, namely size invariance. We constructed a point-light stimulus that can only be perceived through a size-invariant mechanism. This stimulus changes rapidly in size from one image to the next. It thus disrupts continuity of early visuo-spatial properties but maintains continuity of the body posture representation. Despite this massive manipulation at the visuo-spatial level, size-changing point-light figures are spontaneously recognized by naive observers, and support discrimination of human body motion.

  20. The dissociations of visual processing of "hole" and "no-hole" stimuli: A functional magnetic resonance imaging study.

    PubMed

    Meng, Qianli; Huang, Yan; Cui, Ding; He, Lixia; Chen, Lin; Ma, Yuanye; Zhao, Xudong

    2018-05-01

    "Where to begin" is a fundamental question of vision. A "Global-first" topological approach proposed that the first step in object representation was to extract topological properties, especially whether the object had a hole or not. Numerous psychophysical studies found that the hole (closure) could be rapidly recognized by visual system as a primitive property. However, neuroimaging studies showed that the temporal lobe (IT), which lied at a late stage of ventral pathway, was involved as a dedicated region. It appeared paradoxical that IT served as a key region for processing the early component of visual information. Did there exist a distinct fast route to transit hole information to IT? We hypothesized that a fast noncortical pathway might participate in processing holes. To address this issue, a backward masking paradigm combined with functional magnetic resonance imaging (fMRI) was applied to measure neural responses to hole and no-hole stimuli in anatomically defined cortical and subcortical regions of interest (ROIs) under different visual awareness levels by modulating masking delays. For no-hole stimuli, the neural activation of cortical sites was greatly attenuated when the no-hole perception was impaired by strong masking, whereas an enhanced neural response to hole stimuli in non-cortical sites was obtained when the stimulus was rendered more invisible. The results suggested that whereas the cortical route was required to drive a perceptual response for no-hole stimuli, a subcortical route might be involved in coding the hole feature, resulting in a rapid hole perception in primitive vision.

  1. Lexical interference effects in sentence processing: Evidence from the visual world paradigm and self-organizing models

    PubMed Central

    Kukona, Anuenue; Cho, Pyeong Whan; Magnuson, James S.; Tabor, Whitney

    2014-01-01

    Psycholinguistic research spanning a number of decades has produced diverging results with regard to the nature of constraint integration in online sentence processing. For example, evidence that language users anticipatorily fixate likely upcoming referents in advance of evidence in the speech signal supports rapid context integration. By contrast, evidence that language users activate representations that conflict with contextual constraints, or only indirectly satisfy them, supports non-integration or late integration. Here, we report on a self-organizing neural network framework that addresses one aspect of constraint integration: the integration of incoming lexical information (i.e., an incoming word) with sentence context information (i.e., from preceding words in an unfolding utterance). In two simulations, we show that the framework predicts both classic results concerned with lexical ambiguity resolution (Swinney, 1979; Tanenhaus, Leiman, & Seidenberg, 1979), which suggest late context integration, and results demonstrating anticipatory eye movements (e.g., Altmann & Kamide, 1999), which support rapid context integration. We also report two experiments using the visual world paradigm that confirm a new prediction of the framework. Listeners heard sentences like “The boy will eat the white…,” while viewing visual displays with objects like a white cake (i.e., a predictable direct object of “eat”), white car (i.e., an object not predicted by “eat,” but consistent with “white”), and distractors. Consistent with our simulation predictions, we found that while listeners fixated white cake most, they also fixated white car more than unrelated distractors in this highly constraining sentence (and visual) context. PMID:24245535

  2. A web-based 3D geological information visualization system

    NASA Astrophysics Data System (ADS)

    Song, Renbo; Jiang, Nan

    2013-03-01

    Construction of 3D geological visualization systems has attracted growing attention in the GIS, computer modeling, simulation and visualization fields. Such systems not only effectively support geological interpretation and analysis work, but can also help improve professional education in the geosciences. In this paper, an applet-based method is introduced for developing a web-based 3D geological information visualization system. The main aim of this paper is to explore a rapid and low-cost development method for constructing a web-based 3D geological system. First, the borehole data stored in Excel spreadsheets were extracted and then stored in a SQL Server database on a web server. Second, the JDBC data access component was utilized to provide the capability of accessing the database. Third, the user interface was implemented with an applet component embedded in a JSP page, and the 3D viewing and querying functions were implemented with the PickCanvas of Java3D. Last, borehole data acquired from a geological survey were used to test the system, and the test results show that the methods presented in this paper have practical application value.
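
    The data flow described above (spreadsheet extraction, relational storage, query layer) can be sketched in Python, with sqlite3 and CSV text standing in for the paper's SQL Server back end and Excel spreadsheets. The table layout, column names, and records below are invented for illustration.

```python
import csv
import io
import sqlite3

# Hypothetical borehole records, standing in for the Excel spreadsheets
# described in the paper (column names are illustrative assumptions).
BOREHOLE_CSV = """borehole_id,depth_top_m,depth_bottom_m,lithology
BH-01,0.0,3.5,clay
BH-01,3.5,9.2,sand
BH-02,0.0,5.1,silt
"""

def load_boreholes(conn, text):
    """Extract spreadsheet rows and store them in a relational table."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS borehole "
        "(borehole_id TEXT, depth_top_m REAL, depth_bottom_m REAL, lithology TEXT)"
    )
    rows = [(r["borehole_id"], float(r["depth_top_m"]),
             float(r["depth_bottom_m"]), r["lithology"])
            for r in csv.DictReader(io.StringIO(text))]
    conn.executemany("INSERT INTO borehole VALUES (?, ?, ?, ?)", rows)
    return len(rows)

def strata_for(conn, borehole_id):
    """The query layer: fetch the ordered strata for one borehole."""
    cur = conn.execute(
        "SELECT depth_top_m, depth_bottom_m, lithology FROM borehole "
        "WHERE borehole_id = ? ORDER BY depth_top_m", (borehole_id,))
    return cur.fetchall()

conn = sqlite3.connect(":memory:")
n = load_boreholes(conn, BOREHOLE_CSV)
print(n)                          # -> 3
print(strata_for(conn, "BH-01"))  # -> [(0.0, 3.5, 'clay'), (3.5, 9.2, 'sand')]
```

    In the paper's architecture the query results would feed the Java3D viewer rather than a print statement; this sketch covers only the ingest-and-query path.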

  3. Spatial Probability Dynamically Modulates Visual Target Detection in Chickens

    PubMed Central

    Sridharan, Devarajan; Ramamurthy, Deepa L.; Knudsen, Eric I.

    2013-01-01

    The natural world contains a rich and ever-changing landscape of sensory information. To survive, an organism must be able to flexibly and rapidly locate the most relevant sources of information at any time. Humans and non-human primates exploit regularities in the spatial distribution of relevant stimuli (targets) to improve detection at locations of high target probability. Is the ability to flexibly modify behavior based on visual experience unique to primates? Chickens (Gallus domesticus) were trained on a multiple alternative Go/NoGo task to detect a small, briefly-flashed dot (target) in each of the quadrants of the visual field. When targets were presented with equal probability (25%) in each quadrant, chickens exhibited a distinct advantage for detecting targets at lower, relative to upper, hemifield locations. Increasing the probability of presentation in the upper hemifield locations (to 80%) dramatically improved detection performance at these locations to be on par with lower hemifield performance. Finally, detection performance in the upper hemifield changed on a rapid timescale, improving with successive target detections, and declining with successive detections at the diagonally opposite location in the lower hemifield. These data indicate the action of a process that in chickens, as in primates, flexibly and dynamically modulates detection performance based on the spatial probabilities of sensory stimuli as well as on recent performance history. PMID:23734188

  4. Visual graph query formulation and exploration: a new perspective on information retrieval at the edge

    NASA Astrophysics Data System (ADS)

    Kase, Sue E.; Vanni, Michelle; Knight, Joanne A.; Su, Yu; Yan, Xifeng

    2016-05-01

    Within operational environments, decisions must be made quickly based on the information available. Identifying an appropriate knowledge base and accurately formulating a search query are critical tasks for decision-making effectiveness in dynamic situations. The spread of graph data management tools for accessing large graph databases is a rapidly emerging research area of potential benefit to the intelligence community. A graph representation provides a natural way of modeling data in a wide variety of domains. Graph structures use nodes, edges, and properties to represent and store data. This research investigates the advantages of information search by graph query, initiated by the analyst and interactively refined within the contextual dimensions of the answer space toward a solution. The paper introduces SLQ, a user-friendly graph querying system enabling the visual formulation of schemaless and structureless graph queries. SLQ is demonstrated with an intelligence-analyst information search scenario focused on identifying individuals responsible for manufacturing a mosquito-hosted deadly virus. The scenario highlights the interactive construction of graph queries without prior training in complex query languages or graph databases, intuitive navigation through the problem space, and visualization of results in graphical format.
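
    The abstract does not detail SLQ's matching machinery, so the following toy Python sketch only illustrates the flavor of "schemaless" querying: query terms match node labels by substring, so the analyst need not know the database's exact vocabulary or schema. All node labels and edges below are invented.

```python
# Invented miniature graph database loosely echoing the paper's scenario.
GRAPH_NODES = {
    1: "Dr. A. Vector, virologist",
    2: "BioLab-7 manufacturing facility",
    3: "mosquito-hosted virus sample",
    4: "shipping manifest",
}
GRAPH_EDGES = {(1, 2), (2, 3), (1, 4)}

def node_matches(term, label):
    """Schemaless match: a query term hits any label containing it."""
    return term.lower() in label.lower()

def query(term_a, term_b):
    """Find connected node pairs whose labels match the two query terms."""
    hits = []
    for (u, v) in GRAPH_EDGES:
        for (x, y) in ((u, v), (v, u)):   # edges are treated as undirected
            if node_matches(term_a, GRAPH_NODES[x]) and node_matches(term_b, GRAPH_NODES[y]):
                hits.append((x, y))
    return hits

print(query("virologist", "facility"))   # -> [(1, 2)]
print(query("facility", "virus"))        # -> [(2, 3)]
```

    A real system would rank candidate matches and support multi-edge query patterns; the point here is only that no query language or schema knowledge is needed to formulate the search.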

  5. Method for enhancing single-trial P300 detection by introducing the complexity degree of image information in rapid serial visual presentation tasks

    PubMed Central

    Lin, Zhimin; Zeng, Ying; Tong, Li; Zhang, Hangming; Zhang, Chi

    2017-01-01

    The application of electroencephalogram (EEG) generated by human viewing images is a new thrust in image retrieval technology. A P300 component in the EEG is induced when the subjects see their point of interest in a target image under the rapid serial visual presentation (RSVP) experimental paradigm. We detected the single-trial P300 component to determine whether a subject was interested in an image. In practice, the latency and amplitude of the P300 component may vary in relation to different experimental parameters, such as target probability and stimulus semantics. Thus, we proposed a novel method, Target Recognition using Image Complexity Priori (TRICP) algorithm, in which the image information is introduced in the calculation of the interest score in the RSVP paradigm. The method combines information from the image and EEG to enhance the accuracy of single-trial P300 detection on the basis of traditional single-trial P300 detection algorithm. We defined an image complexity parameter based on the features of the different layers of a convolution neural network (CNN). We used the TRICP algorithm to compute for the complexity of an image to quantify the effect of different complexity images on the P300 components and training specialty classifier according to the image complexity. We compared TRICP with the HDCA algorithm. Results show that TRICP is significantly higher than the HDCA algorithm (Wilcoxon Sign Rank Test, p<0.05). Thus, the proposed method can be used in other and visual task-related single-trial event-related potential detection. PMID:29283998
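
    A minimal Python sketch of the idea as described: an image-complexity measure selects a complexity-specific classifier for the EEG evidence. The complexity measure (pixel variance), the thresholds, and the assumption that simpler images warrant a lower evidence threshold are all illustrative stand-ins, not the paper's CNN-based formulation or its trained classifiers.

```python
import statistics

# Illustrative stand-in for the paper's CNN-derived complexity measure.
def image_complexity(pixels):
    return statistics.pvariance(pixels)

def make_classifier(bias):
    """Toy complexity-specific classifier: shifts the EEG evidence threshold."""
    return lambda eeg_score: eeg_score > 0.5 + bias

def tricp_like_detect(pixels, eeg_score, split=0.02):
    # Assumption for illustration: demand less EEG evidence for simple
    # images, more for complex ones.
    if image_complexity(pixels) < split:
        clf = make_classifier(-0.1)   # low-complexity classifier
    else:
        clf = make_classifier(0.1)    # high-complexity classifier
    return clf(eeg_score)

simple_img = [0.5, 0.5, 0.5, 0.6]    # low pixel variance
complex_img = [0.0, 1.0, 0.0, 1.0]   # high pixel variance

# The same EEG evidence is accepted for the simple image but not the
# complex one, because a different classifier was selected.
print(tricp_like_detect(simple_img, 0.45))   # -> True
print(tricp_like_detect(complex_img, 0.45))  # -> False
```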

  6. The visualization of spatial uncertainty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Srivastava, R.M.

    1994-12-31

    Geostatistical conditional simulation is gaining acceptance as a numerical modeling tool in the petroleum industry. Unfortunately, many of the new users of conditional simulation work with only one outcome or "realization", ignore the many other outcomes that could be produced by their conditional simulation tools, and use 3-D visualization tools to create very realistic images that present this single outcome as reality. There are many methods currently available for presenting the uncertainty information from a family of possible outcomes; most of these, however, use static displays, and many present uncertainty in a format that is not intuitive. This paper explores the visualization of uncertainty through dynamic displays that exploit the intuitive link between uncertainty and change by presenting the user with a constantly evolving model. The key technical challenge for such a dynamic presentation is the ability to create numerical models that honor the available well data and geophysical information and yet are incrementally different, so that successive frames can be viewed rapidly as an animated cartoon. An example of volumetric uncertainty from a Gulf Coast reservoir is used to demonstrate that such animation is possible and to show that such dynamic displays can be an effective tool in risk analysis for the petroleum industry.
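
    The "animated cartoon" requirement, realizations that honor the well data yet differ only incrementally from frame to frame, can be sketched in Python. The blending scheme below is an assumption for illustration, not the paper's geostatistical algorithm, and the well data are invented.

```python
import math
import random

random.seed(42)

# Hypothetical well data: cell index -> measured value.
WELLS = {2: 1.7, 7: -0.4}
N = 10  # cells in a toy 1-D reservoir model

def condition(field):
    """Pin well locations so every frame honors the measured data."""
    for i, v in WELLS.items():
        field[i] = v
    return field

def next_frame(field, rho=0.95):
    """Blend the previous frame with fresh noise; rho near 1 keeps
    successive frames nearly identical, giving a smooth animation."""
    noise = [random.gauss(0.0, 1.0) for _ in field]
    blended = [rho * f + math.sqrt(1.0 - rho * rho) * z
               for f, z in zip(field, noise)]
    return condition(blended)

frames = [condition([random.gauss(0.0, 1.0) for _ in range(N)])]
for _ in range(5):
    frames.append(next_frame(frames[-1]))

# Every frame honors the wells exactly...
print(all(f[2] == 1.7 and f[7] == -0.4 for f in frames))
# ...and consecutive frames change only a little per cell on average.
steps = [abs(a - b) for f0, f1 in zip(frames, frames[1:])
         for a, b in zip(f0, f1)]
print(sum(steps) / len(steps) < 0.5)
```

    The blend keeps each frame's marginal variance near one while making the sequence of realizations evolve continuously, which is the property the dynamic display relies on.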

  7. Aging and feature search: the effect of search area.

    PubMed

    Burton-Danner, K; Owsley, C; Jackson, G R

    2001-01-01

    The preattentive system involves the rapid parallel processing of visual information in the visual scene so that attention can be directed to meaningful objects and locations in the environment. This study used the feature search methodology to examine whether there are aging-related deficits in parallel-processing capabilities when older adults are required to visually search a large area of the visual field. Like young subjects, older subjects displayed flat, near-zero slopes for the Reaction Time x Set Size function when searching over a broad area (30 degrees radius) of the visual field, implying parallel processing of the visual display. These same older subjects exhibited impairment in another task, also dependent on parallel processing, performed over the same broad field area; this task, called the useful field of view test, has more complex task demands. Results imply that aging-related breakdowns of parallel processing over a large visual field area are not likely to emerge when required responses are simple, there is only one task to perform, and there is no limitation on visual inspection time.

  8. Repetition blindness and illusory conjunctions: errors in binding visual types with visual tokens.

    PubMed

    Kanwisher, N

    1991-05-01

    Repetition blindness (Kanwisher, 1986, 1987) has been defined as the failure to detect or recall repetitions of words presented in rapid serial visual presentation (RSVP). The experiments presented here suggest that repetition blindness (RB) is a more general visual phenomenon, and examine its relationship to feature integration theory (Treisman & Gelade, 1980). Experiment 1 shows RB for letters distributed through space, time, or both. Experiment 2 demonstrates RB for repeated colors in RSVP lists. In Experiments 3 and 4, RB was found for repeated letters and colors in spatial arrays. Experiment 5 provides evidence that the mental representations of discrete objects (called "visual tokens" here) that are necessary to detect visual repetitions (Kanwisher, 1987) are the same as the "object files" (Kahneman & Treisman, 1984) in which visual features are conjoined. In Experiment 6, repetition blindness for the second occurrence of a repeated letter resulted only when the first occurrence was attended to. The overall results suggest that a general dissociation between types and tokens in visual information processing can account for both repetition blindness and illusory conjunctions.

  9. Temporal Structure and Complexity Affect Audio-Visual Correspondence Detection

    PubMed Central

    Denison, Rachel N.; Driver, Jon; Ruff, Christian C.

    2013-01-01

    Synchrony between events in different senses has long been considered the critical temporal cue for multisensory integration. Here, using rapid streams of auditory and visual events, we demonstrate how humans can use temporal structure (rather than mere temporal coincidence) to detect multisensory relatedness. We find psychophysically that participants can detect matching auditory and visual streams via shared temporal structure for crossmodal lags of up to 200 ms. Performance on this task reproduced features of past findings based on explicit timing judgments but did not show any special advantage for perfectly synchronous streams. Importantly, the complexity of temporal patterns influences sensitivity to correspondence. Stochastic, irregular streams – with richer temporal pattern information – led to higher audio-visual matching sensitivity than predictable, rhythmic streams. Our results reveal that temporal structure and its complexity are key determinants for human detection of audio-visual correspondence. The distinctive emphasis of our new paradigms on temporal patterning could be useful for studying special populations with suspected abnormalities in audio-visual temporal perception and multisensory integration. PMID:23346067

  10. AstroBlend: An astrophysical visualization package for Blender

    NASA Astrophysics Data System (ADS)

    Naiman, J. P.

    2016-04-01

    The rapid growth in scale and complexity of both computational and observational astrophysics over the past decade necessitates efficient and intuitive methods for examining and visualizing large datasets. Here, I present AstroBlend, an open-source Python library for use within the three-dimensional modeling software Blender. While Blender has been popular open-source software among animators and visual effects artists, in recent years it has also become a tool for visualizing astrophysical datasets. AstroBlend combines the three-dimensional capabilities of Blender with the analysis tools of the widely used astrophysical toolset yt, to afford both computational and observational astrophysicists the ability to simultaneously analyze their data and create informative and appealing visualizations. The introduction of this package includes a description of features, workflow, and various example visualizations. A website, www.astroblend.com, has been developed that includes tutorials and a gallery of example images and movies, along with links to downloadable data, three-dimensional artistic models, and various other resources.

  11. Vision and agility training in community dwelling older adults: incorporating visual training into programs for fall prevention.

    PubMed

    Reed-Jones, Rebecca J; Dorgo, Sandor; Hitchings, Maija K; Bader, Julia O

    2012-04-01

    This study aimed to examine the effect of visual training on obstacle course performance of independent community dwelling older adults. Agility is the ability to rapidly alter ongoing motor patterns, an important aspect of mobility which is required in obstacle avoidance. However, visual information is also a critical factor in successful obstacle avoidance. We compared obstacle course performance of a group that trained in visually driven body movements and agility drills, to a group that trained only in agility drills. We also included a control group that followed the American College of Sports Medicine exercise recommendations for older adults. Significant gains in fitness, mobility and power were observed across all training groups. Obstacle course performance results revealed that visual training had the greatest improvement on obstacle course performance (22%) following a 12 week training program. These results suggest that visual training may be an important consideration for fall prevention programs. Copyright © 2011 Elsevier B.V. All rights reserved.

  12. Hand Path Priming in Manual Obstacle Avoidance: Rapid Decay of Dorsal Stream Information

    ERIC Educational Resources Information Center

    Jax, Steven A.; Rosenbaum, David A.

    2009-01-01

    The dorsal, action-related, visual stream has been thought to have little or no memory. This hypothesis has seemed credible because functions related to the dorsal stream have been generally unsusceptible to priming from previous experience. Tests of this claim have yielded inconsistent results, however. We argue that these inconsistencies may be…

  13. Underlying Skills of Oral and Silent Reading Fluency in Chinese: Perspective of Visual Rapid Processing

    PubMed Central

    Zhao, Jing; Kwok, Rosa K. W.; Liu, Menglian; Liu, Hanlong; Huang, Chen

    2017-01-01

    Reading fluency is a critical skill for improving the quality of our daily life and working efficiency. The majority of previous studies focused on oral reading fluency rather than silent reading fluency, a much more dominant reading mode that is used in middle and high school and for leisure reading. It is still unclear whether oral and silent reading fluency involve the same underlying skills. To address this issue, the present study examined the relationship between visual rapid processing and Chinese reading fluency in the two modes. Fifty-eight undergraduate students took part in the experiment. The phantom contour paradigm and the visual 1-back task were adopted to measure visual rapid temporal and simultaneous processing, respectively; these two tasks reflect the temporal and spatial dimensions of visual rapid processing. We recorded the temporal threshold in the phantom contour task, as well as reaction time and accuracy in the visual 1-back task. Reading fluency was measured at both the single-character and sentence levels. Fluent reading of single characters was assessed with a paper-and-pencil lexical decision task, and a sentence verification task was developed to examine reading fluency at the sentence level. The reading fluency test at each level was conducted twice (i.e., oral reading and silent reading), and reading speed and accuracy were recorded. The correlation analysis showed that the temporal threshold in the phantom contour task did not correlate with the scores on the reading fluency tests. Although reaction time in the visual 1-back task correlated with reading speed in both oral and silent reading, a comparison of the correlation coefficients revealed a closer relationship between visual rapid simultaneous processing and silent reading. Furthermore, visual rapid simultaneous processing contributed significantly to reading fluency in the silent mode but not in the oral mode. These findings suggest that the underlying mechanisms of oral and silent reading fluency differ from the beginning of basic visual coding. The current results also reveal a potential modulation of the relationship between visual rapid processing and reading fluency by the language characteristics of Chinese. PMID:28119663

  14. Underlying Skills of Oral and Silent Reading Fluency in Chinese: Perspective of Visual Rapid Processing.

    PubMed

    Zhao, Jing; Kwok, Rosa K W; Liu, Menglian; Liu, Hanlong; Huang, Chen

    2016-01-01

    Reading fluency is a critical skill for improving the quality of our daily life and working efficiency. The majority of previous studies focused on oral reading fluency rather than silent reading fluency, a much more dominant reading mode that is used in middle and high school and for leisure reading. It is still unclear whether oral and silent reading fluency involve the same underlying skills. To address this issue, the present study examined the relationship between visual rapid processing and Chinese reading fluency in the two modes. Fifty-eight undergraduate students took part in the experiment. The phantom contour paradigm and the visual 1-back task were adopted to measure visual rapid temporal and simultaneous processing, respectively; these two tasks reflect the temporal and spatial dimensions of visual rapid processing. We recorded the temporal threshold in the phantom contour task, as well as reaction time and accuracy in the visual 1-back task. Reading fluency was measured at both the single-character and sentence levels. Fluent reading of single characters was assessed with a paper-and-pencil lexical decision task, and a sentence verification task was developed to examine reading fluency at the sentence level. The reading fluency test at each level was conducted twice (i.e., oral reading and silent reading), and reading speed and accuracy were recorded. The correlation analysis showed that the temporal threshold in the phantom contour task did not correlate with the scores on the reading fluency tests. Although reaction time in the visual 1-back task correlated with reading speed in both oral and silent reading, a comparison of the correlation coefficients revealed a closer relationship between visual rapid simultaneous processing and silent reading. Furthermore, visual rapid simultaneous processing contributed significantly to reading fluency in the silent mode but not in the oral mode. These findings suggest that the underlying mechanisms of oral and silent reading fluency differ from the beginning of basic visual coding. The current results also reveal a potential modulation of the relationship between visual rapid processing and reading fluency by the language characteristics of Chinese.

  15. Rapid and accurate diagnosis of acute cholecystitis with /sup 99m/Tc-HIDA cholescintigraphy. [HIDA = dimethyl acetanilide iminodiacetic acid]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weissmann, H.S.; Frank, M.S.; Bernstein, L.H.

    1979-04-01

    Technetium-99m dimethyl acetanilide iminodiacetic acid (HIDA) cholescintigraphy was performed on 90 patients with suspected acute cholecystitis. Visualization of the gallbladder established patency of the cystic duct and excluded the diagnosis of acute cholecystitis in 50 of 52 patients. Nonvisualization of the gallbladder with visualization of the common bile duct was diagnostic of acute cholecystitis in 38 patients, all subsequently proven at surgery. The observed accuracy of this procedure is 98% and the specificity is 100%. The false negative rate is 5% and the false positive rate is zero. Technetium-99m-HIDA has many advantages which make it the procedure of choice in evaluating a patient for suspected acute cholecystitis. It is a rapid, simple, safe examination which provides functional as well as anatomic information about the hepatobiliary system in individuals with a serum bilirubin level up to 8 mg/100 ml.

  16. The Holistic Processing Account of Visual Expertise in Medical Image Perception: A Review

    PubMed Central

    Sheridan, Heather; Reingold, Eyal M.

    2017-01-01

    In the field of medical image perception, the holistic processing perspective contends that experts can rapidly extract global information about the image, which can be used to guide their subsequent search of the image (Swensson, 1980; Nodine and Kundel, 1987; Kundel et al., 2007). In this review, we discuss the empirical evidence supporting three different predictions that can be derived from the holistic processing perspective: Expertise in medical image perception is domain-specific, experts use parafoveal and/or peripheral vision to process large regions of the image in parallel, and experts benefit from a rapid initial glimpse of an image. In addition, we discuss a pivotal recent study (Litchfield and Donovan, 2016) that seems to contradict the assumption that experts benefit from a rapid initial glimpse of the image. To reconcile this finding with the existing literature, we suggest that global processing may serve multiple functions that extend beyond the initial glimpse of the image. Finally, we discuss future research directions, and we highlight the connections between the holistic processing account and similar theoretical perspectives and findings from other domains of visual expertise. PMID:29033865

  17. The Holistic Processing Account of Visual Expertise in Medical Image Perception: A Review.

    PubMed

    Sheridan, Heather; Reingold, Eyal M

    2017-01-01

    In the field of medical image perception, the holistic processing perspective contends that experts can rapidly extract global information about the image, which can be used to guide their subsequent search of the image (Swensson, 1980; Nodine and Kundel, 1987; Kundel et al., 2007). In this review, we discuss the empirical evidence supporting three different predictions that can be derived from the holistic processing perspective: Expertise in medical image perception is domain-specific, experts use parafoveal and/or peripheral vision to process large regions of the image in parallel, and experts benefit from a rapid initial glimpse of an image. In addition, we discuss a pivotal recent study (Litchfield and Donovan, 2016) that seems to contradict the assumption that experts benefit from a rapid initial glimpse of the image. To reconcile this finding with the existing literature, we suggest that global processing may serve multiple functions that extend beyond the initial glimpse of the image. Finally, we discuss future research directions, and we highlight the connections between the holistic processing account and similar theoretical perspectives and findings from other domains of visual expertise.

  18. Cross-modal attention influences auditory contrast sensitivity: Decreasing visual load improves auditory thresholds for amplitude- and frequency-modulated sounds.

    PubMed

    Ciaramitaro, Vivian M; Chow, Hiu Mei; Eglington, Luke G

    2017-03-01

    We used a cross-modal dual task to examine how changing visual-task demands influenced auditory processing, namely auditory thresholds for amplitude- and frequency-modulated sounds. Observers had to attend to two consecutive intervals of sounds and report which interval contained the auditory stimulus that was modulated in amplitude (Experiment 1) or frequency (Experiment 2). During auditory-stimulus presentation, observers simultaneously attended to a rapid sequential visual presentation-two consecutive intervals of streams of visual letters-and had to report which interval contained a particular color (low load, demanding less attentional resources) or, in separate blocks of trials, which interval contained more of a target letter (high load, demanding more attentional resources). We hypothesized that if attention is a shared resource across vision and audition, an easier visual task should free up more attentional resources for auditory processing on an unrelated task, hence improving auditory thresholds. Auditory detection thresholds were lower-that is, auditory sensitivity was improved-for both amplitude- and frequency-modulated sounds when observers engaged in a less demanding (compared to a more demanding) visual task. In accord with previous work, our findings suggest that visual-task demands can influence the processing of auditory information on an unrelated concurrent task, providing support for shared attentional resources. More importantly, our results suggest that attending to information in a different modality, cross-modal attention, can influence basic auditory contrast sensitivity functions, highlighting potential similarities between basic mechanisms for visual and auditory attention.

  19. Fast Tracking Data to Informed Decisions: An Advanced Information System to Improve Environmental Understanding and Management (Invited)

    NASA Astrophysics Data System (ADS)

    Minsker, B. S.; Myers, J.; Liu, Y.; Bajcsy, P.

    2010-12-01

    Emerging sensing and information technologies are rapidly creating a new paradigm for environmental research and management, in which data from multiple sensors and information sources can guide real-time adaptive observation and decision making. This talk will provide an overview of emerging cyberinfrastructure and three case studies that illustrate their potential: combined sewer overflows in Chicago, hypoxia in Corpus Christi Bay, Texas, and sustainable agriculture in Illinois. An advanced information system for real-time decision making and visual geospatial analytics will be presented as an example of cyberinfrastructure that enables easier implementation of numerous real-time applications.

  20. A review of visual perception mechanisms that regulate rapid adaptive camouflage in cuttlefish.

    PubMed

    Chiao, Chuan-Chin; Chubb, Charles; Hanlon, Roger T

    2015-09-01

    We review recent research on the visual mechanisms of rapid adaptive camouflage in cuttlefish. These neurophysiologically complex marine invertebrates can camouflage themselves against almost any background, yet their ability to quickly (0.5-2 s) alter their body patterns on different visual backgrounds poses a vexing challenge: how to pick the correct body pattern amongst their repertoire. The ability of cuttlefish to change appropriately requires a visual system that can rapidly assess complex visual scenes and produce the motor responses-the neurally controlled body patterns-that achieve camouflage. Using specifically designed visual backgrounds and assessing the corresponding body patterns quantitatively, we and others have uncovered several aspects of scene variation that are important in regulating cuttlefish patterning responses. These include spatial scale of background pattern, background intensity, background contrast, object edge properties, object contrast polarity, object depth, and the presence of 3D objects. Moreover, arm postures and skin papillae are also regulated visually for additional aspects of concealment. By integrating these visual cues, cuttlefish are able to rapidly select appropriate body patterns for concealment throughout diverse natural environments. This sensorimotor approach of studying cuttlefish camouflage thus provides unique insights into the mechanisms of visual perception in an invertebrate image-forming eye.

  1. Content-based Music Search and Recommendation System

    NASA Astrophysics Data System (ADS)

    Takegawa, Kazuki; Hijikata, Yoshinori; Nishida, Shogo

    Recently, the volume of music data on the Internet has increased rapidly. This has increased the user's cost of finding music data suited to their preference in such a large data set. We propose a content-based music search and recommendation system. This system has an interface for searching and finding music data and an interface for editing a user profile, which is necessary for music recommendation. By visualizing the feature space of music and visualizing the user profile, the user can search music data and edit the user profile. Furthermore, by exploiting the information that can be acquired from each visualized object in a mutually complementary manner, we make it easier for the user to search music data and edit the user profile. Concretely, the system gives the user information obtained from the user profile when searching music data, and information obtained from the feature space of music when editing the user profile.

  2. Rapid and visual detection of the main chemical compositions in maize seeds based on Raman hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Yang, Guiyan; Wang, Qingyan; Liu, Chen; Wang, Xiaobin; Fan, Shuxiang; Huang, Wenqian

    2018-07-01

    Rapid and visual detection of the chemical compositions of plant seeds is important but difficult for a traditional seed quality analysis system. In this study, a custom-designed line-scan Raman hyperspectral imaging system was applied for detecting and displaying the main chemical compositions in a heterogeneous maize seed. Raman hyperspectral images collected from the endosperm and embryo of maize seed were acquired and preprocessed by Savitzky-Golay (SG) filter and adaptive iteratively reweighted Penalized Least Squares (airPLS). Three varieties of maize seeds were analyzed, and the characteristics of the spectral and spatial information were extracted from each hyperspectral image. The Raman characteristic peaks, identified at 477, 1443, 1522, 1596 and 1654 cm-1 from 380 to 1800 cm-1 Raman spectra, were related to corn starch, mixture of oil and starch, zeaxanthin, lignin and oil in maize seeds, respectively. Each single-band image corresponding to the characteristic band characterized the spatial distribution of the chemical composition in a seed successfully. The embryo was distinguished from the endosperm by band operation of the single-band images at 477, 1443, and 1596 cm-1 for each variety. Results showed that Raman hyperspectral imaging system could be used for on-line quality control of maize seeds based on the rapid and visual detection of the chemical compositions in maize seeds.
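
A minimal sketch of the kind of preprocessing and single-band readout described above, using only Savitzky-Golay smoothing on a synthetic spectrum (the airPLS baseline correction and the actual instrument data are omitted for brevity; the peak positions are the characteristic bands named in the abstract):

```python
import numpy as np
from scipy.signal import savgol_filter

# Synthetic stand-in for one pixel's Raman spectrum over the study's
# 380-1800 cm-1 range, with Gaussian peaks at the characteristic bands.
shifts = np.linspace(380, 1800, 1421)           # 1 cm-1 steps
peaks_cm1 = [477, 1443, 1522, 1596, 1654]       # starch, oil/starch, zeaxanthin, lignin, oil
rng = np.random.default_rng(0)
spectrum = sum(np.exp(-0.5 * ((shifts - p) / 8.0) ** 2) for p in peaks_cm1)
spectrum = spectrum + 0.05 * rng.standard_normal(shifts.size)  # detector noise

# Savitzky-Golay smoothing, one of the preprocessing steps named above.
smoothed = savgol_filter(spectrum, window_length=15, polyorder=3)

# A "single-band image" is the intensity at one characteristic shift evaluated
# per pixel; here we read out this pixel at the 477 cm-1 starch band.
band_index = int(np.argmin(np.abs(shifts - 477)))
print(f"intensity at 477 cm-1: {smoothed[band_index]:.2f}")
```

Applied per pixel across the line-scanned hypercube, this readout yields the single-band images from which the embryo/endosperm contrast is computed.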

  3. BactoGeNIE: A large-scale comparative genome visualization for big displays

    DOE PAGES

    Aurisano, Jillian; Reda, Khairi; Johnson, Andrew; ...

    2015-08-13

    The volume of complete bacterial genome sequence data available to comparative genomics researchers is rapidly increasing. However, visualizations in comparative genomics--which aim to enable analysis tasks across collections of genomes--suffer from visual scalability issues. While large, multi-tiled and high-resolution displays have the potential to address scalability issues, new approaches are needed to take advantage of such environments, in order to enable the effective visual analysis of large genomics datasets. In this paper, we present Bacterial Gene Neighborhood Investigation Environment, or BactoGeNIE, a novel and visually scalable design for comparative gene neighborhood analysis on large display environments. We evaluate BactoGeNIE through a case study on close to 700 draft Escherichia coli genomes, and present lessons learned from our design process. In conclusion, BactoGeNIE accommodates comparative tasks over substantially larger collections of neighborhoods than existing tools and explicitly addresses visual scalability. Given current trends in data generation, scalable designs of this type may inform visualization design for large-scale comparative research problems in genomics.

  4. BactoGeNIE: a large-scale comparative genome visualization for big displays

    PubMed Central

    2015-01-01

    Background The volume of complete bacterial genome sequence data available to comparative genomics researchers is rapidly increasing. However, visualizations in comparative genomics--which aim to enable analysis tasks across collections of genomes--suffer from visual scalability issues. While large, multi-tiled and high-resolution displays have the potential to address scalability issues, new approaches are needed to take advantage of such environments, in order to enable the effective visual analysis of large genomics datasets. Results In this paper, we present Bacterial Gene Neighborhood Investigation Environment, or BactoGeNIE, a novel and visually scalable design for comparative gene neighborhood analysis on large display environments. We evaluate BactoGeNIE through a case study on close to 700 draft Escherichia coli genomes, and present lessons learned from our design process. Conclusions BactoGeNIE accommodates comparative tasks over substantially larger collections of neighborhoods than existing tools and explicitly addresses visual scalability. Given current trends in data generation, scalable designs of this type may inform visualization design for large-scale comparative research problems in genomics. PMID:26329021

  5. BactoGeNIE: A large-scale comparative genome visualization for big displays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aurisano, Jillian; Reda, Khairi; Johnson, Andrew

    The volume of complete bacterial genome sequence data available to comparative genomics researchers is rapidly increasing. However, visualizations in comparative genomics--which aim to enable analysis tasks across collections of genomes--suffer from visual scalability issues. While large, multi-tiled and high-resolution displays have the potential to address scalability issues, new approaches are needed to take advantage of such environments, in order to enable the effective visual analysis of large genomics datasets. In this paper, we present Bacterial Gene Neighborhood Investigation Environment, or BactoGeNIE, a novel and visually scalable design for comparative gene neighborhood analysis on large display environments. We evaluate BactoGeNIE through a case study on close to 700 draft Escherichia coli genomes, and present lessons learned from our design process. In conclusion, BactoGeNIE accommodates comparative tasks over substantially larger collections of neighborhoods than existing tools and explicitly addresses visual scalability. Given current trends in data generation, scalable designs of this type may inform visualization design for large-scale comparative research problems in genomics.

  6. A bottom-up model of spatial attention predicts human error patterns in rapid scene recognition.

    PubMed

    Einhäuser, Wolfgang; Mundhenk, T Nathan; Baldi, Pierre; Koch, Christof; Itti, Laurent

    2007-07-20

    Humans demonstrate a peculiar ability to detect complex targets in rapidly presented natural scenes. Recent studies suggest that (nearly) no focal attention is required for overall performance in such tasks. Little is known, however, of how detection performance varies from trial to trial and which stages in the processing hierarchy limit performance: bottom-up visual processing (attentional selection and/or recognition) or top-down factors (e.g., decision-making, memory, or alertness fluctuations)? To investigate the relative contribution of these factors, eight human observers performed an animal detection task in natural scenes presented at 20 Hz. Trial-by-trial performance was highly consistent across observers, far exceeding the prediction of independent errors. This consistency demonstrates that performance is not primarily limited by idiosyncratic factors but by visual processing. Two statistical stimulus properties, contrast variation in the target image and the information-theoretical measure of "surprise" in adjacent images, predict performance on a trial-by-trial basis. These measures are tightly related to spatial attention, demonstrating that spatial attention and rapid target detection share common mechanisms. To isolate the causal contribution of the surprise measure, eight additional observers performed the animal detection task in sequences that were reordered versions of those all subjects had correctly recognized in the first experiment. Reordering increased surprise before and/or after the target while keeping the target and distractors themselves unchanged. Surprise enhancement impaired target detection in all observers. Consequently, and contrary to several previously published findings, our results demonstrate that attentional limitations, rather than target recognition alone, affect the detection of targets in rapidly presented visual sequences.
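
The "surprise" measure referenced above is, in Itti and Baldi's formulation, the KL divergence between an observer's prior and posterior beliefs. The following is a minimal discrete sketch of that idea, not the authors' implementation; it uses a Dirichlet-multinomial update over hypothetical feature counts:

```python
import numpy as np

def kl_divergence(p, q):
    """KL(p || q) in bits, for discrete distributions with q > 0 wherever p > 0."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log2(p[mask] / q[mask])))

def bayesian_surprise(prior_counts, observed_counts):
    """Surprise = KL(posterior || prior) after a Dirichlet-multinomial belief update."""
    prior = np.asarray(prior_counts, float)
    posterior = prior + np.asarray(observed_counts, float)
    return kl_divergence(posterior / posterior.sum(), prior / prior.sum())

# An observation matching prior expectations barely moves the belief...
low = bayesian_surprise([10, 10, 10], [1, 1, 1])
# ...while one concentrated on an unexpected feature moves it substantially.
high = bayesian_surprise([10, 10, 10], [0, 0, 30])
print(f"low-surprise update: {low:.3f} bits, high-surprise update: {high:.3f} bits")
```

In the study's terms, reordering a sequence so that high-surprise frames flank the target raises exactly this kind of quantity for the distractors, drawing attention away from the target.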

  7. Combined visual illusion effects on the perceived index of difficulty and movement outcomes in discrete and continuous Fitts' tapping.

    PubMed

    Alphonsa, Sushma; Dai, Boyi; Benham-Deal, Tami; Zhu, Qin

    2016-01-01

    The speed-accuracy trade-off is a fundamental movement problem that has been extensively investigated. It has been established that the speed at which one can move to tap targets depends on how large the targets are and how far they are apart. These spatial properties of the targets can be quantified by the index of difficulty (ID). Two visual illusions are known to affect the perception of target size and movement amplitude: the Ebbinghaus illusion and Muller-Lyer illusion. We created visual images that combined these two visual illusions to manipulate the perceived ID, and then examined people's visual perception of the targets in illusory context as well as their performance in tapping those targets in both discrete and continuous manners. The findings revealed that the combined visual illusions affected the perceived ID similarly in both discrete and continuous judgment conditions. However, the movement outcomes were affected by the combined visual illusions according to the tapping mode. In discrete tapping, the combined visual illusions affected both movement accuracy and movement amplitude such that the effective ID resembled the perceived ID. In continuous tapping, none of the movement outcomes were affected by the combined visual illusions. Participants tapped the targets with higher speed and accuracy in all visual conditions. Based on these findings, we concluded that distinct visual-motor control mechanisms were responsible for execution of discrete and continuous Fitts' tapping. Although discrete tapping relies on allocentric information (object-centered) to plan for action, continuous tapping relies on egocentric information (self-centered) to control for action. The planning-control model for rapid aiming movements is supported.
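
The index of difficulty discussed above is conventionally computed with the Shannon formulation of Fitts' law; the effective ID substitutes an effective width derived from the scatter of observed endpoints. A small sketch, with hypothetical target dimensions (the "perceived" values are illustrative, not the study's stimuli):

```python
import math

def index_of_difficulty(distance, width):
    """Shannon formulation of Fitts' index of difficulty, in bits."""
    return math.log2(distance / width + 1)

def effective_id(distance, endpoint_sd):
    """Effective ID from observed endpoint scatter; 4.133 * SD is the
    conventional effective target width."""
    return math.log2(distance / (4.133 * endpoint_sd) + 1)

# Illusions that make targets look smaller or farther apart raise the
# perceived ID without changing the nominal one (hypothetical values):
nominal = index_of_difficulty(distance=200, width=20)
perceived = index_of_difficulty(distance=210, width=17)
print(f"nominal ID: {nominal:.2f} bits, perceived ID: {perceived:.2f} bits")
```

The study's discrete-tapping result corresponds to the effective ID drifting toward the perceived ID, while in continuous tapping it tracks the nominal one.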

  8. The answer is blowing in the wind: free-flying honeybees can integrate visual and mechano-sensory inputs for making complex foraging decisions.

    PubMed

    Ravi, Sridhar; Garcia, Jair E; Wang, Chun; Dyer, Adrian G

    2016-11-01

    Bees navigate in complex environments using visual, olfactory and mechano-sensorial cues. In the lowest region of the atmosphere, the wind environment can be highly unsteady and bees employ fine motor-skills to enhance flight control. Recent work reveals sophisticated multi-modal processing of visual and olfactory channels by the bee brain to enhance foraging efficiency, but it currently remains unclear whether wind-induced mechano-sensory inputs are also integrated with visual information to facilitate decision making. Individual honeybees were trained in a linear flight arena with appetitive-aversive differential conditioning to use a context-setting cue of 3 m s-1 cross-wind direction to enable decisions about either a 'blue' or 'yellow' star stimulus being the correct alternative. Colour stimuli properties were mapped in bee-specific opponent-colour spaces to validate saliency, and to thus enable rapid reverse learning. Bees were able to integrate mechano-sensory and visual information to facilitate decisions that were significantly different to chance expectation after 35 learning trials. An independent group of bees were trained to find a single rewarding colour that was unrelated to the wind direction. In these trials, wind was not used as a context-setting cue and served only as a potential distracter in identifying the relevant rewarding visual stimuli. Comparison between respective groups shows that bees can learn to integrate visual and mechano-sensory information in a non-elemental fashion, revealing an unsuspected level of sensory processing in honeybees, and adding to the growing body of knowledge on the capacity of insect brains to use multi-modal sensory inputs in mediating foraging behaviour. © 2016. Published by The Company of Biologists Ltd.

  9. Different Attentional Blink Tasks Reflect Distinct Information Processing Limitations: An Individual Differences Approach

    ERIC Educational Resources Information Center

    Kelly, Ashleigh J.; Dux, Paul E.

    2011-01-01

    To study the temporal dynamics and capacity-limits of attentional selection and encoding, researchers often employ the attentional blink (AB) phenomenon: subjects' impaired ability to report the second of two targets in a rapid serial visual presentation (RSVP) stream that appear within 200-500 ms of one another. The AB has now been the subject of…

  10. Visual field tunneling in aviators induced by memory demands.

    PubMed

    Williams, L J

    1995-04-01

    Aviators are required to process, rapidly and accurately, enormous amounts of visual information located foveally and peripherally. The present study, expanding upon an earlier study (Williams, 1988), required young aviators to process, within the framework of a single eye fixation, a briefly displayed foveally presented memory load while simultaneously trying to identify common peripheral targets presented on the same display at locations up to 4.5 degrees of visual angle from the fixation point. This task, as well as a character classification task (Williams, 1985, 1988), has been shown to be very difficult for nonaviators: it results in a tendency toward tunnel vision. Limited preliminary measurements of peripheral accuracy suggested that aviators might be less susceptible than nonaviators to this visual tunneling. The present study demonstrated moderate susceptibility to cognitively induced tunneling in aviators when the foveal task was sufficiently difficult and reaction time was the principal dependent measure.

  11. New Systematic Review Methodology for Visual Impairment and Blindness for the 2010 Global Burden of Disease Study

    PubMed Central

    Bourne, Rupert; Price, Holly; Taylor, Hugh; Leasher, Janet; Keeffe, Jill; Glanville, Julie; Sieving, Pamela C; Khairallah, Moncef; Wong, Tien Yin; Zheng, Yingfeng; Mathew, Anu; Katiyar, Suchitra; Mascarenhas, Maya; Stevens, Gretchen A; Resnikoff, Serge; Gichuhi, Stephen; Naidoo, Kovin; Wallace, Diane; Kymes, Steven; Peters, Colleen; Pesudovs, Konrad; Braithwaite, Tasanee; Limburg, Hans

    2014-01-01

    Purpose To describe a systematic review of population-based prevalence studies of visual impairment (VI) and blindness worldwide over the past 32 years that informs the Global Burden of Diseases, Injuries and Risk Factors Study. Methods A systematic review (Stage 1) of medical literature from 1 January 1980 to 31 January 2012 identified indexed articles containing data on incidence, prevalence and causes of blindness and VI. Only cross-sectional population-based representative studies were selected from which to extract data for a database of age- and sex-specific prevalence data for four distance and one near visual loss categories (presenting and best-corrected). Unpublished data and data from studies using 'rapid assessment' methodology were later added (Stage 2). Results Stage 1 identified 14,908 references, of which 204 articles met the inclusion criteria. Stage 2 added unpublished data from 44 'rapid assessment' studies and 4 other surveys. This resulted in a final dataset of 252 articles of 243 studies, of which 238 (98%) reported distance vision loss categories. Thirty-seven studies of the final dataset reported prevalence of mild VI and 4 reported near vision impairment. Conclusion We report a comprehensive systematic review of over 30 years of VI/blindness studies. While there has been an increase in population-based studies conducted in the 2000s compared with previous decades, there is limited information from certain regions (e.g., Central Africa, Central and Eastern Europe, and the Caribbean and Latin America) and younger age groups, and minimal data regarding the prevalence of near vision and mild distance visual impairment. PMID:23350553

  12. Greater magnocellular saccadic suppression in high versus low autistic tendency suggests a causal path to local perceptual style.

    PubMed

    Crewther, David P; Crewther, Daniel; Bevan, Stephanie; Goodale, Melvyn A; Crewther, Sheila G

    2015-12-01

    Saccadic suppression-the reduction of visual sensitivity during rapid eye movements-has previously been proposed to reflect a specific suppression of the magnocellular visual system, with the initial neural site of that suppression at or prior to afferent visual information reaching striate cortex. Dysfunction in the magnocellular visual pathway has also been associated with perceptual and physiological anomalies in individuals with autism spectrum disorder or high autistic tendency, leading us to question whether saccadic suppression is altered in the broader autism phenotype. Here we show that individuals with high autistic tendency show greater saccadic suppression of low versus high spatial frequency gratings while those with low autistic tendency do not. In addition, those with high but not low autism spectrum quotient (AQ) demonstrated pre-cortical (35-45 ms) evoked potential differences (saccade versus fixation) to a large, low contrast, pseudo-randomly flashing bar. Both AQ groups showed similar differential visual evoked potential effects in later epochs (80-160 ms) at high contrast. Thus, the magnocellular theory of saccadic suppression appears untenable as a general description for the typically developing population. Our results also suggest that the bias towards local perceptual style reported in autism may be due to selective suppression of low spatial frequency information accompanying every saccadic eye movement.

  13. Disruption of visual awareness during the attentional blink is reflected by selective disruption of late-stage neural processing

    PubMed Central

    Harris, Joseph A.; McMahon, Alex R.; Woldorff, Marty G.

    2015-01-01

    Any information represented in the brain holds the potential to influence behavior. It is therefore of broad interest to determine the extent and quality of neural processing of stimulus input that occurs with and without awareness. The attentional blink is a useful tool for dissociating neural and behavioral measures of perceptual visual processing across conditions of awareness. The extent of higher-order visual information beyond basic sensory signaling that is processed during the attentional blink remains controversial. To determine what neural processing at the level of visual-object identification occurs in the absence of awareness, electrophysiological responses to images of faces and houses were recorded both within and outside of the attentional blink period during a rapid serial visual presentation (RSVP) stream. Electrophysiological results were sorted according to behavioral performance (correctly identified targets versus missed targets) within these blink and non-blink periods. An early index of face-specific processing (the N170, 140–220 ms post-stimulus) was observed regardless of whether the subject demonstrated awareness of the stimulus, whereas a later face-specific effect with the same topographic distribution (500–700 ms post-stimulus) was only seen for accurate behavioral discrimination of the stimulus content. The present findings suggest a multi-stage process of object-category processing, with only the later phase being associated with explicit visual awareness. PMID:23859644

  14. Cerebrospinal fluid leakage in intracranial hypotension syndrome: usefulness of indirect findings in radionuclide cisternography for detection and treatment monitoring.

    PubMed

    Morioka, Tomoaki; Aoki, Takatoshi; Tomoda, Yoshinori; Takahashi, Hiroyuki; Kakeda, Shingo; Takeshita, Iwao; Ohno, Masato; Korogi, Yukunori

    2008-03-01

    To evaluate indirect findings of cerebrospinal fluid (CSF) leakage on radionuclide cisternography and their changes after treatment. This study was approved by the hospital's institutional review board and informed consent was obtained before each examination. A total of 67 patients who were clinically suspected of spontaneous intracranial hypotension (SIH) syndrome underwent radionuclide cisternography, and 27 patients who had direct findings of CSF leakage on radionuclide cisternography were selected for this evaluation. They were 16 males and 11 females, aged between 26 and 58 years. Sequential images of radionuclide cisternography were acquired at 1, 3, 5, and 24 hours after injection. We assessed the presence or absence of four indirect findings: early visualization of bladder activity, no visualization of activity over the brain convexities, rapid disappearance of spinal activity, and abnormal visualization of the root sleeves. Changes of the direct and indirect findings after treatment were also evaluated in 14 patients who underwent epidural blood patch treatment. Early visualization of bladder activity was found in all 27 patients. Seven of 27 (25.9%) patients showed no activity over the brain convexities. Rapid disappearance of spinal activity and abnormal root sleeve visualization were present in 2 (7.4%) and 5 (18.5%) patients, respectively. After epidural blood patch, both direct CSF leakage findings and the indirect finding of early visualization of bladder activity had disappeared or improved in 12 of 14 patients (85.7%). The other indirect findings also disappeared after treatment in all cases. Indirect findings of radionuclide cisternography, especially early visualization of bladder activity, may be useful in the diagnosis and posttreatment follow-up of CSF leakage.

  15. The Effect of Orthographic Depth on Letter String Processing: The Case of Visual Attention Span and Rapid Automatized Naming

    ERIC Educational Resources Information Center

    Antzaka, Alexia; Martin, Clara; Caffarra, Sendy; Schlöffel, Sophie; Carreiras, Manuel; Lallier, Marie

    2018-01-01

    The present study investigated whether orthographic depth can increase the bias towards multi-letter processing in two reading-related skills: visual attention span (VAS) and rapid automatized naming (RAN). VAS (i.e., the number of visual elements that can be processed at once in a multi-element array) was tested with a visual 1-back task and RAN…

  16. Designing a Culturally Appropriate Visually Enhanced Low-Text Mobile Health App Promoting Physical Activity for Latinos: A Qualitative Study.

    PubMed

    Bender, Melinda S; Martinez, Suzanna; Kennedy, Christine

    2016-07-01

    Rapid proliferation of smartphone ownership and use among Latinos offers a unique opportunity to employ innovative visually enhanced low-text (VELT) mobile health applications (mHealth apps) to promote health behavior change for Latinos at risk for lifestyle-related diseases. Using focus groups and in-depth interviews with 16 promotores and 5 health care providers recruited from California clinics, this qualitative study explored perceptions of visuals for a VELT mHealth app promoting physical activity (PA) and limiting sedentary behavior (SB) for Latinos. In this Phase 1 study, participants endorsed visuals portraying PA guidelines and recommended visuals depicting family- and socially oriented PA. Overall, participants supported a VELT mHealth app as an alternative to text-based education. Findings will inform the future Phase 2 development of a culturally appropriate VELT mHealth app to promote PA for Latinos, improve health literacy, and provide an alternative to traditional clinic text-based health education materials. © The Author(s) 2015.

  17. [Modern biology, imagery and forensic medicine: contributions and limitations in examination of skeletal remains].

    PubMed

    Lecomte, Dominique; Plu, Isabelle; Froment, Alain

    2012-06-01

    Forensic examination is often requested when skeletal remains are discovered. Detailed visual observation can provide much information, such as the human or animal origin, sex, age, stature, and ancestry, and approximate time since death. New three-dimensional imaging techniques can provide further information (osteometry, facial reconstruction). Bone chemistry, and particularly measurement of stable or unstable carbon and nitrogen isotopes, yields information on diet and time since death, respectively. Genetic analyses of ancient DNA are also developing rapidly. Although seldom used in a judicial context, these modern anthropologic techniques are nevertheless available for the most complex cases.

  18. Transformation of an uncertain video search pipeline to a sketch-based visual analytics loop.

    PubMed

    Legg, Philip A; Chung, David H S; Parry, Matthew L; Bown, Rhodri; Jones, Mark W; Griffiths, Iwan W; Chen, Min

    2013-12-01

    Traditional sketch-based image or video search systems rely on machine learning concepts as their core technology. However, in many applications, machine learning alone is impractical since videos may not be semantically annotated sufficiently, there may be a lack of suitable training data, and the search requirements of the user may frequently change for different tasks. In this work, we develop a visual analytics system that overcomes the shortcomings of the traditional approach. We make use of a sketch-based interface to enable users to specify search requirements in a flexible manner without depending on semantic annotation. We employ active machine learning to train different analytical models for different types of search requirements. We use visualization to facilitate knowledge discovery at the different stages of visual analytics. This includes visualizing the parameter space of the trained model, visualizing the search space to support interactive browsing, visualizing candidate search results to support rapid interaction for active learning while minimizing the need to watch videos, and visualizing aggregated information of the search results. We demonstrate the system for searching spatiotemporal attributes in sports video to identify key instances of team and player performance.

  19. Rapid recalibration of speech perception after experiencing the McGurk illusion.

    PubMed

    Lüttke, Claudia S; Pérez-Bellido, Alexis; de Lange, Floris P

    2018-03-01

    The human brain can quickly adapt to changes in the environment. One example is phonetic recalibration: a speech sound is interpreted differently depending on the visual speech and this interpretation persists in the absence of visual information. Here, we examined the mechanisms of phonetic recalibration. Participants categorized the auditory syllables /aba/ and /ada/, which were sometimes preceded by the so-called McGurk stimuli (in which an /aba/ sound, due to visual /aga/ input, is often perceived as 'ada'). We found that only one trial of exposure to the McGurk illusion was sufficient to induce a recalibration effect, i.e. an auditory /aba/ stimulus was subsequently more often perceived as 'ada'. Furthermore, phonetic recalibration took place only when auditory and visual inputs were integrated to 'ada' (McGurk illusion). Moreover, this recalibration depended on the sensory similarity between the preceding and current auditory stimulus. Finally, signal detection theoretical analysis showed that McGurk-induced phonetic recalibration resulted in both a criterion shift towards /ada/ and a reduced sensitivity to distinguish between /aba/ and /ada/ sounds. The current study shows that phonetic recalibration is dependent on the perceptual integration of audiovisual information and leads to a perceptual shift in phoneme categorization.
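
    The signal detection theoretical analysis mentioned above separates a shift in response bias (criterion) from a change in sensitivity (d'). Under the standard equal-variance Gaussian model both follow directly from the hit and false-alarm rates: d' = z(H) - z(F) and c = -(z(H) + z(F))/2. The rates below are hypothetical, chosen only to show the direction of the reported effect, and are not the study's data.

```python
from statistics import NormalDist

def sdt_measures(hit_rate: float, fa_rate: float) -> tuple[float, float]:
    """Sensitivity (d') and criterion (c) from hit and false-alarm rates."""
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# Hypothetical rates: after McGurk exposure, fewer correct /aba/ responses
# and more 'ada' reports to /aba/ sounds show up as lower d' and a
# criterion shifted toward /ada/.
baseline = sdt_measures(0.90, 0.10)
after = sdt_measures(0.80, 0.25)
```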

  20. Near-instant automatic access to visually presented words in the human neocortex: neuromagnetic evidence.

    PubMed

    Shtyrov, Yury; MacGregor, Lucy J

    2016-05-24

    Rapid and efficient processing of external information by the brain is vital to survival in a highly dynamic environment. The key channel humans use to exchange information is language, but the neural underpinnings of its processing are still not fully understood. We investigated the spatio-temporal dynamics of neural access to word representations in the brain by scrutinising the brain's activity elicited in response to psycholinguistically, visually and phonologically matched groups of familiar words and meaningless pseudowords. Stimuli were briefly presented on the visual-field periphery to experimental participants whose attention was occupied with a non-linguistic visual feature-detection task. The neural activation elicited by these unattended orthographic stimuli was recorded using multi-channel whole-head magnetoencephalography, and the timecourse of lexically-specific neuromagnetic responses was assessed in sensor space as well as at the level of cortical sources, estimated using individual MR-based distributed source reconstruction. Our results demonstrate a neocortical signature of automatic near-instant access to word representations in the brain: activity in the perisylvian language network characterised by specific activation enhancement for familiar words, starting as early as ~70 ms after the onset of unattended word stimuli and underpinned by temporal and inferior-frontal cortices.

  1. Visual contribution to the multistable perception of speech.

    PubMed

    Sato, Marc; Basirat, Anahita; Schwartz, Jean-Luc

    2007-11-01

    The multistable perception of speech, or verbal transformation effect, refers to perceptual changes experienced while listening to a speech form that is repeated rapidly and continuously. In order to test whether visual information from the speaker's articulatory gestures may modify the emergence and stability of verbal auditory percepts, subjects were instructed to report any perceptual changes during unimodal, audiovisual, and incongruent audiovisual presentations of distinct repeated syllables. In a first experiment, the perceptual stability of reported auditory percepts was significantly modulated by the modality of presentation. In a second experiment, when audiovisual stimuli consisting of a stable audio track dubbed with a video track that alternated between congruent and incongruent stimuli were presented, a strong correlation between the timing of perceptual transitions and the timing of video switches was found. Finally, a third experiment showed that the vocal tract opening onset event provided by the visual input could play the role of a bootstrap mechanism in the search for transformations. Altogether, these results demonstrate the capacity of visual information to control the multistable perception of speech in its phonetic content and temporal course. The verbal transformation effect thus provides a useful experimental paradigm to explore audiovisual interactions in speech perception.

  2. Differential Visual Processing of Animal Images, with and without Conscious Awareness

    PubMed Central

    Zhu, Weina; Drewes, Jan; Peatfield, Nicholas A.; Melcher, David

    2016-01-01

    The human visual system can quickly and efficiently extract categorical information from a complex natural scene. The rapid detection of animals in a scene is one compelling example of this phenomenon, and it suggests the automatic processing of at least some types of categories with little or no attentional requirements (Li et al., 2002, 2005). The aim of this study is to investigate whether the remarkable capability to categorize complex natural scenes exists in the absence of awareness, based on recent reports that “invisible” stimuli, which do not reach conscious awareness, can still be processed by the human visual system (Pasley et al., 2004; Williams et al., 2004; Fang and He, 2005; Jiang et al., 2006, 2007; Kaunitz et al., 2011a). In two experiments, we recorded event-related potentials (ERPs) in response to animal and non-animal/vehicle stimuli in both aware and unaware conditions in a continuous flash suppression (CFS) paradigm. Our results indicate that even in the “unseen” condition, the brain responds differently to animal and non-animal/vehicle images, consistent with rapid activation of animal-selective feature detectors prior to, or outside of, suppression by the CFS mask. PMID:27790106

  3. Differential Visual Processing of Animal Images, with and without Conscious Awareness.

    PubMed

    Zhu, Weina; Drewes, Jan; Peatfield, Nicholas A; Melcher, David

    2016-01-01

    The human visual system can quickly and efficiently extract categorical information from a complex natural scene. The rapid detection of animals in a scene is one compelling example of this phenomenon, and it suggests the automatic processing of at least some types of categories with little or no attentional requirements (Li et al., 2002, 2005). The aim of this study is to investigate whether the remarkable capability to categorize complex natural scenes exists in the absence of awareness, based on recent reports that "invisible" stimuli, which do not reach conscious awareness, can still be processed by the human visual system (Pasley et al., 2004; Williams et al., 2004; Fang and He, 2005; Jiang et al., 2006, 2007; Kaunitz et al., 2011a). In two experiments, we recorded event-related potentials (ERPs) in response to animal and non-animal/vehicle stimuli in both aware and unaware conditions in a continuous flash suppression (CFS) paradigm. Our results indicate that even in the "unseen" condition, the brain responds differently to animal and non-animal/vehicle images, consistent with rapid activation of animal-selective feature detectors prior to, or outside of, suppression by the CFS mask.

  4. Adding a Visualization Feature to Web Search Engines: It’s Time

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wong, Pak C.

    Since the first world wide web (WWW) search engine quietly entered our lives in 1994, the “information need” behind web searching has rapidly grown into a multi-billion dollar business that dominates the internet landscape, drives e-commerce traffic, propels the global economy, and affects the lives of the whole human race. Today’s search engines are faster, smarter, and more powerful than those released just a few years ago. With the vast investment pouring into research and development by leading web technology providers and the intense emotion behind corporate slogans such as “win the web” or “take back the web,” I can’t help but ask why we are still using the very same “text-only” interface that was used 13 years ago to browse our search engine results pages (SERPs). Why has the SERP interface technology lagged so far behind in the web evolution when the corresponding search technology has advanced so rapidly? In this article I explore some current SERP interface issues, suggest a simple but practical visual-based interface design approach, and argue why a visual approach can be a strong candidate for tomorrow’s SERP interface.

  5. A Rapid Assessment of Instructional Strategies to Teach Auditory-Visual Conditional Discriminations to Children with Autism

    ERIC Educational Resources Information Center

    Kodak, Tiffany; Clements, Andrea; LeBlanc, Brittany

    2013-01-01

    The purpose of the present investigation was to evaluate a rapid assessment procedure to identify effective instructional strategies to teach auditory-visual conditional discriminations to children diagnosed with autism. We replicated and extended previous rapid skills assessments (Lerman, Vorndran, Addison, & Kuhn, 2004) by evaluating the effects…

  6. Rapid Assessment of Visual Impairment in Urban Population of Delhi, India

    PubMed Central

    Gupta, Noopur; Vashist, Praveen; Malhotra, Sumit; Senjam, Suraj Singh; Misra, Vasundhara; Bhardwaj, Amit

    2015-01-01

    Purpose To determine the prevalence, causes and associated demographic factors related to visual impairment amongst the urban population of New Delhi, India. Methods A population-based, cross-sectional study was conducted in East Delhi district using cluster random sampling methodology. This Rapid Assessment of Visual Impairment (RAVI) survey involved examination of all individuals aged 40 years and above in 24 randomly selected clusters of the district. Visual acuity (VA) assessment and comprehensive ocular examination were done during the door-to-door survey. A questionnaire was used to collect personal and demographic information of the study population. Blindness and visual impairment were defined as presenting VA <3/60 and <6/18 in the better eye, respectively. Descriptive statistics were computed along with multivariable logistic regression analysis to determine associated factors for visual impairment. Results Of 2421 subjects enumerated, 2331 (96.3%) were available for ophthalmic examination. Among those examined, 49.3% were males. The prevalence of visual impairment (VI) in the study population was 11.4% (95% C.I. 10.1, 12.7) and that of blindness was 1.2% (95% C.I. 0.8, 1.6). Uncorrected refractive error was the leading cause of VI, accounting for 53.4% of all VI, followed by cataract (33.8%). With multivariable logistic regression, the odds of having VI increased with age (OR = 24.6 [95% C.I.: 14.9, 40.7]; p < 0.001). Illiterate participants were more likely to have VI [OR = 1.5 (95% C.I.: 1.1, 2.1)] when compared to educated participants. Conclusions The first implementation of the RAVI methodology in a North Indian population revealed that the burden of visual impairment is considerable in this region despite availability of adequate eye care facilities. Awareness generation and simple interventions like cataract surgery and provision of spectacles will help to eliminate the major causes of blindness and visual impairment in this region. PMID:25915659

  7. Rapid assessment of visual impairment in urban population of Delhi, India.

    PubMed

    Gupta, Noopur; Vashist, Praveen; Malhotra, Sumit; Senjam, Suraj Singh; Misra, Vasundhara; Bhardwaj, Amit

    2015-01-01

    To determine the prevalence, causes and associated demographic factors related to visual impairment amongst the urban population of New Delhi, India. A population-based, cross-sectional study was conducted in East Delhi district using cluster random sampling methodology. This Rapid Assessment of Visual Impairment (RAVI) survey involved examination of all individuals aged 40 years and above in 24 randomly selected clusters of the district. Visual acuity (VA) assessment and comprehensive ocular examination were done during the door-to-door survey. A questionnaire was used to collect personal and demographic information of the study population. Blindness and visual impairment were defined as presenting VA < 3/60 and < 6/18 in the better eye, respectively. Descriptive statistics were computed along with multivariable logistic regression analysis to determine associated factors for visual impairment. Of 2421 subjects enumerated, 2331 (96.3%) were available for ophthalmic examination. Among those examined, 49.3% were males. The prevalence of visual impairment (VI) in the study population was 11.4% (95% C.I. 10.1, 12.7) and that of blindness was 1.2% (95% C.I. 0.8, 1.6). Uncorrected refractive error was the leading cause of VI, accounting for 53.4% of all VI, followed by cataract (33.8%). With multivariable logistic regression, the odds of having VI increased with age (OR = 24.6 [95% C.I.: 14.9, 40.7]; p < 0.001). Illiterate participants were more likely to have VI [OR = 1.5 (95% C.I.: 1.1, 2.1)] when compared to educated participants. The first implementation of the RAVI methodology in a North Indian population revealed that the burden of visual impairment is considerable in this region despite availability of adequate eye care facilities. Awareness generation and simple interventions like cataract surgery and provision of spectacles will help to eliminate the major causes of blindness and visual impairment in this region.
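
    The odds ratios and confidence intervals reported above come from multivariable logistic regression, where OR = exp(beta) and the 95% CI is exp(beta ± 1.96·SE). A quick sketch of that arithmetic follows; the standard error below is chosen for illustration so that the interval roughly matches the literacy result, and is not taken from the study.

```python
import math

def or_with_ci(beta: float, se: float, z: float = 1.96):
    """Odds ratio and 95% CI from a logistic-regression coefficient."""
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

# Illustrative only: a coefficient of log(1.5) with an assumed SE of 0.165
# yields an interval close to the reported OR 1.5 (95% C.I.: 1.1, 2.1).
odds, lo, hi = or_with_ci(math.log(1.5), 0.165)
```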

  8. Local motion adaptation enhances the representation of spatial structure at EMD arrays

    PubMed Central

    Lindemann, Jens P.; Egelhaaf, Martin

    2017-01-01

    Neuronal representation and extraction of spatial information are essential for behavioral control. For flying insects, a plausible way to gain spatial information is to exploit distance-dependent optic flow that is generated during translational self-motion. Optic flow is computed by arrays of local motion detectors retinotopically arranged in the second neuropile layer of the insect visual system. These motion detectors have adaptive response characteristics, i.e. their responses to motion with a constant or only slowly changing velocity decrease, while their sensitivity to rapid velocity changes is maintained or even increases. We analyzed by a modeling approach how motion adaptation affects signal representation at the output of arrays of motion detectors during simulated flight in artificial and natural 3D environments. We focused on translational flight, because spatial information is only contained in the optic flow induced by translational locomotion. Indeed, flies, bees and other insects segregate their flight into relatively long intersaccadic translational flight sections interspersed with brief and rapid saccadic turns, presumably to maximize periods of translation (80% of the flight). With a novel adaptive model of the insect visual motion pathway we could show that the motion detector responses to background structures of cluttered environments are largely attenuated as a consequence of motion adaptation, while responses to foreground objects stay constant or even increase. This conclusion even holds under the dynamic flight conditions of insects. PMID:29281631

  9. Dynamic and predictive links between touch and vision.

    PubMed

    Gray, Rob; Tan, Hong Z

    2002-07-01

    We investigated crossmodal links between vision and touch for moving objects. In experiment 1, observers discriminated visual targets presented randomly at one of five locations on their forearm. Tactile pulses simulating motion along the forearm preceded visual targets. At short tactile-visual ISIs, discriminations were more rapid when the final tactile pulse and visual target were at the same location. At longer ISIs, discriminations were more rapid when the visual target was offset in the motion direction and were slower for offsets opposite to the motion direction. In experiment 2, speeded tactile discriminations at one of three random locations on the forearm were preceded by a visually simulated approaching object. Discriminations were more rapid when the object approached the location of the tactile stimulation and discrimination performance was dependent on the approaching object's time to contact. These results demonstrate dynamic links in the spatial mapping between vision and touch.

  10. Hemispheric differences in visual search of simple line arrays.

    PubMed

    Polich, J; DeFrancesco, D P; Garon, J F; Cohen, W

    1990-01-01

    The effects of perceptual organization on hemispheric visual-information processing were assessed with stimulus arrays composed of short lines arranged in columns. A visual-search task was employed in which subjects judged whether all the lines were vertical (same) or whether a single horizontal line was present (different). Stimulus-display organization was manipulated in two experiments by variation of line density, linear organization, and array size. In general, left-visual-field/right-hemisphere presentations demonstrated more rapid and accurate responses when the display was perceived as a whole. Right-visual-field/left-hemisphere superiorities were observed when the display organization coerced assessment of individual array elements because the physical qualities of the stimulus did not effect a gestalt whole. Response times increased somewhat with increases in array size, although these effects interacted with other stimulus variables. Error rates tended to follow the reaction-time patterns. The results suggest that laterality differences in visual search are governed by stimulus properties which contribute to, or inhibit, the perception of a display as a gestalt. The implications of these findings for theoretical interpretations of hemispheric specialization are discussed.

  11. Visual Data Exploration and Analysis - Report on the Visualization Breakout Session of the SCaLeS Workshop

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bethel, E. Wes; Frank, Randy; Fulcomer, Sam

    Scientific visualization is the transformation of abstract information into images, and it plays an integral role in the scientific process by facilitating insight into observed or simulated phenomena. Visualization as a discipline spans many research areas from computer science, cognitive psychology and even art. Yet the most successful visualization applications are created when close synergistic interactions with domain scientists are part of the algorithmic design and implementation process, leading to visual representations with clear scientific meaning. Visualization is used to explore, to debug, to gain understanding, and as an analysis tool. Visualization is literally everywhere--images are present in this report, on television, on the web, in books and magazines--the common theme is the ability to present information visually that is rapidly assimilated by human observers, and transformed into understanding or insight. As an indispensable part of a modern science laboratory, visualization is akin to the biologist's microscope or the electrical engineer's oscilloscope. Whereas the microscope is limited to small specimens or use of optics to focus light, the power of scientific visualization is virtually limitless: visualization provides the means to examine data that can be at galactic or atomic scales, or at any size in between. Unlike the traditional scientific tools for visual inspection, visualization offers the means to "see the unseeable." Trends in demographics or changes in levels of atmospheric CO₂ as a function of greenhouse gas emissions are familiar examples of such unseeable phenomena. Over time, visualization techniques evolve in response to scientific need. Each scientific discipline has its "own language," verbal and visual, used for communication. The visual language for depicting electrical circuits is much different than the visual language for depicting theoretical molecules or trends in the stock market. There is no "one visualization tool" that can serve as a panacea for all science disciplines. Instead, visualization researchers work hand in hand with domain scientists as part of the scientific research process to define, create, adapt and refine software that "speaks the visual language" of each scientific domain.

  12. Understanding face perception by means of human electrophysiology.

    PubMed

    Rossion, Bruno

    2014-06-01

    Electrophysiological recordings on the human scalp provide a wealth of information about the temporal dynamics and nature of face perception at a global level of brain organization. The time window between 100 and 200 ms witnesses the transition between low-level and high-level vision, an N170 component correlating with conscious interpretation of a visual stimulus as a face. This face representation is rapidly refined as information accumulates during this time window, allowing the individualization of faces. To improve the sensitivity and objectivity of face perception measures, it is increasingly important to go beyond transient visual stimulation by recording electrophysiological responses at periodic frequency rates. This approach has recently provided face perception thresholds and the first objective signature of integration of facial parts in the human brain. Copyright © 2014 Elsevier Ltd. All rights reserved.

  13. Compositional Remote Sensing of Icy Planets and Satellites Beyond Jupiter

    NASA Technical Reports Server (NTRS)

    Roush, T. L.

    2002-01-01

    The peak of the solar energy distribution occurs at visual wavelengths and falls off rapidly in the infrared. This fact, improvements in infrared detector technology, and the low surface temperatures of most icy objects in the outer solar system have resulted in the bulk of telescopic and spacecraft observations being performed at visual and near-infrared wavelengths. Such observations, begun in the early 1970s and continuing to present, have provided compositional information regarding the surfaces of the satellites of Saturn and Uranus, Neptune's moon Triton, Pluto, Pluto's moon Charon, Centaur objects, and Kuiper belt objects. Because the incident sunlight penetrates the surface and interacts with the materials present there, the measured reflected sunlight contains information regarding the surface materials, and the ratio of the reflected to incident sunlight provides a mechanism of identifying the materials that are present.

  14. PyContact: Rapid, Customizable, and Visual Analysis of Noncovalent Interactions in MD Simulations.

    PubMed

    Scheurer, Maximilian; Rodenkirch, Peter; Siggel, Marc; Bernardi, Rafael C; Schulten, Klaus; Tajkhorshid, Emad; Rudack, Till

    2018-02-06

    Molecular dynamics (MD) simulations have become ubiquitous in all areas of life sciences. The size and model complexity of MD simulations are rapidly growing along with increasing computing power and improved algorithms. This growth has led to the production of a large amount of simulation data that need to be filtered for relevant information to address specific biomedical and biochemical questions. One of the most relevant molecular properties that can be investigated by all-atom MD simulations is the time-dependent evolution of the complex noncovalent interaction networks governing such fundamental aspects as molecular recognition, binding strength, and mechanical and structural stability. Extracting, evaluating, and visualizing noncovalent interactions is a key task in the daily work of structural biologists. We have developed PyContact, an easy-to-use, highly flexible, and intuitive graphical user interface-based application, designed to provide a toolkit to investigate biomolecular interactions in MD trajectories. PyContact is designed to facilitate this task by enabling identification of relevant noncovalent interactions in a comprehensible manner. The implementation of PyContact as a standalone application enables rapid analysis and data visualization without any additional programming requirements, and also preserves full in-program customization and extension capabilities for advanced users. The statistical analysis representation is interactively combined with full mapping of the results on the molecular system through the synergistic connection between PyContact and VMD. We showcase the capabilities and scientific significance of PyContact by analyzing and visualizing in great detail the noncovalent interactions underlying the ion permeation pathway of the human P2X3 receptor. As a second application, we examine the protein-protein interaction network of the mechanically ultrastable cohesin-dockerin complex. Copyright © 2017 Biophysical Society. Published by Elsevier Inc. All rights reserved.

  15. Learning-based saliency model with depth information.

    PubMed

    Ma, Chih-Yao; Hang, Hsueh-Ming

    2015-01-01

    Most previous studies on visual saliency focused on two-dimensional (2D) scenes. Due to the rapidly growing three-dimensional (3D) video applications, it is very desirable to know how depth information affects human visual attention. In this study, we first conducted eye-fixation experiments on 3D images. Our fixation data set comprises 475 3D images and 16 subjects. We used a Tobii TX300 eye tracker (Tobii, Stockholm, Sweden) to track the eye movement of each subject. In addition, this database contains 475 computed depth maps. Due to the scarcity of public-domain 3D fixation data, this data set should be useful to the 3D visual attention research community. Then, a learning-based visual attention model was designed to predict human attention. In addition to the popular 2D features, we included the depth map and its derived features. The results indicate that the extra depth information can enhance the saliency estimation accuracy specifically for close-up objects hidden in a complex-texture background. In addition, we examined the effectiveness of various low-, mid-, and high-level features on saliency prediction. Compared with both 2D and 3D state-of-the-art saliency estimation models, our methods show better performance on the 3D test images. The eye-tracking database and the MATLAB source codes for the proposed saliency model and evaluation methods are available on our website.
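
    The study's central idea, that adding depth-derived features to a 2D feature set can improve saliency prediction when fixations depend on depth, can be illustrated with a toy regression. Everything below is synthetic (random stand-in feature maps and a fabricated fixation-density target); it shows only the feature-augmentation pattern, not the authors' model or data.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
n_pix = 32 * 32  # pixels of a small feature map, flattened

# Stand-ins for per-pixel 2D feature maps (real models use contrast, color,
# orientation, etc.); depth is the extra channel a 3D model can exploit.
intensity = rng.random(n_pix)
contrast = rng.random(n_pix)
depth = rng.random(n_pix)  # smaller value = closer to the viewer

# Synthetic "ground truth": fixations drawn toward close, high-contrast pixels.
fixation_density = 0.6 * (1 - depth) + 0.4 * contrast + rng.normal(0, 0.05, n_pix)

X_2d = np.column_stack([intensity, contrast])
X_3d = np.column_stack([intensity, contrast, depth])  # 2D features + depth

r2_2d = Ridge().fit(X_2d, fixation_density).score(X_2d, fixation_density)
r2_3d = Ridge().fit(X_3d, fixation_density).score(X_3d, fixation_density)
# Because the target depends on depth, the depth-aware model fits better.
```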

  16. In the Eye of the Beholder: Rapid Visual Perception of Real-Life Scenes by Young Adults with and without ASD

    ERIC Educational Resources Information Center

    Vanmarcke, Steven; Mullin, Caitlin; Van der Hallen, Ruth; Evers, Kris; Noens, Ilse; Steyaert, Jean; Wagemans, Johan

    2016-01-01

    Typically developing (TD) adults are able to extract global information from natural images and to categorize them within a single glance. This study aimed at extending these findings to individuals with autism spectrum disorder (ASD) using a free description open-encoding paradigm. Participants were asked to freely describe what they saw when…

  17. Visual aspects of perception of multimedia messages on the web through the "eye tracker" method.

    PubMed

    Svilicić, Niksa

    2010-09-01

    Since the dawn of civilisation, visual communication has played a role in everyday life. In the early times there were simply shaped drawings of animals, and pictograms explaining hunting tactics or strategies for attacking enemies. Through evolution, visual expression became an important component of the communication process on several levels, from the existential and economic level to the artistic level. However, there was always a question of the level of user reception of such visual information in the medium transmitting the information. Does the physical positioning of information in the medium contribute to the efficiency of the message? Do the same rules of content positioning apply to traditional (offline) and online media (Internet)? Rapid development of information technology and the Internet in almost all segments of contemporary life calls for defining the rules of designing and positioning multimedia online contents on web sites. Recent research indicates beyond doubt that the physical positioning of an online content on a web site significantly determines the quality of the user's perception of such content. By employing the "Eye tracking" method it is possible to objectively analyse the level of user perception of a multimedia content on a web site. What is the first thing observed by the user after opening the web site and how does he/she visually search the online content? By which methods can this be investigated subjectively and objectively? How can the survey results be used to improve the creation of web sites and to optimise the positioning of relevant contents on the site? The answers to these questions will significantly improve the presentation of multimedia interactive contents on the Web.

  18. Getting the Gist of Events: Recognition of Two-Participant Actions from Brief Displays

    PubMed Central

    Hafri, Alon; Papafragou, Anna; Trueswell, John C.

    2013-01-01

    Unlike rapid scene and object recognition from brief displays, little is known about recognition of event categories and event roles from minimal visual information. In three experiments, we displayed naturalistic photographs of a wide range of two-participant event scenes for 37 ms and 73 ms followed by a mask, and found that event categories (the event gist, e.g., ‘kicking’, ‘pushing’, etc.) and event roles (i.e., Agent and Patient) can be recognized rapidly, even with various actor pairs and backgrounds. Norming ratings from a subsequent experiment revealed that certain physical features (e.g., outstretched extremities) that correlate with Agent-hood could have contributed to rapid role recognition. In a final experiment, using identical twin actors, we then varied these features in two sets of stimuli, in which Patients had Agent-like features or not. Subjects recognized the roles of event participants less accurately when Patients possessed Agent-like features, with this difference being eliminated with two-second durations. Thus, given minimal visual input, typical Agent-like physical features are used in role recognition but, with sufficient input from multiple fixations, people categorically determine the relationship between event participants. PMID:22984951

  19. Spatial vision in older adults: perceptual changes and neural bases.

    PubMed

    McKendrick, Allison M; Chan, Yu Man; Nguyen, Bao N

    2018-05-17

    The number of older adults is rapidly increasing internationally, leading to a significant increase in research on how healthy ageing impacts vision. Most clinical assessments of spatial vision involve simple detection (letter acuity, grating contrast sensitivity, perimetry). However, most natural visual environments are more spatially complicated, requiring contrast discrimination, and the delineation of object boundaries and contours, which are typically present on non-uniform backgrounds. In this review we discuss recent research that reports on the effects of normal ageing on these more complex visual functions, specifically in the context of recent neurophysiological studies. Recent research has concentrated on understanding the effects of healthy ageing on neural responses within the visual pathway in animal models. Such neurophysiological research has led to numerous, subsequently tested, hypotheses regarding the likely impact of healthy human ageing on specific aspects of spatial vision. Healthy normal ageing impacts significantly on spatial visual information processing from the retina through to visual cortex. Some human data validates that obtained from studies of animal physiology, however some findings indicate that rethinking of presumed neural substrates is required. Notably, not all spatial visual processes are altered by age. Healthy normal ageing impacts significantly on some spatial visual processes (in particular centre-surround tasks), but leaves contrast discrimination, contrast adaptation, and orientation discrimination relatively intact. The study of older adult vision contributes to knowledge of the brain mechanisms altered by the ageing process, can provide practical information regarding visual environments that older adults may find challenging, and may lead to new methods of assessing visual performance in clinical environments. © 2018 The Authors Ophthalmic & Physiological Optics © 2018 The College of Optometrists.

  20. Analysis, Mining and Visualization Service at NCSA

    NASA Astrophysics Data System (ADS)

    Wilhelmson, R.; Cox, D.; Welge, M.

    2004-12-01

    NCSA's goal is to create a balanced system that fully supports high-end computing as well as: 1) high-end data management and analysis; 2) visualization of massive, highly complex data collections; 3) large databases; 4) geographically distributed Grid computing; and 5) collaboratories, all based on a secure computational environment and driven with workflow-based services. To this end NCSA has defined a new technology path that includes the integration and provision of cyberservices in support of data analysis, mining, and visualization. NCSA has begun to develop and apply a data mining system, NCSA Data-to-Knowledge (D2K), in conjunction with both the application and research communities. NCSA D2K will enable the formation of model-based application workflows and visual programming interfaces for rapid data analysis. The Java-based D2K framework, which integrates analytical data mining methods with data management, data transformation, and information visualization tools, will be configurable from the cyberservices (web and grid services, tools, etc.) viewpoint to solve a wide range of important data mining problems. This effort will use modules, such as new classification methods for the detection of high-risk geoscience events, and existing D2K data management, machine learning, and information visualization modules. A D2K cyberservices interface will be developed to seamlessly connect client applications with remote back-end D2K servers, providing computational resources for data mining and integration with local or remote data stores. This work is being coordinated with SDSC's data and services efforts. The new NCSA Visualization embedded workflow environment (NVIEW) will be integrated with D2K functionality to tightly couple informatics and scientific visualization with the data analysis and management services.
Visualization services will access and filter disparate data sources, simplifying tasks such as fusing related data from distinct sources into a coherent visual representation. This approach enables collaboration among geographically dispersed researchers via portals and front-end clients, and the coupling with data management services enables recording associations among datasets and building annotation systems into visualization tools and portals, giving scientists a persistent, shareable, virtual lab notebook. To facilitate provision of these cyberservices to the national community, NCSA will be providing a computational environment for large-scale data assimilation, analysis, mining, and visualization. This will be initially implemented on the new 512-processor shared-memory SGIs recently purchased by NCSA. In addition to standard batch capabilities, NCSA will provide on-demand capabilities for those projects requiring rapid response (e.g., development of severe weather, earthquake events) for decision makers. It will also be used for non-sequential interactive analysis of data sets where it is important to have access to large data volumes over space and time.

  1. Vision Problems and Reduced Reading Outcomes in Queensland Schoolchildren.

    PubMed

    Hopkins, Shelley; Sampson, Geoff P; Hendicott, Peter L; Wood, Joanne M

    2017-03-01

    To assess the relationship between vision and reading outcomes in Indigenous and non-Indigenous schoolchildren to determine whether vision problems are associated with lower reading outcomes in these populations. Vision testing and reading assessments were performed on 508 Indigenous and non-Indigenous schoolchildren in Queensland, Australia, divided into two age groups: Grades 1 and 2 (6-7 years of age) and Grades 6 and 7 (12-13 years of age). Vision parameters measured included cycloplegic refraction, near point of convergence, heterophoria, fusional vergence range, rapid automatized naming, and visual motor integration. The following vision conditions were then classified based on the vision findings: uncorrected hyperopia, convergence insufficiency, reduced rapid automatized naming, and delayed visual motor integration. Reading accuracy and reading comprehension were measured with the Neale reading test. The effects of uncorrected hyperopia, convergence insufficiency, reduced rapid automatized naming, and delayed visual motor integration on reading accuracy and reading comprehension were investigated with ANCOVAs. The ANCOVAs explained a significant proportion of variance in both reading accuracy and reading comprehension scores in both age groups, with 40% of the variation in reading accuracy and 33% of the variation in reading comprehension explained in the younger age group, and 27% and 10% of the variation in reading accuracy and reading comprehension, respectively, in the older age group. The vision parameters of visual motor integration and rapid automatized naming were significant predictors in all ANCOVAs (P < .01). The direction of the relationship was such that reduced reading results were explained by reduced visual motor integration and rapid automatized naming results. Both reduced rapid automatized naming and visual motor integration were associated with poorer reading outcomes in Indigenous and non-Indigenous children. 
This is an important finding given the recent emphasis placed on Indigenous children's reading skills and the fact that reduced rapid automatized naming and visual motor integration skills are more common in this group.

  2. Color visual simulation applications at the Defense Mapping Agency

    NASA Astrophysics Data System (ADS)

    Simley, J. D.

    1984-09-01

    The Defense Mapping Agency (DMA) produces the Digital Landmass System data base to provide culture and terrain data in support of numerous aircraft simulators. In order to conduct data base and simulation quality control and requirements analysis, DMA has developed the Sensor Image Simulator which can rapidly generate visual and radar static scene digital simulations. The use of color in visual simulation allows the clear portrayal of both landcover and terrain data, whereas the initial black and white capabilities were restricted in this role and thus found limited use. Color visual simulation has many uses in analysis to help determine the applicability of current and prototype data structures to better meet user requirements. Color visual simulation is also significant in quality control since anomalies can be more easily detected in natural appearing forms of the data. The realism and efficiency possible with advanced processing and display technology, along with accurate data, make color visual simulation a highly effective medium in the presentation of geographic information. As a result, digital visual simulation is finding increased potential as a special purpose cartographic product. These applications are discussed and related simulation examples are presented.

  3. Nonretinotopic visual processing in the brain.

    PubMed

    Melcher, David; Morrone, Maria Concetta

    2015-01-01

    A basic principle in visual neuroscience is the retinotopic organization of neural receptive fields. Here, we review behavioral, neurophysiological, and neuroimaging evidence for nonretinotopic processing of visual stimuli. A number of behavioral studies have shown perception depending on object or external-space coordinate systems, in addition to retinal coordinates. Both single-cell neurophysiology and neuroimaging have provided evidence for the modulation of neural firing by gaze position and processing of visual information based on craniotopic or spatiotopic coordinates. Transient remapping of the spatial and temporal properties of neurons contingent on saccadic eye movements has been demonstrated in visual cortex, as well as frontal and parietal areas involved in saliency/priority maps, and is a good candidate to mediate some of the spatial invariance demonstrated by perception. Recent studies suggest that spatiotopic selectivity depends on a low spatial resolution system of maps that operates over a longer time frame than retinotopic processing and is strongly modulated by high-level cognitive factors such as attention. The interaction of an initial and rapid retinotopic processing stage, tied to new fixations, and a longer lasting but less precise nonretinotopic level of visual representation could underlie the perception of both a detailed and a stable visual world across saccadic eye movements.

  4. Phoneme Awareness, Visual-Verbal Paired-Associate Learning, and Rapid Automatized Naming as Predictors of Individual Differences in Reading Ability

    ERIC Educational Resources Information Center

    Warmington, Meesha; Hulme, Charles

    2012-01-01

    This study examines the concurrent relationships between phoneme awareness, visual-verbal paired-associate learning, rapid automatized naming (RAN), and reading skills in 7- to 11-year-old children. Path analyses showed that visual-verbal paired-associate learning and RAN, but not phoneme awareness, were unique predictors of word recognition,…

  5. Saccadic Corollary Discharge Underlies Stable Visual Perception

    PubMed Central

    Berman, Rebecca A.; Joiner, Wilsaan M.; Wurtz, Robert H.

    2016-01-01

    Saccadic eye movements direct the high-resolution foveae of our retinas toward objects of interest. With each saccade, the image jumps on the retina, causing a discontinuity in visual input. Our visual perception, however, remains stable. Philosophers and scientists over centuries have proposed that visual stability depends upon an internal neuronal signal that is a copy of the neuronal signal driving the eye movement, now referred to as a corollary discharge (CD) or efference copy. In the Old World monkey, such a CD circuit for saccades has been identified extending from superior colliculus through MD thalamus to frontal cortex, but there is little evidence that this circuit actually contributes to visual perception. We tested the influence of this CD circuit on visual perception by first training macaque monkeys to report their perceived eye direction, and then reversibly inactivating the CD as it passes through the thalamus. We found that the monkey's perception changed; during CD inactivation, there was a difference between where the monkey perceived its eyes to be directed and where they were actually directed. Perception and saccade were decoupled. We established that the perceived eye direction at the end of the saccade was not derived from proprioceptive input from eye muscles, and was not altered by contextual visual information. We conclude that the CD provides internal information contributing to the brain's creation of perceived visual stability. More specifically, the CD might provide the internal saccade vector used to unite separate retinal images into a stable visual scene. SIGNIFICANCE STATEMENT Visual stability is one of the most remarkable aspects of human vision. The eyes move rapidly several times per second, displacing the retinal image each time. The brain compensates for this disruption, keeping our visual perception stable. 
A major hypothesis explaining this stability invokes a signal within the brain, a corollary discharge, that informs visual regions of the brain when and where the eyes are about to move. Such a corollary discharge circuit for eye movements has been identified in macaque monkey. We now show that selectively inactivating this brain circuit alters the monkey's visual perception. We conclude that this corollary discharge provides a critical signal that can be used to unite jumping retinal images into a consistent visual scene. PMID:26740647

  6. Optimized static and video EEG rapid serial visual presentation (RSVP) paradigm based on motion surprise computation

    NASA Astrophysics Data System (ADS)

    Khosla, Deepak; Huber, David J.; Bhattacharyya, Rajan

    2017-05-01

    In this paper, we describe an algorithm and system for optimizing search and detection performance for "items of interest" (IOI) in large-sized images and videos that employ the Rapid Serial Visual Presentation (RSVP) based EEG paradigm and surprise algorithms that incorporate motion processing to determine whether static or video RSVP is used. The system works by first computing a motion surprise map on image sub-regions (chips) of incoming sensor video data and then using those surprise maps to label the chips as either "static" or "moving". This information tells the system whether to use a static or video RSVP presentation and decoding algorithm in order to optimize EEG-based detection of IOI in each chip. Using this method, we are able to demonstrate classification of a series of image regions from video with an Az (area under the ROC curve) value of 1, indicating perfect classification, over a range of display frequencies and video speeds.
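
    The chip-labelling step described above can be sketched in miniature as follows. This is a hedged illustration only: a simple mean inter-frame-difference score stands in for the paper's motion-surprise computation, and the function names and threshold are hypothetical.

    ```python
    import numpy as np

    def motion_surprise(chip_frames, threshold=0.05):
        """Label an image chip 'moving' or 'static' from a stack of frames.

        chip_frames: array of shape (T, H, W), grayscale values in [0, 1].
        The score here is simply the mean absolute inter-frame difference,
        a stand-in for the surprise map used by the actual system.
        """
        diffs = np.abs(np.diff(chip_frames.astype(float), axis=0))
        score = diffs.mean()
        return "moving" if score > threshold else "static"

    # A chip whose content never changes is labelled static; a chip with an
    # abrupt change partway through is labelled moving.
    static_chip = np.zeros((10, 8, 8))
    moving_chip = np.zeros((10, 8, 8))
    moving_chip[5:, :, :] = 1.0  # content changes halfway through
    print(motion_surprise(static_chip))  # static
    print(motion_surprise(moving_chip))  # moving
    ```

    The label would then select the static or video RSVP presentation path for that chip.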

  7. Rapid and Parallel Adaptive Evolution of the Visual System of Neotropical Midas Cichlid Fishes.

    PubMed

    Torres-Dowdall, Julián; Pierotti, Michele E R; Härer, Andreas; Karagic, Nidal; Woltering, Joost M; Henning, Frederico; Elmer, Kathryn R; Meyer, Axel

    2017-10-01

    Midas cichlid fish are a Central American species flock containing 13 described species that has been dated to only a few thousand years old, a historical timescale infrequently associated with speciation. Their radiation involved the colonization of several clear water crater lakes from two turbid great lakes. Therefore, Midas cichlids have been subjected to widely varying photic conditions during their radiation. Being a primary signal relay for information from the environment to the organism, the visual system is under continuing selective pressure and a prime organ system for accumulating adaptive changes during speciation, particularly in the case of dramatic shifts in photic conditions. Here, we characterize the full visual system of Midas cichlids at organismal and genetic levels, to determine what types of adaptive changes evolved within the short time span of their radiation. We show that Midas cichlids have a diverse visual system with unexpectedly high intra- and interspecific variation in color vision sensitivity and lens transmittance. Midas cichlid populations in the clear crater lakes have convergently evolved visual sensitivities shifted toward shorter wavelengths compared with the ancestral populations from the turbid great lakes. This divergence in sensitivity is driven by changes in chromophore usage, differential opsin expression, opsin coexpression, and to a lesser degree by opsin coding sequence variation. The visual system of Midas cichlids has the evolutionary capacity to rapidly integrate multiple adaptations to changing light environments. Our data may indicate that, in early stages of divergence, changes in opsin regulation could precede changes in opsin coding sequence evolution. © The Author 2017. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  8. The first rapid assessment of avoidable blindness (RAAB) in Thailand.

    PubMed

    Isipradit, Saichin; Sirimaharaj, Maytinee; Charukamnoetkanok, Puwat; Thonginnetra, Oraorn; Wongsawad, Warapat; Sathornsumetee, Busaba; Somboonthanakij, Sudawadee; Soomsawasdi, Piriya; Jitawatanarat, Umapond; Taweebanjongsin, Wongsiri; Arayangkoon, Eakkachai; Arame, Punyawee; Kobkoonthon, Chinsuchee; Pangputhipong, Pannet

    2014-01-01

    The majority of vision loss is preventable or treatable. Population surveys are crucial for planning, implementing, and monitoring policies and interventions to eliminate avoidable blindness and visual impairment. This is the first rapid assessment of avoidable blindness (RAAB) study in Thailand. A cross-sectional study of a population in Thailand aged 50 years or over aimed to assess the prevalence and causes of blindness and visual impairment. Using the Thailand National Census 2010 as the sampling frame, a stratified four-stage cluster sampling based on probability proportional to size was conducted in 176 enumeration areas from 11 provinces. Participants received comprehensive eye examinations by ophthalmologists. The age- and sex-adjusted prevalences of blindness (presenting visual acuity (VA) <20/400), severe visual impairment (VA <20/200 but ≥20/400), and moderate visual impairment (VA <20/70 but ≥20/200) were 0.6% (95% CI: 0.5-0.8), 1.3% (95% CI: 1.0-1.6), and 12.6% (95% CI: 10.8-14.5), respectively. There was no significant difference among the four regions of Thailand. Cataract was the main cause of vision loss, accounting for 69.7% of blindness. Cataract surgical coverage in persons was 95.1% for a cut-off VA of 20/400. Refractive errors, diabetic retinopathy, glaucoma, and corneal opacities were responsible for 6.0%, 5.1%, 4.0%, and 2.0% of blindness, respectively. Thailand is on track to achieve the goal of VISION 2020. However, there is still much room for improvement. Policy refinements and innovative interventions are recommended to alleviate blindness and visual impairment, especially regarding the backlog of blinding cataract, management of non-communicable, chronic, age-related eye diseases such as glaucoma, age-related macular degeneration, and diabetic retinopathy, prevention of childhood blindness, and establishment of a robust eye health information system.
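
    For illustration of how a prevalence estimate with a 95% confidence interval like those above is formed, a crude normal-approximation sketch follows. The counts are hypothetical, and the survey itself used age/sex adjustment and a stratified cluster design, which this sketch omits.

    ```python
    import math

    def prevalence_ci(cases, n, z=1.96):
        """Crude prevalence with a normal-approximation 95% CI.

        Ignores survey weighting and cluster design effects, which a real
        RAAB analysis would account for.
        """
        p = cases / n
        se = math.sqrt(p * (1 - p) / n)
        return p, (p - z * se, p + z * se)

    # Hypothetical counts, not the study's data:
    p, (lo, hi) = prevalence_ci(cases=28, n=4500)
    print(f"{p:.2%} (95% CI: {lo:.2%}-{hi:.2%})")
    ```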

  9. Augmented reality three-dimensional object visualization and recognition with axially distributed sensing.

    PubMed

    Markman, Adam; Shen, Xin; Hua, Hong; Javidi, Bahram

    2016-01-15

    An augmented reality (AR) smartglass display combines real-world scenes with digital information enabling the rapid growth of AR-based applications. We present an augmented reality-based approach for three-dimensional (3D) optical visualization and object recognition using axially distributed sensing (ADS). For object recognition, the 3D scene is reconstructed, and feature extraction is performed by calculating the histogram of oriented gradients (HOG) of a sliding window. A support vector machine (SVM) is then used for classification. Once an object has been identified, the 3D reconstructed scene with the detected object is optically displayed in the smartglasses allowing the user to see the object, remove partial occlusions of the object, and provide critical information about the object such as 3D coordinates, which are not possible with conventional AR devices. To the best of our knowledge, this is the first report on combining axially distributed sensing with 3D object visualization and recognition for applications to augmented reality. The proposed approach can have benefits for many applications, including medical, military, transportation, and manufacturing.
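
    The recognition pipeline described (HOG features of a sliding window, classified by an SVM) can be sketched in a toy form. This uses a drastically simplified single-histogram gradient descriptor in place of a full cell/block HOG, and the helper names and toy data are illustrative, not the authors' implementation.

    ```python
    import numpy as np
    from sklearn.svm import SVC

    def hog_lite(img, bins=9):
        """Very simplified HOG-style descriptor: one orientation histogram
        over the whole window (no cells/blocks), weighted by gradient magnitude."""
        gy, gx = np.gradient(img.astype(float))
        mag = np.hypot(gx, gy)
        ang = np.arctan2(gy, gx) % np.pi  # unsigned orientation in [0, pi)
        hist, _ = np.histogram(ang, bins=bins, range=(0, np.pi), weights=mag)
        return hist / (np.linalg.norm(hist) + 1e-9)

    def vertical(n=8):
        img = np.zeros((n, n))
        img[:, n // 2:] = 1  # vertical edge
        return img

    def horizontal(n=8):
        img = np.zeros((n, n))
        img[n // 2:, :] = 1  # horizontal edge
        return img

    # Toy training set: vertical-edge windows (class 1) vs horizontal (class 0).
    X = [hog_lite(vertical()) for _ in range(5)] + [hog_lite(horizontal()) for _ in range(5)]
    y = [1] * 5 + [0] * 5
    clf = SVC(kernel="linear").fit(X, y)
    print(clf.predict([hog_lite(vertical())]))  # [1]
    ```

    In the actual system such a classifier would run over sliding windows of the 3D-reconstructed scene before the detection is overlaid in the smartglass display.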

  10. Endoscopic high-resolution auto fluorescence imaging and optical coherence tomography of airways in vivo (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Pahlevaninezhad, Hamid; Lee, Anthony; Hohert, Geoffrey; Schwartz, Carley; Shaipanich, Tawimas; Ritchie, Alexander J.; Zhang, Wei; MacAulay, Calum E.; Lam, Stephen; Lane, Pierre M.

    2016-03-01

    In this work, we present multimodal imaging of peripheral airways in vivo using an endoscopic imaging system capable of co-registered optical coherence tomography and autofluorescence imaging (OCT-AFI). This system employs a 0.9 mm diameter double-clad fiber optic-based catheter for endoscopic imaging of small peripheral airways. Optical coherence tomography (OCT) can visualize detailed airway morphology in the lung periphery and autofluorescence imaging (AFI) can visualize fluorescent tissue components such as collagen and elastin, improving the detection of airway lesions. Results from in vivo imaging of 40 patients indicate that OCT and AFI offer complementary information that may increase the ability to identify pulmonary nodules in the lung periphery and improve the safety of biopsy collection by identifying large blood vessels. AFI can rapidly visualize in vivo vascular networks using fast scanning parameters resulting in vascular-sensitive imaging with less breathing/cardiac motion artifacts compared to Doppler OCT imaging. By providing complementary information about structure and function of tissue, OCT-AFI may improve site selection during biopsy collection in the lung periphery.

  11. Using 3D visualization and seismic attributes to improve structural and stratigraphic resolution of reservoirs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kerr, J.; Jones, G.L.

    1996-01-01

    Recent advances in hardware and software have given the interpreter and engineer new ways to view 3D seismic data and well bore information. Recent papers have also highlighted the use of various statistics and seismic attributes. By combining new 3D rendering technologies with recent trends in seismic analysis, the interpreter can improve the structural and stratigraphic resolution of hydrocarbon reservoirs. This paper gives several examples using 3D visualization to better define both the structural and stratigraphic aspects of several different structural types from around the world. Statistics, 3D visualization techniques and rapid animation are used to show complex faulting and detailed channel systems. These systems would be difficult to map using either 2D or 3D data with conventional interpretation techniques.

  12. Using 3D visualization and seismic attributes to improve structural and stratigraphic resolution of reservoirs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kerr, J.; Jones, G.L.

    1996-12-31

    Recent advances in hardware and software have given the interpreter and engineer new ways to view 3D seismic data and well bore information. Recent papers have also highlighted the use of various statistics and seismic attributes. By combining new 3D rendering technologies with recent trends in seismic analysis, the interpreter can improve the structural and stratigraphic resolution of hydrocarbon reservoirs. This paper gives several examples using 3D visualization to better define both the structural and stratigraphic aspects of several different structural types from around the world. Statistics, 3D visualization techniques and rapid animation are used to show complex faulting and detailed channel systems. These systems would be difficult to map using either 2D or 3D data with conventional interpretation techniques.

  13. Visualizing unstructured patient data for assessing diagnostic and therapeutic history.

    PubMed

    Deng, Yihan; Denecke, Kerstin

    2014-01-01

    Having access to relevant patient data is crucial for clinical decision making. The data is often documented in unstructured texts and collected in the electronic health record. In this paper, we evaluate an approach to visualize information extracted from clinical documents by means of tag clouds. Tag clouds are generated using a bag-of-words approach and by exploiting part-of-speech tags. For a real-world data set comprising radiological reports, pathological reports and surgical operation reports, tag clouds are generated and a questionnaire-based study is conducted as evaluation. Feedback from the physicians shows that the tag cloud visualization is an effective and rapid approach to representing relevant parts of unstructured patient data. To handle the different medical narratives, we have summarized several possible improvements according to the user feedback and evaluation results.
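
    A minimal sketch of the bag-of-words tag-cloud step follows, assuming simple frequency weighting mapped to a small discrete size scale. The stop-word list, scale, and sample report text are hypothetical, and the paper's additional use of part-of-speech tags is omitted here.

    ```python
    import re
    from collections import Counter

    def tag_cloud_weights(text, stopwords=None, max_tags=10):
        """Bag-of-words tag weights for a simple tag-cloud rendering:
        token frequency normalised to a font-size-like scale of 1..5."""
        stopwords = stopwords or {"the", "of", "and", "a", "in", "with", "was", "is", "no"}
        tokens = [t for t in re.findall(r"[a-z]+", text.lower()) if t not in stopwords]
        counts = Counter(tokens).most_common(max_tags)
        top = counts[0][1]  # frequency of the most common token
        return {word: 1 + round(4 * c / top) for word, c in counts}

    # Hypothetical radiology-report snippet:
    report = ("CT of the thorax: nodule in the right upper lobe. "
              "The nodule is unchanged. No new nodule.")
    print(tag_cloud_weights(report))
    ```

    A renderer would then draw each token at a font size proportional to its weight.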

  14. Translating novel findings of perceptual-motor codes into the neuro-rehabilitation of movement disorders.

    PubMed

    Pazzaglia, Mariella; Galli, Giulia

    2015-01-01

    The bidirectional flow of perceptual and motor information has recently proven useful as a rehabilitative tool for re-building motor memories. We analyzed how the visual-motor approach has been successfully applied in neurorehabilitation, leading to surprisingly rapid and effective improvements in action execution. We proposed that the contribution of multiple sensory channels during treatment enables individuals to predict and optimize motor behavior, having a greater effect than visual input alone. We explored how state-of-the-art neuroscience techniques show direct evidence that employing the visual-motor approach leads to increased motor cortex excitability and synaptic and cortical map plasticity. This super-additive response to multimodal stimulation may maximize neural plasticity, potentiating the effect of conventional treatment, and will be a valuable approach as innovative methodologies advance.

  15. Visualization-by-Sketching: An Artist's Interface for Creating Multivariate Time-Varying Data Visualizations.

    PubMed

    Schroeder, David; Keefe, Daniel F

    2016-01-01

    We present Visualization-by-Sketching, a direct-manipulation user interface for designing new data visualizations. The goals are twofold: First, make the process of creating real, animated, data-driven visualizations of complex information more accessible to artists, graphic designers, and other visual experts with traditional, non-technical training. Second, support and enhance the role of human creativity in visualization design, enabling visual experimentation and workflows similar to what is possible with traditional artistic media. The approach is to conceive of visualization design as a combination of processes that are already closely linked with visual creativity: sketching, digital painting, image editing, and reacting to exemplars. Rather than studying and tweaking low-level algorithms and their parameters, designers create new visualizations by painting directly on top of a digital data canvas, sketching data glyphs, and arranging and blending together multiple layers of animated 2D graphics. This requires new algorithms and techniques to interpret painterly user input relative to data "under" the canvas, balance artistic freedom with the need to produce accurate data visualizations, and interactively explore large (e.g., terabyte-sized) multivariate datasets. Results demonstrate that a variety of multivariate data visualization techniques can be rapidly recreated using the interface. More importantly, results and feedback from artists support the potential for interfaces in this style to attract new, creative users to the challenging task of designing more effective data visualizations and to help these users stay "in the creative zone" as they work.

  16. Rapid steroid influences on visually guided sexual behavior in male goldfish

    PubMed Central

    Lord, Louis-David; Bond, Julia; Thompson, Richmond R.

    2013-01-01

    The ability of steroid hormones to rapidly influence cell physiology through nongenomic mechanisms raises the possibility that these molecules may play a role in the dynamic regulation of social behavior, particularly in species in which social stimuli can rapidly influence circulating steroid levels. We therefore tested if testosterone (T), which increases in male goldfish in response to sexual stimuli, can rapidly influence approach responses towards females. Injections of T stimulated approach responses towards the visual cues of females 30–45 min after the injection but did not stimulate approach responses towards stimulus males or affect general activity, indicating that the effect is stimulus-specific and not a secondary consequence of increased arousal. Estradiol produced the same effect 30–45 min and even 10–25 min after administration, and treatment with the aromatase inhibitor fadrozole blocked exogenous T’s behavioral effect, indicating that T’s rapid stimulation of visual approach responses depends on aromatization. We suggest that T surges induced by sexual stimuli, including preovulatory pheromones, rapidly prime males to mate by increasing sensitivity within visual pathways that guide approach responses towards females and/or by increasing the motivation to approach potential mates through actions within traditional limbic circuits. PMID:19751737

  17. Rapid steroid influences on visually guided sexual behavior in male goldfish.

    PubMed

    Lord, Louis-David; Bond, Julia; Thompson, Richmond R

    2009-11-01

    The ability of steroid hormones to rapidly influence cell physiology through nongenomic mechanisms raises the possibility that these molecules may play a role in the dynamic regulation of social behavior, particularly in species in which social stimuli can rapidly influence circulating steroid levels. We therefore tested if testosterone (T), which increases in male goldfish in response to sexual stimuli, can rapidly influence approach responses towards females. Injections of T stimulated approach responses towards the visual cues of females 30-45 min after the injection but did not stimulate approach responses towards stimulus males or affect general activity, indicating that the effect is stimulus-specific and not a secondary consequence of increased arousal. Estradiol produced the same effect 30-45 min and even 10-25 min after administration, and treatment with the aromatase inhibitor fadrozole blocked exogenous T's behavioral effect, indicating that T's rapid stimulation of visual approach responses depends on aromatization. We suggest that T surges induced by sexual stimuli, including preovulatory pheromones, rapidly prime males to mate by increasing sensitivity within visual pathways that guide approach responses towards females and/or by increasing the motivation to approach potential mates through actions within traditional limbic circuits.

  18. Greater magnocellular saccadic suppression in high versus low autistic tendency suggests a causal path to local perceptual style

    PubMed Central

    Crewther, David P.; Crewther, Daniel; Bevan, Stephanie; Goodale, Melvyn A.; Crewther, Sheila G.

    2015-01-01

    Saccadic suppression—the reduction of visual sensitivity during rapid eye movements—has previously been proposed to reflect a specific suppression of the magnocellular visual system, with the initial neural site of that suppression at or prior to afferent visual information reaching striate cortex. Dysfunction in the magnocellular visual pathway has also been associated with perceptual and physiological anomalies in individuals with autism spectrum disorder or high autistic tendency, leading us to question whether saccadic suppression is altered in the broader autism phenotype. Here we show that individuals with high autistic tendency show greater saccadic suppression of low versus high spatial frequency gratings while those with low autistic tendency do not. In addition, those with high but not low autism spectrum quotient (AQ) demonstrated pre-cortical (35–45 ms) evoked potential differences (saccade versus fixation) to a large, low contrast, pseudo-randomly flashing bar. Both AQ groups showed similar differential visual evoked potential effects in later epochs (80–160 ms) at high contrast. Thus, the magnocellular theory of saccadic suppression appears untenable as a general description for the typically developing population. Our results also suggest that the bias towards local perceptual style reported in autism may be due to selective suppression of low spatial frequency information accompanying every saccadic eye movement. PMID:27019719

  19. MetaSEEk: a content-based metasearch engine for images

    NASA Astrophysics Data System (ADS)

    Beigi, Mandis; Benitez, Ana B.; Chang, Shih-Fu

    1997-12-01

    Search engines are the most powerful resources for finding information on the rapidly expanding World Wide Web (WWW). Finding the desired search engines and learning how to use them, however, can be very time consuming. The integration of such search tools enables the users to access information across the world in a transparent and efficient manner. These systems are called meta-search engines. The recent emergence of visual information retrieval (VIR) search engines on the web is leading to the same efficiency problem. This paper describes and evaluates MetaSEEk, a content-based meta-search engine used for finding images on the Web based on their visual information. MetaSEEk is designed to intelligently select and interface with multiple on-line image search engines by ranking their performance for different classes of user queries. User feedback is also integrated in the ranking refinement. We compare MetaSEEk with a baseline version of the meta-search engine, which does not use the past performance of the different search engines in recommending target search engines for future queries.
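
    The performance-ranking idea (select target engines by their past performance for a query class, refined by user feedback) can be sketched as below. The class and method names are hypothetical illustrations, not MetaSEEk's actual interface.

    ```python
    from collections import defaultdict

    class MetaSearchRanker:
        """Toy performance-based engine selection: keep a running average
        relevance score per (query_class, engine) pair, and rank candidate
        engines for a new query of that class by the stored score."""

        def __init__(self):
            self.scores = defaultdict(lambda: [0.0, 0])  # (cls, engine) -> [sum, count]

        def feedback(self, query_class, engine, relevance):
            """Record user feedback (relevance in [0, 1]) for one result set."""
            entry = self.scores[(query_class, engine)]
            entry[0] += relevance
            entry[1] += 1

        def rank(self, query_class, engines):
            """Order engines by mean past relevance for this query class."""
            def avg(engine):
                total, count = self.scores[(query_class, engine)]
                return total / count if count else 0.0
            return sorted(engines, key=avg, reverse=True)

    ranker = MetaSearchRanker()
    ranker.feedback("texture", "engineA", 0.9)
    ranker.feedback("texture", "engineB", 0.4)
    print(ranker.rank("texture", ["engineB", "engineA"]))  # ['engineA', 'engineB']
    ```

    A baseline meta-search engine, as in the comparison above, would simply skip the `feedback` bookkeeping and query all engines uniformly.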

  20. Comparison of visual and automated Deki Reader interpretation of malaria rapid diagnostic tests in rural Tanzanian military health facilities.

    PubMed

    Kalinga, Akili K; Mwanziva, Charles; Chiduo, Sarah; Mswanya, Christopher; Ishengoma, Deus I; Francis, Filbert; Temu, Lucky; Mahikwano, Lucas; Mgata, Saidi; Amoo, George; Anova, Lalaine; Wurrapa, Eyako; Zwingerman, Nora; Ferro, Santiago; Bhat, Geeta; Fine, Ian; Vesely, Brian; Waters, Norman; Kreishman-Deitrick, Mara; Hickman, Mark; Paris, Robert; Kamau, Edwin; Ohrt, Colin; Kavishe, Reginald A

    2018-05-29

    Although microscopy is the gold-standard diagnostic tool for malaria, it is infrequently used in poor resource settings because laboratory facilities and skilled readers are often unavailable. Malaria rapid diagnostic tests (RDT) are currently used instead of, or as an adjunct to, microscopy. However, at very low parasitaemia (usually < 100 asexual parasites/µl), the test line on malaria rapid diagnostic tests can be faint and consequently hard to visualize, and this may affect the interpretation of the test results. Fio Corporation (Canada) developed an automated RDT reader named Deki Reader™ for automatic analysis and interpretation of rapid diagnostic tests. This study aimed to compare visual and automated Deki Reader interpretation of malaria rapid diagnostic tests against microscopy. Unlike previous studies, in which expert laboratory technicians interpreted the test results visually and operated the device, this study employed lower-cadre health care workers who had not attended any formal professional training in laboratory sciences. Finger-prick blood from 1293 outpatients with fever was tested for malaria using RDT and Giemsa-stained microscopy of thick and thin blood smears. Blood samples for RDTs were processed according to the manufacturer's instructions and read automatically by the Deki Reader. Malaria diagnoses were compared between visual and automated device readings of the RDT and microscopy. The sensitivity of malaria rapid diagnostic test results interpreted by the Deki Reader was 94.1% and that of visual interpretation was 93.9%. The corresponding specificities were 71.8 and 72.0%, respectively. The positive predictive value of malaria RDT results by the Deki Reader and visual interpretation was 75.8 and 75.4%, respectively, while the negative predictive values were 92.8 and 92.4%, respectively. 
The accuracy of RDT as interpreted by the Deki Reader and visually was 82.6 and 82.1%, respectively. There was no significant difference in the performance of RDTs interpreted by the automated Deki Reader or visually by unskilled health workers. However, despite the similar performance parameters, the device has proven useful because it provides stepwise guidance on RDT processing, data transfer and reporting.
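    The performance figures above all derive from a 2x2 confusion matrix of RDT readings against microscopy as the reference standard. A minimal sketch of that arithmetic; the function name and the example counts are illustrative, not taken from the study:

```python
# Sketch of the 2x2 confusion-matrix arithmetic behind sensitivity,
# specificity, PPV, NPV and accuracy. Counts below are illustrative.

def diagnostic_metrics(tp, fp, fn, tn):
    """Return sensitivity, specificity, PPV, NPV and accuracy (percent)."""
    sensitivity = 100.0 * tp / (tp + fn)          # positives among diseased
    specificity = 100.0 * tn / (tn + fp)          # negatives among healthy
    ppv = 100.0 * tp / (tp + fp)                  # positive predictive value
    npv = 100.0 * tn / (tn + fn)                  # negative predictive value
    accuracy = 100.0 * (tp + tn) / (tp + fp + fn + tn)
    return sensitivity, specificity, ppv, npv, accuracy

# Illustrative example: a test that misses no cases but has false positives
sens, spec, ppv, npv, acc = diagnostic_metrics(tp=50, fp=10, fn=0, tn=40)
# sens = 100.0, spec = 80.0, acc = 90.0
```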

  1. The use of higher-order statistics in rapid object categorization in natural scenes.

    PubMed

    Banno, Hayaki; Saiki, Jun

    2015-02-04

    We can rapidly and efficiently recognize many types of objects embedded in complex scenes. What information supports this object recognition is a fundamental question for understanding our visual processing. We investigated the eccentricity-dependent roles of shape and statistical information in ultrarapid object categorization, using the higher-order statistics proposed by Portilla and Simoncelli (2000). Textures synthesized by their algorithm have the same higher-order statistics as the originals, while the global shapes are destroyed. We used the synthesized textures to manipulate the availability of shape information separately from the statistics. We hypothesized that shape makes a greater contribution to central vision than to peripheral vision and that statistics show the opposite pattern. The results did not show contributions clearly biased by eccentricity. Statistical information made a robust contribution not only in peripheral but also in central vision. For shape, the results supported a contribution in both central and peripheral vision. Further experiments revealed some interesting properties of the statistics: they are available for only a limited time, they signal the presence or absence of animals even when shape is unavailable, and they predict how easily humans detect animals in original images. Our data suggest that, under the time constraints of categorical processing, higher-order statistics underlie our rapid-categorization performance, irrespective of eccentricity. © 2015 ARVO.

  2. [Pattern recognition of decorative papers with different visual characteristics using visible spectroscopy coupled with principal component analysis (PCA)].

    PubMed

    Zhang, Mao-mao; Yang, Zhong; Lu, Bin; Liu, Ya-na; Sun, Xue-dong

    2015-02-01

    As one of the most important decorative materials for modern household products, decorative papers impregnated with melamine not only have good decorative performance but can also greatly improve the surface properties of materials. However, the appearance quality of decorative papers (such as color-difference evaluation and control), an important index of surface quality, has been a puzzle for manufacturers and consumers. At present, the human eye is used in factories to judge whether a color difference exists, which is not only inefficient but also prone to subjective error. Thus, it is of great significance to find an effective method for fast recognition and classification of decorative papers. In the present study, visible spectroscopy coupled with principal component analysis (PCA) was used for pattern recognition of decorative papers with different visual characteristics, to investigate the feasibility of visible spectroscopy for rapidly recognizing the types of decorative papers. The results showed that the correlation between visible spectra and visual characteristics (L*, a* and b*) was significant, with correlation coefficients up to 0.85 and in some cases even above 0.99, suggesting that the visible spectra reflect information about the visual characteristics of the surface of decorative papers. When visible spectroscopy coupled with PCA was used to recognize the types of decorative papers, the accuracy reached 94%-100%, suggesting that visible spectroscopy is a promising method for the rapid, objective and accurate recognition of decorative papers with different visual characteristics.
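    A minimal sketch of a PCA-plus-classification pipeline of the kind described above, using synthetic reflectance spectra since the study's data are not reproduced here; the nearest-centroid classifier is a stand-in assumption, not necessarily the authors' recognition rule:

```python
import numpy as np

# PCA via SVD, then nearest-centroid classification in score space.
# Each "decorative paper" sample is a synthetic visible-reflectance spectrum.

def pca_fit(X, n_components=2):
    """Return (mean, components) from an SVD of the centered data."""
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:n_components]

def pca_transform(X, mean, components):
    return (X - mean) @ components.T

# Two synthetic "paper types" with different spectral shapes plus noise
rng = np.random.default_rng(0)
wavelengths = np.linspace(400, 700, 50)
class_a = np.sin(wavelengths / 60) + 0.05 * rng.standard_normal((20, 50))
class_b = np.cos(wavelengths / 60) + 0.05 * rng.standard_normal((20, 50))
X = np.vstack([class_a, class_b])
labels = np.array([0] * 20 + [1] * 20)

mean, comps = pca_fit(X, n_components=2)
scores = pca_transform(X, mean, comps)

# Nearest-centroid classification in the 2-D PCA score space
centroids = np.array([scores[labels == c].mean(axis=0) for c in (0, 1)])
pred = np.argmin(np.linalg.norm(scores[:, None] - centroids, axis=2), axis=1)
accuracy = (pred == labels).mean()
```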

  3. RAVE: Rapid Visualization Environment

    NASA Technical Reports Server (NTRS)

    Klumpar, D. M.; Anderson, Kevin; Simoudis, Avangelos

    1994-01-01

    Visualization is used in the process of analyzing large, multidimensional data sets. However, selecting and creating visualizations that suit the characteristics of a particular data set and satisfy the analyst's goals is difficult. The process consists of three tasks that are performed iteratively: generate, test, and refine. Performing these tasks requires several types of domain knowledge that data analysts often lack. Existing visualization systems and frameworks do not adequately support these tasks. In this paper we present the RApid Visualization Environment (RAVE), a knowledge-based system that interfaces with commercial visualization frameworks and assists a data analyst in quickly and easily generating, testing, and refining visualizations. RAVE was used for the visualization of in situ measurement data captured by spacecraft.

  4. Age-Related Changes in Temporal Allocation of Visual Attention: Evidence from the Rapid Serial Visual Presentation (RSVP) Paradigm

    ERIC Educational Resources Information Center

    Berger, Carole; Valdois, Sylviane; Lallier, Marie; Donnadieu, Sophie

    2015-01-01

    The present study explored the temporal allocation of attention in groups of 8-year-old children, 10-year-old children, and adults performing a rapid serial visual presentation task. In a dual-condition task, participants had to detect a briefly presented target (T2) after identifying an initial target (T1) embedded in a random series of…

  5. Control of moth flight posture is mediated by wing mechanosensory feedback.

    PubMed

    Dickerson, Bradley H; Aldworth, Zane N; Daniel, Thomas L

    2014-07-01

    Flying insects rapidly stabilize after perturbations using both visual and mechanosensory inputs for active control. Insect halteres are mechanosensory organs that encode inertial forces to aid rapid course correction during flight but serve no aerodynamic role and are specific to two orders of insects (Diptera and Strepsiptera). Aside from the literature on halteres and recent work on the antennae of the hawkmoth Manduca sexta, it is unclear how other flying insects use mechanosensory information to control body dynamics. The mechanosensory structures found on the halteres, campaniform sensilla, are also present on wings, suggesting that the wings can encode information about flight dynamics. We show that the neurons innervating these sensilla on the forewings of M. sexta exhibit spike-timing precision comparable to that seen in previous reports of campaniform sensilla, including haltere neurons. In addition, by attaching magnets to the wings of moths and subjecting these animals to a simulated pitch stimulus via a rotating magnetic field during tethered flight, we elicited the same vertical abdominal flexion reflex these animals exhibit in response to visual or inertial pitch stimuli. Our results indicate that, in addition to their role as actuators during locomotion, insect wings serve as sensors that initiate reflexes that control body dynamics. © 2014. Published by The Company of Biologists Ltd.

  6. A comparison of visuomotor cue integration strategies for object placement and prehension.

    PubMed

    Greenwald, Hal S; Knill, David C

    2009-01-01

    Visual cue integration strategies are known to depend on cue reliability and how rapidly the visual system processes incoming information. We investigated whether these strategies also depend on differences in the information demands for different natural tasks. Using two common goal-oriented tasks, prehension and object placement, we determined whether monocular and binocular information influence estimates of three-dimensional (3D) orientation differently depending on task demands. Both tasks rely on accurate 3D orientation estimates, but 3D position is potentially more important for grasping. Subjects placed an object on or picked up a disc in a virtual environment. On some trials, the monocular cues (aspect ratio and texture compression) and binocular cues (e.g., binocular disparity) suggested slightly different 3D orientations for the disc; these conflicts either were present upon initial stimulus presentation or were introduced after movement initiation, which allowed us to quantify how information from the cues accumulated over time. We analyzed the time-varying orientations of subjects' fingers in the grasping task and those of the object in the object placement task to quantify how different visual cues influenced motor control. In the first experiment, different subjects performed each task, and those performing the grasping task relied on binocular information more when orienting their hands than those performing the object placement task. When subjects in the second experiment performed both tasks in interleaved sessions, binocular cues were still more influential during grasping than object placement, and the different cue integration strategies observed for each task in isolation were maintained. In both experiments, the temporal analyses showed that subjects processed binocular information faster than monocular information, but task demands did not affect the time course of cue processing. 
How one uses visual cues for motor control depends on the task being performed, although how quickly the information is processed appears to be task invariant.
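    Results like these are commonly modeled with reliability-weighted (maximum-likelihood) cue combination, in which each cue's weight is its inverse variance. This is a standard textbook rule, sketched here as an illustration, not necessarily the authors' exact analysis; the cue names and numbers are assumptions:

```python
# Reliability-weighted (inverse-variance) combination of two cue estimates,
# the maximum-likelihood rule for independent Gaussian cues.

def combine_cues(est_a, var_a, est_b, var_b):
    """Return the combined estimate and its variance."""
    w_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_b)
    combined = w_a * est_a + (1.0 - w_a) * est_b
    combined_var = 1.0 / (1.0 / var_a + 1.0 / var_b)
    return combined, combined_var

# A more reliable "binocular" cue (variance 4) pulls the estimate toward
# itself relative to a noisier "monocular" cue (variance 16):
slant, slant_var = combine_cues(est_a=30.0, var_a=4.0, est_b=36.0, var_b=16.0)
# slant ≈ 31.2 (weight 0.8 on the binocular cue), slant_var ≈ 3.2
```

Note that the combined variance is always smaller than either cue's variance alone, which is why integrating cues pays off.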

  7. The NAS Computational Aerosciences Archive

    NASA Technical Reports Server (NTRS)

    Miceli, Kristina D.; Globus, Al; Lasinski, T. A. (Technical Monitor)

    1995-01-01

    In order to further the state-of-the-art in computational aerosciences (CAS) technology, researchers must be able to gather and understand existing work in the field. One aspect of this information gathering is studying published work available in scientific journals and conference proceedings. However, current scientific publications are very limited in the type and amount of information that they can disseminate. Information is typically restricted to text, a few images, and a bibliography list. Additional information that might be useful to the researcher, such as additional visual results, referenced papers, and datasets, are not available. New forms of electronic publication, such as the World Wide Web (WWW), limit publication size only by available disk space and data transmission bandwidth, both of which are improving rapidly. The Numerical Aerodynamic Simulation (NAS) Systems Division at NASA Ames Research Center is in the process of creating an archive of CAS information on the WWW. This archive will be based on the large amount of information produced by researchers associated with the NAS facility. The archive will contain technical summaries and reports of research performed on NAS supercomputers, visual results (images, animations, visualization system scripts), datasets, and any other supporting meta-information. This information will be available via the WWW through the NAS homepage, located at http://www.nas.nasa.gov/, fully indexed for searching. The main components of the archive are technical summaries and reports, visual results, and datasets. Technical summaries are gathered every year by researchers who have been allotted resources on NAS supercomputers. These summaries, together with supporting visual results and references, are browsable by interested researchers. Referenced papers made available by researchers can be accessed through hypertext links. 
Technical reports are in-depth accounts of tools and applications research projects performed by NAS staff members and collaborators. Visual results, which may be available in the form of images, animations, and/or visualization scripts, are generated by researchers with respect to a certain research project, depicting dataset features that were determined important by the investigating researcher. For example, script files for visualization systems (e.g. FAST, PLOT3D, AVS) are provided to create visualizations on the user's local workstation to elucidate the key points of the numerical study. Users can then interact with the data starting where the investigator left off. Datasets are intended to give researchers an opportunity to understand previous work, 'mine' solutions for new information (for example, have you ever read a paper thinking "I wonder what the helicity density looks like?"), compare new techniques with older results, collaborate with remote colleagues, and perform validation. Supporting meta-information associated with the research projects, such as the software used in the simulation (e.g. grid generators, flow solvers, visualization), provides additional context. In addition to serving the CAS research community, the information archive will also be helpful to students, visualization system developers and researchers, and management. Students (of any age) can use the data to study fluid dynamics, compare results from different flow solvers, learn about meshing techniques, etc., leading to better informed individuals. For these users it is particularly important that visualization be integrated into dataset archives. Visualization researchers can use dataset archives to test algorithms and techniques, leading to better visualization systems. Management can use the data to figure out what is really going on behind the viewgraphs. 
All users will benefit from fast, easy, and convenient access to CFD datasets. The CAS information archive hopes to serve as a useful resource to those interested in computational sciences. At present, only information that may be distributed internationally is made available via the archive. Studies are underway to determine security requirements and solutions to make additional information available. By providing access to the archive via the WWW, the process of information gathering can be more productive and fruitful due to ease of access and ability to manage many different types of information. As the archive grows, additional resources from outside NAS will be added, providing a dynamic source of research results.

  8. An interactive web-based system using cloud for large-scale visual analytics

    NASA Astrophysics Data System (ADS)

    Kaseb, Ahmed S.; Berry, Everett; Rozolis, Erik; McNulty, Kyle; Bontrager, Seth; Koh, Youngsol; Lu, Yung-Hsiang; Delp, Edward J.

    2015-03-01

    Network cameras have been growing rapidly in recent years. Thousands of public network cameras provide a tremendous amount of visual information about the environment. There is a need to analyze this valuable information for a better understanding of the world around us. This paper presents an interactive web-based system that enables users to execute image analysis and computer vision techniques on a large scale to analyze the data from more than 65,000 worldwide cameras. This paper focuses on how to use both the system's website and Application Programming Interface (API). Given a computer program that analyzes a single frame, the user needs to make only slight changes to the existing program and choose the cameras to analyze. The system handles the heterogeneity of the geographically distributed cameras, e.g., different brands and resolutions. The system allocates and manages Amazon EC2 and Windows Azure cloud resources to meet the analysis requirements.

  9. Attentional blink in young people with high-functioning autism and Asperger's disorder.

    PubMed

    Rinehart, Nicole; Tonge, Bruce; Brereton, Avril; Bradshaw, John

    2010-01-01

    The aim of the study was to examine the temporal characteristics of information processing in individuals with high-functioning autism and Asperger's disorder using a rapid serial visual presentation paradigm. The results clearly showed that such people demonstrate an attentional blink of similar magnitude to comparison groups. This supports the proposition that the social processing difficulties experienced by these individuals are not underpinned by a basic temporal-cognitive processing deficit, which is consistent with Minshew's complex information processing theory. This is the second study to show that automatic inhibitory processes are intact in both autism and Asperger's disorder, which appears to distinguish these disorders from some other frontostriatal disorders. The finding that individuals with autism were generally poorer than the comparison group at detecting black Xs, while being as good in responding to white letters, was accounted for in the context of a potential dual-task processing difficulty or visual search superiority.

  10. Variability in visual working memory ability limits the efficiency of perceptual decision making.

    PubMed

    Ester, Edward F; Ho, Tiffany C; Brown, Scott D; Serences, John T

    2014-04-02

    The ability to make rapid and accurate decisions based on limited sensory information is a critical component of visual cognition. Available evidence suggests that simple perceptual discriminations are based on the accumulation and integration of sensory evidence over time. However, the memory system(s) mediating this accumulation are unclear. One candidate system is working memory (WM), which enables the temporary maintenance of information in a readily accessible state. Here, we show that individual variability in WM capacity is strongly correlated with the speed of evidence accumulation in speeded two-alternative forced choice tasks. This relationship generalized across different decision-making tasks, and could not be easily explained by variability in general arousal or vigilance. Moreover, we show that performing a difficult discrimination task while maintaining a concurrent memory load has a deleterious effect on the latter, suggesting that WM storage and decision making are directly linked.
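    The evidence-accumulation framework invoked here can be sketched as a simple drift-diffusion simulation in which noisy evidence accumulates to a decision bound; a higher drift rate (the parameter the abstract links to WM capacity) yields faster decisions. All parameter values below are illustrative, not the paper's fitted values:

```python
import numpy as np

# Drift-diffusion sketch: evidence accumulates with drift plus Gaussian
# noise until it crosses a decision bound, determining RT and choice.

def simulate_trial(drift, bound=1.0, noise=0.1, dt=0.001, rng=None):
    """Return (reaction_time_seconds, choice) for one 2AFC trial."""
    rng = rng if rng is not None else np.random.default_rng()
    evidence, t = 0.0, 0.0
    while abs(evidence) < bound:
        evidence += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return t, int(evidence > 0)

rng = np.random.default_rng(1)
# Higher drift (standing in for higher WM capacity) -> faster decisions
fast = [simulate_trial(drift=2.0, rng=rng)[0] for _ in range(100)]
slow = [simulate_trial(drift=0.5, rng=rng)[0] for _ in range(100)]
mean_rt_fast = float(np.mean(fast))   # roughly bound / drift = 0.5 s
mean_rt_slow = float(np.mean(slow))   # roughly 2.0 s
```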

  11. Temporal Processing in the Olfactory System: Can We See a Smell?

    PubMed Central

    Gire, David H.; Restrepo, Diego; Sejnowski, Terrence J.; Greer, Charles; De Carlos, Juan A.; Lopez-Mascaraque, Laura

    2013-01-01

    Sensory processing circuits in the visual and olfactory systems receive input from complex, rapidly changing environments. Although patterns of light and plumes of odor create different distributions of activity in the retina and olfactory bulb, both structures use what appear, on the surface, to be similar temporal coding strategies to convey information to higher brain areas. We compare temporal coding in the early stages of the olfactory and visual systems, highlighting recent progress in understanding the role of time in olfactory coding during active sensing by behaving animals. We also examine studies that address the divergent circuit mechanisms that generate temporal codes in the two systems, and find that they provide physiological information directly related to functional questions raised by neuroanatomical studies of Ramon y Cajal over a century ago. Consideration of differences in neural activity in sensory systems contributes to generating new approaches to understand signal processing. PMID:23664611

  12. A novel brain-computer interface based on the rapid serial visual presentation paradigm.

    PubMed

    Acqualagna, Laura; Treder, Matthias Sebastian; Schreuder, Martijn; Blankertz, Benjamin

    2010-01-01

    Most present-day visual brain computer interfaces (BCIs) suffer from the fact that they rely on eye movements, are slow-paced, or feature a small vocabulary. As a potential remedy, we explored a novel BCI paradigm consisting of a central rapid serial visual presentation (RSVP) of the stimuli. It has a large vocabulary and realizes a BCI system based on covert non-spatial selective visual attention. In an offline study, eight participants were presented sequences of rapid bursts of symbols. Two different speeds and two different color conditions were investigated. Robust early visual and P300 components were elicited time-locked to the presentation of the target. Offline classification revealed a mean accuracy of up to 90% for selecting the correct symbol out of 30 possibilities. The results suggest that RSVP-BCI is a promising new paradigm, also for patients with oculomotor impairments.

  13. SmartAdP: Visual Analytics of Large-scale Taxi Trajectories for Selecting Billboard Locations.

    PubMed

    Liu, Dongyu; Weng, Di; Li, Yuhong; Bao, Jie; Zheng, Yu; Qu, Huamin; Wu, Yingcai

    2017-01-01

    The problem of formulating solutions immediately and comparing them rapidly for billboard placements has plagued advertising planners for a long time, owing to the lack of efficient tools for in-depth analyses to make informed decisions. In this study, we attempt to employ visual analytics that combines state-of-the-art mining and visualization techniques to tackle this problem using large-scale GPS trajectory data. In particular, we present SmartAdP, an interactive visual analytics system that addresses two major challenges: finding good solutions in a huge solution space, and comparing the solutions in a visual and intuitive manner. An interactive framework that integrates a novel visualization-driven data mining model enables advertising planners to effectively and efficiently formulate good candidate solutions. In addition, we propose a set of coupled visualizations: a solution view with metaphor-based glyphs to visualize the correlation between different solutions; a location view to display billboard locations in a compact manner; and a ranking view to present multi-typed rankings of the solutions. This system has been demonstrated using case studies with a real-world dataset and domain-expert interviews. Our approach can be adapted for other location selection problems such as selecting locations of retail stores or restaurants using trajectory data.

  14. Searching for memories, Sudoku, implicit check bits, and the iterative use of not-always-correct rapid neural computation.

    PubMed

    Hopfield, J J

    2008-05-01

    The algorithms that simple feedback neural circuits representing a brain area can rapidly carry out are often adequate to solve easy problems but for more difficult problems can return incorrect answers. A new excitatory-inhibitory circuit model of associative memory displays the common human problem of failing to rapidly find a memory when only a small clue is present. The memory model and a related computational network for solving Sudoku puzzles produce answers that contain implicit check bits in the representation of information across neurons, allowing a rapid evaluation of whether the putative answer is correct or incorrect through a computation related to visual pop-out. This fact may account for our strong psychological feeling of right or wrong when we retrieve a nominal memory from a minimal clue. This information allows more difficult computations or memory retrievals to be done in a serial fashion by using the fast but limited capabilities of a computational module multiple times. The mathematics of the excitatory-inhibitory circuits for associative memory and for Sudoku, both of which are understood in terms of energy or Lyapunov functions, is described in detail.
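    A classical Hopfield network is a highly simplified relative of the excitatory-inhibitory circuits described here, and it illustrates the energy (Lyapunov) function view of associative recall from a small clue. This is a toy sketch, not the authors' model; the network size and corruption level are arbitrary:

```python
import numpy as np

# Classical Hopfield associative memory: Hebbian weights, asynchronous
# +/-1 updates, and an energy function that never increases, so the
# network settles into a stored attractor from a partial clue.

def train(patterns):
    """Hebbian outer-product weights for +/-1 patterns, zero diagonal."""
    n = patterns.shape[1]
    w = (patterns.T @ patterns) / n
    np.fill_diagonal(w, 0.0)
    return w

def energy(w, s):
    return -0.5 * s @ w @ s

def recall(w, s, sweeps=10):
    """Asynchronous updates; each flip can only lower (or keep) the energy."""
    s = s.copy()
    for _ in range(sweeps):
        for i in range(len(s)):
            s[i] = 1 if w[i] @ s >= 0 else -1
    return s

rng = np.random.default_rng(0)
patterns = rng.choice(np.array([-1, 1]), size=(3, 64))  # three stored memories
w = train(patterns)

cue = patterns[0].copy()
cue[:10] *= -1                 # corrupt part of the memory (a small clue)
restored = recall(w, cue)      # energy(w, restored) <= energy(w, cue)
```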

  15. Global ensemble texture representations are critical to rapid scene perception.

    PubMed

    Brady, Timothy F; Shafer-Skelton, Anna; Alvarez, George A

    2017-06-01

    Traditionally, recognizing the objects within a scene has been treated as a prerequisite to recognizing the scene itself. However, research now suggests that the ability to rapidly recognize visual scenes could be supported by global properties of the scene itself rather than the objects within the scene. Here, we argue for a particular instantiation of this view: That scenes are recognized by treating them as a global texture and processing the pattern of orientations and spatial frequencies across different areas of the scene without recognizing any objects. To test this model, we asked whether there is a link between how proficient individuals are at rapid scene perception and how proficiently they represent simple spatial patterns of orientation information (global ensemble texture). We find a significant and selective correlation between these tasks, suggesting a link between scene perception and spatial ensemble tasks but not nonspatial summary statistics. In a second and third experiment, we additionally show that global ensemble texture information is not only associated with scene recognition, but that preserving only global ensemble texture information from scenes is sufficient to support rapid scene perception; however, preserving the same information is not sufficient for object recognition. Thus, global ensemble texture alone is sufficient to allow activation of scene representations but not object representations. Together, these results provide evidence for a view of scene recognition based on global ensemble texture rather than a view based purely on objects or on nonspatially localized global properties. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  16. Ground-plane influences on size estimation in early visual processing.

    PubMed

    Champion, Rebecca A; Warren, Paul A

    2010-07-21

    Ground-planes have an important influence on the perception of 3D space (Gibson, 1950) and it has been shown that the assumption that a ground-plane is present in the scene plays a role in the perception of object distance (Bruno & Cutting, 1988). Here, we investigate whether this influence is exerted at an early stage of processing, to affect the rapid estimation of 3D size. Participants performed a visual search task in which they searched for a target object that was larger or smaller than distracter objects. Objects were presented against a background that contained either a frontoparallel or slanted 3D surface, defined by texture gradient cues. We measured the effect on search performance of target location within the scene (near vs. far) and how this was influenced by scene orientation (which, e.g., might be consistent with a ground or ceiling plane, etc.). In addition, we investigated how scene orientation interacted with texture gradient information (indicating surface slant), to determine how these separate cues to scene layout were combined. We found that the difference in target detection performance between targets at the front and rear of the simulated scene was maximal when the scene was consistent with a ground-plane - consistent with the use of an elevation cue to object distance. In addition, we found a significant increase in the size of this effect when texture gradient information (indicating surface slant) was present, but no interaction between texture gradient and scene orientation information. We conclude that scene orientation plays an important role in the estimation of 3D size at an early stage of processing, and suggest that elevation information is linearly combined with texture gradient information for the rapid estimation of 3D size. Copyright 2010 Elsevier Ltd. All rights reserved.

  17. OpinionFlow: Visual Analysis of Opinion Diffusion on Social Media.

    PubMed

    Wu, Yingcai; Liu, Shixia; Yan, Kai; Liu, Mengchen; Wu, Fangzhao

    2014-12-01

    It is important for many different applications such as government and business intelligence to analyze and explore the diffusion of public opinions on social media. However, the rapid propagation and great diversity of public opinions on social media pose great challenges to effective analysis of opinion diffusion. In this paper, we introduce a visual analysis system called OpinionFlow to empower analysts to detect opinion propagation patterns and glean insights. Inspired by the information diffusion model and the theory of selective exposure, we develop an opinion diffusion model to approximate opinion propagation among Twitter users. Accordingly, we design an opinion flow visualization that combines a Sankey graph with a tailored density map in one view to visually convey diffusion of opinions among many users. A stacked tree is used to allow analysts to select topics of interest at different levels. The stacked tree is synchronized with the opinion flow visualization to help users examine and compare diffusion patterns across topics. Experiments and case studies on Twitter data demonstrate the effectiveness and usability of OpinionFlow.

  18. Domino: Extracting, Comparing, and Manipulating Subsets across Multiple Tabular Datasets

    PubMed Central

    Gratzl, Samuel; Gehlenborg, Nils; Lex, Alexander; Pfister, Hanspeter; Streit, Marc

    2016-01-01

    Answering questions about complex issues often requires analysts to take into account information contained in multiple interconnected datasets. A common strategy in analyzing and visualizing large and heterogeneous data is dividing it into meaningful subsets. Interesting subsets can then be selected and the associated data and the relationships between the subsets visualized. However, neither the extraction and manipulation nor the comparison of subsets is well supported by state-of-the-art techniques. In this paper we present Domino, a novel multiform visualization technique for effectively representing subsets and the relationships between them. By providing comprehensive tools to arrange, combine, and extract subsets, Domino allows users to create both common visualization techniques and advanced visualizations tailored to specific use cases. In addition to the novel technique, we present an implementation that enables analysts to manage the wide range of options that our approach offers. Innovative interactive features such as placeholders and live previews support rapid creation of complex analysis setups. We introduce the technique and the implementation using a simple example and demonstrate scalability and effectiveness in a use case from the field of cancer genomics. PMID:26356916

  19. The EpiCanvas infectious disease weather map: an interactive visual exploration of temporal and spatial correlations

    PubMed Central

    Livnat, Yarden; Galli, Nathan; Samore, Matthew H; Gundlapalli, Adi V

    2012-01-01

    Advances in surveillance science have supported public health agencies in tracking and responding to disease outbreaks. Increasingly, epidemiologists have been tasked with interpreting multiple streams of heterogeneous data arising from varied surveillance systems. As a result, public health personnel have experienced an overload of plots and charts, as information visualization techniques have not kept pace with the rapid expansion in data availability. This study sought to advance the science of public health surveillance data visualization by conceptualizing a visual paradigm that provides an ‘epidemiological canvas’ for detection, monitoring, exploration and discovery of regional infectious disease activity, and by developing a software prototype of an ‘infectious disease weather map’. Design objectives were elucidated and the conceptual model was developed using cognitive task analysis with public health epidemiologists. The software prototype was pilot tested using retrospective data from a large, regional pediatric hospital, and gastrointestinal and respiratory disease outbreaks were re-created as a proof of concept. PMID:22358039

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Greenberg, S.D.; Smith, S.; Swank, P.R.

    Visual cell profiles were used to analyze the distribution of atypical bronchial cells in sputum specimens from cigarette-smoking volunteers, cigarette-smoking asbestos workers and cigarette-smoking uranium miners. The preliminary results of these sputum visual cell profile studies have demonstrated distinctive distributions of bronchial cell atypias in progressive patterns of squamous metaplasia, mild, moderate and severe atypias and carcinoma, similar to those the authors have previously reported using cell image analysis techniques to determine an atypia status index (ASI). The information gained from this study will be helpful in further validating this ASI and subsequently achieving the ultimate goal of employing cell image analysis for the rapid and precise identification of premalignant atypias in sputum.

  1. Near-infrared intraoperative imaging during resection of an anterior mediastinal soft tissue sarcoma.

    PubMed

    Predina, Jarrod D; Newton, Andrew D; Desphande, Charuhas; Singhal, Sunil

    2018-01-01

    Sarcomas are rare malignancies that are generally treated with multimodal therapy protocols incorporating complete local resection, chemotherapy and radiation. Unfortunately, even with this aggressive approach, local recurrences are common. Near-infrared intraoperative imaging is a novel technology that provides real-time visual feedback that can improve identification of disease during resection. The presented study describes utilization of a near-infrared agent (indocyanine green) during resection of an anterior mediastinal sarcoma. Real-time fluorescent feedback provided visual information that helped the surgeon during tumor localization, margin assessment and dissection from mediastinal structures. This rapidly evolving technology may prove useful in patients with primary sarcomas arising from other locations or with other mediastinal neoplasms.

  2. Physics-based interactive volume manipulation for sharing surgical process.

    PubMed

    Nakao, Megumi; Minato, Kotaro

    2010-05-01

    This paper presents a new set of techniques by which surgeons can interactively manipulate patient-specific volumetric models for sharing surgical process. To handle physical interaction between the surgical tools and organs, we propose a simple surface-constraint-based manipulation algorithm to consistently simulate common surgical manipulations such as grasping, holding and retraction. Our computation model is capable of simulating soft-tissue deformation and incision in real time. We also present visualization techniques in order to rapidly visualize time-varying, volumetric information on the deformed image. This paper demonstrates the success of the proposed methods in enabling the simulation of surgical processes, and the ways in which this simulation facilitates preoperative planning and rehearsal.

  3. Rapid recalibration of speech perception after experiencing the McGurk illusion

    PubMed Central

    Pérez-Bellido, Alexis; de Lange, Floris P.

    2018-01-01

    The human brain can quickly adapt to changes in the environment. One example is phonetic recalibration: a speech sound is interpreted differently depending on the visual speech and this interpretation persists in the absence of visual information. Here, we examined the mechanisms of phonetic recalibration. Participants categorized the auditory syllables /aba/ and /ada/, which were sometimes preceded by the so-called McGurk stimuli (in which an /aba/ sound, due to visual /aga/ input, is often perceived as ‘ada’). We found that only one trial of exposure to the McGurk illusion was sufficient to induce a recalibration effect, i.e. an auditory /aba/ stimulus was subsequently more often perceived as ‘ada’. Furthermore, phonetic recalibration took place only when auditory and visual inputs were integrated to ‘ada’ (McGurk illusion). Moreover, this recalibration depended on the sensory similarity between the preceding and current auditory stimulus. Finally, signal detection theoretical analysis showed that McGurk-induced phonetic recalibration resulted in both a criterion shift towards /ada/ and a reduced sensitivity to distinguish between /aba/ and /ada/ sounds. The current study shows that phonetic recalibration is dependent on the perceptual integration of audiovisual information and leads to a perceptual shift in phoneme categorization. PMID:29657743
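
    The signal detection analysis described above can be made concrete. The snippet below computes d′ (sensitivity) and the criterion c from hit and false-alarm rates; the response rates used are hypothetical illustrations, not the study's data.

```python
from statistics import NormalDist

def sdt_measures(hit_rate, false_alarm_rate):
    """Return (d_prime, criterion) for a yes/no task.

    d' = z(H) - z(FA) indexes sensitivity; c = -(z(H) + z(FA)) / 2 indexes
    response bias (more negative c = stronger bias toward responding 'yes',
    here toward 'ada')."""
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(false_alarm_rate)
    criterion = -0.5 * (z(hit_rate) + z(false_alarm_rate))
    return d_prime, criterion

# Hypothetical 'ada' response rates before and after McGurk exposure:
# recalibration appears as reduced d' plus a criterion shift toward 'ada'.
d_pre, c_pre = sdt_measures(0.85, 0.15)
d_post, c_post = sdt_measures(0.75, 0.35)
```

    With these illustrative numbers, d′ drops and c moves negative after exposure, i.e. both the reduced aba/ada sensitivity and the criterion shift toward /ada/ that the study reports.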

  4. Developmental plasticity in vision and behavior may help guppies overcome increased turbidity.

    PubMed

    Ehlman, Sean M; Sandkam, Benjamin A; Breden, Felix; Sih, Andrew

    2015-12-01

    Increasing turbidity in streams and rivers near human activity is cause for environmental concern, as the ability of aquatic organisms to use visual information declines. To investigate how some organisms might be able to developmentally compensate for increasing turbidity, we reared guppies (Poecilia reticulata) in either clear or turbid water. We assessed the effects of developmental treatments on adult behavior and aspects of the visual system by testing fish from both developmental treatments in turbid and clear water. We found a strong interactive effect of rearing and assay conditions: fish reared in clear water tended to decrease activity in turbid water, whereas fish reared in turbid water tended to increase activity in turbid water. Guppies from all treatments decreased activity when exposed to a predator. To measure plasticity in the visual system, we quantified treatment differences in opsin gene expression of individuals. We detected a shift from mid-wave-sensitive opsins to long wave-sensitive opsins for guppies reared in turbid water. Since long-wavelength sensitivity is important in motion detection, this shift likely allows guppies to salvage motion-detecting abilities when visual information is obscured in turbid water. Our results demonstrate the importance of developmental plasticity in responses of organisms to rapidly changing environments.

  5. Impaired visual recognition of biological motion in schizophrenia.

    PubMed

    Kim, Jejoong; Doop, Mikisha L; Blake, Randolph; Park, Sohee

    2005-09-15

    Motion perception deficits have been suggested to be an important feature of schizophrenia but the behavioral consequences of such deficits are unknown. Biological motion refers to the movements generated by living beings. The human visual system rapidly and effortlessly detects and extracts socially relevant information from biological motion. A deficit in biological motion perception may have significant consequences for detecting and interpreting social information. Schizophrenia patients and matched healthy controls were tested on two visual tasks: recognition of human activity portrayed in point-light animations (biological motion task) and a perceptual control task involving detection of a grouped figure against the background noise (global-form task). Both tasks required detection of a global form against background noise but only the biological motion task required the extraction of motion-related information. Schizophrenia patients performed as well as the controls in the global-form task, but were significantly impaired on the biological motion task. In addition, deficits in biological motion perception correlated with impaired social functioning as measured by the Zigler social competence scale [Zigler, E., Levine, J. (1981). Premorbid competence in schizophrenia: what is being measured? Journal of Consulting and Clinical Psychology, 49, 96-105.]. The deficit in biological motion processing, which may be related to the previously documented deficit in global motion processing, could contribute to abnormal social functioning in schizophrenia.

  6. Metabolic rate and body size are linked with perception of temporal information☆

    PubMed Central

    Healy, Kevin; McNally, Luke; Ruxton, Graeme D.; Cooper, Natalie; Jackson, Andrew L.

    2013-01-01

    Body size and metabolic rate both fundamentally constrain how species interact with their environment, and hence ultimately affect their niche. While many mechanisms leading to these constraints have been explored, their effects on the resolution at which temporal information is perceived have been largely overlooked. The visual system acts as a gateway to the dynamic environment and the relative resolution at which organisms are able to acquire and process visual information is likely to restrict their ability to interact with events around them. As both smaller size and higher metabolic rates should facilitate rapid behavioural responses, we hypothesized that these traits would favour perception of temporal change over finer timescales. Using critical flicker fusion frequency, the lowest frequency of flashing at which a flickering light source is perceived as constant, as a measure of the maximum rate of temporal information processing in the visual system, we carried out a phylogenetic comparative analysis of a wide range of vertebrates that supported this hypothesis. Our results have implications for the evolution of signalling systems and predator–prey interactions, and, combined with the strong influence that both body mass and metabolism have on a species' ecological niche, suggest that time perception may constitute an important and overlooked dimension of niche differentiation. PMID:24109147

  7. Visualizing blood vessel trees in three dimensions: clinical applications

    NASA Astrophysics Data System (ADS)

    Bullitt, Elizabeth; Aylward, Stephen

    2005-04-01

    A connected network of blood vessels surrounds and permeates almost every organ of the human body. The ability to define detailed blood vessel trees enables a variety of clinical applications. This paper discusses four such applications and some of the visualization challenges inherent to each. Guidance of endovascular surgery: 3D vessel trees offer important information unavailable by traditional x-ray projection views. How best to combine the 2D and 3D image information is unknown. Planning/guidance of tumor surgery: During tumor resection it is critical to know which blood vessels can be interrupted safely and which cannot. Providing efficient, clear information to the surgeon together with measures of uncertainty in both segmentation and registration can be a complex problem. Vessel-based registration: Vessel-based registration allows pre- and intraoperative images to be registered rapidly. The approach both provides a potential solution to a difficult clinical dilemma and offers a variety of visualization opportunities. Diagnosis/staging of disease: Almost every disease affects blood vessel morphology. The statistical analysis of vessel shape may thus prove to be an important tool in the noninvasive analysis of disease. A plethora of information is available that must be presented meaningfully to the clinician. As medical image analysis methods increase in sophistication, an increasing amount of useful information of varying types will become available to the clinician. New methods must be developed to present a potentially bewildering amount of complex data to individuals who are often accustomed to viewing only tissue slices or flat projection views.

  8. Rapid and accurate peripheral nerve detection using multipoint Raman imaging (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Kumamoto, Yasuaki; Minamikawa, Takeo; Kawamura, Akinori; Matsumura, Junichi; Tsuda, Yuichiro; Ukon, Juichiro; Harada, Yoshinori; Tanaka, Hideo; Takamatsu, Tetsuro

    2017-02-01

    Nerve-sparing surgery is essential to avoid functional deficits of the limbs and organs. Raman scattering, a label-free, minimally invasive, and accurate modality, is one of the best candidate technologies for detecting nerves during nerve-sparing surgery. However, Raman scattering imaging is too time-consuming to be employed in surgery. Here we present a rapid and accurate nerve visualization method using a multipoint Raman imaging technique that enables simultaneous spectral measurement from different locations (n=32) of a sample. Five seconds are sufficient to measure n=32 spectra with good S/N from a given tissue. Principal component regression discriminant analysis discriminated spectra obtained from peripheral nerves (n=863 from n=161 myelinated nerves) and connective tissue (n=828 from n=121 tendons) with sensitivity and specificity of 88.3% and 94.8%, respectively. To compensate for the spatial sparseness of the multipoint-Raman-derived tissue discrimination image, which is too sparse on its own to visualize nerve arrangement, we used morphological information obtained from a bright-field image. When merged with the sparse tissue discrimination image, a morphological image of a sample shows what portion of the Raman measurement points in a given structure is classified as nerve. Setting the nerve detection criterion at 40% or more "nerve" points in a structure, myelinated nerves (n=161) and tendons (n=121) were discriminated with sensitivity and specificity of 97.5%. The presented technique, utilizing a sparse multipoint Raman image and a bright-field image, enables rapid, safe, and accurate detection of peripheral nerves.
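
    The 40% structure-level criterion described above is straightforward to sketch. The per-point labels below are made up for illustration, not the study's data:

```python
def classify_structures(structures, nerve_fraction=0.40):
    """Call a structure 'nerve' when at least `nerve_fraction` of its Raman
    measurement points were discriminated as nerve (1) rather than tendon (0)."""
    return [sum(points) / len(points) >= nerve_fraction for points in structures]

def sensitivity_specificity(predicted, truth):
    """Score structure-level calls against ground truth."""
    tp = sum(p and t for p, t in zip(predicted, truth))
    tn = sum((not p) and (not t) for p, t in zip(predicted, truth))
    return tp / sum(truth), tn / sum(not t for t in truth)

# Hypothetical per-point labels (1 = point classified as nerve) for four structures.
structures = [[1, 1, 0, 1], [1, 0, 0, 0], [0, 0, 1, 0], [1, 1, 1, 0]]
truth = [True, False, False, True]   # first and last are real myelinated nerves
pred = classify_structures(structures)
```

    Aggregating sparse point labels over each morphological structure in this way is what lifts the point-level discrimination (88.3%/94.8%) to the reported structure-level performance.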

  9. Saccadic eye movements impose a natural bottleneck on visual short-term memory.

    PubMed

    Ohl, Sven; Rolfs, Martin

    2017-05-01

    Visual short-term memory (VSTM) is a crucial repository of information when events unfold rapidly before our eyes, yet it maintains only a fraction of the sensory information encoded by the visual system. Here, we tested the hypothesis that saccadic eye movements provide a natural bottleneck for the transition of fragile content in sensory memory to VSTM. In 4 experiments, we show that saccades, planned and executed after the disappearance of a memory array, markedly bias visual memory performance. First, items that had appeared at the saccade target were more readily remembered than items that had appeared elsewhere, even though the saccade was irrelevant to the memory task (Experiment 1). Second, this influence was strongest for saccades elicited right after the disappearance of the memory array and gradually declined over the course of a second (Experiment 2). Third, the saccade stabilized memory representations: The imposed bias persisted even several seconds after saccade execution (Experiment 3). Finally, the advantage for stimuli congruent with the saccade target occurred even when that stimulus was far less likely to be probed in the memory test than any other stimulus in the array, ruling out a strategic effort of observers to memorize information presented at the saccade target (Experiment 4). Together, these results make a strong case that saccades inadvertently determine the content of VSTM, and highlight the key role of actions for the fundamental building blocks of cognition. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  10. Colour change of twig-mimicking peppered moth larvae is a continuous reaction norm that increases camouflage against avian predators

    PubMed Central

    Rowland, Hannah M.; Edmonds, Nicola; Saccheri, Ilik J.

    2017-01-01

    Camouflage, and in particular background-matching, is one of the most common anti-predator strategies observed in nature. Animals can improve their match to the colour/pattern of their surroundings through background selection, and/or by plastic colour change. Colour change can occur rapidly (a few seconds), or it may be slow, taking hours to days. Many studies have explored the cues and mechanisms behind rapid colour change, but there is a considerable lack of information about slow colour change in the context of predation: the cues that initiate it, and the range of phenotypes that are produced. Here we show that peppered moth (Biston betularia) larvae respond to colour and luminance of the twigs they rest on, and exhibit a continuous reaction norm of phenotypes. When presented with a heterogeneous environment of mixed twig colours, individual larvae specialise crypsis towards one colour rather than developing an intermediate colour. Flexible colour change in this species has likely evolved in association with wind dispersal and polyphagy, which result in caterpillars settling and feeding in a diverse range of visual environments. This is the first example of visually induced slow colour change in Lepidoptera that has been objectively quantified and measured from the visual perspective of natural predators. PMID:29158965

  11. Colour change of twig-mimicking peppered moth larvae is a continuous reaction norm that increases camouflage against avian predators.

    PubMed

    Eacock, Amy; Rowland, Hannah M; Edmonds, Nicola; Saccheri, Ilik J

    2017-01-01

    Camouflage, and in particular background-matching, is one of the most common anti-predator strategies observed in nature. Animals can improve their match to the colour/pattern of their surroundings through background selection, and/or by plastic colour change. Colour change can occur rapidly (a few seconds), or it may be slow, taking hours to days. Many studies have explored the cues and mechanisms behind rapid colour change, but there is a considerable lack of information about slow colour change in the context of predation: the cues that initiate it, and the range of phenotypes that are produced. Here we show that peppered moth (Biston betularia) larvae respond to colour and luminance of the twigs they rest on, and exhibit a continuous reaction norm of phenotypes. When presented with a heterogeneous environment of mixed twig colours, individual larvae specialise crypsis towards one colour rather than developing an intermediate colour. Flexible colour change in this species has likely evolved in association with wind dispersal and polyphagy, which result in caterpillars settling and feeding in a diverse range of visual environments. This is the first example of visually induced slow colour change in Lepidoptera that has been objectively quantified and measured from the visual perspective of natural predators.

  12. Effects of visual span on reading speed and parafoveal processing in eye movements during sentence reading.

    PubMed

    Risse, Sarah

    2014-07-15

    The visual span (or “uncrowded window”), which limits the sensory information on each fixation, has been shown to determine reading speed in tasks involving rapid serial visual presentation of single words. The present study investigated whether this is also true for fixation durations during sentence reading when all words are presented at the same time and parafoveal preview of words prior to fixation typically reduces later word-recognition times. If so, a larger visual span may allow more efficient parafoveal processing and thus faster reading. In order to test this hypothesis, visual span profiles (VSPs) were collected from 60 participants and related to data from an eye-tracking reading experiment. The results confirmed a positive relationship between the readers’ VSPs and fixation-based reading speed. However, this relationship was not determined by parafoveal processing. There was no evidence that individual differences in VSPs predicted differences in parafoveal preview benefit. Nevertheless, preview benefit correlated with reading speed, suggesting an independent effect on oculomotor control during reading. In summary, the present results indicate a more complex relationship between the visual span, parafoveal processing, and reading speed than initially assumed. © 2014 ARVO.

  13. The primate amygdala represents the positive and negative value of visual stimuli during learning

    PubMed Central

    Paton, Joseph J.; Belova, Marina A.; Morrison, Sara E.; Salzman, C. Daniel

    2008-01-01

    Visual stimuli can acquire positive or negative value through their association with rewards and punishments, a process called reinforcement learning. Although we now know a great deal about how the brain analyses visual information, we know little about how visual representations become linked with values. To study this process, we turned to the amygdala, a brain structure implicated in reinforcement learning [1–5]. We recorded the activity of individual amygdala neurons in monkeys while abstract images acquired either positive or negative value through conditioning. After monkeys had learned the initial associations, we reversed image value assignments. We examined neural responses in relation to these reversals in order to estimate the relative contribution to neural activity of the sensory properties of images and their conditioned values. Here we show that changes in the values of images modulate neural activity, and that this modulation occurs rapidly enough to account for, and correlates with, monkeys’ learning. Furthermore, distinct populations of neurons encode the positive and negative values of visual stimuli. Behavioural and physiological responses to visual stimuli may therefore be based in part on the plastic representation of value provided by the amygdala. PMID:16482160

  14. Rehabilitation regimes based upon psychophysical studies of prosthetic vision

    NASA Astrophysics Data System (ADS)

    Chen, S. C.; Suaning, G. J.; Morley, J. W.; Lovell, N. H.

    2009-06-01

    Human trials of prototype visual prostheses have successfully elicited visual percepts (phosphenes) in the visual field of implant recipients blinded through retinitis pigmentosa and age-related macular degeneration. Researchers are progressing rapidly towards a device that utilizes individual phosphenes as the elementary building blocks to compose a visual scene. This form of prosthetic vision is expected, in the near term, to have low resolution, large inter-phosphene gaps, distorted spatial distribution of phosphenes, restricted field of view, an eccentrically located phosphene field and limited number of expressible luminance levels. In order to fully realize the potential of these devices, there needs to be a training and rehabilitation program which aims to assist the prosthesis recipients to understand what they are seeing, and also to adapt their viewing habits to optimize the performance of the device. Based on the literature of psychophysical studies in simulated and real prosthetic vision, this paper proposes a comprehensive, theoretical training regime for a prosthesis recipient: visual search, visual acuity, reading, face/object recognition, hand-eye coordination and navigation. The aim of these tasks is to train the recipients to conduct visual scanning, eccentric viewing and reading, discerning low-contrast visual information, and coordinating bodily actions for visual-guided tasks under prosthetic vision. These skills have been identified as playing an important role in making prosthetic vision functional for the daily activities of their recipients.

  15. The role of flight planning in aircrew decision performance

    NASA Technical Reports Server (NTRS)

    Pepitone, Dave; King, Teresa; Murphy, Miles

    1989-01-01

    The role of flight planning in increasing the safety and decision-making performance of the air transport crews was investigated in a study that involved 48 rated airline crewmembers on a B720 simulator with a model-board-based visual scene and motion cues with three degrees of freedom. The safety performance of the crews was evaluated using videotaped replays of the flight. Based on these evaluations, the crews could be divided into high- and low-safety groups. It was found that, while collecting information before flights, the high-safety crews were more concerned with information about alternative airports, especially the fuel required to get there, and were characterized by making rapid and appropriate decisions during the emergency part of the flight scenario, allowing these crews to make an early diversion to other airports. These results suggest that contingency planning that takes into account alternative courses of action enhances rapid and accurate decision-making under time pressure.

  16. Predictive Feedback and Conscious Visual Experience

    PubMed Central

    Panichello, Matthew F.; Cheung, Olivia S.; Bar, Moshe

    2012-01-01

    The human brain continuously generates predictions about the environment based on learned regularities in the world. These predictions actively and efficiently facilitate the interpretation of incoming sensory information. We review evidence that, as a result of this facilitation, predictions directly influence conscious experience. Specifically, we propose that predictions enable rapid generation of conscious percepts and bias the contents of awareness in situations of uncertainty. The possible neural mechanisms underlying this facilitation are discussed. PMID:23346068

  17. Visual Communication in Web Design - Analyzing Visual Communication in Web Design

    NASA Astrophysics Data System (ADS)

    Thorlacius, Lisbeth

    Web sites are rapidly becoming the preferred media choice for information search, company presentation, shopping, entertainment, education, and social contacts. Along with the various forms of communication that the Web offers, the aesthetic aspects have begun to play an increasingly important role. However, the design and relevance of aesthetic aspects in planning and using Web sites have only to a lesser degree been the subject of theoretical reflection. For example, Miller (2000), Thorlacius (2001, 2002, 2005), Engholm (2002, 2003), and Beaird (2007) have contributed to setting an initial agenda that addresses the aesthetic aspects. On the other hand, there is a considerable amount of literature addressing the technical and functional aspects from theoretical and methodological perspectives. In this context, the aim of this article is to introduce a model for the analysis of visual communication on websites.

  18. Aptamer-Based Dual-Functional Probe for Rapid and Specific Counting and Imaging of MCF-7 Cells.

    PubMed

    Yang, Bin; Chen, Beibei; He, Man; Yin, Xiao; Xu, Chi; Hu, Bin

    2018-02-06

    Development of multimodal detection technologies for accurate diagnosis of cancer at early stages is in great demand. In this work, we report a novel approach using an aptamer-based dual-functional probe for rapid, sensitive, and specific counting and visualization of MCF-7 cells by inductively coupled plasma-mass spectrometry (ICP-MS) and fluorescence imaging. The probe consists of a recognition unit of aptamer to catch cancer cells specifically, a fluorescent dye (FAM) moiety for fluorescence resonance energy transfer (FRET)-based "off-on" fluorescence imaging as well as gold nanoparticles (Au NPs) tag for both ICP-MS quantification and fluorescence quenching. Due to the signal amplification effect and low spectral interference of Au NPs in ICP-MS, an excellent linearity and sensitivity were achieved. Accordingly, a limit of detection of 81 MCF-7 cells and a relative standard deviation of 5.6% (800 cells, n = 7) were obtained. The dynamic linear range was 2 × 10² to 1.2 × 10⁴ cells, and the recoveries in human whole blood were in the range of 98-110%. Overall, the established method provides quantitative and visualized information on MCF-7 cells with a simple and rapid process and paves the way for a promising strategy for biomedical research and clinical diagnostics.
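
    The precision and recovery figures quoted above are standard computations; a sketch with hypothetical replicate counts (not the paper's raw data):

```python
from statistics import mean, stdev

def rsd_percent(replicates):
    """Relative standard deviation (precision) across replicate measurements."""
    return 100 * stdev(replicates) / mean(replicates)

def recovery_percent(measured, spiked):
    """Spike recovery: measured amount as a percentage of the spiked amount."""
    return 100 * measured / spiked

# Hypothetical replicate cell counts (n = 7) at a nominal 800-cell level.
counts = [812, 790, 845, 760, 801, 838, 774]
rsd = rsd_percent(counts)
rec = recovery_percent(measured=792, spiked=800)
```

    An RSD in the low single digits and recoveries near 100% in whole blood are what qualify such a probe for quantitative cell counting.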

  19. Insensitivity to Fearful Emotion for Early ERP Components in High Autistic Tendency Is Associated with Lower Magnocellular Efficiency.

    PubMed

    Burt, Adelaide; Hugrass, Laila; Frith-Belvedere, Tash; Crewther, David

    2017-01-01

    Low spatial frequency (LSF) visual information is extracted rapidly from fearful faces, suggesting magnocellular involvement. Autistic phenotypes demonstrate altered magnocellular processing, which we propose contributes to a decreased P100 evoked response to LSF fearful faces. Here, we investigated whether rapid processing of fearful facial expressions differs for groups of neurotypical adults with low and high scores on the Autistic Spectrum Quotient (AQ). We created hybrid face stimuli with low and high spatial frequency filtered, fearful, and neutral expressions. Fearful faces produced higher amplitude P100 responses than neutral faces in the low AQ group, particularly when the hybrid face contained a LSF fearful expression. By contrast, there was no effect of fearful expression on P100 amplitude in the high AQ group. Consistent with evidence linking magnocellular differences with autistic personality traits, our non-linear VEP results showed that the high AQ group had higher amplitude K2.1 responses than the low AQ group, which is indicative of less efficient magnocellular recovery. Our results suggest that magnocellular LSF processing of a human face may be the initial visual cue used to rapidly and automatically detect fear, but that this cue functions atypically in those with high autistic tendency.

  20. What you see is what you expect: rapid scene understanding benefits from prior experience.

    PubMed

    Greene, Michelle R; Botros, Abraham P; Beck, Diane M; Fei-Fei, Li

    2015-05-01

    Although we are able to rapidly understand novel scene images, little is known about the mechanisms that support this ability. Theories of optimal coding assert that prior visual experience can be used to ease the computational burden of visual processing. A consequence of this idea is that more probable visual inputs should be facilitated relative to more unlikely stimuli. In three experiments, we compared the perceptions of highly improbable real-world scenes (e.g., an underwater press conference) with common images matched for visual and semantic features. Although the two groups of images could not be distinguished by their low-level visual features, we found profound deficits related to the improbable images: Observers wrote poorer descriptions of these images (Exp. 1), had difficulties classifying the images as unusual (Exp. 2), and even had lower sensitivity to detect these images in noise than to detect their more probable counterparts (Exp. 3). Taken together, these results place a limit on our abilities for rapid scene perception and suggest that perception is facilitated by prior visual experience.

  1. A rapid method to visualize von Willebrand factor multimers by using agarose gel electrophoresis, immunolocalization and luminographic detection.

    PubMed

    Krizek, D R; Rick, M E

    2000-03-15

    A highly sensitive and rapid clinical method for the visualization of the multimeric structure of von Willebrand factor in plasma and platelets is described. The method utilizes submerged horizontal agarose gel electrophoresis, followed by transfer of the von Willebrand factor onto a polyvinylidene fluoride membrane, and immunolocalization and luminographic visualization of the von Willebrand factor multimeric pattern. This method distinguishes type 1 from types 2A and 2B von Willebrand disease, allowing timely evaluation and classification of von Willebrand factor in patient plasma. It also allows visualization of the unusually high molecular weight multimers present in platelets. This method has several major advantages, including rapid processing, simplicity of gel preparation, high sensitivity to low concentrations of von Willebrand factor, and elimination of radioactivity.

  2. Boosting pitch encoding with audiovisual interactions in congenital amusia.

    PubMed

    Albouy, Philippe; Lévêque, Yohana; Hyde, Krista L; Bouchet, Patrick; Tillmann, Barbara; Caclin, Anne

    2015-01-01

    The combination of information across senses can enhance perception, as revealed for example by decreased reaction times or improved stimulus detection. Interestingly, these facilitatory effects have been shown to be maximal when responses to unisensory modalities are weak. The present study investigated whether audiovisual facilitation can be observed in congenital amusia, a music-specific disorder primarily ascribed to impairments of pitch processing. Amusic individuals and their matched controls performed two tasks. In Task 1, they were required to detect auditory, visual, or audiovisual stimuli as rapidly as possible. In Task 2, they were required to detect, as accurately and as rapidly as possible, a pitch change within an otherwise monotonic 5-tone sequence that was presented either only auditorily (A condition) or simultaneously with a temporally congruent, but otherwise uninformative, visual stimulus (AV condition). Results of Task 1 showed that amusics exhibit typical auditory and visual detection, and typical audiovisual integration capacities: both amusics and controls exhibited shorter response times for audiovisual stimuli than for either auditory or visual stimuli alone. Results of Task 2 revealed that both groups benefited from the simultaneous uninformative visual stimuli when detecting pitch changes: accuracy was higher and response times shorter in the AV condition than in the A condition. The audiovisual improvements in response times were observed at different pitch interval sizes depending on the group. These results suggest that both typical listeners and amusic individuals can benefit from multisensory integration to improve their pitch processing abilities, and that this benefit varies as a function of task difficulty. These findings are a first step towards exploiting multisensory paradigms to reduce pitch-related deficits in congenital amusia, notably by suggesting that audiovisual paradigms are effective within an appropriate range of unimodal performance.

  3. Chess players' eye movements reveal rapid recognition of complex visual patterns: Evidence from a chess-related visual search task.

    PubMed

    Sheridan, Heather; Reingold, Eyal M

    2017-03-01

    To explore the perceptual component of chess expertise, we monitored the eye movements of expert and novice chess players during a chess-related visual search task that tested anecdotal reports that a key differentiator of chess skill is the ability to visualize the complex moves of the knight piece. Specifically, chess players viewed an array of four minimized chessboards, and they rapidly searched for the target board that allowed a knight piece to reach a target square in three moves. On each trial, there was only one target board (i.e., the "Yes" board), and for the remaining "lure" boards, the knight's path was blocked on either the first move (the "Easy No" board) or the second move (the "Difficult No" board). As evidence that chess experts can rapidly differentiate complex chess-related visual patterns, the experts (but not the novices) showed longer first-fixation durations on the "Yes" board relative to the "Difficult No" board. Moreover, as hypothesized, the task strongly differentiated chess skill: Reaction times were more than four times faster for the experts relative to the novices, and reaction times were correlated with within-group measures of expertise (i.e., official chess ratings, number of hours of practice). These results indicate that a key component of chess expertise is the ability to rapidly recognize complex visual patterns.

  4. MEMHDX: an interactive tool to expedite the statistical validation and visualization of large HDX-MS datasets.

    PubMed

    Hourdel, Véronique; Volant, Stevenn; O'Brien, Darragh P; Chenal, Alexandre; Chamot-Rooke, Julia; Dillies, Marie-Agnès; Brier, Sébastien

    2016-11-15

    With the continued improvement of requisite mass spectrometers and UHPLC systems, Hydrogen/Deuterium eXchange Mass Spectrometry (HDX-MS) workflows are rapidly evolving towards the investigation of more challenging biological systems, including large protein complexes and membrane proteins. The analysis of such extensive systems results in very large HDX-MS datasets, for which specific analysis tools are required to speed up data validation and interpretation. We introduce a web application and a new R package named 'MEMHDX' to help users analyze, validate and visualize large HDX-MS datasets. MEMHDX is composed of two elements. A statistical tool aids in the validation of the results by applying a mixed-effects model for each peptide, in each experimental condition, and at each time point, taking into account the time dependency of the HDX reaction and the number of independent replicates. Two adjusted P-values are generated per peptide, one for the 'Change in dynamics' and one for the 'Magnitude of ΔD', and these are used to classify the data by means of a 'Logit' representation. A user-friendly interface developed with Shiny by RStudio facilitates the use of the package. This interactive tool allows the user to easily and rapidly validate, visualize and compare the relative deuterium incorporation on the amino acid sequence and 3D structure, providing both spatial and temporal information. MEMHDX is freely available as a web tool at the project home page: http://memhdx.c3bi.pasteur.fr. Contact: marie-agnes.dillies@pasteur.fr or sebastien.brier@pasteur.fr. Supplementary data are available at Bioinformatics online.
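    The abstract describes generating two adjusted P-values per peptide but does not name the multiple-testing correction used. A common choice in this setting is the Benjamini-Hochberg (FDR) procedure, sketched below; this is an illustrative assumption, not the MEMHDX implementation, and the raw P-values are invented.

    ```python
    # Illustrative sketch of a multiple-testing adjustment across peptides.
    # NOTE: this is NOT the MEMHDX code; Benjamini-Hochberg is an assumed
    # correction method, and the input p-values below are hypothetical.

    def benjamini_hochberg(pvalues):
        """Return BH-adjusted p-values, preserving the input order."""
        m = len(pvalues)
        # Sort p-values ascending, remembering original positions.
        order = sorted(range(m), key=lambda i: pvalues[i])
        adjusted = [0.0] * m
        prev = 1.0
        # Walk from the largest rank down, enforcing monotonicity.
        for rank in range(m, 0, -1):
            i = order[rank - 1]
            val = min(prev, pvalues[i] * m / rank)
            adjusted[i] = val
            prev = val
        return adjusted

    # Hypothetical per-peptide raw p-values for 'Change in dynamics':
    raw = [0.001, 0.04, 0.03, 0.2, 0.5]
    adj = benjamini_hochberg(raw)
    ```

    A peptide would then be classified as showing a significant 'Change in dynamics' when its adjusted P-value falls below the chosen FDR level.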

  5. Food's visually perceived fat content affects discrimination speed in an orthogonal spatial task.

    PubMed

    Harrar, Vanessa; Toepel, Ulrike; Murray, Micah M; Spence, Charles

    2011-10-01

    Choosing what to eat is a complex activity for humans. Determining a food's pleasantness requires us to combine information about what is available at a given time with knowledge of the food's palatability, texture, fat content, and other nutritional information. It has been suggested that humans may have an implicit knowledge of a food's fat content based on its appearance; Toepel et al. (Neuroimage 44:967-974, 2009) reported visual-evoked potential modulations after participants viewed images of high-energy, high-fat food (HF), as compared to viewing low-fat food (LF). In the present study, we investigated whether there are any immediate behavioural consequences of these modulations for human performance. HF, LF, or non-food (NF) images were used to exogenously direct participants' attention to either the left or the right. Next, participants made speeded elevation discrimination responses (up vs. down) to visual targets presented either above or below the midline (and at one of three stimulus onset asynchronies: 150, 300, or 450 ms). Participants responded significantly more rapidly following the presentation of a HF image than following the presentation of either LF or NF images, despite the fact that the identity of the images was entirely task-irrelevant. Similar results were found when comparing response speeds following images of high-carbohydrate (HC) food items to low-carbohydrate (LC) food items. These results support the view that people rapidly process (i.e. within a few hundred milliseconds) the fat/carbohydrate/energy value or, perhaps more generally, the pleasantness of food. Potentially as a result of HF/HC food items being more pleasant and thus having a higher incentive value, it seems as though seeing these foods results in a response readiness, or an overall alerting effect, in the human brain.

  6. Visualization of Whole-Night Sleep EEG From 2-Channel Mobile Recording Device Reveals Distinct Deep Sleep Stages with Differential Electrodermal Activity.

    PubMed

    Onton, Julie A; Kang, Dae Y; Coleman, Todd P

    2016-01-01

    Brain activity during sleep is a powerful marker of overall health, but sleep lab testing is prohibitively expensive and only indicated for major sleep disorders. This report demonstrates that mobile 2-channel in-home electroencephalogram (EEG) recording devices provide sufficient information to detect and visualize sleep EEG. Displaying whole-night sleep EEG in a spectral display allowed for quick assessment of general sleep stability, cycle lengths, stage lengths, dominant frequencies and other indices of sleep quality. Visualizing spectral data down to 0.1 Hz revealed a differentiation within slow-wave sleep: the dominant frequency fell either between 0.1-1 Hz or between 1-3 Hz, but rarely both. Thus, we present here the new designations, Hi and Lo Deep sleep, according to the frequency range with dominant power. Simultaneously recorded electrodermal activity (EDA) was primarily associated with Lo Deep and very rarely with Hi Deep or any other stage. Therefore, Hi and Lo Deep sleep appear to be physiologically distinct states that may serve unique functions during sleep. We developed an algorithm to classify five stages (Awake, Light, Hi Deep, Lo Deep and rapid eye movement (REM)) using a Hidden Markov Model (HMM): model fitting with the expectation-maximization (EM) algorithm, and estimation of the most likely sleep-state sequence by the Viterbi algorithm. The resulting automatically generated sleep hypnogram can help clinicians interpret the spectral display and help researchers computationally quantify sleep stages across participants. In conclusion, this study demonstrates the feasibility of in-home sleep EEG collection, a rapid and informative sleep report format, and novel deep sleep designations accounting for spectral and physiological differences.
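    The decoding step of the staging pipeline described above (the most likely state sequence under an HMM, via the Viterbi algorithm) can be sketched as follows. All probabilities and the coarse spectral labels are invented for illustration; in the study they would be fitted with EM from the EEG spectra, and this is not the authors' code.

    ```python
    import math

    # Toy Viterbi decoder over the five stages named above. The start,
    # transition and emission tables are illustrative assumptions, and
    # "mixed"/"slow_lo"/"slow_hi" are hypothetical per-epoch spectral labels.
    STATES = ["Awake", "Light", "HiDeep", "LoDeep", "REM"]

    def viterbi(obs, start_p, trans_p, emit_p):
        """Most likely state sequence for a list of discrete observations."""
        # Work in log space to avoid underflow on a whole night of epochs.
        v = [{s: math.log(start_p[s]) + math.log(emit_p[s][obs[0]])
              for s in STATES}]
        path = {s: [s] for s in STATES}
        for o in obs[1:]:
            v.append({})
            new_path = {}
            for s in STATES:
                best_prev, best_score = max(
                    ((p, v[-2][p] + math.log(trans_p[p][s])) for p in STATES),
                    key=lambda t: t[1])
                v[-1][s] = best_score + math.log(emit_p[s][o])
                new_path[s] = path[best_prev] + [s]
            path = new_path
        last = max(STATES, key=lambda s: v[-1][s])
        return path[last]

    start_p = {"Awake": 0.6, "Light": 0.3, "HiDeep": 0.04,
               "LoDeep": 0.04, "REM": 0.02}
    # Sticky transitions: stages tend to persist across epochs.
    trans_p = {s: {t: (0.8 if s == t else 0.05) for t in STATES}
               for s in STATES}
    emit_p = {
        "Awake":  {"mixed": 0.9, "slow_lo": 0.05, "slow_hi": 0.05},
        "Light":  {"mixed": 0.7, "slow_lo": 0.15, "slow_hi": 0.15},
        "HiDeep": {"mixed": 0.1, "slow_lo": 0.1,  "slow_hi": 0.8},
        "LoDeep": {"mixed": 0.1, "slow_lo": 0.8,  "slow_hi": 0.1},
        "REM":    {"mixed": 0.8, "slow_lo": 0.1,  "slow_hi": 0.1},
    }
    epochs = ["mixed", "slow_hi", "slow_hi", "slow_lo", "slow_lo"]
    hypnogram = viterbi(epochs, start_p, trans_p, emit_p)
    ```

    The decoded sequence forms the hypnogram; in practice the emissions would be continuous spectral features rather than three discrete labels.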

  7. A randomized trial of Rapid Rhino Riemann and Telfa nasal packs following endoscopic sinus surgery.

    PubMed

    Cruise, A S; Amonoo-Kuofi, K; Srouji, I; Kanagalingam, J; Georgalas, C; Patel, N N; Badia, L; Lund, V J

    2006-02-01

    To compare Telfa with the Rapid Rhino Riemann nasal pack for use following endoscopic sinus surgery. Prospective, randomized, double-blind, paired trial. Tertiary otolaryngology hospital. Forty-five adult patients undergoing bilateral endoscopic sinus surgery for either chronic rhinosinusitis or nasal polyps. A visual analogue scale was used to assess discomfort caused by the presence of the packs in the nose and by their removal. The amount of bleeding was noted with the packs in place and following their removal. Crusting and adhesions were assessed 2 and 6 weeks following surgery. Both packs performed well, giving good haemostasis and causing little bleeding on removal. Both packs caused only mild discomfort while in the nose: on the visual analogue scale of 0-10 cm, the mean score was 1.7 for the Rapid Rhino Riemann pack and 2.0 for Telfa (P = 0.371). The Rapid Rhino Riemann pack caused significantly less pain on removal than the Telfa pack, with a mean visual analogue score of 2.0 in comparison with 3.7 for Telfa (P = 0.001). There were fewer adhesions with the Rapid Rhino Riemann than with the Telfa pack, but this difference was not statistically significant (P = 0.102). Both Telfa and Rapid Rhino Riemann packs can be recommended as packs that control postoperative haemorrhage, do not cause bleeding on removal and cause little discomfort while in the nose. The Rapid Rhino Riemann pack has the advantage of causing significantly less pain on removal.

  8. Application of advanced computing techniques to the analysis and display of space science measurements

    NASA Technical Reports Server (NTRS)

    Klumpar, D. M.; Lapolla, M. V.; Horblit, B.

    1995-01-01

    A prototype system has been developed to aid the experimental space scientist in the display and analysis of spaceborne data acquired from direct measurement sensors in orbit. We explored the implementation of a rule-based environment for semi-automatic generation of visualizations that assist the domain scientist in exploring their data. The goal has been to enable rapid generation of visualizations that enhance the scientist's ability to thoroughly mine the data. Transferring the task of visualization generation from the human programmer to the computer produced a rapid prototyping environment for visualizations. The visualization and analysis environment was tested against a set of data obtained from the Hot Plasma Composition Experiment on the AMPTE/CCE satellite, creating new visualizations that provided new insight into the data.

  9. Perception of edges and visual texture in the camouflage of the common cuttlefish, Sepia officinalis

    PubMed Central

    Zylinski, S.; Osorio, D.; Shohet, A.J.

    2008-01-01

    The cuttlefish, Sepia officinalis, provides a fascinating opportunity to investigate the mechanisms of camouflage as it rapidly changes its body patterns in response to the visual environment. We investigated how edge information determines camouflage responses through the use of spatially high-pass filtered ‘objects’ and of isolated edges. We then investigated how the body pattern responds to objects defined by texture (second-order information) compared with those defined by luminance. We found that (i) edge information alone is sufficient to elicit the body pattern known as Disruptive, which is the camouflage response given when a whole object is present, and furthermore, isolated edges cause the same response; and (ii) cuttlefish can distinguish and respond to objects of the same mean luminance as the background. These observations emphasize the importance of discrete objects (bounded by edges) in the cuttlefish's choice of camouflage, and more generally imply that figure–ground segregation by cuttlefish is similar to that in vertebrates, as might be predicted by their need to produce effective camouflage against vertebrate predators. PMID:18990667

  10. Situation exploration in a persistent surveillance system with multidimensional data

    NASA Astrophysics Data System (ADS)

    Habibi, Mohammad S.

    2013-03-01

    There is an emerging need for fusing hard and soft sensor data in an efficient surveillance system to provide accurate estimation of situation awareness. These mostly abstract, multi-dimensional and multi-sensor data pose a great challenge to the user in performing analysis of multi-threaded events efficiently and cohesively. To address this concern, an interactive Visual Analytics (VA) application was developed for rapid assessment and evaluation of different hypotheses, based on context-sensitive ontologies spawned from taxonomies describing human/human and human/vehicle/object interactions. A methodology is described here for generating relevant ontologies in a Persistent Surveillance System (PSS), and we demonstrate how they can be utilized in the context of PSS to track and identify group activities pertaining to potential threats. The proposed VA system allows for visual analysis of raw data as well as metadata that have spatiotemporal representation and content-based implications. Additionally, a technique for rapid search of tagged information, contingent on ranking and confidence, is explained for the analysis of multi-dimensional data. Lastly, the issue of uncertainty associated with processing and interpretation of heterogeneous data is also addressed.

  11. Spatiotemporal dynamics in human visual cortex rapidly encode the emotional content of faces.

    PubMed

    Dima, Diana C; Perry, Gavin; Messaritaki, Eirini; Zhang, Jiaxiang; Singh, Krish D

    2018-06-08

    Recognizing emotion in faces is important in human interaction and survival, yet existing studies do not paint a consistent picture of the neural representation supporting this task. To address this, we collected magnetoencephalography (MEG) data while participants passively viewed happy, angry and neutral faces. Using time-resolved decoding of sensor-level data, we show that responses to angry faces can be discriminated from happy and neutral faces as early as 90 ms after stimulus onset and only 10 ms later than faces can be discriminated from scrambled stimuli, even in the absence of differences in evoked responses. Time-resolved relevance patterns in source space track expression-related information from the visual cortex (100 ms) to higher-level temporal and frontal areas (200-500 ms). Together, our results point to a system optimised for rapid processing of emotional faces and preferentially tuned to threat, consistent with the important evolutionary role that such a system must have played in the development of human social interactions.

  12. Conscious experience and episodic memory: hippocampus at the crossroads.

    PubMed

    Behrendt, Ralf-Peter

    2013-01-01

    If an instance of conscious experience of the seemingly objective world around us could be regarded as a newly formed event memory, much as an instance of mental imagery has the content of a retrieved event memory, and if, therefore, the stream of conscious experience could be seen as evidence for ongoing formation of event memories that are linked into episodic memory sequences, then unitary conscious experience could be defined as a symbolic representation of the pattern of hippocampal neuronal firing that encodes an event memory - a theoretical stance that may shed light on the mind-body and binding problems in consciousness research. Exceedingly detailed symbols that describe patterns of activity rapidly self-organizing, at each cycle of the θ rhythm, in the hippocampus are instances of unitary conscious experience that jointly constitute the stream of consciousness. Integrating object information (derived from the ventral visual stream and orbitofrontal cortex) with contextual emotional information (from the anterior insula) and spatial environmental information (from the dorsal visual stream), the hippocampus rapidly forms event codes that have the informational content of objects embedded in an emotional and spatiotemporally extending context. Event codes, formed in the CA3-dentate network for the purpose of their memorization, are not only contextualized but also allocentric representations, similarly to conscious experiences of events and objects situated in a seemingly objective and observer-independent framework of phenomenal space and time. Conscious perception, creating the spatially and temporally extending world that we perceive around us, is likely to be evolutionarily related to more fleeting and seemingly internal forms of conscious experience, such as autobiographical memory recall and mental imagery (including goal anticipation), and to other forms of externalized conscious experience, namely dreaming and hallucinations; evidence pointing to an important contribution of the hippocampus to these conscious phenomena will be reviewed.

  13. Novel flood risk assessment framework for rapid decision making

    NASA Astrophysics Data System (ADS)

    Valyrakis, Manousos; Koursari, Eftychia; Solley, Mark

    2016-04-01

    The impacts of catastrophic flooding have increased significantly over the last few decades, due primarily to increased urbanisation in ever-expanding mega-cities and to the intensification, in both magnitude and frequency, of extreme hydrologic events. Herein, a novel conceptual framework is presented that incorporates real-time information to inform and update low-dimensionality hydraulic models, allowing rapid decision making towards preventing loss of life and safeguarding critical infrastructure. In particular, a case study from the recent UK floods in the area of Whitesands (Dumfries) is presented to demonstrate the utility of this approach. It is demonstrated that effectively combining a wealth of readily available qualitative information (such as crowdsourced visual documentation or live data from sensing techniques) with existing quantitative data can help appropriately update hydraulic models and reduce modelling uncertainties in future flood risk assessments. This approach is even more useful in cases where hydraulic models are limited, do not exist, or were not needed before unpredicted dynamic modifications to the river system took place (for example, reduced or eliminated hydraulic capacity due to blockages). The low computational cost and rapid assessment this framework offers render it promising for innovation in flood management.

  14. Stereoscopy in Static Scientific Imagery in an Informal Education Setting: Does It Matter?

    NASA Astrophysics Data System (ADS)

    Price, C. Aaron; Lee, H.-S.; Malatesta, K.

    2014-12-01

    Stereoscopic technology (3D) is rapidly becoming ubiquitous across research, entertainment and informal educational settings. Children of today may grow up never knowing a time when movies, television and video games were not available stereoscopically. Despite this rapid expansion, the field's understanding of the impact of stereoscopic visualizations on learning is rather limited. Much of the excitement of stereoscopic technology could be due to a novelty effect, which will wear off over time. This study controlled for the novelty factor using a variety of techniques. On the floor of an urban science center, 261 children were shown 12 photographs and visualizations of highly spatial scientific objects and scenes. The images were randomly shown in either traditional (2D) format or in stereoscopic format. The children were asked two questions of each image—one about a spatial property of the image and one about a real-world application of that property. At the end of the test, the child was asked to draw from memory the last image they saw. Results showed no overall significant difference in response to the questions associated with 2D or 3D images. However, children who saw the final slide only in 3D drew more complex representations of the slide than those who did not. Results are discussed through the lenses of cognitive load theory and the effect of novelty on engagement.

  15. The Attention Cascade Model and Attentional Blink

    ERIC Educational Resources Information Center

    Shih, Shui-I

    2008-01-01

    An attention cascade model is proposed to account for attentional blinks in rapid serial visual presentation (RSVP) of stimuli. Data were collected using single characters in a single RSVP stream at 10 Hz [Shih, S., & Reeves, A. (2007). "Attentional capture in rapid serial visual presentation." "Spatial Vision", 20(4), 301-315], and single words,…

  16. Picture Detection in Rapid Serial Visual Presentation: Features or Identity?

    ERIC Educational Resources Information Center

    Potter, Mary C.; Wyble, Brad; Pandav, Rijuta; Olejarczyk, Jennifer

    2010-01-01

    A pictured object can be readily detected in a rapid serial visual presentation sequence when the target is specified by a superordinate category name such as "animal" or "vehicle". Are category features the initial basis for detection, with identification of the specific object occurring in a second stage (Evans &…

  17. Effects of Mora Deletion, Nonword Repetition, Rapid Naming, and Visual Search Performance on Beginning Reading in Japanese

    ERIC Educational Resources Information Center

    Kobayashi, Maya Shiho; Haynes, Charles W.; Macaruso, Paul; Hook, Pamela E.; Kato, Junko

    2005-01-01

    This study examined the extent to which mora deletion (phonological analysis), nonword repetition (phonological memory), rapid automatized naming (RAN), and visual search abilities predict reading in Japanese kindergartners and first graders. Analogous abilities have been identified as important predictors of reading skills in alphabetic languages…

  18. A global simulation approach to optics, lighting, rendering, and human perception for the improvement of safety in automobiles

    NASA Astrophysics Data System (ADS)

    Delacour, Jacques; Fournier, Laurent; Menu, Jean-Pierre

    2005-02-01

    In order to provide optimum comfort and safety, information must be seen as clearly as possible by the driver in all lighting conditions, by day and by night. It is therefore becoming essential to predict what the driver will see in a vehicle, in various configurations of scene and observation conditions, so as to optimize the lighting, the ergonomics of the interfaces and the choice of surrounding materials, which can be a source of reflection. This information, and the design choices that depend on it, make it necessary to call upon simulation techniques capable of modeling, globally and simultaneously, the full range of light phenomena: surrounding lighting, display technologies and inside lighting, taking into consideration the multiple reflections of this light inside the vehicle. This has been the object of an important development effort, which resulted in the SPEOS Visual Ergonomics solution, led by the company OPTIS. A unique human vision model was developed in collaboration with worldwide specialists in visual perception to transform spectral luminance information into perceived visual information. This model, based on physiological aspects, takes into account the response of the eye to light levels, to color, to contrast, and to ambient lighting, as well as to rapid changes in surrounding luminosity, in accordance with the response of the retina. This unique tool, and the information it makes accessible, enable ergonomists and designers of on-board systems to improve the conditions of global visibility and, in so doing, the driver's global perception of the environment.

  19. Aging affects the balance between goal-guided and habitual spatial attention.

    PubMed

    Twedell, Emily L; Koutstaal, Wilma; Jiang, Yuhong V

    2017-08-01

    Visual clutter imposes significant challenges to older adults in everyday tasks and often calls for selective processing of relevant information. Previous research has shown that both visual search habits and task goals influence older adults' allocation of spatial attention, but has not examined the relative impact of these two sources of attention when they compete. To examine how aging affects the balance between goal-driven and habitual attention, and to inform our understanding of different attentional subsystems, we tested young and older adults in an adapted visual search task involving a display laid flat on a desk. To induce habitual attention, unbeknownst to participants, the target was more often placed in one quadrant than in the others. All participants rapidly acquired habitual attention toward the high-probability quadrant. We then informed participants where the high-probability quadrant was and instructed them to search that screen location first, but pitted their habit-based, viewer-centered search against this instruction by requiring participants to change their physical position relative to the desk. Both groups prioritized search in the instructed location, but this effect was stronger in young adults than in older adults. In contrast, age did not influence viewer-centered search habits: the two groups showed similar attentional preference for the visual field where the target was most often found before. Aging disrupted goal-guided but not habitual attention. Product, workplace, and home design for people of all ages, but especially for older individuals, should take into account the strong viewer-centered nature of habitual attention.

  20. How Lovebirds Maneuver Rapidly Using Super-Fast Head Saccades and Image Feature Stabilization

    PubMed Central

    Kress, Daniel; van Bokhorst, Evelien; Lentink, David

    2015-01-01

    Diurnal flying animals such as birds depend primarily on vision to coordinate their flight path during goal-directed flight tasks. To extract the spatial structure of the surrounding environment, birds are thought to use retinal image motion (optical flow) that is primarily induced by motion of their head. It is unclear what gaze behaviors birds perform to support visuomotor control during rapid maneuvering flight in which they continuously switch between flight modes. To analyze this, we measured the gaze behavior of rapidly turning lovebirds in a goal-directed task: take-off and fly away from a perch, turn on a dime, and fly back and land on the same perch. High-speed flight recordings revealed that rapidly turning lovebirds perform a remarkable stereotypical gaze behavior with peak saccadic head turns up to 2700 degrees per second, as fast as insects, enabled by fast neck muscles. In between saccades, gaze orientation is held constant. By comparing saccade and wingbeat phase, we find that these super-fast saccades are coordinated with the downstroke when the lateral visual field is occluded by the wings. Lovebirds thus maximize visual perception by overlying behaviors that impair vision, which helps coordinate maneuvers. Before the turn, lovebirds keep a high contrast edge in their visual midline. Similarly, before landing, the lovebirds stabilize the center of the perch in their visual midline. The perch on which the birds land swings, like a branch in the wind, and we find that retinal size of the perch is the most parsimonious visual cue to initiate landing. Our observations show that rapidly maneuvering birds use precisely timed stereotypic gaze behaviors consisting of rapid head turns and frontal feature stabilization, which facilitates optical flow based flight control. Similar gaze behaviors have been reported for visually navigating humans. This finding can inspire more effective vision-based autopilots for drones. PMID:26107413

  1. Application of Visual Attention in Seismic Attribute Analysis

    NASA Astrophysics Data System (ADS)

    He, M.; Gu, H.; Wang, F.

    2016-12-01

    It has been proved that seismic attributes can be used to predict reservoir properties. Combining multiple attributes with geological statistics, data mining, and artificial intelligence has further promoted the development of seismic attribute analysis. However, the existing methods tend to have multiple solutions and insufficient generalization ability, which is mainly due to the complex relationship between seismic data and geological information, and partly to the methods applied. Visual attention is a mechanism model of the human visual system that can concentrate rapidly on a few significant visual objects, even in a cluttered scene; the model offers good target detection and recognition ability. In our study, the targets to be predicted are treated as visual objects, and an object representation based on well data is built in the attribute dimensions. Then, in the same attribute space, this representation serves as a criterion for searching for potential targets outside the wells. The method does not predict properties by building up a complicated relation between attributes and reservoir properties, but instead refers to the previously determined standard. It therefore has good generalization ability, and the problem of multiple solutions can be weakened by defining a threshold of similarity.
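    The search step described above, matching off-well locations against a well-based reference in attribute space with a similarity threshold, can be sketched as follows. The attribute values, the averaging of well vectors, and the choice of cosine similarity are all hypothetical illustrations, not details from the study.

    ```python
    import math

    # Hypothetical sketch of the attribute-space search described above:
    # a reference "visual object" is built from well locations, and every
    # other location is flagged as a potential target when its similarity
    # to the reference exceeds a threshold. The cosine measure and the
    # attribute values below are illustrative assumptions.

    def cosine_similarity(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb)

    def find_targets(reference, candidates, threshold=0.95):
        """Return indices of candidate attribute vectors similar to the reference."""
        return [i for i, c in enumerate(candidates)
                if cosine_similarity(reference, c) >= threshold]

    # Reference built by averaging attribute vectors at the wells
    # (columns might be, e.g., amplitude, frequency, coherence):
    wells = [[0.9, 0.2, 0.7], [1.1, 0.2, 0.6]]
    reference = [sum(col) / len(col) for col in zip(*wells)]
    candidates = [[1.0, 0.2, 0.65],   # close to the well signature
                  [0.1, 0.9, 0.1]]    # dissimilar background location
    hits = find_targets(reference, candidates)
    ```

    Raising or lowering the threshold directly trades off false positives against missed targets, which is how the multiple-solution problem can be narrowed.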

  2. Possible functions of contextual modulations and receptive field nonlinearities: pop-out and texture segmentation

    PubMed Central

    Schmid, Anita M.; Victor, Jonathan D.

    2014-01-01

    When analyzing a visual image, the brain has to achieve several goals quickly. One crucial goal is to rapidly detect parts of the visual scene that might be behaviorally relevant, while another one is to segment the image into objects, to enable an internal representation of the world. Both of these processes can be driven by local variations in any of several image attributes such as luminance, color, and texture. Here, focusing on texture defined by local orientation, we propose that the two processes are mediated by separate mechanisms that function in parallel. More specifically, differences in orientation can cause an object to “pop out” and attract visual attention, if its orientation differs from that of the surrounding objects. Differences in orientation can also signal a boundary between objects and therefore provide useful information for image segmentation. We propose that contextual response modulations in primary visual cortex (V1) are responsible for orientation pop-out, while a different kind of receptive field nonlinearity in secondary visual cortex (V2) is responsible for orientation-based texture segmentation. We review a recent experiment that led us to put forward this hypothesis along with other research literature relevant to this notion. PMID:25064441

  3. Sensing Super-Position: Human Sensing Beyond the Visual Spectrum

    NASA Technical Reports Server (NTRS)

    Maluf, David A.; Schipper, John F.

    2007-01-01

    The coming decade of fast, cheap, and miniaturized electronics and sensory devices opens new pathways for the development of sophisticated equipment to overcome limitations of the human senses. This paper addresses the technical feasibility of augmenting human vision through Sensing Super-position by mixing in natural human sensing. The current implementation of the device translates visual and other passive or active sensory instruments into sounds, which become relevant when the visual resolution is insufficient for very difficult and particular sensing tasks. A successful Sensing Super-position system meets many human and pilot-vehicle system requirements. The system can be further developed into a cheap, portable, and low-power device, taking into account the limited capabilities of the human user as well as the typical characteristics of their dynamic environment. The system operates in real time, giving the desired information for the particular augmented sensing tasks. The Sensing Super-position device increases perceived image resolution via an auditory representation that complements the visual representation. Auditory mapping is performed to distribute an image in time. The three-dimensional spatial brightness and multi-spectral maps of a sensed image are processed using real-time image processing techniques (e.g., histogram normalization) and transformed into a two-dimensional map of an audio signal as a function of frequency and time. This paper details the approach of developing Sensing Super-position systems as a way to augment the human vision system by exploiting the capabilities of the human hearing system as an additional neural input. The human hearing system is capable of learning to process and interpret extremely complicated and rapidly changing auditory patterns. The known capabilities of the human hearing system to learn and understand complicated auditory patterns provided the basic motivation for developing an image-to-sound mapping system.
The human brain is superior to most existing computer systems in rapidly extracting relevant information from blurred, noisy, and redundant images. From a theoretical viewpoint, this means that the available bandwidth is not exploited in an optimal way. While image-processing techniques can manipulate, condense, and focus the information (e.g., Fourier transforms), keeping the mapping as direct and simple as possible might also reduce the risk of accidentally filtering out important clues. After all, a perfectly non-redundant sound representation is especially prone to loss of relevant information in the imperfect human hearing system. Also, a complicated non-redundant image-to-sound mapping may well be far more difficult to learn and comprehend than a straightforward mapping, while the mapping system would increase in complexity and cost. This work demonstrates some basic information processing for optimal information capture for head-mounted systems.
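The frequency-versus-time auditory mapping described above can be sketched roughly: scan a brightness map column by column (left to right becomes time), let row position set a sinusoid's pitch, and let pixel brightness set its amplitude. This is a generic image-to-sound sketch under assumed parameters (sample rate, frequency range, column duration), not the paper's actual system.

```python
import math

def image_to_sound(image, sample_rate=8000, col_duration=0.05,
                   f_low=200.0, f_high=2000.0):
    """Map a 2D brightness map (rows x cols, values in 0..1) to audio:
    each column becomes a time slice; each row drives one sinusoid
    whose frequency rises with row index and whose amplitude is the
    pixel brightness."""
    rows = len(image)
    freqs = [f_low + (f_high - f_low) * r / max(rows - 1, 1)
             for r in range(rows)]
    n = int(sample_rate * col_duration)   # samples per column
    samples = []
    for col in range(len(image[0])):
        for i in range(n):
            t = i / sample_rate
            s = sum(image[r][col] * math.sin(2 * math.pi * freqs[r] * t)
                    for r in range(rows))
            samples.append(s / rows)      # normalize so |s| <= 1
    return samples

# Hypothetical 3x4 brightness map with one bright pixel per column.
img = [[1, 0, 0, 0],
       [0, 1, 0, 1],
       [0, 0, 1, 0]]
audio = image_to_sound(img)
```

A real system would add the preprocessing the abstract mentions (e.g., histogram normalization) before the mapping; this sketch covers only the time/frequency distribution step.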

  4. A portfolio of products from the rapid terrain visualization interferometric SAR

    NASA Astrophysics Data System (ADS)

    Bickel, Douglas L.; Doerry, Armin W.

    2007-04-01

    The Rapid Terrain Visualization interferometric synthetic aperture radar was designed and built at Sandia National Laboratories as part of an Advanced Concept Technology Demonstration (ACTD) to "demonstrate the technologies and infrastructure to meet the Army requirement for rapid generation of digital topographic data to support emerging crisis or contingencies." This sensor was built by Sandia National Laboratories for the Joint Programs Sustainment and Development (JPSD) Project Office to provide highly accurate digital elevation models (DEMs) for military and civilian customers, both inside and outside of the United States. The sensor achieved better than HRTe Level IV position accuracy in near real-time. The system was flown on a deHavilland DHC-7 Army aircraft. This paper presents a collection of images and data products from the Rapid Terrain Visualization interferometric synthetic aperture radar. The imagery includes orthorectified images and DEMs from the RTV interferometric SAR radar.

  5. The First Rapid Assessment of Avoidable Blindness (RAAB) in Thailand

    PubMed Central

    Isipradit, Saichin; Sirimaharaj, Maytinee; Charukamnoetkanok, Puwat; Thonginnetra, Oraorn; Wongsawad, Warapat; Sathornsumetee, Busaba; Somboonthanakij, Sudawadee; Soomsawasdi, Piriya; Jitawatanarat, Umapond; Taweebanjongsin, Wongsiri; Arayangkoon, Eakkachai; Arame, Punyawee; Kobkoonthon, Chinsuchee; Pangputhipong, Pannet

    2014-01-01

    Background The majority of vision loss is preventable or treatable. Population surveys are crucial for planning, implementing, and monitoring policies and interventions to eliminate avoidable blindness and visual impairment. This is the first rapid assessment of avoidable blindness (RAAB) study in Thailand. Methods A cross-sectional study of a population in Thailand aged 50 years or over aimed to assess the prevalence and causes of blindness and visual impairment. Using the Thailand National Census 2010 as the sampling frame, a stratified four-stage cluster sampling based on probability proportional to size was conducted in 176 enumeration areas from 11 provinces. Participants received comprehensive eye examinations by ophthalmologists. Results The age- and sex-adjusted prevalences of blindness (presenting visual acuity (VA) <20/400), severe visual impairment (VA <20/200 but ≥20/400), and moderate visual impairment (VA <20/70 but ≥20/200) were 0.6% (95% CI: 0.5–0.8), 1.3% (95% CI: 1.0–1.6), and 12.6% (95% CI: 10.8–14.5), respectively. There was no significant difference among the four regions of Thailand. Cataract was the main cause of vision loss, accounting for 69.7% of blindness. Cataract surgical coverage in persons was 95.1% for a cutoff VA of 20/400. Refractive errors, diabetic retinopathy, glaucoma, and corneal opacities were responsible for 6.0%, 5.1%, 4.0%, and 2.0% of blindness, respectively. Conclusion Thailand is on track to achieve the goal of VISION 2020. However, there is still much room for improvement. Policy refinements and innovative interventions are recommended to alleviate blindness and visual impairment, especially regarding the backlog of blinding cataract; management of non-communicable, chronic, age-related eye diseases such as glaucoma, age-related macular degeneration, and diabetic retinopathy; prevention of childhood blindness; and establishment of a robust eye health information system. PMID:25502762

  6. Older drivers and rapid deceleration events: Salisbury Eye Evaluation Driving Study.

    PubMed

    Keay, Lisa; Munoz, Beatriz; Duncan, Donald D; Hahn, Daniel; Baldwin, Kevin; Turano, Kathleen A; Munro, Cynthia A; Bandeen-Roche, Karen; West, Sheila K

    2013-09-01

    Drivers who rapidly change speed while driving may be more at risk for a crash. We sought to determine the relationship of demographic, vision, and cognitive variables with episodes of rapid deceleration during five days of normal driving in a cohort of older drivers. In the Salisbury Eye Evaluation Driving Study, 1425 older drivers aged 67-87 were recruited from the Maryland Motor Vehicle Administration's rolls for licensees in Salisbury, Maryland. Participants had several measures of vision tested: visual acuity, contrast sensitivity, visual fields, and the attentional visual field. Participants were also tested for various domains of cognitive function including executive function, attention, psychomotor speed, and visual search. A custom-created driving monitoring system (DMS) was used to capture rapid deceleration events (RDEs), defined as at least 350 milli-g deceleration, during a five-day period of monitoring. The rate of RDEs per mile driven was modeled using a negative binomial regression model with an offset of the logarithm of the number of miles driven. We found that 30% of older drivers had one or more RDEs during the five-day period, and of those, about one third had four or more. The rate of RDEs per mile driven was highest for drivers covering fewer than 59 miles during the five-day period of monitoring. However, older drivers with RDEs were more likely to have better scores on cognitive tests of psychomotor speed and visual search, and to have faster brake reaction times. Further, greater average and maximum speed per driving segment were protective against RDEs. In conclusion, contrary to our hypothesis, older drivers who perform rapid decelerations tend to be more "fit", with better measures of vision and cognition, than those who do not have rapid deceleration events. Copyright © 2012 Elsevier Ltd. All rights reserved.
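The modeling step above, a count regression with an offset of log(miles driven), works because the offset turns a model of raw counts into a model of a per-mile rate: log E[RDE] = Xb + log(miles), so E[RDE] = miles * exp(Xb). A minimal numeric sketch of that structure with made-up coefficients (the study's fitted values are not reproduced here):

```python
import math

def expected_rde(miles, age, beta0=-4.0, beta_age=0.02):
    """Expected RDE count under a log-link count model with an offset:
    log E[RDE] = beta0 + beta_age * age + log(miles).
    The offset coefficient is fixed at 1, so expected counts scale
    linearly with exposure (miles driven) while the covariates model
    the per-mile rate. Coefficients here are hypothetical."""
    return math.exp(beta0 + beta_age * age + math.log(miles))

# Doubling exposure doubles the expected count for the same driver.
rate_100 = expected_rde(miles=100, age=70)
rate_200 = expected_rde(miles=200, age=70)
```

In practice one would fit such a model with a negative binomial GLM (to absorb overdispersion across drivers) rather than fix the coefficients, but the role of the offset is identical.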

  7. Older Drivers and Rapid Deceleration Events: Salisbury Eye Evaluation Driving Study

    PubMed Central

    Keay, Lisa; Munoz, Beatriz; Duncan, Donald D; Hahn, Daniel; Baldwin, Kevin; Turano, Kathleen A; Munro, Cynthia A; Bandeen-Roche, Karen; West, Sheila K

    2012-01-01

    Drivers who rapidly change speed while driving may be more at risk for a crash. We sought to determine the relationship of demographic, vision, and cognitive variables with episodes of rapid deceleration during five days of normal driving in a cohort of older drivers. In the Salisbury Eye Evaluation Driving Study, 1425 older drivers aged 67 to 87 were recruited from the Maryland Motor Vehicle Administration’s rolls for licensees in Salisbury, Maryland. Participants had several measures of vision tested: visual acuity, contrast sensitivity, visual fields, and the attentional visual field. Participants were also tested for various domains of cognitive function including executive function, attention, psychomotor speed, and visual search. A custom-created Driving Monitor System (DMS) was used to capture rapid deceleration events (RDEs), defined as at least 350 milli-g deceleration, during a five-day period of monitoring. The rate of RDEs per mile driven was modeled using a negative binomial regression model with an offset of the logarithm of the number of miles driven. We found that 30% of older drivers had one or more RDEs during the five-day period, and of those, about one third had four or more. The rate of RDEs per mile driven was highest for drivers covering fewer than 59 miles during the five-day period of monitoring. However, older drivers with RDEs were more likely to have better scores on cognitive tests of psychomotor speed and visual search, and to have faster brake reaction times. Further, greater average and maximum speed per driving segment were protective against RDEs. In conclusion, contrary to our hypothesis, older drivers who perform rapid decelerations tend to be more “fit”, with better measures of vision and cognition, than those who do not have rapid deceleration events. PMID:22742775

  8. Breaking Snake Camouflage: Humans Detect Snakes More Accurately than Other Animals under Less Discernible Visual Conditions.

    PubMed

    Kawai, Nobuyuki; He, Hongshen

    2016-01-01

    Humans and non-human primates are extremely sensitive to snakes as exemplified by their ability to detect pictures of snakes more quickly than those of other animals. These findings are consistent with the Snake Detection Theory, which hypothesizes that as predators, snakes were a major source of evolutionary selection that favored expansion of the visual system of primates for rapid snake detection. Many snakes use camouflage to conceal themselves from both prey and their own predators, making it very challenging to detect them. If snakes have acted as a selective pressure on primate visual systems, they should be more easily detected than other animals under difficult visual conditions. Here we tested whether humans discerned images of snakes more accurately than those of non-threatening animals (e.g., birds, cats, or fish) under conditions of less perceptual information by presenting a series of degraded images with the Random Image Structure Evolution technique (interpolation of random noise). We find that participants recognize mosaic images of snakes, which were regarded as functionally equivalent to camouflage, more accurately than those of other animals under dissolved conditions. The present study supports the Snake Detection Theory by showing that humans have a visual system that accurately recognizes snakes under less discernible visual conditions.
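The Random Image Structure Evolution degradation described above amounts, at its core, to interpolating an image with random noise: high noise proportions yield barely discernible stimuli, and the proportion is stepped down across the presentation series. A loose sketch of that interpolation, not the exact RISE procedure:

```python
import random

def degrade(image, alpha, rng):
    """Blend a grayscale image (values in 0..1) with uniform random
    noise: alpha=0 returns the original, alpha=1 pure noise. Each
    call resamples the noise field."""
    return [[(1 - alpha) * px + alpha * rng.random() for px in row]
            for row in image]

rng = random.Random(0)     # seeded for reproducibility
img = [[0.2, 0.8], [0.5, 0.1]]
# A degradation series from fully noisy (alpha=1) to fully
# revealed (alpha=0), mimicking a descending presentation sequence.
series = [degrade(img, a / 4, rng) for a in range(4, -1, -1)]
```

The experimental question then becomes at which step in such a series each animal category is first recognized; snakes, per the abstract, are recognized at noisier steps than other animals.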

  9. Visualizing morphogenesis in transgenic zebrafish embryos using BODIPY TR methyl ester dye as a vital counterstain for GFP.

    PubMed

    Cooper, Mark S; Szeto, Daniel P; Sommers-Herivel, Greg; Topczewski, Jacek; Solnica-Krezel, Lila; Kang, Hee-Chol; Johnson, Iain; Kimelman, David

    2005-02-01

    Green fluorescent protein (GFP) technology is rapidly advancing the study of morphogenesis, by allowing researchers to specifically focus on a subset of labeled cells within the living embryo. However, when imaging GFP-labeled cells using confocal microscopy, it is often essential to simultaneously visualize all of the cells in the embryo using dual-channel fluorescence to provide an embryological context for the cells expressing GFP. Although various counterstains are available, part of their fluorescence overlaps with the GFP emission spectra, making it difficult to clearly identify the cells expressing GFP. In this study, we report that a new fluorophore, BODIPY TR methyl ester dye, serves as a versatile vital counterstain for visualizing the cellular dynamics of morphogenesis within living GFP transgenic zebrafish embryos. The fluorescence of this photostable synthetic dye is spectrally separate from GFP fluorescence, allowing dual-channel, three-dimensional (3D) and four-dimensional (4D) confocal image data sets of living specimens to be easily acquired. These image data sets can be rendered subsequently into uniquely informative 3D and 4D visualizations using computer-assisted visualization software. We discuss a variety of immediate and potential applications of BODIPY TR methyl ester dye as a vital visualization counterstain for GFP in transgenic zebrafish embryos. Copyright 2004 Wiley-Liss, Inc.

  10. Breaking Snake Camouflage: Humans Detect Snakes More Accurately than Other Animals under Less Discernible Visual Conditions

    PubMed Central

    He, Hongshen

    2016-01-01

    Humans and non-human primates are extremely sensitive to snakes as exemplified by their ability to detect pictures of snakes more quickly than those of other animals. These findings are consistent with the Snake Detection Theory, which hypothesizes that as predators, snakes were a major source of evolutionary selection that favored expansion of the visual system of primates for rapid snake detection. Many snakes use camouflage to conceal themselves from both prey and their own predators, making it very challenging to detect them. If snakes have acted as a selective pressure on primate visual systems, they should be more easily detected than other animals under difficult visual conditions. Here we tested whether humans discerned images of snakes more accurately than those of non-threatening animals (e.g., birds, cats, or fish) under conditions of less perceptual information by presenting a series of degraded images with the Random Image Structure Evolution technique (interpolation of random noise). We find that participants recognize mosaic images of snakes, which were regarded as functionally equivalent to camouflage, more accurately than those of other animals under dissolved conditions. The present study supports the Snake Detection Theory by showing that humans have a visual system that accurately recognizes snakes under less discernible visual conditions. PMID:27783686

  11. Intelligent Visual Input: A Graphical Method for Rapid Entry of Patient-Specific Data

    PubMed Central

    Bergeron, Bryan P.; Greenes, Robert A.

    1987-01-01

    Intelligent Visual Input (IVI) provides a rapid, graphical method of data entry for both expert system interaction and medical record keeping purposes. Key components of IVI include: a high-resolution graphic display; an interface supportive of rapid selection, i.e., one utilizing a mouse or light pen; algorithm simplification modules; and intelligent graphic algorithm expansion modules. A prototype IVI system, designed to facilitate entry of physical exam findings, is used to illustrate the potential advantages of this approach.

  12. Coherent modulation of stimulus colour can affect visually induced self-motion perception.

    PubMed

    Nakamura, Shinji; Seno, Takeharu; Ito, Hiroyuki; Sunaga, Shoji

    2010-01-01

    The effects of dynamic colour modulation on vection were investigated to examine whether perceived variation of illumination affects self-motion perception. Participants observed expanding optic flow which simulated their forward self-motion. Onset latency, accumulated duration, and estimated magnitude of the self-motion were measured as indices of vection strength. Colour of the dots in the visual stimulus was modulated between white and red (experiment 1), white and grey (experiment 2), and grey and red (experiment 3). The results indicated that coherent colour oscillation in the visual stimulus significantly suppressed the strength of vection, whereas incoherent or static colour modulation did not affect vection. There was no effect of the types of the colour modulation; both achromatic and chromatic modulations turned out to be effective in inhibiting self-motion perception. Moreover, in a situation where the simulated direction of a spotlight was manipulated dynamically, vection strength was also suppressed (experiment 4). These results suggest that observer's perception of illumination is critical for self-motion perception, and rapid variation of perceived illumination would impair the reliabilities of visual information in determining self-motion.

  13. Mapping language to visual referents: Does the degree of image realism matter?

    PubMed

    Saryazdi, Raheleh; Chambers, Craig G

    2018-01-01

    Studies of real-time spoken language comprehension have shown that listeners rapidly map unfolding speech to available referents in the immediate visual environment. This has been explored using various kinds of 2-dimensional (2D) stimuli, with convenience or availability typically motivating the choice of a particular image type. However, work in other areas has suggested that certain cognitive processes are sensitive to the level of realism in 2D representations. The present study examined the process of mapping language to depictions of objects that are more or less realistic, namely photographs versus clipart images. A custom stimulus set was first created by generating clipart images directly from photographs of real objects. Two visual world experiments were then conducted, varying whether referent identification was driven by noun or verb information. A modest benefit for clipart stimuli was observed during real-time processing, but only for noun-driven mappings. The results are discussed in terms of their implications for studies of visually situated language processing. Crown Copyright © 2017. Published by Elsevier B.V. All rights reserved.

  14. Virtual Observatories for Space Physics Observations and Simulations: New Routes to Efficient Access and Visualization

    NASA Technical Reports Server (NTRS)

    Roberts, Aaron

    2005-01-01

    New tools for data access and visualization promise to make the analysis of space plasma data both more efficient and more powerful, especially for answering questions about the global structure and dynamics of the Sun-Earth system. We will show how new and existing tools (particularly the Virtual Space Physics Observatory (VSPO) and the Visual System for Browsing, Analysis and Retrieval of Data (ViSBARD); look for the acronyms in Google) already provide rapid access to such information as spacecraft orbits, browse plots, and detailed data, as well as visualizations that can quickly unite our view of multispacecraft observations. We will show movies illustrating multispacecraft observations of the solar wind and magnetosphere during a magnetic storm, and of simulated 30-spacecraft observations derived from MHD simulations of the magnetosphere sampled along likely trajectories of the spacecraft for the MagCon mission. An important issue remaining to be solved is how best to integrate simulation data and services into the Virtual Observatory environment, and this talk will hopefully stimulate further discussion along these lines.

  15. Use of standardized visual assessments of riparian and stream condition to manage riparian bird habitat in eastern Oregon.

    PubMed

    Cooke, Hilary A; Zack, Steve

    2009-07-01

    The importance of riparian vegetation to support stream function and provide riparian bird habitat in semiarid landscapes suggests that standardized assessment tools that include vegetation criteria to evaluate stream health could also be used to assess habitat conditions for riparian-dependent birds. We first evaluated the ability of two visual assessments of woody vegetation in the riparian zone (corridor width and height) to describe variation in the obligate riparian bird ensemble along 19 streams in eastern Oregon. Overall species richness and the abundances of three species all correlated significantly with both, but width was more important than height. We then examined the utility of the riparian zone criteria in three standardized and commonly used rapid visual riparian assessment protocols--the USDI BLM Proper Functioning Condition (PFC) assessment, the USDA NRCS Stream Visual Assessment Protocol (SVAP), and the U.S. EPA Habitat Assessment Field Data Sheet (HAFDS)--to assess potential riparian bird habitat. Based on the degree of correlation of bird species richness with assessment ratings, we found that PFC does not assess obligate riparian bird habitat condition, SVAP provides a coarse estimate, and HAFDS provides the best assessment. We recommend quantitative measures of woody vegetation for all assessments and that all protocols incorporate woody vegetation height. Given that rapid assessments may be the only source of information for thousands of kilometers of streams in the western United States, incorporating simple vegetation measurements is a critical step in evaluating the status of riparian bird habitat and provides a tool for tracking changes in vegetation condition resulting from management decisions.

  16. A disease state fingerprint for evaluation of Alzheimer's disease.

    PubMed

    Mattila, Jussi; Koikkalainen, Juha; Virkki, Arho; Simonsen, Anja; van Gils, Mark; Waldemar, Gunhild; Soininen, Hilkka; Lötjönen, Jyrki

    2011-01-01

    Diagnostic processes of Alzheimer's disease (AD) are evolving. Knowledge about disease-specific biomarkers is constantly increasing and larger volumes of data are being measured from patients. To gain additional benefits from the collected data, a novel statistical modeling and data visualization system is proposed for supporting clinical diagnosis of AD. The proposed system computes an evidence-based estimate of a patient's AD state by comparing his or her heterogeneous neuropsychological, clinical, and biomarker data to previously diagnosed cases. The AD state in this context denotes a patient's degree of similarity to previously diagnosed disease population. A summary of patient data and results of the computation are displayed in a succinct Disease State Fingerprint (DSF) visualization. The visualization clearly discloses how patient data contributes to the AD state, facilitating rapid interpretation of the information. To model the AD state from complex and heterogeneous patient data, a statistical Disease State Index (DSI) method underlying the DSF has been developed. Using baseline data from the Alzheimer's Disease Neuroimaging Initiative (ADNI), the ability of the DSI to model disease progression from elderly healthy controls to AD and its ability to predict conversion from mild cognitive impairment (MCI) to AD were assessed. It was found that the DSI provides well-behaving AD state estimates, corresponding well with the actual diagnoses. For predicting conversion from MCI to AD, the DSI attains performance similar to state-of-the-art reference classifiers. The results suggest that the DSF establishes an effective decision support and data visualization framework for improving AD diagnostics, allowing clinicians to rapidly analyze large quantities of diverse patient data.
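The DSI underlying the fingerprint visualization scores how similar a patient's heterogeneous measurements are to previously diagnosed cases versus controls, then composites the per-feature evidence. A toy version of that idea, using class means rather than the published DSI formula, with invented feature names and values:

```python
def feature_index(value, control_mean, disease_mean):
    """Map a measurement to [0, 1]: 0 at the control mean, 1 at the
    disease mean, clipped outside the interval. A loose stand-in for
    a disease-state-index-style per-feature score."""
    lo, hi = control_mean, disease_mean
    if lo == hi:
        return 0.5
    t = (value - lo) / (hi - lo)
    return min(max(t, 0.0), 1.0)

def disease_state(patient, norms):
    """Composite score: unweighted mean of per-feature indices.
    `patient` maps feature name -> value; `norms` maps feature
    name -> (control_mean, disease_mean)."""
    scores = [feature_index(patient[f], *norms[f]) for f in norms]
    return sum(scores) / len(scores)

# Hypothetical features: hippocampal volume shrinks and a memory
# test score drops with disease progression.
norms = {"hippo_vol": (4.0, 2.5), "memory": (28.0, 15.0)}
healthy = disease_state({"hippo_vol": 3.9, "memory": 27.0}, norms)
patient = disease_state({"hippo_vol": 2.6, "memory": 16.0}, norms)
```

The published method derives per-feature scores from the full empirical distributions of the diagnosed and control populations and weights features by their discriminative relevance; this sketch only conveys the compositing idea that the fingerprint visualization then breaks back down feature by feature.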

  17. Web-Based Visual Analytics for Social Media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Best, Daniel M.; Bruce, Joseph R.; Dowson, Scott T.

    Social media provides a rich source of data that reflects current trends and public opinion on a multitude of topics. The data can be harvested from Twitter, Facebook, blogs, and other social applications. The high rate of adoption of social media has created a domain with an ever-expanding volume of data that makes it difficult to use the raw data for analysis. Information visual analytics is key in drawing out features of interest in social media. The Scalable Reasoning System is an application that couples a back-end server performing analysis algorithms with an intuitive front-end visualization to allow for investigation. We provide a componentized system that can be rapidly adapted to customer needs such that the information they are most interested in is brought to their attention through the application. To this end, we have developed a social media application for use by emergency operations for the city of Seattle to show current weather and traffic trends, which is important for their tasks.

  18. Biologically based machine vision: signal analysis of monopolar cells in the visual system of Musca domestica.

    PubMed

    Newton, Jenny; Barrett, Steven F; Wilcox, Michael J; Popp, Stephanie

    2002-01-01

    Machine vision for navigational purposes is a rapidly growing field. Many abilities such as object recognition and target tracking rely on vision. Autonomous vehicles must be able to navigate in dynamic environments and simultaneously locate a target position. Traditional machine vision often fails to react in real time because of large computational requirements, whereas the fly achieves complex orientation and navigation with a relatively small and simple brain. Understanding how the fly extracts visual information and how neurons encode and process information could lead us to a new approach for machine vision applications. Photoreceptors in the Musca domestica eye that share the same spatial information converge into a structure called the cartridge. The cartridge consists of the photoreceptor axon terminals and monopolar cells L1, L2, and L4. It is thought that L1 and L2 cells encode edge-related information relative to a single cartridge. These cells are thought to be equivalent to vertebrate bipolar cells, producing contrast enhancement and reduction of information sent to L4. Monopolar cell L4 is thought to perform image segmentation on the information input from L1 and L2 and also to enhance edge detection. A mesh of interconnected L4 cells would correlate the output from L1 and L2 cells of adjacent cartridges and provide a parallel network for segmenting an object's edges. The focus of this research is to excite photoreceptors of the common housefly, Musca domestica, with different visual patterns. The electrical response of monopolar cells L1, L2, and L4 will be recorded using intracellular recording techniques. Signal analysis will determine the neurocircuitry needed to detect and segment images.

  19. Development of water environment information management and water pollution accident response system

    NASA Astrophysics Data System (ADS)

    Zhang, J.; Ruan, H.

    2009-12-01

    In recent years, many water pollution accidents have occurred alongside rapid economic development. In this study, a water environment information management and water pollution accident response system was developed based on geographic information system (GIS) techniques. The system integrates a spatial database, an attribute database, a hydraulic model, and a water quality model under a user-friendly interface in a GIS environment. The system runs on both Client/Server (C/S) and Browser/Server (B/S) platforms, which focus on modeling and inquiry respectively. It provides spatial and attribute data inquiry, water quality evaluation, statistics, water pollution accident response case management (e.g., opening a reservoir), and 2D and 3D visualization functions, and supplies supporting information for decision making in water pollution accident response. A polluted plume in the Huaihe River was selected to simulate the transport of pollutants.

  20. Reading Time Allocation Strategies and Working Memory Using Rapid Serial Visual Presentation

    ERIC Educational Resources Information Center

    Busler, Jessica N.; Lazarte, Alejandro A.

    2017-01-01

    Rapid serial visual presentation (RSVP) is a useful method for controlling the timing of text presentations and studying how readers' characteristics, such as working memory (WM) and reading strategies for time allocation, influence text recall. In the current study, a modified version of RSVP (Moving Window RSVP [MW-RSVP]) was used to induce…

  1. Object Categorization in Finer Levels Relies More on Higher Spatial Frequencies and Takes Longer.

    PubMed

    Ashtiani, Matin N; Kheradpisheh, Saeed R; Masquelier, Timothée; Ganjtabesh, Mohammad

    2017-01-01

    The human visual system contains a hierarchical sequence of modules that take part in visual perception at different levels of abstraction, i.e., the superordinate, basic, and subordinate levels. One important question is to identify the "entry" level at which visual representation commences in the process of object recognition. For a long time, it was believed that the basic level had a temporal advantage over the two other levels. This claim has been challenged recently. Here we used a series of psychophysics experiments, based on a rapid presentation paradigm, as well as two computational models, with bandpass-filtered images of five object classes to study the processing order of the categorization levels. In these experiments, we investigated the type of visual information required for categorizing objects at each level by varying the spatial frequency bands of the input image. The results of our psychophysics experiments and computational models are consistent. They indicate that different spatial frequency information had different effects on object categorization at each level. In the absence of high-frequency information, subordinate- and basic-level categorization are performed less accurately, while superordinate-level categorization is performed well. This means that low-frequency information is sufficient for the superordinate level, but not for the basic and subordinate levels. These finer levels rely more on high-frequency information, which appears to take longer to be processed, leading to longer reaction times. Finally, to avoid a ceiling effect, we evaluated the robustness of the results by adding different amounts of noise to the input images and repeating the experiments. As expected, categorization accuracy decreased and reaction time increased significantly, but the trends were the same. This shows that our results are not due to a ceiling effect. 
The compatibility between our psychophysical and computational results suggests that the temporal advantage of the superordinate (resp. basic) level over the basic (resp. subordinate) level is mainly due to computational constraints: the visual system processes higher spatial frequencies more slowly, and categorization at finer levels depends more on these higher spatial frequencies.

  2. Object Categorization in Finer Levels Relies More on Higher Spatial Frequencies and Takes Longer

    PubMed Central

    Ashtiani, Matin N.; Kheradpisheh, Saeed R.; Masquelier, Timothée; Ganjtabesh, Mohammad

    2017-01-01

    The human visual system contains a hierarchical sequence of modules that take part in visual perception at different levels of abstraction, i.e., the superordinate, basic, and subordinate levels. One important question is to identify the “entry” level at which visual representation commences in the process of object recognition. For a long time, it was believed that the basic level had a temporal advantage over the two other levels. This claim has been challenged recently. Here we used a series of psychophysics experiments, based on a rapid presentation paradigm, as well as two computational models, with bandpass-filtered images of five object classes to study the processing order of the categorization levels. In these experiments, we investigated the type of visual information required for categorizing objects at each level by varying the spatial frequency bands of the input image. The results of our psychophysics experiments and computational models are consistent. They indicate that different spatial frequency information had different effects on object categorization at each level. In the absence of high-frequency information, subordinate- and basic-level categorization are performed less accurately, while superordinate-level categorization is performed well. This means that low-frequency information is sufficient for the superordinate level, but not for the basic and subordinate levels. These finer levels rely more on high-frequency information, which appears to take longer to be processed, leading to longer reaction times. Finally, to avoid a ceiling effect, we evaluated the robustness of the results by adding different amounts of noise to the input images and repeating the experiments. As expected, categorization accuracy decreased and reaction time increased significantly, but the trends were the same. This shows that our results are not due to a ceiling effect. 
The compatibility between our psychophysical and computational results suggests that the temporal advantage of the superordinate (resp. basic) level over the basic (resp. subordinate) level is mainly due to computational constraints: the visual system processes higher spatial frequencies more slowly, and categorization at finer levels depends more on these higher spatial frequencies. PMID:28790954

  3. What Drives Bird Vision? Bill Control and Predator Detection Overshadow Flight.

    PubMed

    Martin, Graham R

    2017-01-01

    Although flight is regarded as a key behavior of birds, this review argues that the perceptual demands for its control are met within constraints set by the perceptual demands of two other key tasks: the control of bill (or feet) position, and the detection of food items/predators. Control of bill position, or of the feet when used in foraging, and the timing of their arrival at a target, are based upon information derived from the optic flow-field in the binocular region that encompasses the bill. Flow-fields use information extracted from close to the bird using vision of relatively low spatial resolution. The detection of food items and predators is based upon information detected at a greater distance and depends upon regions in the retina with relatively high spatial resolution. The tasks of detecting predators and of placing the bill (or feet) accurately make contradictory demands upon vision, and these have resulted in trade-offs in the form of visual fields and in the topography of retinal regions in which spatial resolution is enhanced, indicated by foveas, areas, and high ganglion cell densities. The informational function of binocular vision in birds does not lie in binocularity per se (i.e., two eyes receiving slightly different information simultaneously about the same objects) but in the contralateral projection of the visual field of each eye. This ensures that each eye receives information from a symmetrically expanding optic flow-field centered close to the direction of the bill, and from this the crucial information of direction of travel and time-to-contact can be extracted, almost instantaneously. Interspecific comparisons of visual fields between closely related species have shown that small differences in foraging techniques can give rise to different perceptual challenges, and these have resulted in differences in visual fields even within the same genus. 
This suggests that vision is subject to continuing and relatively rapid natural selection based upon individual differences in the structure of the optical system, retinal topography, and eye position in the skull. From a sensory ecology perspective, a bird is best characterized as "a bill guided by an eye", with the control of flight achieved within constraints on visual capacity dictated primarily by the demands of foraging and bill control.

  4. What Drives Bird Vision? Bill Control and Predator Detection Overshadow Flight

    PubMed Central

    Martin, Graham R.

    2017-01-01

    Although flight is regarded as a key behavior of birds, this review argues that the perceptual demands for its control are met within constraints set by the perceptual demands of two other key tasks: the control of bill (or feet) position, and the detection of food items/predators. Control of bill position, or of the feet when used in foraging, and the timing of their arrival at a target, are based upon information derived from the optic flow-field in the binocular region that encompasses the bill. Flow-fields use information extracted from close to the bird using vision of relatively low spatial resolution. The detection of food items and predators is based upon information detected at a greater distance and depends upon regions in the retina with relatively high spatial resolution. The tasks of detecting predators and of placing the bill (or feet) accurately make contradictory demands upon vision, and these have resulted in trade-offs in the form of visual fields and in the topography of retinal regions in which spatial resolution is enhanced, indicated by foveas, areas, and high ganglion cell densities. The informational function of binocular vision in birds does not lie in binocularity per se (i.e., two eyes receiving slightly different information simultaneously about the same objects) but in the contralateral projection of the visual field of each eye. This ensures that each eye receives information from a symmetrically expanding optic flow-field centered close to the direction of the bill, and from this the crucial information of direction of travel and time-to-contact can be extracted, almost instantaneously. Interspecific comparisons of visual fields between closely related species have shown that small differences in foraging techniques can give rise to different perceptual challenges, and these have resulted in differences in visual fields even within the same genus. 
This suggests that vision is subject to continuing and relatively rapid natural selection based upon individual differences in the structure of the optical system, retinal topography, and eye position in the skull. From a sensory ecology perspective, a bird is best characterized as “a bill guided by an eye”, with the control of flight achieved within constraints on visual capacity dictated primarily by the demands of foraging and bill control. PMID:29163020

  5. Visual Working Memory Load-Related Changes in Neural Activity and Functional Connectivity

    PubMed Central

    Li, Ling; Zhang, Jin-Xiang; Jiang, Tao

    2011-01-01

    Background Visual working memory (VWM) helps us store visual information to prepare for subsequent behavior. The neuronal mechanisms for sustaining coherent visual information and the mechanisms underlying limited VWM capacity have remained uncharacterized. Although numerous studies have utilized behavioral accuracy, neural activity, and connectivity to explore the mechanism of VWM retention, little is known about load-related changes in functional connectivity for hemi-field VWM retention. Methodology/Principal Findings In this study, we recorded electroencephalography (EEG) from 14 normal young adults while they performed a bilateral visual field memory task. Subjects had more rapid and accurate responses in the left visual field (LVF) memory condition. The difference in mean amplitude between the ipsilateral and contralateral event-related potentials (ERPs) at parietal-occipital electrodes during the retention interval was obtained for six different memory loads. Functional connectivity between 128 scalp regions was measured by EEG phase synchronization in the theta- (4–8 Hz), alpha- (8–12 Hz), beta- (12–32 Hz), and gamma- (32–40 Hz) frequency bands. The resulting matrices were converted to graphs, and mean degree, clustering coefficient, and shortest path length were computed as a function of memory load. The results showed that brain networks in the theta-, alpha-, beta-, and gamma-frequency bands were load-dependent and visual-field-dependent. Phase synchrony networks in the theta and alpha bands were more predominant during the retention period for right visual field (RVF) WM than for LVF WM. Furthermore, only in the RVF memory condition, theta-band network density during the retention interval was linked to delayed behavioral reaction times, and the topological properties of the alpha-band network were negatively correlated with behavioral accuracy. 
Conclusions/Significance We suggest that the differences between the LVF and RVF conditions in theta- and alpha-band functional connectivity and topological properties during the retention period may underlie the decline in behavioral performance in the RVF task. PMID:21789253
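
    The graph metrics named in this record (mean degree, clustering coefficient, shortest path length) can be sketched from a thresholded synchronization matrix. The threshold value and the toy 4-node matrix below are illustrative assumptions, not the study's data; this is a minimal pure-Python sketch of the standard definitions:

    ```python
    from collections import deque

    def to_graph(sync, threshold):
        """Binarize a symmetric synchronization matrix into an adjacency map."""
        n = len(sync)
        return {i: {j for j in range(n)
                    if j != i and sync[i][j] >= threshold} for i in range(n)}

    def mean_degree(adj):
        return sum(len(nb) for nb in adj.values()) / len(adj)

    def clustering_coefficient(adj):
        """Average local clustering coefficient over all nodes."""
        total = 0.0
        for node, nb in adj.items():
            k = len(nb)
            if k < 2:
                continue  # nodes with < 2 neighbors contribute 0
            links = sum(1 for u in nb for v in nb if u < v and v in adj[u])
            total += 2.0 * links / (k * (k - 1))
        return total / len(adj)

    def mean_shortest_path(adj):
        """Average BFS distance over connected node pairs."""
        dists, pairs = 0, 0
        for src in adj:
            seen = {src: 0}
            q = deque([src])
            while q:
                u = q.popleft()
                for v in adj[u]:
                    if v not in seen:
                        seen[v] = seen[u] + 1
                        q.append(v)
            dists += sum(seen.values())
            pairs += len(seen) - 1
        return dists / pairs if pairs else float("inf")

    # Toy 4-node synchronization matrix (symmetric, self-sync = 1.0)
    sync = [[1.0, 0.9, 0.2, 0.7],
            [0.9, 1.0, 0.8, 0.1],
            [0.2, 0.8, 1.0, 0.6],
            [0.7, 0.1, 0.6, 1.0]]
    g = to_graph(sync, threshold=0.5)
    print(mean_degree(g), clustering_coefficient(g), mean_shortest_path(g))
    ```

    With the chosen threshold the toy matrix becomes a 4-cycle, so every node has degree 2, no triangles exist, and the average path length is 4/3.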

  6. High Performance Real-Time Visualization of Voluminous Scientific Data Through the NOAA Earth Information System (NEIS).

    NASA Astrophysics Data System (ADS)

    Stewart, J.; Hackathorn, E. J.; Joyce, J.; Smith, J. S.

    2014-12-01

    Within our community, data volume is rapidly expanding. These data have limited value if one cannot interact with or visualize them in a timely manner. The scientific community needs the ability to dynamically visualize, analyze, and interact with these data along with other environmental data in real time, regardless of physical location or data format. Within the National Oceanic and Atmospheric Administration (NOAA), the Earth System Research Laboratory (ESRL) is actively developing the NOAA Earth Information System (NEIS). Previously, the NEIS team investigated methods of data discovery and interoperability. The recent focus shifted to high-performance real-time visualization, allowing NEIS to bring massive amounts of 4-D data, including output from weather forecast models as well as data from different observations (surface obs, upper air, etc.), together in one place. Our server-side architecture provides a real-time stream-processing system that utilizes server-based NVIDIA graphics processing units (GPUs) for data processing, wavelet-based compression, and other preparation techniques for visualization, allowing NEIS to minimize the bandwidth and latency of data delivery to end users. Client-side, users interact with NEIS services through TerraViz, the visualization application developed at ESRL. TerraViz is built on the Unity game engine and takes advantage of GPUs, allowing a user to interact with large data sets in real time in ways that might not have been possible before. Through these technologies, the NEIS team has improved accessibility to 'Big Data' and provided tools allowing novel visualization and seamless integration of data across time and space, regardless of data size, physical location, or data format. These capabilities provide the ability to see global interactions and their importance for weather prediction. 
Additionally, they allow greater access than currently exists, helping to foster scientific collaboration and new ideas. This presentation will provide an update on recent enhancements of the NEIS architecture and visualization capabilities, challenges faced, and ongoing research activities related to this project.
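
    The wavelet-based compression mentioned in this record can be illustrated with a single-level 1-D Haar transform that discards small detail coefficients. This is a sketch of the general technique only; NEIS's actual codec and GPU implementation are not described in the abstract, and the sample data are invented:

    ```python
    def haar_forward(signal):
        """One Haar level: pairwise averages + details (length must be even)."""
        avgs = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
        dets = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
        return avgs, dets

    def haar_inverse(avgs, dets):
        """Exact reconstruction from averages and details."""
        out = []
        for a, d in zip(avgs, dets):
            out += [a + d, a - d]
        return out

    def compress(signal, threshold):
        """Zero out small detail coefficients, then reconstruct."""
        avgs, dets = haar_forward(signal)
        dets = [d if abs(d) >= threshold else 0.0 for d in dets]
        return haar_inverse(avgs, dets)

    # Smooth field with one sharp feature: small details are dropped,
    # the sharp pair (14.0, 6.0) survives intact.
    field = [10.0, 10.2, 10.1, 9.9, 14.0, 6.0, 10.0, 10.4]
    print(compress(field, threshold=0.5))
    ```

    Smooth regions compress to their pairwise averages while sharp features are preserved, which is the property that makes wavelets attractive for streaming large gridded fields.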

  7. Encapsulated social perception of emotional expressions.

    PubMed

    Smortchkova, Joulia

    2017-01-01

    In this paper I argue that the detection of emotional expressions is, in its early stages, informationally encapsulated. I clarify and defend this view by appeal to data from social perception on the visual processing of faces, bodies, and facial and bodily expressions. Encapsulated social perception might exist alongside processes that are cognitively penetrated and that have to do with recognition and categorization, and it plays a central evolutionary role in preparing early and rapid responses to emotional stimuli. Copyright © 2016 Elsevier Inc. All rights reserved.

  8. Science information systems: Archive, access, and retrieval

    NASA Technical Reports Server (NTRS)

    Campbell, William J.

    1991-01-01

    The objective of this research is to develop technology for the automated characterization and interactive retrieval and visualization of very large, complex scientific data sets. Technologies will be developed for the following specific areas: (1) rapidly archiving data sets; (2) automatically characterizing and labeling data in near real-time; (3) providing users with the ability to browse contents of databases efficiently and effectively; (4) providing users with the ability to access and retrieve system independent data sets electronically; and (5) automatically alerting scientists to anomalies detected in data.

  9. Children inhibit global information when the forest is dense and local information when the forest is sparse.

    PubMed

    Krakowski, Claire-Sara; Borst, Grégoire; Vidal, Julie; Houdé, Olivier; Poirel, Nicolas

    2018-09-01

    Visual environments are composed of global shapes and local details that compete for attentional resources. In adults, the global level is processed more rapidly than the local level, and global information must be inhibited in order to process local information when the local information and global information are in conflict. Compared with adults, children present less of a bias toward global visual information and appear to be more sensitive to the density of local elements that constitute the global level. The current study aimed, for the first time, to investigate the key role of inhibition during global/local processing in children. By including two different conditions of global saliency during a negative priming procedure, the results showed that when the global level was salient (dense hierarchical figures), 7-year-old children and adults needed to inhibit the global level to process the local information. However, when the global level was less salient (sparse hierarchical figures), only children needed to inhibit the local level to process the global information. These results confirm a weaker global bias and the greater impact of saliency in children than in adults. Moreover, the results indicate that, regardless of age, inhibition of the most salient hierarchical level is systematically required to select the less salient but more relevant level. These findings have important implications for future research in this area. Copyright © 2018 Elsevier Inc. All rights reserved.

  10. Simulated Prosthetic Vision: The Benefits of Computer-Based Object Recognition and Localization.

    PubMed

    Macé, Marc J-M; Guivarch, Valérian; Denis, Grégoire; Jouffrais, Christophe

    2015-07-01

    Clinical trials with blind patients implanted with a visual neuroprosthesis showed that even the simplest tasks were difficult to perform with the limited vision restored by current implants. Simulated prosthetic vision (SPV) is a powerful tool to investigate the putative functions of the upcoming generations of visual neuroprostheses. Recent studies based on SPV showed that several generations of implants will be required before usable vision is restored. However, none of these studies relied on advanced image processing. High-level image processing could significantly reduce the amount of information required to perform visual tasks and help restore visuomotor behaviors, even with current low-resolution implants. In this study, we simulated a prosthetic vision device based on object localization in the scene. We evaluated the usability of this device for object recognition, localization, and reaching. We showed that a very low number of electrodes (e.g., nine) is sufficient to restore visually guided reaching movements with fair timing (10 s) and high accuracy. In addition, performance, in terms of both accuracy and speed, was comparable with 9 and 100 electrodes. Extraction of high-level information (object recognition and localization) from video images could drastically enhance the usability of current visual neuroprostheses. We suggest that this method, that is, localization of targets of interest in the scene, may restore various visuomotor behaviors. This method could prove functional on current low-resolution implants. The main limitation resides in the reliability of the vision algorithms, which are improving rapidly. Copyright © 2015 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.
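
    SPV studies commonly render the low-resolution "implant view" by pooling the scene into an N x N electrode grid. The sketch below shows that idea with a 3 x 3 grid (echoing the nine-electrode condition); the 6x6 input frame is invented, and this is not the authors' actual rendering or object-localization pipeline:

    ```python
    def pool_to_grid(image, n):
        """Average-pool a 2-D grayscale image (list of rows) to an n x n grid."""
        h, w = len(image), len(image[0])
        bh, bw = h // n, w // n  # block size per "electrode"
        grid = []
        for r in range(n):
            row = []
            for c in range(n):
                block = [image[r * bh + i][c * bw + j]
                         for i in range(bh) for j in range(bw)]
                row.append(sum(block) / len(block))
            grid.append(row)
        return grid

    # A bright object in the lower-right corner of a 6x6 frame
    frame = [[0] * 6 for _ in range(6)]
    for i in range(4, 6):
        for j in range(4, 6):
            frame[i][j] = 255
    print(pool_to_grid(frame, 3))  # brightest "electrode" is bottom-right
    ```

    With object localization upstream, the same grid could instead encode just the target's position, which is the information reduction the study argues makes nine electrodes usable.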

  11. Evidence flowers: An innovative, visual method of presenting "best evidence" summaries to health professional and lay audiences.

    PubMed

    Babatunde, O O; Tan, V; Jordan, J L; Dziedzic, K; Chew-Graham, C A; Jinks, C; Protheroe, J; van der Windt, D A

    2018-06-01

    Barriers to dissemination and engagement with evidence pose a threat to implementing evidence-based medicine. Understanding, retention, and recall can be enhanced by visual presentation of information. The aim of this exploratory research was to develop and evaluate the accessibility and acceptability of visual summaries for presenting evidence syntheses with multiple exposures or outcomes to professional and lay audiences. "Evidence flowers" were developed as a visual method of presenting data from 4 case scenarios: 2 complex evidence syntheses with multiple outcomes, Cochrane reviews, and clinical guidelines. Petals of evidence flowers were coloured according to the GRADE evidence rating system to display key findings and recommendations from the evidence summaries. Application of evidence flowers was observed during stakeholder workshops. Evaluation and feedback were conducted via questionnaires and informal interviews. Feedback from stakeholders on the evidence flowers collected from workshops, questionnaires, and interviews was encouraging and helpful for refining the design of the flowers. Comments were made on the content and design of the flowers, as well as the usability and potential for displaying different types of evidence. Evidence flowers are a novel and visually stimulating method for presenting research evidence from evidence syntheses with multiple exposures or outcomes, Cochrane reviews, and clinical guidelines. To promote access and engagement with research evidence, evidence flowers may be used in conjunction with other evidence synthesis products, such as (lay) summaries, evidence inventories, rapid reviews, and clinical guidelines. Additional research on potential adaptations and applications of the evidence flowers may further bridge the gap between research evidence and clinical practice. Copyright © 2018 John Wiley & Sons, Ltd.

  12. A rapid approach for characterization of thiol-conjugated antibody-drug conjugates and calculation of drug-antibody ratio by liquid chromatography mass spectrometry.

    PubMed

    Firth, David; Bell, Leonard; Squires, Martin; Estdale, Sian; McKee, Colin

    2015-09-15

    We present the demonstration of a rapid "middle-up" liquid chromatography mass spectrometry (LC-MS)-based workflow for use in the characterization of thiol-conjugated maleimidocaproyl-monomethyl auristatin F (mcMMAF) and valine-citrulline-monomethyl auristatin E (vcMMAE) antibody-drug conjugates. Deconvoluted spectra were generated following a combination of deglycosylation, IdeS (immunoglobulin-degrading enzyme from Streptococcus pyogenes) digestion, and reduction steps, providing a visual representation of the product for rapid lot-to-lot comparison and a means to quickly assess the integrity of the antibody structure and the applied conjugation chemistry by mass. The relative abundances of the detected ions also offer information regarding differences in drug conjugation levels between samples, and the average drug-antibody ratio can be calculated. The approach requires little material (<100 μg) and thus is amenable to small-scale process development testing or as an early component of a complete characterization project, facilitating informed decision making regarding which aspects of a molecule might need to be examined in more detail by orthogonal methodologies. Copyright © 2015 Elsevier Inc. All rights reserved.
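
    The average drug-antibody ratio (DAR) mentioned in this record is conventionally computed as the intensity-weighted mean of the observed drug-load species: DAR = sum(n * I_n) / sum(I_n). A minimal sketch, with hypothetical peak intensities (not data from the paper):

    ```python
    def average_dar(species):
        """species: iterable of (drugs_per_antibody, relative_intensity) pairs."""
        weighted = sum(n * i for n, i in species)
        total = sum(i for _, i in species)
        return weighted / total

    # Hypothetical relative abundances for DAR-0 through DAR-8 (even drug
    # loads dominating, as expected for interchain-cysteine conjugation)
    peaks = [(0, 5), (2, 30), (4, 40), (6, 20), (8, 5)]
    print(average_dar(peaks))  # → 3.8
    ```

    The same weighted average applies whether the intensities come from intact-level or middle-up deconvoluted spectra, provided each drug-load species is resolved.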

  13. Chameleons communicate with complex colour changes during contests: different body regions convey different information.

    PubMed

    Ligon, Russell A; McGraw, Kevin J

    2013-01-01

    Many animals display static coloration (e.g. of feathers or fur) that can serve as a reliable sexual or social signal, but the communication function of rapidly changing colours (as in chameleons and cephalopods) is poorly understood. We used recently developed photographic and mathematical modelling tools to examine how rapid colour changes of veiled chameleons Chamaeleo calyptratus predict aggressive behaviour during male-male competitions. Males that achieved brighter stripe coloration were more likely to approach their opponent, and those that attained brighter head coloration were more likely to win fights; speed of head colour change was also an important predictor of contest outcome. This correlative study represents the first quantification of rapid colour change using organism-specific visual models and provides evidence that the rate of colour change, in addition to maximum display coloration, can be an important component of communication. Interestingly, the body and head locations of the relevant colour signals map onto the behavioural displays given during specific contest stages, with lateral displays from a distance followed by directed, head-on approaches prior to combat, suggesting that different colour change signals may evolve to communicate different information (motivation and fighting ability, respectively).

  14. Chameleons communicate with complex colour changes during contests: different body regions convey different information

    PubMed Central

    Ligon, Russell A.; McGraw, Kevin J.

    2013-01-01

    Many animals display static coloration (e.g. of feathers or fur) that can serve as a reliable sexual or social signal, but the communication function of rapidly changing colours (as in chameleons and cephalopods) is poorly understood. We used recently developed photographic and mathematical modelling tools to examine how rapid colour changes of veiled chameleons Chamaeleo calyptratus predict aggressive behaviour during male–male competitions. Males that achieved brighter stripe coloration were more likely to approach their opponent, and those that attained brighter head coloration were more likely to win fights; speed of head colour change was also an important predictor of contest outcome. This correlative study represents the first quantification of rapid colour change using organism-specific visual models and provides evidence that the rate of colour change, in addition to maximum display coloration, can be an important component of communication. Interestingly, the body and head locations of the relevant colour signals map onto the behavioural displays given during specific contest stages, with lateral displays from a distance followed by directed, head-on approaches prior to combat, suggesting that different colour change signals may evolve to communicate different information (motivation and fighting ability, respectively). PMID:24335271

  15. Learning about Locomotion Patterns from Visualizations: Effects of Presentation Format and Realism

    ERIC Educational Resources Information Center

    Imhof, Birgit; Scheiter, Katharina; Gerjets, Peter

    2011-01-01

    The rapid development of computer graphics technology has made possible an easy integration of dynamic visualizations into computer-based learning environments. This study examines the relative effectiveness of dynamic visualizations, compared either to sequentially or simultaneously presented static visualizations. Moreover, the degree of realism…

  16. Exploration of spatio-temporal patterns of students' movement in field trip by visualizing the log data

    NASA Astrophysics Data System (ADS)

    Cho, Nahye; Kang, Youngok

    2018-05-01

    Enormous amounts of log data, in addition to user input data, are being generated as the numbers of mobile and web users continue to increase, and studies that explore the patterns and meanings of various movement activities by making use of these log data are also rising rapidly. Meanwhile, in the field of education, the importance of field trips has been recognized as creative education is highlighted, and examples of using mobile devices on field trips are growing with the development of information technology. In this study, we explore the patterns of student activity by visualizing the log data generated from high school students' field trips with mobile devices.

  17. Beyond Phonology: Visual Processes Predict Alphanumeric and Nonalphanumeric Rapid Naming in Poor Early Readers

    ERIC Educational Resources Information Center

    Kruk, Richard S.; Luther Ruban, Cassia

    2018-01-01

    Visual processes in Grade 1 were examined for their predictive influences in nonalphanumeric and alphanumeric rapid naming (RAN) in 51 poor early and 69 typical readers. In a lagged design, children were followed longitudinally from Grade 1 to Grade 3 over 5 testing occasions. RAN outcomes in early Grade 2 were predicted by speeded and nonspeeded…

  18. Rapid Resumption of Interrupted Search Is Independent of Age-Related Improvements in Visual Search

    ERIC Educational Resources Information Center

    Lleras, Alejandro; Porporino, Mafalda; Burack, Jacob A.; Enns, James T.

    2011-01-01

    In this study, 7-19-year-olds performed an interrupted visual search task in two experiments. Our question was whether the tendency to respond within 500 ms after a second glimpse of a display (the "rapid resumption" effect ["Psychological Science", 16 (2005) 684-688]) would increase with age in the same way as overall search efficiency. The…

  19. Fat Content Modulates Rapid Detection of Food: A Visual Search Study Using Fast Food and Japanese Diet

    PubMed Central

    Sawada, Reiko; Sato, Wataru; Toichi, Motomi; Fushiki, Tohru

    2017-01-01

    Rapid detection of food is crucial for the survival of organisms. However, previous visual search studies have reported discrepant results regarding the detection speeds for food vs. non-food items; some experiments showed faster detection of food than non-food, whereas others reported null findings concerning any speed advantage for the detection of food vs. non-food. Moreover, although some previous studies showed that fat content can affect visual attention for food, the effect of fat content on the detection of food remains unclear. To investigate these issues, we measured reaction times (RTs) during a visual search task in which participants with normal weight detected high-fat food (i.e., fast food), low-fat food (i.e., Japanese diet), and non-food (i.e., kitchen utensils) targets within crowds of non-food distractors (i.e., cars). Results showed that RTs for food targets were shorter than those for non-food targets. Moreover, the RTs for high-fat food were shorter than those for low-fat food. These results suggest that food is more rapidly detected than non-food within the environment and that a higher fat content in food facilitates rapid detection. PMID:28690568

  20. Fat Content Modulates Rapid Detection of Food: A Visual Search Study Using Fast Food and Japanese Diet.

    PubMed

    Sawada, Reiko; Sato, Wataru; Toichi, Motomi; Fushiki, Tohru

    2017-01-01

    Rapid detection of food is crucial for the survival of organisms. However, previous visual search studies have reported discrepant results regarding the detection speeds for food vs. non-food items; some experiments showed faster detection of food than non-food, whereas others reported null findings concerning any speed advantage for the detection of food vs. non-food. Moreover, although some previous studies showed that fat content can affect visual attention for food, the effect of fat content on the detection of food remains unclear. To investigate these issues, we measured reaction times (RTs) during a visual search task in which participants with normal weight detected high-fat food (i.e., fast food), low-fat food (i.e., Japanese diet), and non-food (i.e., kitchen utensils) targets within crowds of non-food distractors (i.e., cars). Results showed that RTs for food targets were shorter than those for non-food targets. Moreover, the RTs for high-fat food were shorter than those for low-fat food. These results suggest that food is more rapidly detected than non-food within the environment and that a higher fat content in food facilitates rapid detection.

  1. Fast Detector/First Responder: Interactions between the Superior Colliculus-Pulvinar Pathway and Stimuli Relevant to Primates

    PubMed Central

    Soares, Sandra C.; Maior, Rafael S.; Isbell, Lynne A.; Tomaz, Carlos; Nishijo, Hisao

    2017-01-01

    Primates are distinguished from other mammals by their heavy reliance on the visual sense, which occurred as a result of natural selection continually favoring those individuals whose visual systems were more responsive to challenges in the natural world. Here we describe two independent but also interrelated visual systems, one cortical and the other subcortical, both of which have been modified and expanded in primates for different functions. Available evidence suggests that while the cortical visual system mainly functions to give primates the ability to assess and adjust to fluid social and ecological environments, the subcortical visual system appears to function as a rapid detector and first responder when time is of the essence, i.e., when survival requires very quick action. We focus here on the subcortical visual system with a review of behavioral and neurophysiological evidence that demonstrates its sensitivity to particular, often emotionally charged, ecological and social stimuli, i.e., snakes and fearful and aggressive facial expressions in conspecifics. We also review the literature on subcortical involvement during another, less emotional, situation that requires rapid detection and response—visually guided reaching and grasping during locomotion—to further emphasize our argument that the subcortical visual system evolved as a rapid detector/first responder, a function that remains in place today. Finally, we argue that investigating deficits in this subcortical system may provide greater understanding of Parkinson's disease and Autism Spectrum disorders (ASD). PMID:28261046

  2. Software Prototyping: A Case Report of Refining User Requirements for a Health Information Exchange Dashboard.

    PubMed

    Nelson, Scott D; Del Fiol, Guilherme; Hanseler, Haley; Crouch, Barbara Insley; Cummins, Mollie R

    2016-01-01

    Health information exchange (HIE) between Poison Control Centers (PCCs) and Emergency Departments (EDs) could improve care of poisoned patients. However, PCC information systems are not designed to facilitate HIE with EDs; therefore, we are developing specialized software to support HIE within the normal workflow of the PCC using user-centered design and rapid prototyping. Our objective was to describe the design of an HIE dashboard and the refinement of user requirements through rapid prototyping. Using previously elicited user requirements, we designed low-fidelity sketches of designs on paper with iterative refinement. Next, we designed an interactive high-fidelity prototype and conducted scenario-based usability tests with end users. Users were asked to think aloud while accomplishing tasks related to a case vignette. After testing, the users provided feedback and evaluated the prototype using the System Usability Scale (SUS). Survey results from three users provided useful feedback that was then incorporated into the design. After achieving a stable design, we used the prototype itself as the specification for development of the actual software. Benefits of prototyping included 1) having subject-matter experts heavily involved with the design; 2) the flexibility to make rapid changes; 3) the ability to minimize software development efforts early in the design stage; 4) rapid finalization of requirements; 5) early visualization of designs; and 6) a powerful vehicle for communication of the design to the programmers. Challenges included 1) the time and effort required to develop the prototypes and case scenarios; 2) no simulation of system performance; 3) not having all proposed functionality available in the final product; and 4) missing needed data elements in the PCC information system.
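    The System Usability Scale used above has a fixed, well-known scoring rule: odd-numbered items contribute (response - 1), even-numbered items contribute (5 - response), and the sum is scaled by 2.5 onto a 0-100 range. A minimal sketch:

```python
def sus_score(responses):
    """Compute a System Usability Scale score from ten 1-5 responses.

    Odd-numbered items (1st, 3rd, ...) contribute (response - 1);
    even-numbered items contribute (5 - response). The sum is scaled
    by 2.5 to yield a score between 0 and 100.
    """
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 item responses")
    total = sum(
        (r - 1) if i % 2 == 0 else (5 - r)  # i is 0-based, so even i = odd item
        for i, r in enumerate(responses)
    )
    return total * 2.5

# Maximally positive responses (5 on odd items, 1 on even items) score 100.
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # → 100.0
```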

  3. LlamaTags: A Versatile Tool to Image Transcription Factor Dynamics in Live Embryos.

    PubMed

    Bothma, Jacques P; Norstad, Matthew R; Alamos, Simon; Garcia, Hernan G

    2018-06-14

    Embryonic cell fates are defined by transcription factors that are rapidly deployed, yet attempts to visualize these factors in vivo often fail because of slow fluorescent protein maturation. Here, we pioneer a protein tag, LlamaTag, which circumvents this maturation limit by binding mature fluorescent proteins, making it possible to visualize transcription factor concentration dynamics in live embryos. Implementing this approach in the fruit fly Drosophila melanogaster, we discovered stochastic bursts in the concentration of transcription factors that are correlated with bursts in transcription. We further used LlamaTags to show that the concentration of protein in a given nucleus heavily depends on transcription of that gene in neighboring nuclei; we speculate that this inter-nuclear signaling is an important mechanism for coordinating gene expression to delineate straight and sharp boundaries of gene expression. Thus, LlamaTags now make it possible to visualize the flow of information along the central dogma in live embryos. Copyright © 2018 Elsevier Inc. All rights reserved.

  4. A top-down manner-based DCNN architecture for semantic image segmentation.

    PubMed

    Qiao, Kai; Chen, Jian; Wang, Linyuan; Zeng, Lei; Yan, Bin

    2017-01-01

    Given their powerful feature representation for recognition, deep convolutional neural networks (DCNNs) have been driving rapid advances in high-level computer vision tasks. However, their performance in semantic image segmentation is still not satisfactory. Based on an analysis of visual mechanisms, we conclude that DCNNs operating in a purely bottom-up manner are not sufficient, because the semantic image segmentation task requires not only recognition but also visual attention capability. In this study, superpixels containing visual attention information are introduced in a top-down manner, and an extensible architecture is proposed to improve the segmentation results of current DCNN-based methods. We employ the current state-of-the-art fully convolutional network (FCN) and FCN with conditional random field (DeepLab-CRF) as baselines to validate our architecture. Experimental results on the PASCAL VOC segmentation task qualitatively show that coarse edges and erroneous segmentations are markedly improved, and we quantitatively obtain an intersection over union (IOU) accuracy improvement of about 2%-3% on the PASCAL VOC 2011 and 2012 test sets.
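    The intersection-over-union metric quoted above is, per class, the overlap between predicted and ground-truth pixel sets divided by their union. A minimal sketch on flat boolean masks:

```python
def iou(pred, truth):
    """Intersection over union of two equal-length boolean pixel masks."""
    inter = sum(p and t for p, t in zip(pred, truth))
    union = sum(p or t for p, t in zip(pred, truth))
    return inter / union if union else 1.0  # both masks empty: define IOU = 1

pred  = [1, 1, 0, 0, 1]
truth = [1, 0, 0, 1, 1]
print(iou(pred, truth))  # → 0.5 (2 overlapping pixels, 4 in the union)
```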

  5. Neuronal integration in visual cortex elevates face category tuning to conscious face perception

    PubMed Central

    Fahrenfort, Johannes J.; Snijders, Tineke M.; Heinen, Klaartje; van Gaal, Simon; Scholte, H. Steven; Lamme, Victor A. F.

    2012-01-01

    The human brain has the extraordinary capability to transform cluttered sensory input into distinct object representations. For example, it is able to rapidly and seemingly without effort detect object categories in complex natural scenes. Surprisingly, category tuning is not sufficient to achieve conscious recognition of objects. What neural process beyond category extraction might elevate neural representations to the level where objects are consciously perceived? Here we show that visible and invisible faces produce similar category-selective responses in the ventral visual cortex. The pattern of neural activity evoked by visible faces could be used to decode the presence of invisible faces and vice versa. However, only visible faces caused extensive response enhancements and changes in neural oscillatory synchronization, as well as increased functional connectivity between higher and lower visual areas. We conclude that conscious face perception is more tightly linked to neural processes of sustained information integration and binding than to processes accommodating face category tuning. PMID:23236162

  6. Big data and visual analytics in anaesthesia and health care.

    PubMed

    Simpao, A F; Ahumada, L M; Rehman, M A

    2015-09-01

    Advances in computer technology, patient monitoring systems, and electronic health record systems have enabled rapid accumulation of patient data in electronic form (i.e. big data). Organizations such as the Anesthesia Quality Institute and Multicenter Perioperative Outcomes Group have spearheaded large-scale efforts to collect anaesthesia big data for outcomes research and quality improvement. Analytics, the systematic use of data combined with quantitative and qualitative analysis to make decisions, can be applied to big data for quality and performance improvements, such as predictive risk assessment, clinical decision support, and resource management. Visual analytics is the science of analytical reasoning facilitated by interactive visual interfaces, and it can facilitate performance of cognitive activities involving big data. Ongoing integration of big data and analytics within anaesthesia and health care will increase demand for anaesthesia professionals who are well versed in both the medical and the information sciences. © The Author 2015. Published by Oxford University Press on behalf of the British Journal of Anaesthesia. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  7. Anatomically correct visualization of the human upper airway using a high-speed long range optical coherence tomography system with an integrated positioning sensor

    NASA Astrophysics Data System (ADS)

    Jing, Joseph C.; Chou, Lidek; Su, Erica; Wong, Brian J. F.; Chen, Zhongping

    2016-12-01

    The upper airway is a complex tissue structure that is prone to collapse. Current methods for studying airway obstruction, such as CT and MRI, are inadequate in safety, cost, or availability, or, as with flexible endoscopy, provide only localized qualitative information. Long range optical coherence tomography (OCT) has been used to visualize the human airway in vivo; however, the limited imaging range has prevented full delineation of the various shapes and sizes of the lumen. We present a new long range OCT system that integrates high speed imaging with a real-time position tracker to allow for the acquisition of an accurate 3D anatomical structure in vivo. The new system can achieve an imaging range of 30 mm at a frame rate of 200 Hz. The system is capable of generating a rapid and complete visualization and quantification of the airway, which can then be used in computational simulations to determine obstruction sites.

  8. Distinct roles of visual, parietal, and frontal motor cortices in memory-guided sensorimotor decisions.

    PubMed

    Goard, Michael J; Pho, Gerald N; Woodson, Jonathan; Sur, Mriganka

    2016-08-04

    Mapping specific sensory features to future motor actions is a crucial capability of mammalian nervous systems. We investigated the role of visual (V1), posterior parietal (PPC), and frontal motor (fMC) cortices for sensorimotor mapping in mice during performance of a memory-guided visual discrimination task. Large-scale calcium imaging revealed that V1, PPC, and fMC neurons exhibited heterogeneous responses spanning all task epochs (stimulus, delay, response). Population analyses demonstrated unique encoding of stimulus identity and behavioral choice information across regions, with V1 encoding stimulus, fMC encoding choice even early in the trial, and PPC multiplexing the two variables. Optogenetic inhibition during behavior revealed that all regions were necessary during the stimulus epoch, but only fMC was required during the delay and response epochs. Stimulus identity can thus be rapidly transformed into behavioral choice, requiring V1, PPC, and fMC during the transformation period, but only fMC for maintaining the choice in memory prior to execution.

  9. User-Centered Evaluation of Visual Analytics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scholtz, Jean C.

    Visual analytics systems are becoming very popular. More domains now use interactive visualizations to analyze the ever-increasing amount and heterogeneity of data. More novel visualizations are being developed for more tasks and users. We need to ensure that these systems can be evaluated to determine that they are both useful and usable. A user-centered evaluation for visual analytics needs to be developed for these systems. While many of the typical human-computer interaction (HCI) evaluation methodologies can be applied as is, others will need modification. Additionally, new functionality in visual analytics systems needs new evaluation methodologies. There is a difference between usability evaluations and user-centered evaluations. Usability looks at the efficiency, effectiveness, and user satisfaction of users carrying out tasks with software applications. User-centered evaluation looks more specifically at the utility provided to the users by the software. This is reflected in the evaluations done and in the metrics used. In the visual analytics domain this is very challenging, as users are most likely experts in a particular domain, the tasks they do are often not well defined, the software they use needs to support large amounts of different kinds of data, and often the tasks last for months. These difficulties are discussed more in the section on user-centered evaluation. Our goal is to provide a discussion of user-centered evaluation practices for visual analytics, including existing practices that can be carried out and new methodologies and metrics that need to be developed and agreed upon by the visual analytics community. The material provided here should be of use for both researchers and practitioners in the field of visual analytics.
Researchers and practitioners in HCI who are interested in visual analytics will find this information useful, as well as a discussion of changes that need to be made to current HCI practices to make them more suitable to visual analytics. A history of analysis and analysis techniques and problems is provided, as well as an introduction to user-centered evaluation and various evaluation techniques, for readers from different disciplines. The understanding of these techniques is imperative if we wish to support analysis in the visual analytics software we develop. Currently the evaluations that are conducted and published for visual analytics software are very informal and consist mainly of comments from users or potential users. Our goal is to help researchers in visual analytics conduct more formal user-centered evaluations. While these are time-consuming and expensive to carry out, the outcomes of these studies will have a defining impact on the field of visual analytics and help point the direction for future features and visualizations to incorporate. While many researchers view user-centered evaluation as a less-than-exciting area in which to work, the opposite is true. First of all, the goal of user-centered evaluation is to help visual analytics software developers, researchers, and designers improve their solutions and discover creative ways to better accommodate their users. Working with the users is extremely rewarding as well. While we use the term “users” in almost all situations, there is a wide variety of users that all need to be accommodated. Moreover, the domains that use visual analytics are varied and expanding. Just understanding the complexities of a number of these domains is exciting. Researchers are trying out different visualizations and interactions as well. And of course, the size and variety of data are expanding rapidly. User-centered evaluation in this context is rapidly changing.
There are no standard processes or metrics, and thus those of us working on user-centered evaluation must be creative in our work with both the users and the researchers and developers.

  10. Rapid extraction of gist from visual text and its influence on word recognition.

    PubMed

    Asano, Michiko; Yokosawa, Kazuhiko

    2011-01-01

    Two experiments explored rapid extraction of gist from a visual text and its influence on word recognition. In both, a short text (sentence) containing a target word was presented for 200 ms and was followed by a target recognition task. Results showed that participants recognized contextually anomalous word targets less frequently than contextually consistent counterparts (Experiment 1). This context effect was obtained when sentences contained the same semantic content but with disrupted syntactic structure (Experiment 2). Results demonstrate that words in a briefly presented visual sentence are processed in parallel and that rapid extraction of sentence gist relies on a primitive representation of sentence context (termed protocontext) that is semantically activated by the simultaneous presentation of multiple words (i.e., a sentence) before syntactic processing.

  11. Neural Activity Associated with Visual Search for Line Drawings on AAC Displays: An Exploration of the Use of fMRI.

    PubMed

    Wilkinson, Krista M; Dennis, Nancy A; Webb, Christina E; Therrien, Mari; Stradtman, Megan; Farmer, Jacquelyn; Leach, Raevynn; Warrenfeltz, Megan; Zeuner, Courtney

    2015-01-01

    Visual aided augmentative and alternative communication (AAC) consists of books or technologies that contain visual symbols to supplement spoken language. A common observation concerning some forms of aided AAC is that message preparation can be frustratingly slow. We explored the uses of fMRI to examine the neural correlates of visual search for line drawings on AAC displays in 18 college students under two experimental conditions. Under one condition, the location of the icons remained stable and participants were able to learn the spatial layout of the display. Under the other condition, constant shuffling of the locations of the icons prevented participants from learning the layout, impeding rapid search. Brain activation was contrasted under these conditions. Rapid search in the stable display was associated with greater activation of cortical and subcortical regions associated with memory, motor learning, and dorsal visual pathways compared to the search in the unpredictable display. Rapid search for line drawings on stable AAC displays involves not just the conceptual knowledge of the symbol meaning but also the integration of motor, memory, and visual-spatial knowledge about the display layout. Further research must study individuals who use AAC, as well as the functional effect of interventions that promote knowledge about array layout.

  12. Biographer: web-based editing and rendering of SBGN compliant biochemical networks.

    PubMed

    Krause, Falko; Schulz, Marvin; Ripkens, Ben; Flöttmann, Max; Krantz, Marcus; Klipp, Edda; Handorf, Thomas

    2013-06-01

    The rapid accumulation of knowledge in the field of Systems Biology during the past years requires advanced, but simple-to-use, methods for the visualization of information in a structured and easily comprehensible manner. We have developed biographer, a web-based renderer and editor for reaction networks, which can be integrated as a library into tools dealing with network-related information. Our software enables visualizations based on the emerging standard Systems Biology Graphical Notation. It is able to import networks encoded in various formats such as SBML, SBGN-ML and jSBGN, a custom lightweight exchange format. The core package is implemented in HTML5, CSS and JavaScript and can be used within any kind of web-based project. It features interactive graph-editing tools and automatic graph layout algorithms. In addition, we provide a standalone graph editor and a web server, which contains enhanced features like web services for the import and export of models and visualizations in different formats. The biographer tool can be used at and downloaded from the web page http://biographer.biologie.hu-berlin.de/. The different software packages, including a server-independent version as well as a web server for Windows and Linux based systems, are available at http://code.google.com/p/biographer/ under the open-source LGPL license.
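    SBGN-ML, one of the import formats named above, is an XML dialect. Independent of the biographer API itself, reading such a document takes only standard-library XML parsing; the namespace URI and the minimal sample map below are assumptions based on the published SBGN-ML schema, not output of the biographer tool:

```python
import xml.etree.ElementTree as ET

# A minimal SBGN-ML fragment (structure and namespace assumed from the
# published SBGN-ML schema; illustrative only).
SBGN_ML = """<sbgn xmlns="http://sbgn.org/libsbgn/0.2">
  <map language="process description">
    <glyph class="macromolecule" id="g1"/>
    <glyph class="simple chemical" id="g2"/>
  </map>
</sbgn>"""

NS = {"s": "http://sbgn.org/libsbgn/0.2"}
root = ET.fromstring(SBGN_ML)
# Collect the SBGN class of every glyph in document order.
classes = [g.get("class") for g in root.findall(".//s:glyph", NS)]
print(classes)  # → ['macromolecule', 'simple chemical']
```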

  13. Reporting, Visualization, and Modeling of Immunogenicity Data to Assess Its Impact on Pharmacokinetics, Efficacy, and Safety of Monoclonal Antibodies.

    PubMed

    Passey, Chaitali; Suryawanshi, Satyendra; Sanghavi, Kinjal; Gupta, Manish

    2018-02-26

    The rapidly increasing number of therapeutic biologics in development has led to a growing recognition of the need for improvements in immunogenicity assessment. Published data are often inadequate to assess the impact of an antidrug antibody (ADA) on pharmacokinetics, safety, and efficacy, and enable a fully informed decision about patient management in the event of ADA development. The recent introduction of detailed regulatory guidance for industry should help address many past inadequacies in immunogenicity assessment. Nonetheless, careful analysis of gathered data and clear reporting of results are critical to a full understanding of the clinical relevance of ADAs, but have not been widely considered in published literature to date. Here, we review visualization and modeling of immunogenicity data. We present several relatively simple visualization techniques that can provide preliminary information about the kinetics and magnitude of ADA responses, and their impact on pharmacokinetics and clinical endpoints for a given therapeutic protein. We focus on individual sample- and patient-level data, which can be used to build a picture of any trends, thereby guiding analysis of the overall study population. We also discuss methods for modeling ADA data to investigate the impact of immunogenicity on pharmacokinetics, efficacy, and safety.

  14. The use of animation video in teaching to enhance the imagination and visualization of student in engineering drawing

    NASA Astrophysics Data System (ADS)

    Ismail M., E.; Mahazir I., Irwan; Othman, H.; Amiruddin M., H.; Ariffin, A.

    2017-05-01

    The rapid development of information technology has given new life to the use of computers in education. One increasingly popular approach is multimedia technology, which merges a variety of media such as text, graphics, animation, video, and audio under computer control. With this technology, a wide range of multimedia elements can be developed to improve the quality of education. For that reason, this study investigated the use of a multimedia element based on an animated video developed for the Engineering Drawing subject according to the syllabus of the Vocational College of Malaysia. The study used a survey design with a quantitative approach and involved 30 respondents from among Industrial Machining students. The instrument was a questionnaire with a reliability coefficient of 0.83 (Cronbach's alpha). Data were collected and analyzed descriptively using SPSS. The study found that the animated-video multimedia element significantly increased students' imagination and visualization. The implications of this study provide information on how the use of multimedia elements affects students' imagination and visualization. In general, these findings contribute to the development of appropriate multimedia learning materials that enhance the quality of engineering drawing instruction.
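    The reliability coefficient reported above is Cronbach's alpha, which relates the sum of per-item variances to the variance of the total score. A minimal sketch (the response data are invented for illustration):

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-response columns.

    `items` is a list of k equal-length lists, one per questionnaire
    item, each holding one response per participant.
    """
    k = len(items)
    item_var = sum(pvariance(col) for col in items)
    totals = [sum(vals) for vals in zip(*items)]
    return k / (k - 1) * (1 - item_var / pvariance(totals))

# Two perfectly consistent items yield alpha = 1.0.
print(cronbach_alpha([[1, 2, 3], [1, 2, 3]]))  # → 1.0
```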

  15. Driving with indirect viewing sensors: understanding the visual perception issues

    NASA Astrophysics Data System (ADS)

    O'Kane, Barbara L.

    1996-05-01

    Visual perception is one of the most important elements of driving in that it enables the driver to understand and react appropriately to the situation along the path of the vehicle. The driver's visual perception is supported to the greatest extent while driving during the day. Noticeable decrements in visual acuity, range of vision, depth of field and color perception occur at night and under certain weather conditions. Indirect viewing sensors, utilizing various technologies and spectral bands, may assist the driver's normal mode of driving. Critical applications in the military as well as other official activities may require driving at night without headlights. In these latter cases, it is critical that the device, being the only source of scene information, provide the required scene cues needed for driving on, and oftentimes off, the road. One can speculate about the scene information that a driver needs, such as road edges, terrain orientation, people and object detection in or near the path of the vehicle, and so on. But the perceptual qualities of the scene that give rise to these perceptions are little known and thus not quantified for evaluation of indirect viewing devices. This paper discusses driving with headlights and compares the scene content with that provided by a thermal system in the 8-12 μm spectral band, which may be used for driving at some time. The benefits of each are discussed, as well as their limitations in providing information useful for the driver who must make rapid and critical decisions based upon the scene content available. General recommendations are made for potential avenues of development to overcome some of these limitations.

  16. Children with Autism Detect Targets at Very Rapid Presentation Rates with Similar Accuracy as Adults

    ERIC Educational Resources Information Center

    Hagmann, Carl Erick; Wyble, Bradley; Shea, Nicole; LeBlanc, Megan; Kates, Wendy R.; Russo, Natalie

    2016-01-01

    Enhanced perception may allow for visual search superiority by individuals with Autism Spectrum Disorder (ASD), but does it occur over time? We tested high-functioning children with ASD, typically developing (TD) children, and TD adults in two tasks at three presentation rates (50, 83.3, and 116.7 ms/item) using rapid serial visual presentation.…
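    The three presentation rates quoted (50, 83.3, and 116.7 ms/item) plausibly correspond to whole numbers of refresh frames on a 60 Hz display, an assumption this sketch makes explicit:

```python
# Assuming a 60 Hz display, each refresh frame lasts about 16.67 ms;
# the reported rates then correspond to 3, 5, and 7 frames per item.
REFRESH_MS = 1000 / 60  # ≈ 16.67 ms per frame at 60 Hz

rates_ms = [50, 83.3, 116.7]
frames = [round(r / REFRESH_MS) for r in rates_ms]
print(frames)  # → [3, 5, 7]
```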

  17. Detecting and Remembering Simultaneous Pictures in a Rapid Serial Visual Presentation

    ERIC Educational Resources Information Center

    Potter, Mary C.; Fox, Laura F.

    2009-01-01

    Viewers can easily spot a target picture in a rapid serial visual presentation (RSVP), but can they do so if more than 1 picture is presented simultaneously? Up to 4 pictures were presented on each RSVP frame, for 240 to 720 ms/frame. In a detection task, the target was verbally specified before each trial (e.g., "man with violin"); in a…

  18. CLICK: The new USGS center for LIDAR information coordination and knowledge

    USGS Publications Warehouse

    Stoker, Jason M.; Greenlee, Susan K.; Gesch, Dean B.; Menig, Jordan C.

    2006-01-01

    Elevation data is rapidly becoming an important tool for the visualization and analysis of geographic information. The creation and display of three-dimensional models representing bare earth, vegetation, and structures have become major requirements for geographic research in the past few years. Light Detection and Ranging (lidar) has been increasingly accepted as an effective and accurate technology for acquiring high-resolution elevation data for bare earth, vegetation, and structures. Lidar is an active remote sensing system that records the distance, or range, of a laser fired from an airborne or spaceborne platform such as an airplane, helicopter or satellite to objects or features on the Earth’s surface. By converting lidar data into bare ground topography and vegetation or structural morphologic information, extremely accurate, high-resolution elevation models can be derived to visualize and quantitatively represent scenes in three dimensions. In addition to high-resolution digital elevation models (Evans et al., 2001), other lidar-derived products include quantitative estimates of vegetative features such as canopy height, canopy closure, and biomass (Lefsky et al., 2002), and models of urban areas such as building footprints and three-dimensional city models (Maas, 2001).
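    A common first step in deriving a bare-earth surface from lidar returns, independent of any particular package, is binning points to a grid and keeping the lowest return per cell, which discards vegetation and structure hits lying above the ground. A minimal sketch with invented points:

```python
import math

def lowest_return_grid(points, cell_size):
    """Bin (x, y, z) lidar returns to a grid, keeping the minimum
    elevation per cell - a crude bare-earth approximation that drops
    canopy and structure returns above the ground surface."""
    grid = {}
    for x, y, z in points:
        key = (math.floor(x / cell_size), math.floor(y / cell_size))
        if key not in grid or z < grid[key]:
            grid[key] = z
    return grid

# Two returns in the same cell: a canopy hit (12.0 m) and a ground
# hit (2.1 m); the ground hit wins.
pts = [(0.5, 0.5, 12.0), (0.8, 0.2, 2.1), (1.5, 0.5, 2.3)]
print(lowest_return_grid(pts, 1.0))  # → {(0, 0): 2.1, (1, 0): 2.3}
```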

  19. Music to knowledge: A visual programming environment for the development and evaluation of music information retrieval techniques

    NASA Astrophysics Data System (ADS)

    Ehmann, Andreas F.; Downie, J. Stephen

    2005-09-01

    The objective of the International Music Information Retrieval Systems Evaluation Laboratory (IMIRSEL) project is the creation of a large, secure corpus of audio and symbolic music data accessible to the music information retrieval (MIR) community for the testing and evaluation of various MIR techniques. As part of the IMIRSEL project, a cross-platform JAVA based visual programming environment called Music to Knowledge (M2K) is being developed for a variety of music information retrieval related tasks. The primary objective of M2K is to supply the MIR community with a toolset that provides the ability to rapidly prototype algorithms, as well as foster the sharing of techniques within the MIR community through the use of a standardized set of tools. Due to the relatively large size of audio data and the computational costs associated with some digital signal processing and machine learning techniques, M2K is also designed to support distributed computing across computing clusters. In addition, facilities to allow the integration of non-JAVA based (e.g., C/C++, MATLAB, etc.) algorithms and programs are provided within M2K. [Work supported by the Andrew W. Mellon Foundation and NSF Grants No. IIS-0340597 and No. IIS-0327371.]

  20. PRECOG: a tool for automated extraction and visualization of fitness components in microbial growth phenomics.

    PubMed

    Fernandez-Ricaud, Luciano; Kourtchenko, Olga; Zackrisson, Martin; Warringer, Jonas; Blomberg, Anders

    2016-06-23

    Phenomics is a field in functional genomics that records variation in organismal phenotypes in the genetic, epigenetic or environmental context at a massive scale. For microbes, the key phenotype is the growth in population size because it contains information that is directly linked to fitness. Due to technical innovations and extensive automation our capacity to record complex and dynamic microbial growth data is rapidly outpacing our capacity to dissect and visualize this data and extract the fitness components it contains, hampering progress in all fields of microbiology. To automate visualization, analysis and exploration of complex and highly resolved microbial growth data as well as standardized extraction of the fitness components it contains, we developed the software PRECOG (PREsentation and Characterization Of Growth-data). PRECOG allows the user to quality control, interact with and evaluate microbial growth data with ease, speed and accuracy, also in cases of non-standard growth dynamics. Quality indices filter high- from low-quality growth experiments, reducing false positives. The pre-processing filters in PRECOG are computationally inexpensive and yet functionally comparable to more complex neural network procedures. We provide examples where data calibration, project design and feature extraction methodologies have a clear impact on the estimated growth traits, emphasising the need for proper standardization in data analysis. PRECOG is a tool that streamlines growth data pre-processing, phenotypic trait extraction, visualization, distribution and the creation of vast and informative phenomics databases.
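    One of the fitness components that PRECOG-style analyses extract is the maximum growth rate, the steepest slope of log-transformed population size over time. A generic sketch on invented optical-density readings (this is a simplification, not PRECOG's own algorithm):

```python
import math

def max_growth_rate(times, sizes):
    """Steepest pairwise slope of ln(population size) over time,
    i.e. the maximum specific growth rate between adjacent samples."""
    slopes = [
        (math.log(sizes[i + 1]) - math.log(sizes[i])) / (times[i + 1] - times[i])
        for i in range(len(times) - 1)
    ]
    return max(slopes)

# Hypothetical hourly OD readings: lag, exponential phase, saturation.
t  = [0, 1, 2, 3, 4, 5]
od = [0.05, 0.06, 0.12, 0.24, 0.40, 0.45]
rate = max_growth_rate(t, od)  # steepest slope occurs in exponential phase
```

    For this invented series the culture doubles each hour during exponential phase, so the estimate lands at ln(2) ≈ 0.69 per hour.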

  1. Dynamics of cortico-subcortical cross-modal operations involved in audio-visual object detection in humans.

    PubMed

    Fort, Alexandra; Delpuech, Claude; Pernier, Jacques; Giard, Marie-Hélène

    2002-10-01

    Very recently, a number of neuroimaging studies in humans have begun to investigate the question of how the brain integrates information from different sensory modalities to form unified percepts. Already, intermodal neural processing appears to depend on the modalities of inputs or the nature (speech/non-speech) of information to be combined. Yet, the variety of paradigms, stimuli and techniques used makes it difficult to understand the relationships between the factors operating at the perceptual level and the underlying physiological processes. In a previous experiment, we used event-related potentials to describe the spatio-temporal organization of audio-visual interactions during a bimodal object recognition task. Here we examined the network of cross-modal interactions involved in simple detection of the same objects. The objects were defined either by unimodal auditory or visual features alone, or by the combination of the two features. As expected, subjects detected bimodal stimuli more rapidly than either unimodal stimulus. Combined analysis of potentials, scalp current densities and dipole modeling revealed several interaction patterns within the first 200 ms post-stimulus: in occipito-parietal visual areas (45-85 ms), in deep brain structures, possibly the superior colliculus (105-140 ms), and in right temporo-frontal regions (170-185 ms). These interactions differed from those found during object identification in sensory-specific areas and possibly in the superior colliculus, indicating that the neural operations governing multisensory integration depend crucially on the nature of the perceptual processes involved.
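    The redundancy gain reported above (faster detection of bimodal than unimodal stimuli) is conventionally quantified by comparing bimodal RTs against the faster of the two unimodal conditions; a full analysis would apply Miller's race-model inequality to the RT distributions, but a minimal sketch on hypothetical means looks like this (all values invented):

```python
from statistics import mean

# Hypothetical detection RTs (ms) per modality; illustrative only.
rt = {
    "auditory":     [310, 325, 298, 330],
    "visual":       [290, 305, 285, 300],
    "audio_visual": [255, 270, 248, 262],
}

means = {cond: mean(v) for cond, v in rt.items()}
# Redundancy gain: how much faster the bimodal condition is than the
# faster of the two unimodal conditions.
fastest_unimodal = min(means["auditory"], means["visual"])
redundancy_gain = fastest_unimodal - means["audio_visual"]
print(redundancy_gain)  # → 36.25
```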

  2. A multistream model of visual word recognition.

    PubMed

    Allen, Philip A; Smith, Albert F; Lien, Mei-Ching; Kaut, Kevin P; Canfield, Angie

    2009-02-01

    Four experiments are reported that test a multistream model of visual word recognition, which associates letter-level and word-level processing channels with three known visual processing streams isolated in macaque monkeys: the magno-dominated (MD) stream, the interblob-dominated (ID) stream, and the blob-dominated (BD) stream (Van Essen & Anderson, 1995). We show that mixing the color of adjacent letters of words does not result in facilitation of response times or error rates when the spatial-frequency pattern of a whole word is familiar. However, facilitation does occur when the spatial-frequency pattern of a whole word is not familiar. This pattern of results is not due to different luminance levels across the different-colored stimuli and the background because isoluminant displays were used. Also, the mixed-case, mixed-hue facilitation occurred when different display distances were used (Experiments 2 and 3), so this suggests that image normalization can adjust independently of object size differences. Finally, we show that this effect persists in both spaced and unspaced conditions (Experiment 4), suggesting that inappropriate letter grouping by hue cannot account for these results. These data support a model of visual word recognition in which lower spatial frequencies are processed first in the more rapid MD stream. The slower ID and BD streams may process some lower spatial frequency information in addition to processing higher spatial frequency information, but these channels tend to lose the processing race to recognition unless the letter string is unfamiliar to the MD stream, as with mixed-case presentation.

  3. Eye movements reveal epistemic curiosity in human observers.

    PubMed

    Baranes, Adrien; Oudeyer, Pierre-Yves; Gottlieb, Jacqueline

    2015-12-01

    Saccadic (rapid) eye movements are a primary means by which humans and non-human primates sample visual information. However, while saccadic decisions are intensively investigated in instrumental contexts where saccades guide subsequent actions, it is largely unknown how they may be influenced by curiosity - the intrinsic desire to learn. While saccades are sensitive to visual novelty and visual surprise, no study has examined their relation to epistemic curiosity - interest in symbolic, semantic information. To investigate this question, we tracked the eye movements of human observers while they read trivia questions and, after a brief delay, were visually given the answer. We show that higher curiosity was associated with earlier anticipatory orienting of gaze toward the answer location without changes in other metrics of saccades or fixations, and that these influences were distinct from those produced by variations in confidence and surprise. Across subjects, the enhancement of anticipatory gaze was correlated with measures of trait curiosity from personality questionnaires. Finally, a machine learning algorithm could predict curiosity in a cross-subject manner, relying primarily on statistical features of the gaze position before the answer onset and independently of covariations in confidence or surprise, suggesting potential practical applications for educational technologies, recommender systems and research in cognitive sciences. With this article, we provide full access to the annotated database allowing readers to reproduce the results. Epistemic curiosity produces specific effects on oculomotor anticipation that can be used to read out curiosity states. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  4. Action Experience Changes Attention to Kinematic Cues

    PubMed Central

    Filippi, Courtney A.; Woodward, Amanda L.

    2016-01-01

    The current study used remote corneal reflection eye-tracking to examine the relationship between motor experience and action anticipation in 13-month-old infants. To measure online anticipation of actions, infants watched videos where the actor’s hand provided kinematic information (in its orientation) about the type of object that the actor was going to reach for. The actor’s hand orientation either matched the orientation of a rod (congruent cue) or did not match the orientation of the rod (incongruent cue). To examine relations between motor experience and action anticipation, we used a 2 (reach first vs. observe first) × 2 (congruent kinematic cue vs. incongruent kinematic cue) between-subjects design. We show that 13-month-old infants in the observe first condition spontaneously generate rapid online visual predictions to congruent hand orientation cues and do not visually anticipate when presented with incongruent cues. We further demonstrate that the speed with which these infants generate predictions to congruent motor cues is correlated with their own ability to pre-shape their hands. Finally, we demonstrate that following reaching experience, infants generate rapid predictions to both congruent and incongruent hand shape cues—suggesting that short-term experience changes attention to kinematics. PMID:26913012

  5. Just one look: Direct gaze briefly disrupts visual working memory.

    PubMed

    Wang, J Jessica; Apperly, Ian A

    2017-04-01

    Direct gaze is a salient social cue that affords rapid detection. A body of research suggests that direct gaze enhances performance on memory tasks (e.g., Hood, Macrae, Cole-Davies, & Dias, Developmental Science, 1, 67-71, 2003). Nonetheless, other studies highlight the disruptive effect direct gaze has on concurrent cognitive processes (e.g., Conty, Gimmig, Belletier, George, & Huguet, Cognition, 115(1), 133-139, 2010). This discrepancy raises questions about the effects direct gaze may have on concurrent memory tasks. We addressed this topic by employing a change detection paradigm, where participants retained information about the color of small sets of agents. Experiment 1 revealed that, despite the irrelevance of the agents' eye gaze to the memory task at hand, participants were worse at detecting changes when the agents looked directly at them compared to when the agents looked away. Experiment 2 showed that the disruptive effect was relatively short-lived. Prolonged presentation of direct gaze led to recovery from the initial disruption, rather than a sustained disruption on change detection performance. The present study provides the first evidence that direct gaze impairs visual working memory with a rapidly-developing yet short-lived effect even when there is no need to attend to agents' gaze.

  6. Software Prototyping

    PubMed Central

    Del Fiol, Guilherme; Hanseler, Haley; Crouch, Barbara Insley; Cummins, Mollie R.

    2016-01-01

    Summary Background Health information exchange (HIE) between Poison Control Centers (PCCs) and Emergency Departments (EDs) could improve care of poisoned patients. However, PCC information systems are not designed to facilitate HIE with EDs; therefore, we are developing specialized software to support HIE within the normal workflow of the PCC using user-centered design and rapid prototyping. Objective To describe the design of an HIE dashboard and the refinement of user requirements through rapid prototyping. Methods Using previously elicited user requirements, we designed low-fidelity sketches of designs on paper with iterative refinement. Next, we designed an interactive high-fidelity prototype and conducted scenario-based usability tests with end users. Users were asked to think aloud while accomplishing tasks related to a case vignette. After testing, the users provided feedback and evaluated the prototype using the System Usability Scale (SUS). Results Survey results from three users provided useful feedback that was then incorporated into the design. After achieving a stable design, we used the prototype itself as the specification for development of the actual software. Benefits of prototyping included 1) having subject-matter experts heavily involved with the design; 2) flexibility to make rapid changes; 3) the ability to minimize software development efforts early in the design stage; 4) rapid finalization of requirements; 5) early visualization of designs; and 6) a powerful vehicle for communication of the design to the programmers. Challenges included 1) time and effort to develop the prototypes and case scenarios; 2) no simulation of system performance; 3) not having all proposed functionality available in the final product; and 4) missing needed data elements in the PCC information system. PMID:27081404

  7. Crosswatch: a System for Providing Guidance to Visually Impaired Travelers at Traffic Intersections

    PubMed Central

    Coughlan, James M.; Shen, Huiying

    2013-01-01

    Purpose This paper describes recent progress on the “Crosswatch” project, a smartphone-based system developed for providing guidance to blind and visually impaired travelers at traffic intersections. Building on past work on Crosswatch functionality to help the user achieve proper alignment with the crosswalk and read the status of walk lights to know when it is time to cross, we outline the directions Crosswatch is now taking to help realize its potential for becoming a practical system: namely, augmenting computer vision with other information sources, including geographic information systems (GIS) and sensor data, and inferring the user's location much more precisely than is possible through GPS alone, to provide a much larger range of information about traffic intersections to the pedestrian. Design/methodology/approach The paper summarizes past progress on Crosswatch and describes details about the development of new Crosswatch functionalities. One such functionality, which is required for determination of the user's precise location, is studied in detail, including the design of a suitable user interface to support this functionality and preliminary tests of this interface with visually impaired volunteer subjects. Findings The results of the tests of the new Crosswatch functionality demonstrate that the functionality is feasible in that it is usable by visually impaired persons. Research limitations/implications While the tests that were conducted of the new Crosswatch functionality are preliminary, the results of the tests have suggested several possible improvements, to be explored in the future. Practical implications The results described in this paper suggest that the necessary technologies used by the Crosswatch system are rapidly maturing, implying that the system has an excellent chance of becoming practical in the near future. 
Originality/value The paper addresses an innovative solution to a key problem faced by blind and visually impaired travelers, which has the potential to greatly improve independent travel for these individuals. PMID:24353745

  8. Scientific Visualization and Simulation for Multi-dimensional Marine Environment Data

    NASA Astrophysics Data System (ADS)

    Su, T.; Liu, H.; Wang, W.; Song, Z.; Jia, Z.

    2017-12-01

    With growing attention to the ocean and the rapid development of marine sensing, there is increasing demand for realistic simulation and interactive visualization of the marine environment in real time. Based on advanced technologies such as GPU rendering, CUDA parallel computing and a rapid grid-oriented strategy, this paper proposes a series of efficient, high-quality visualization methods that can handle large-scale, multi-dimensional marine data under different environmental circumstances. Firstly, a high-quality seawater simulation is realized using an FFT algorithm, bump mapping and texture animation. Secondly, large-scale multi-dimensional marine hydrological data are visualized using 3D interaction and volume rendering techniques. Thirdly, seabed terrain data are simulated with an improved Delaunay algorithm, surface reconstruction, a dynamic LOD algorithm and GPU programming techniques. Fourthly, seamless real-time modelling of both ocean and land on a digital globe is achieved with WebGL to meet the requirements of web-based applications. The experiments suggest that these methods not only produce convincing marine environment simulations but also meet the rendering requirements of global multi-dimensional marine data. Additionally, a simulation system for underwater oil spills is built on the OSG 3D rendering engine. Integrated with the marine visualization methods mentioned above, it dynamically and simultaneously shows, in multiple dimensions, the movement processes, physical parameters, and current velocity and direction for different types of deep-water oil spill particles (oil particles, hydrate particles, gas particles, etc.). Such an application provides valuable reference and decision-making information for understanding the progress of a deep-water oil spill, supporting ocean disaster forecasting, warning and emergency response.
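    The FFT-based seawater simulation mentioned in this record rests on a simple principle: synthesize a random wave spectrum, then inverse-transform it to obtain a height field. A toy 1-D sketch of that idea (all parameters illustrative, not taken from the paper; production renderers do this in 2-D on the GPU every frame):

```python
# Toy 1-D sketch of FFT wave synthesis (illustrative parameters only):
# sample a decaying random spectrum, enforce Hermitian symmetry so the
# inverse transform is real-valued, and recover a wave height profile.
import cmath
import math
import random

def wave_profile(n=16, seed=0):
    rng = random.Random(seed)
    spec = [0j] * n
    for k in range(1, n // 2):
        amp = 1.0 / (k * k)                      # crude decaying spectrum
        phase = rng.uniform(0.0, 2.0 * math.pi)
        spec[k] = amp * cmath.exp(1j * phase)
        spec[n - k] = spec[k].conjugate()        # Hermitian symmetry
    # Inverse DFT, written O(n^2) for clarity; an FFT does it in O(n log n).
    heights = []
    for x in range(n):
        s = sum(spec[k] * cmath.exp(2j * math.pi * k * x / n) for k in range(n))
        heights.append((s / n).real)
    return heights

h = wave_profile()
```

Animating the phases over time, rather than drawing them once, is what turns this static profile into a moving sea surface.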

  9. Real-time visual communication to aid disaster recovery in a multi-segment hybrid wireless networking system

    NASA Astrophysics Data System (ADS)

    Al Hadhrami, Tawfik; Wang, Qi; Grecos, Christos

    2012-06-01

    When natural disasters or other large-scale incidents occur, obtaining accurate and timely information on the developing situation is vital to effective disaster recovery operations. High-quality video streams and high-resolution images, if available in real time, would provide an invaluable source of current situation reports to the incident management team. Meanwhile, a disaster often causes significant damage to the communications infrastructure. Therefore, another essential requirement for disaster management is the ability to rapidly deploy a flexible incident area communication network. Such a network would facilitate the transmission of real-time video streams and still images from the disrupted area to remote command and control locations. In this paper, a comprehensive end-to-end video/image transmission system between an incident area and a remote control centre is proposed and implemented, and its performance is experimentally investigated. In this study, a hybrid multi-segment communication network is designed that seamlessly integrates terrestrial wireless mesh networks (WMNs), distributed wireless visual sensor networks, an airborne platform with video camera balloons, and a Digital Video Broadcasting-Satellite (DVB-S) system. By carefully integrating all of these rapidly deployable, interworking and collaborative networking technologies, we can fully exploit the joint benefits provided by WMNs, WSNs, balloon camera networks and DVB-S for real-time video streaming and image delivery in emergency situations among the disaster-hit area, the remote control centre and the rescue teams in the field. The whole proposed system is implemented in a proven simulator. Through extensive simulations, the real-time visual communication performance of this integrated system has been numerically evaluated, toward a more in-depth understanding of supporting high-quality visual communications in such a demanding context.

  10. Resolving the neural dynamics of visual and auditory scene processing in the human brain: a methodological approach

    PubMed Central

    Teng, Santani

    2017-01-01

    In natural environments, visual and auditory stimulation elicit responses across a large set of brain regions in a fraction of a second, yielding representations of the multimodal scene and its properties. The rapid and complex neural dynamics underlying visual and auditory information processing pose major challenges to human cognitive neuroscience. Brain signals measured non-invasively are inherently noisy, the format of neural representations is unknown, and transformations between representations are complex and often nonlinear. Further, no single non-invasive brain measurement technique provides a spatio-temporally integrated view. In this opinion piece, we argue that progress can be made by a concerted effort based on three pillars of recent methodological development: (i) sensitive analysis techniques such as decoding and cross-classification, (ii) complex computational modelling using models such as deep neural networks, and (iii) integration across imaging methods (magnetoencephalography/electroencephalography, functional magnetic resonance imaging) and models, e.g. using representational similarity analysis. We showcase two recent efforts that have been undertaken in this spirit and provide novel results about visual and auditory scene analysis. Finally, we discuss the limits of this perspective and sketch a concrete roadmap for future research. This article is part of the themed issue ‘Auditory and visual scene analysis’. PMID:28044019

  11. Resolving the neural dynamics of visual and auditory scene processing in the human brain: a methodological approach.

    PubMed

    Cichy, Radoslaw Martin; Teng, Santani

    2017-02-19

    In natural environments, visual and auditory stimulation elicit responses across a large set of brain regions in a fraction of a second, yielding representations of the multimodal scene and its properties. The rapid and complex neural dynamics underlying visual and auditory information processing pose major challenges to human cognitive neuroscience. Brain signals measured non-invasively are inherently noisy, the format of neural representations is unknown, and transformations between representations are complex and often nonlinear. Further, no single non-invasive brain measurement technique provides a spatio-temporally integrated view. In this opinion piece, we argue that progress can be made by a concerted effort based on three pillars of recent methodological development: (i) sensitive analysis techniques such as decoding and cross-classification, (ii) complex computational modelling using models such as deep neural networks, and (iii) integration across imaging methods (magnetoencephalography/electroencephalography, functional magnetic resonance imaging) and models, e.g. using representational similarity analysis. We showcase two recent efforts that have been undertaken in this spirit and provide novel results about visual and auditory scene analysis. Finally, we discuss the limits of this perspective and sketch a concrete roadmap for future research. This article is part of the themed issue 'Auditory and visual scene analysis'. © 2017 The Authors.

  12. iTTVis: Interactive Visualization of Table Tennis Data.

    PubMed

    Wu, Yingcai; Lan, Ji; Shu, Xinhuan; Ji, Chenyang; Zhao, Kejian; Wang, Jiachen; Zhang, Hui

    2018-01-01

    The rapid development of information technology paved the way for the recording of fine-grained data, such as stroke techniques and stroke placements, during a table tennis match. This data recording creates opportunities to analyze and evaluate matches from new perspectives. Nevertheless, the increasingly complex data poses a significant challenge to make sense of and gain insights into. Analysts usually employ tedious and cumbersome methods which are limited to watching videos and reading statistical tables. However, existing sports visualization methods cannot be applied to visualizing table tennis competitions due to different competition rules and particular data attributes. In this work, we collaborate with data analysts to understand and characterize the sophisticated domain problem of analysis of table tennis data. We propose iTTVis, a novel interactive table tennis visualization system, which, to our knowledge, is the first visual analysis system for analyzing and exploring table tennis data. iTTVis provides a holistic visualization of an entire match from three main perspectives, namely, time-oriented, statistical, and tactical analyses. The proposed system with several well-coordinated views not only supports correlation identification through statistics and pattern detection of tactics with a score timeline but also allows cross analysis to gain insights. Data analysts have obtained several new insights by using iTTVis. The effectiveness and usability of the proposed system are demonstrated with four case studies.

  13. Visual processing as a potential endophenotype in youths with attention-deficit/hyperactivity disorder: A sibling study design using the counting Stroop functional MRI.

    PubMed

    Fan, Li-Ying; Shang, Chi-Yung; Tseng, Wen-Yih Isaac; Gau, Susan Shur-Fen; Chou, Tai-Li

    2018-05-10

    Deficits in inhibitory control and visual processing are common in youths with attention-deficit/hyperactivity disorder (ADHD), but little is known about endophenotypes for unaffected siblings of youths with ADHD. This study aimed to investigate the potential endophenotypes of brain activation and performance in inhibitory control and visual processing among ADHD probands, their unaffected siblings, and neurotypical youths. We assessed 27 ADHD probands, 27 unaffected siblings, and 27 age-, gender-, and IQ-matched neurotypical youths using the counting Stroop functional magnetic resonance imaging and two tasks of the Cambridge Neuropsychological Test Automated Battery (CANTAB): rapid visual information processing (RVP) for inhibitory control and spatial span (SSP) for visual processing. ADHD probands showed greater activation than their unaffected siblings and neurotypical youths in the right inferior frontal gyrus (IFG) and anterior cingulate cortex. Increased activation in the right IFG was positively correlated with the mean latency of the RVP in ADHD probands. Moreover, ADHD probands and their unaffected siblings showed less activation in the left superior parietal lobule (SPL) than neurotypical youths. Increased activation in the left SPL was positively correlated with the spatial length of the SSP in neurotypical youths. Our findings suggest that less activation in the left SPL might be considered as a candidate imaging endophenotype for visual processing in ADHD. © 2018 Wiley Periodicals, Inc.

  14. Learned suppression for multiple distractors in visual search.

    PubMed

    Won, Bo-Yeong; Geng, Joy J

    2018-05-07

    Visual search for a target object occurs rapidly when there are no distractors to compete for attention, but this rarely happens in real-world environments. Distractors are almost always present and must be suppressed for target selection to succeed. Previous research suggests that one way this occurs is through the creation of a stimulus-specific distractor template. However, it remains unknown how the information within such templates scales up with multiple distractors. Here we investigated the informational content of distractor templates created from repeated exposures to multiple distractors. We investigated this question using a visual search task in which participants searched for a gray square among colored squares. During "training," participants always saw the same set of colored distractors. During "testing," new distractor sets were interleaved with the trained distractors. The critical manipulation in each study was the distance (in color space) of the new test distractors from the trained distractors. We hypothesized that the pattern of distractor interference during testing would reveal the tuning of the suppression template: RTs should be commensurate with the degree to which distractor colors are encoded within the suppression template. Results from four experiments converged on the notion that the distractor template includes information about specific color values, but has broad "tuning," allowing suppression to generalize to new distractors. These results suggest that distractor templates, unlike target templates, encode multiple features and have broad representations, which have the advantage of generalizing suppression more easily to other potential distractors. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  15. Phobic spider fear is associated with enhanced attentional capture by spider pictures: a rapid serial presentation event-related potential study.

    PubMed

    Van Strien, Jan W; Franken, Ingmar H A; Huijding, Jorg

    2009-03-04

    The early posterior negativity (EPN) reflects early selective visual processing of emotionally significant information. This study explored the association between fear of spiders and the EPN for spider pictures. Fifty women completed a Spider Phobia Questionnaire and watched the random rapid serial presentation of 600 neutral, 600 negatively valenced emotional, and 600 spider pictures (three pictures per second). The EPN was scored as the mean activity in the 225-300-ms time window at lateral occipital electrodes. Participants with higher scores on the phobia questionnaire showed larger (i.e. more negative) EPN amplitudes in response to spider pictures. The results suggest that the attentional capture of spider-related stimuli is an automatic response, which is modulated by the extent of spider fear.

  16. Collective motion in animal groups from a neurobiological perspective: the adaptive benefits of dynamic sensory loads and selective attention.

    PubMed

    Lemasson, B H; Anderson, J J; Goodwin, R A

    2009-12-21

    We explore mechanisms associated with collective animal motion by drawing on the neurobiological bases of sensory information processing and decision-making. The model uses simplified retinal processes to translate neighbor movement patterns into information through spatial signal integration and threshold responses. The structure provides a mechanism by which individuals can vary their sets of influential neighbors, a measure of an individual's sensory load. Sensory loads are correlated with group order and density, and we discuss their adaptive values in an ecological context. The model also provides a mechanism by which group members can identify, and rapidly respond to, novel visual stimuli.
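    The core mechanism this record describes (spatial integration of neighbor movement signals followed by a threshold response, yielding a variable set of influential neighbors) can be sketched in a few lines. All names and the signal formula below are hypothetical simplifications, not the paper's model:

```python
# Illustrative sketch (hypothetical names and formula): each agent keeps
# only the neighbors whose apparent movement signal exceeds a response
# threshold; the size of that set is the agent's sensory load.
import math

def influential_neighbors(agent, neighbors, threshold=0.5):
    """Crude retinal-signal proxy: neighbor speed scaled by 1/distance^2."""
    kept = []
    for n in neighbors:
        d = math.dist(agent["pos"], n["pos"])
        signal = n["speed"] / (d * d)   # spatial signal integration proxy
        if signal > threshold:          # threshold response
            kept.append(n["id"])
    return kept

agent = {"pos": (0.0, 0.0)}
neighbors = [
    {"id": 1, "pos": (1.0, 0.0), "speed": 1.0},   # strong signal, kept
    {"id": 2, "pos": (3.0, 0.0), "speed": 1.0},   # weak signal, dropped
]
```

Because the signal depends on neighbor distance and motion, the influential set (and hence the sensory load) changes dynamically with group density and order, which is the property the model exploits.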

  17. Carolinas Coastal Change Processes Project data report for observations near Diamond Shoals, North Carolina, January-May 2009

    USGS Publications Warehouse

    Armstrong, Brandy N.; Warner, John C.; Voulgaris, George; List, Jeffrey H.; Thieler, E. Robert; Martini, Marinna A.; Montgomery, Ellyn T.

    2011-01-01

    This Open-File Report provides information collected for an oceanographic field study that occurred during January - May 2009 to investigate processes that control the sediment transport dynamics at Diamond Shoals, North Carolina. The objective of this report is to make the data available in digital form and to provide information to facilitate further analysis of the data. The report describes the background, experimental setup, equipment, and locations of the sensor deployments. The edited data are presented in time-series plots for rapid visualization of the data set, and in data files that are in the Network Common Data Form (netCDF). Supporting observational data are also included.

  18. An automated graphics tool for comparative genomics: the Coulson plot generator

    PubMed Central

    2013-01-01

    Background Comparative analysis is an essential component to biology. When applied to genomics for example, analysis may require comparisons between the predicted presence and absence of genes in a group of genomes under consideration. Frequently, genes can be grouped into small categories based on functional criteria, for example membership of a multimeric complex, participation in a metabolic or signaling pathway or shared sequence features and/or paralogy. These patterns of retention and loss are highly informative for the prediction of function, and hence possible biological context, and can provide great insights into the evolutionary history of cellular functions. However, representation of such information in a standard spreadsheet is a poor visual means from which to extract patterns within a dataset. Results We devised the Coulson Plot, a new graphical representation that exploits a matrix of pie charts to display comparative genomics data. Each pie is used to describe a complex or process from a separate taxon, and is divided into sectors corresponding to the number of proteins (subunits) in a complex/process. The predicted presence or absence of proteins in each complex are delineated by occupancy of a given sector; this format is visually highly accessible and makes pattern recognition rapid and reliable. A key to the identity of each subunit, plus hierarchical naming of taxa and coloring are included. A java-based application, the Coulson plot generator (CPG) automates graphic production, with a tab or comma-delineated text file as input and generating an editable portable document format or svg file. Conclusions CPG software may be used to rapidly convert spreadsheet data to a graphical matrix pie chart format. The representation essentially retains all of the information from the spreadsheet but presents a graphically rich format making comparisons and identification of patterns significantly clearer. 
While comparative genomics was its original purpose, the Coulson plot format can be used to visualize any dataset in which entity occupancy is compared between different classes. Availability CPG software is available at sourceforge http://sourceforge.net/projects/coulson and http://dl.dropbox.com/u/6701906/Web/Sites/Labsite/CPG.html PMID:23621955
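    The data transformation behind a Coulson plot is straightforward: a tab-delimited presence/absence spreadsheet becomes, for each taxon, one pie per complex with one sector per subunit. A minimal sketch of that parsing step (the column layout here is hypothetical, not CPG's documented input format):

```python
# Minimal sketch (hypothetical input layout): parse a tab-delimited
# presence/absence table and build, per taxon, the sector-occupancy
# lists that a Coulson plot would render as a matrix of pie charts.
import csv
import io

def occupancy_matrix(tsv_text):
    """Rows: subunit, complex, then one 0/1 column per taxon."""
    rows = list(csv.reader(io.StringIO(tsv_text), delimiter="\t"))
    header, body = rows[0], rows[1:]
    taxa = header[2:]
    matrix = {t: {} for t in taxa}
    for subunit, complex_name, *flags in body:
        for taxon, flag in zip(taxa, flags):
            # Each complex is one pie; each subunit one sector,
            # filled (True) or empty (False) in that taxon's pie.
            matrix[taxon].setdefault(complex_name, []).append(flag == "1")
    return matrix

tsv = ("subunit\tcomplex\tTaxonA\tTaxonB\n"
       "AP1-beta\tAP1\t1\t1\n"
       "AP1-mu\tAP1\t1\t0\n")
m = occupancy_matrix(tsv)
```

Rendering then reduces to drawing one pie per (taxon, complex) cell with its sectors shaded according to these boolean lists, which is what makes pattern recognition across taxa rapid.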

  19. Spatial distance effects on incremental semantic interpretation of abstract sentences: evidence from eye tracking.

    PubMed

    Guerra, Ernesto; Knoeferle, Pia

    2014-12-01

    A large body of evidence has shown that visual context information can rapidly modulate language comprehension for concrete sentences and when it is mediated by a referential or a lexical-semantic link. What has not yet been examined is whether visual context can also modulate comprehension of abstract sentences incrementally when it is neither referenced by, nor lexically associated with, the sentence. Three eye-tracking reading experiments examined the effects of spatial distance between words (Experiment 1) and objects (Experiment 2 and 3) on participants' reading times for sentences that convey similarity or difference between two abstract nouns (e.g., 'Peace and war are certainly different...'). Before reading the sentence, participants inspected a visual context with two playing cards that moved either far apart or close together. In Experiment 1, the cards turned and showed the first two nouns of the sentence (e.g., 'peace', 'war'). In Experiments 2 and 3, they turned but remained blank. Participants' reading times at the adjective (Experiment 1: first-pass reading time; Experiment 2: total times) and at the second noun phrase (Experiment 3: first-pass times) were faster for sentences that expressed similarity when the preceding words/objects were close together (vs. far apart) and for sentences that expressed dissimilarity when the preceding words/objects were far apart (vs. close together). Thus, spatial distance between words or entirely unrelated objects can rapidly and incrementally modulate the semantic interpretation of abstract sentences. Copyright © 2014 Elsevier B.V. All rights reserved.

  20. A visual approach to providing prognostic information to parents of children with retinoblastoma.

    PubMed

    Panton, Rachel L; Downie, Robert; Truong, Tran; Mackeen, Leslie; Kabene, Stefane; Yi, Qi-Long; Chan, Helen S L; Gallie, Brenda L

    2009-03-01

    Parents must rapidly assimilate complex information when a child is diagnosed with cancer. Education correlates with the ability to process and use medical information. Graphic tools aid reasoning and communicate complex ideas with precision and efficiency. We developed a graphic tool, DePICT (Disease-specific electronic Patient Illustrated Clinical Timeline), to visually display entire retinoblastoma treatment courses from real-time clinical data. We report retrospective evaluation of the effectiveness of DePICT to communicate risk and complexity of treatment to parents. We assembled DePICT graphics from multiple children on cards representing each stage of intraocular retinoblastoma. Forty-four parents completed a 14-item questionnaire to evaluate the understanding of retinoblastoma treatment and outcomes acquired from DePICT. As a proposed tool for informed consent, DePICT effectively communicated knowledge of complex medical treatment and risks, regardless of the education level. We identified multiple potential factors affecting parent comprehension of treatment complexity and risk. These include language proficiency (p=0.005) and age-related experience, as younger parents had higher education (p=0.021) but lower comprehension scores (p=0.011), regardless of first language. Provision of information at diagnosis concerning long-term treatment complexity helps parents of children with cancer. DePICT effectively transfers knowledge of treatments, risks, and prognosis in a manner that offsets parental educational disadvantages.

  1. Film and television in Croatia today: production, new technologies and the relationship with visual anthropology.

    PubMed

    Svilicić, Niksa; Vidacković, Zlatko

    2013-03-01

    This paper seeks to explain some of the most important recent production and technological changes that have affected the relationship between television and film, especially in Croatia, from the aspect of the development of visual anthropology. In the production segment, special attention was given to the role of Croatian television stations in the production of movies, "splitting" the movies into mini-series, interrupting movies with commercial breaks, and to television movies turned into feature films. This paper seeks to identify and define the structure of the methodological processes of visual anthropology (the reactive process). The development of photographic and film technology and the events which led to the rapid development of visual culture also point to the inseparable duality of observing visual anthropology within reactive and proactive processes, which are indirectly closely related to the technical aspects of these processes. Defining the technical aspect of visual anthropology as such a "service" necessarily interferes with the author's approach in the domain of the script and direction related procedures during pre-production, in the field and during post-production of the movie. The author's approach is important because, depending on it, the desired spectrum of information "output", susceptible to subsequent scientific analysis, is achieved. Lastly, another important segment is the "distributive-technological process" because, regardless of the approach to the anthropologically relevant phenomenon which is being dealt with in an audio-visual piece of work, it is essential that the work be presented and viewed adequately.

  2. Adaptive Gaze Strategies for Locomotion with Constricted Visual Field

    PubMed Central

    Authié, Colas N.; Berthoz, Alain; Sahel, José-Alain; Safran, Avinoam B.

    2017-01-01

    In retinitis pigmentosa (RP), loss of peripheral visual field accounts for most difficulties encountered in visuo-motor coordination during locomotion. The purpose of this study was to accurately assess the impact of peripheral visual field loss on gaze strategies during locomotion, and to identify compensatory mechanisms. Nine RP subjects presenting a central visual field limited to 10–25° in diameter, and nine healthy subjects, were asked to walk in one of three directions: straight ahead to a visual target, or leftward or rightward through a door frame, with or without an obstacle on the way. Whole-body kinematics were recorded by motion capture, and gaze direction in space was reconstructed using an eye-tracker. Changes in gaze strategies were identified in RP subjects, including extensive exploration prior to walking; frequent fixations of the ground (even when subjects knew no obstacle was present), of door edges (essentially the proximal one), and of obstacle edges and corners; and alternating fixations of the door edges when approaching the door. This was associated with more frequent, and sometimes larger, rapid eye movements, larger head movements, and forward tilting of the head. Despite the visual handicap, trajectory geometry was identical between groups, with a small decrease in walking speed in RP subjects. These findings identify adaptive changes in sensory-motor coordination that serve to ensure visual awareness of the surroundings, detect changes in spatial configuration, collect information for self-motion, update the postural reference frame, and update egocentric distances to environmental objects. They are of crucial importance for the design of optimized rehabilitation procedures. PMID:28798674

  3. Congenitally blind individuals rapidly adapt to coriolis force perturbations of their reaching movements

    NASA Technical Reports Server (NTRS)

    DiZio, P.; Lackner, J. R.

    2000-01-01

    Reaching movements made to visual targets in a rotating room are initially deviated in path and endpoint in the direction of transient Coriolis forces generated by the motion of the arm relative to the rotating environment. With additional reaches, movements become progressively straighter and more accurate. Such adaptation can occur even in the absence of visual feedback about movement progression or terminus. Here we examined whether congenitally blind and sighted subjects without visual feedback would demonstrate adaptation to Coriolis forces when they pointed to a haptically specified target location. Subjects were tested pre-, per-, and postrotation at 10 rpm counterclockwise. Reaching to straight ahead targets prerotation, both groups exhibited slightly curved paths. Per-rotation, both groups showed large initial deviations of movement path and curvature but within 12 reaches on average had returned to prerotation curvature levels and endpoints. Postrotation, both groups showed mirror image patterns of curvature and endpoint to the per-rotation pattern. The groups did not differ significantly on any of the performance measures. These results provide compelling evidence that motor adaptation to Coriolis perturbations can be achieved on the basis of proprioceptive, somatosensory, and motor information in the complete absence of visual experience.

  4. The processing of images of biological threats in visual short-term memory.

    PubMed

    Quinlan, Philip T; Yue, Yue; Cohen, Dale J

    2017-08-30

    The idea that there is enhanced memory for negatively, emotionally charged pictures was examined. Performance was measured under rapid serial visual presentation (RSVP) conditions in which, on every trial, a sequence of six photo-images was presented. Shortly after the offset of the sequence, two alternative images (a target and a foil) were presented and participants attempted to choose which image had occurred in the sequence. Images were of threatening and non-threatening cats and dogs. Either the target depicted an animal expressing an emotion distinct from the other images, or the sequence contained only images of the same emotional valence. Enhanced memory was found for targets that differed in emotional valence from the other sequence images, compared to targets that expressed the same emotional valence. Further controls in stimulus selection were then introduced and the same emotional distinctiveness effect was obtained. In ruling out possible visual and attentional accounts of the data, an informal dual route model is discussed. This places emphasis on how visual short-term memory reveals a sensitivity to the emotional content of the input as it unfolds over time. Items that present with a distinctive emotional content stand out in memory. © 2017 The Author(s).

  5. Gps-Denied Geo-Localisation Using Visual Odometry

    NASA Astrophysics Data System (ADS)

    Gupta, Ashish; Chang, Huan; Yilmaz, Alper

    2016-06-01

    The primary method for geo-localization is based on GPS, which has issues of localization accuracy, power consumption, and unavailability. This paper proposes a novel approach to geo-localization in a GPS-denied environment for a mobile platform. Our approach has two principal components: public-domain transport network data available in GIS databases or OpenStreetMap; and a trajectory of the mobile platform. This trajectory is estimated using visual odometry and 3D view geometry. The transport map information is abstracted as a graph data structure, in which roads are modelled as graph edges and intersections as graph nodes. A real-time search for the trajectory in the graph yields the geo-location of the mobile platform. Our approach uses a simple visual sensor and has a low memory and computational footprint. In this paper, we demonstrate our method for trajectory estimation and provide examples of geo-localization using public-domain map data. With the rapid proliferation of visual sensors as part of automated driving technology and continuous growth in public-domain map data, our approach has the potential to augment, or even supplant, GPS-based navigation, since it functions in all environments.
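    The graph-search idea in this record can be sketched in a few lines. The following is an illustrative reconstruction, not the authors' code: the toy road network, the turn alphabet (L/R/S for left, right, straight) and the helper `match_trajectory` are all hypothetical, and a real system would search a much larger OpenStreetMap-derived graph. An odometry trajectory is reduced to the sequence of turns made at intersections, and candidate paths in the road graph are enumerated until their turn sequences match.

```python
# Toy road network: intersection -> (x, y) position, plus undirected roads.
nodes = {"A": (0, 0), "B": (1, 0), "C": (1, 1), "D": (2, 0), "E": (2, 1)}
edges = {("A", "B"), ("B", "C"), ("B", "D"), ("C", "E"), ("D", "E")}
adj = {n: set() for n in nodes}
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)

def turn(a, b, c):
    """Turn made at b when travelling a -> b -> c: 'L', 'R' or 'S' (straight)."""
    (ax, ay), (bx, by), (cx, cy) = nodes[a], nodes[b], nodes[c]
    cross = (bx - ax) * (cy - by) - (by - ay) * (cx - bx)
    if abs(cross) < 1e-9:
        return "S"
    return "L" if cross > 0 else "R"

def match_trajectory(turns):
    """Return every node path in the graph whose turn sequence equals `turns`."""
    matches = []
    def dfs(path):
        if len(path) == len(turns) + 2:      # enough nodes to realise all turns
            matches.append(path)
            return
        for nxt in adj[path[-1]]:
            if nxt == path[-2]:              # forbid immediate U-turns
                continue
            if turn(path[-2], path[-1], nxt) == turns[len(path) - 2]:
                dfs(path + [nxt])
    for u in adj:                            # try every directed road as a start
        for v in adj[u]:
            dfs([u, v])
    return matches

# A platform that made a single left turn is still ambiguous on this tiny map:
print(match_trajectory(["L"]))
```

    As the observed turn sequence grows, the set of consistent paths shrinks until the platform's geo-location is pinned down, which is the essence of the approach.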

  6. 3D Modelling and Interactive Web-Based Visualization of Cultural Heritage Objects

    NASA Astrophysics Data System (ADS)

    Koeva, M. N.

    2016-06-01

    Nowadays, there are rapid developments in the fields of photogrammetry, laser scanning, computer vision and robotics, which together aim to provide highly accurate 3D data useful for various applications. In recent years, various LiDAR and image-based techniques have been investigated for 3D modelling because of the opportunities they offer for fast and accurate model generation. For cultural heritage preservation, and for the representation and interactive visualization of objects that are important for tourism, 3D models are highly effective and intuitive for present-day users, who have stringent requirements and high expectations. Depending on the complexity of the objects in a specific case, various technological methods can be applied. The objects selected for this particular research are located in Bulgaria, a country with thousands of years of history and cultural heritage dating back to ancient civilizations. This motivates the preservation, visualisation and recreation of undoubtedly valuable historical and architectural objects and places, which has always been a serious challenge for specialists in the field of cultural heritage. In the present research, comparative analyses of the principles and technological processes needed for 3D modelling and visualization are presented, along with recent problems, efforts and developments in the interactive representation of precious objects and places in Bulgaria. Three technologies based on real projects are described: (1) image-based modelling using a non-metric hand-held camera; (2) 3D visualization based on spherical panoramic images; and (3) 3D geometric and photorealistic modelling based on architectural CAD drawings. Their suitability for web-based visualization is demonstrated and compared. Moreover, the possibilities for integration with additional information such as interactive maps, satellite imagery, sound, video and object-specific information are described.
This comparative study discusses the advantages and disadvantages of these three approaches and their integration in multiple domains, such as web-based 3D city modelling, tourism and architectural 3D visualization. It was concluded that image-based modelling and panoramic visualisation are simple, fast and effective techniques suitable for simultaneous virtual representation of many objects. However, additional measurements or CAD information will be beneficial for obtaining higher accuracy.

  7. Smell or vision? The use of different sensory modalities in predator discrimination.

    PubMed

    Fischer, Stefan; Oberhummer, Evelyne; Cunha-Saraiva, Filipa; Gerber, Nina; Taborsky, Barbara

    2017-01-01

    Theory predicts that animals should adjust their escape responses to the perceived predation risk. The information animals obtain about potential predation risk may differ qualitatively depending on the sensory modality by which a cue is perceived. For instance, olfactory cues may reveal better information about the presence or absence of threats, whereas visual information can reliably transmit the position and potential attack distance of a predator. While this suggests a differential use of information perceived through the two sensory channels, the relative importance of visual vs. olfactory cues when distinguishing between different predation threats is still poorly understood. Therefore, we exposed individuals of the cooperatively breeding cichlid Neolamprologus pulcher to a standardized threat stimulus combined with either predator or non-predator cues presented either visually or chemically. We predicted that flight responses towards a threat stimulus are more pronounced if cues of dangerous rather than harmless heterospecifics are presented, and that N. pulcher, being an aquatic species, relies more on olfaction when discriminating between dangerous and harmless heterospecifics. N. pulcher responded faster to the threat stimulus, reached a refuge faster and were more likely to enter a refuge when predator cues were perceived. Unexpectedly, the sensory modality used to perceive the cues affected neither the escape response nor the duration of the recovery phase. This suggests that N. pulcher can discriminate heterospecific cues with similar acuity using vision or olfaction. We discuss how this ability may be advantageous in aquatic environments where visibility conditions vary strongly over time. The ability to rapidly discriminate between dangerous predators and harmless heterospecifics is crucial for the survival of prey animals.
    In seasonally fluctuating environments, sensory conditions may change over the year, making the use of multiple sensory modalities for heterospecific discrimination highly beneficial. Here we compared the efficacy of the visual and olfactory senses in the discrimination ability of the cooperatively breeding cichlid Neolamprologus pulcher. We presented individual fish with visual or olfactory cues of predators or harmless heterospecifics and recorded their flight response. When exposed to predator cues, individuals responded faster, reached a refuge faster and were more likely to enter the refuge. Unexpectedly, the olfactory and visual senses seemed to be equally efficient in this discrimination task, suggesting that the seasonal variation in water conditions experienced by N. pulcher may necessitate the use of multiple sensory channels for the same task.

  8. Visualizing Clonal Evolution in Cancer.

    PubMed

    Krzywinski, Martin

    2016-06-02

    Rapid and inexpensive single-cell sequencing is driving new visualizations of cancer instability and evolution. Krzywinski discusses how to present clone evolution plots in order to visualize temporal, phylogenetic, and spatial aspects of a tumor in a single static image. Copyright © 2016 Elsevier Inc. All rights reserved.

  9. Escher: A Web Application for Building, Sharing, and Embedding Data-Rich Visualizations of Biological Pathways

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    King, Zachary A.; Drager, Andreas; Ebrahim, Ali

    Escher is a web application for visualizing data on biological pathways. Three key features make Escher a uniquely effective tool for pathway visualization. First, users can rapidly design new pathway maps. Escher provides pathway suggestions based on user data and genome-scale models, so users can draw pathways in a semi-automated way. Second, users can visualize data related to genes or proteins on the associated reactions and pathways, using rules that define which enzymes catalyze each reaction. Thus, users can identify trends in common genomic data types (e.g. RNA-Seq, proteomics, ChIP)—in conjunction with metabolite- and reaction-oriented data types (e.g. metabolomics, fluxomics). Third, Escher harnesses the strengths of web technologies (SVG, D3, developer tools) so that visualizations can be rapidly adapted, extended, shared, and embedded. This paper provides examples of each of these features and explains how the development approach used for Escher can be used to guide the development of future visualization tools.

  10. Escher: A Web Application for Building, Sharing, and Embedding Data-Rich Visualizations of Biological Pathways

    PubMed Central

    King, Zachary A.; Dräger, Andreas; Ebrahim, Ali; Sonnenschein, Nikolaus; Lewis, Nathan E.; Palsson, Bernhard O.

    2015-01-01

    Escher is a web application for visualizing data on biological pathways. Three key features make Escher a uniquely effective tool for pathway visualization. First, users can rapidly design new pathway maps. Escher provides pathway suggestions based on user data and genome-scale models, so users can draw pathways in a semi-automated way. Second, users can visualize data related to genes or proteins on the associated reactions and pathways, using rules that define which enzymes catalyze each reaction. Thus, users can identify trends in common genomic data types (e.g. RNA-Seq, proteomics, ChIP)—in conjunction with metabolite- and reaction-oriented data types (e.g. metabolomics, fluxomics). Third, Escher harnesses the strengths of web technologies (SVG, D3, developer tools) so that visualizations can be rapidly adapted, extended, shared, and embedded. This paper provides examples of each of these features and explains how the development approach used for Escher can be used to guide the development of future visualization tools. PMID:26313928

  11. Spatial frequency supports the emergence of categorical representations in visual cortex during natural scene perception.

    PubMed

    Dima, Diana C; Perry, Gavin; Singh, Krish D

    2018-06-11

    In navigating our environment, we rapidly process and extract meaning from visual cues. However, the relationship between visual features and categorical representations in natural scene perception is still not well understood. Here, we used natural scene stimuli from different categories and filtered at different spatial frequencies to address this question in a passive viewing paradigm. Using representational similarity analysis (RSA) and cross-decoding of magnetoencephalography (MEG) data, we show that categorical representations emerge in human visual cortex at ∼180 ms and are linked to spatial frequency processing. Furthermore, dorsal and ventral stream areas reveal temporally and spatially overlapping representations of low and high-level layer activations extracted from a feedforward neural network. Our results suggest that neural patterns from extrastriate visual cortex switch from low-level to categorical representations within 200 ms, highlighting the rapid cascade of processing stages essential in human visual perception. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
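    The representational similarity analysis (RSA) used in this record has a compact standard recipe: turn each set of response patterns into a representational dissimilarity matrix (RDM), then rank-correlate the RDMs' upper triangles. The sketch below is illustrative only; the `meg` and `model` matrices are made-up toy patterns (conditions by measurement channels), not the study's data, and real analyses use tie-aware rank correlations.

```python
import math

def pearson(x, y):
    """Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = math.sqrt(sum((v - mx) ** 2 for v in x))
    sy = math.sqrt(sum((v - my) ** 2 for v in y))
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

def rdm(patterns):
    """Representational dissimilarity matrix: 1 - Pearson r for each pair."""
    n = len(patterns)
    return [[1 - pearson(patterns[i], patterns[j]) for j in range(n)]
            for i in range(n)]

def upper(m):
    """Upper triangle of a square matrix, flattened."""
    return [m[i][j] for i in range(len(m)) for j in range(i + 1, len(m))]

def spearman(x, y):
    """Rank correlation (no tie handling; fine for these toy values)."""
    rank = lambda v: [sorted(v).index(e) for e in v]
    return pearson(rank(x), rank(y))

# Four conditions, three channels each; rows 0/1 and 2/3 form two categories.
meg   = [[1.0, 0.2, 0.1], [0.9, 0.3, 0.2], [0.1, 1.0, 0.8], [0.0, 0.9, 1.0]]
model = [[2.0, 0.5, 0.0], [1.8, 0.6, 0.1], [0.2, 2.1, 1.5], [0.1, 1.9, 2.2]]

similarity = spearman(upper(rdm(meg)), upper(rdm(model)))
print(round(similarity, 2))
```

    Because both toy RDMs share the same two-category structure, the rank correlation comes out high; cross-decoding, as used in the study, asks the related question of whether a classifier trained on one representation transfers to another.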

  12. Escher: A Web Application for Building, Sharing, and Embedding Data-Rich Visualizations of Biological Pathways

    DOE PAGES

    King, Zachary A.; Drager, Andreas; Ebrahim, Ali; ...

    2015-08-27

    Escher is a web application for visualizing data on biological pathways. Three key features make Escher a uniquely effective tool for pathway visualization. First, users can rapidly design new pathway maps. Escher provides pathway suggestions based on user data and genome-scale models, so users can draw pathways in a semi-automated way. Second, users can visualize data related to genes or proteins on the associated reactions and pathways, using rules that define which enzymes catalyze each reaction. Thus, users can identify trends in common genomic data types (e.g. RNA-Seq, proteomics, ChIP)—in conjunction with metabolite- and reaction-oriented data types (e.g. metabolomics, fluxomics). Third, Escher harnesses the strengths of web technologies (SVG, D3, developer tools) so that visualizations can be rapidly adapted, extended, shared, and embedded. This paper provides examples of each of these features and explains how the development approach used for Escher can be used to guide the development of future visualization tools.

  13. Deep Residual Network Predicts Cortical Representation and Organization of Visual Features for Rapid Categorization.

    PubMed

    Wen, Haiguang; Shi, Junxing; Chen, Wei; Liu, Zhongming

    2018-02-28

    The brain represents visual objects with topographic cortical patterns. To address how distributed visual representations enable object categorization, we established predictive encoding models based on a deep residual network, and trained them to predict cortical responses to natural movies. Using this predictive model, we mapped human cortical representations to 64,000 visual objects from 80 categories with high throughput and accuracy. Such representations covered both the ventral and dorsal pathways, reflected multiple levels of object features, and preserved semantic relationships between categories. In the entire visual cortex, object representations were organized into three clusters of categories: biological objects, non-biological objects, and background scenes. In a finer scale specific to each cluster, object representations revealed sub-clusters for further categorization. Such hierarchical clustering of category representations was mostly contributed by cortical representations of object features from middle to high levels. In summary, this study demonstrates a useful computational strategy to characterize the cortical organization and representations of visual features for rapid categorization.
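    The predictive encoding models described in this record regress measured cortical responses onto features extracted from the network; each cortical site gets its own fitted weights. A minimal single-feature sketch fitted by ordinary least squares, with invented numbers (the study itself fits many features from a deep residual network, typically with regularisation):

```python
def fit_encoding_model(features, responses):
    """Least-squares fit of response = a * feature + b for one cortical site."""
    n = len(features)
    mf = sum(features) / n
    mr = sum(responses) / n
    a = (sum((f - mf) * (r - mr) for f, r in zip(features, responses))
         / sum((f - mf) ** 2 for f in features))
    return a, mr - a * mf          # slope and intercept

# Toy data: one network unit's activation vs. one cortical site's response.
feat = [0.0, 1.0, 2.0, 3.0]
resp = [0.1, 1.9, 4.1, 5.9]
a, b = fit_encoding_model(feat, resp)
print(round(a, 2), round(b, 2))
```

    Once fitted, such a model predicts responses to held-out stimuli; the accuracy of those predictions is what allows representations to be mapped for tens of thousands of objects without measuring each one directly.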

  14. On detection and visualization techniques for cyber security situation awareness

    NASA Astrophysics Data System (ADS)

    Yu, Wei; Wei, Shixiao; Shen, Dan; Blowers, Misty; Blasch, Erik P.; Pham, Khanh D.; Chen, Genshe; Zhang, Hanlin; Lu, Chao

    2013-05-01

    Networking technologies are growing exponentially to meet worldwide communication requirements. The rapid growth of network technologies and the pervasiveness of communications pose serious security issues. In this paper, we aim to develop an integrated network defense system with situation awareness capabilities that presents useful information to human analysts. In particular, we implement a prototypical system that includes both distributed passive and active network sensors and traffic visualization features, such as 1D, 2D and 3D network traffic displays. To effectively detect attacks, we also implement algorithms that transform real-world IP address data into images to study attack patterns, and we use both a discrete wavelet transform (DWT)-based scheme and a statistics-based scheme to detect attacks. Through an extensive simulation study, our results validate the effectiveness of the implemented defense system.
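    The wavelet part of such a scheme can be illustrated in one dimension. This is a simplified sketch, not the paper's implementation (which operates on image representations of IP address data): a single level of the Haar DWT splits a traffic series into a smoothed approximation and detail coefficients, and unusually large detail coefficients flag abrupt changes such as flood attacks. The `flag_anomalies` helper and the traffic numbers are hypothetical.

```python
def haar_dwt(signal):
    """One level of the Haar discrete wavelet transform (even-length input).
    Returns (approximation, detail) coefficient lists."""
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return approx, detail

def flag_anomalies(traffic, threshold=50.0):
    """Indices of coarse time bins whose detail coefficient exceeds the
    threshold, i.e. where volume jumps abruptly between adjacent samples."""
    _, detail = haar_dwt(traffic)
    return [i for i, d in enumerate(detail) if abs(d) > threshold]

# Packets-per-second counts; the spike at sample 6 mimics a flood attack.
traffic = [100, 104, 98, 101, 99, 102, 400, 97, 103, 100]
print(flag_anomalies(traffic))   # bin 3 covers samples 6-7
```

    Normal traffic varies smoothly, so its detail coefficients stay near zero; an attack produces a sharp transition that survives the averaging and stands out in the detail band.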

  15. Visualization and Enabling Science at PO.DAAC

    NASA Astrophysics Data System (ADS)

    Tauer, E.; To, C.

    2017-12-01

    Facilitating the identification of appropriate data for scientific inquiry is important for efficient progress, but mechanisms for that identification vary, as does their effectiveness. Appropriately crafted visualizations provide the means to quickly assess science data and scientific features, but providing the right visualization to the right application can present challenges. Greater still is the challenge of generating and/or re-constituting visualizations on the fly, particularly for large datasets. One avenue to mitigate this challenge is to arrive at an optimized intermediate data format that is tuned for rapid processing without sacrificing the provenance trace back to the original source data. This presentation will discuss the results of a trade study of several current approaches to an intermediate data format, and suggest a list of key attributes that facilitate rapid visualization and, in the process, the identification of the right data for a given application.

  16. Tablets at the bedside - iPad-based visual field test used in the diagnosis of Intrasellar Haemangiopericytoma: a case report.

    PubMed

    Nesaratnam, Nisha; Thomas, Peter B M; Kirollos, Ramez; Vingrys, Algis J; Kong, George Y X; Martin, Keith R

    2017-04-24

    In the assessment of a pituitary mass, objective visual field testing represents a valuable means of evaluating mass effect, and thus of deciding whether surgical management is warranted. In this vignette, we describe a 73-year-old lady who presented with a three-week history of frontal headache and 'blurriness' in the left side of her vision, due to a WHO grade III anaplastic haemangiopericytoma compressing the optic chiasm. We report how timely investigations, including an iPad-based visual field test (Melbourne Rapid Field (MRF)) conducted at the bedside, aided swift and appropriate management of the patient. We envisage such a test having a role in assessing bed-bound patients in hospital where access to formal visual field testing is difficult, or indeed in rapid testing of visual fields at the bedside to screen for post-operative complications, such as haematoma.

  17. The representation of information about faces in the temporal and frontal lobes.

    PubMed

    Rolls, Edmund T

    2007-01-07

    Neurophysiological evidence is described showing that some neurons in the macaque inferior temporal visual cortex have responses that are invariant with respect to the position, size and view of faces and objects, and that these neurons show rapid processing and rapid learning. Which face or object is present is encoded using a distributed representation in which each neuron conveys independent information in its firing rate, with little information evident in the relative time of firing of different neurons. This ensemble encoding has the advantages of maximising the information in the representation useful for discrimination between stimuli using a simple weighted sum of the neuronal firing by the receiving neurons, generalisation and graceful degradation. These invariant representations are ideally suited to provide the inputs to brain regions such as the orbitofrontal cortex and amygdala that learn the reinforcement associations of an individual's face, for then the learning, and the appropriate social and emotional responses, generalise to other views of the same face. A theory is described of how such invariant representations may be produced in a hierarchically organised set of visual cortical areas with convergent connectivity. The theory proposes that neurons in these visual areas use a modified Hebb synaptic modification rule with a short-term memory trace to capture whatever can be captured at each stage that is invariant about objects as the objects change in retinal view, position, size and rotation. Another population of neurons in the cortex in the superior temporal sulcus encodes other aspects of faces such as face expression, eye gaze, face view and whether the head is moving. These neurons thus provide important additional inputs to parts of the brain such as the orbitofrontal cortex and amygdala that are involved in social communication and emotional behaviour. 
Outputs of these systems reach the amygdala, in which face-selective neurons are found, and also the orbitofrontal cortex, in which some neurons are tuned to face identity and others to face expression. In humans, activation of the orbitofrontal cortex is found when a change of face expression acts as a social signal that behaviour should change; and damage to the orbitofrontal cortex can impair face and voice expression identification, and also the reversal of emotional behaviour that normally occurs when reinforcers are reversed.
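    The "modified Hebb synaptic modification rule with a short-term memory trace" described in this record has a standard form in the trace-learning literature: the weight change is driven not by the instantaneous postsynaptic firing but by an exponentially decaying trace of it, so that temporally adjacent views of the same object strengthen the same weights. A minimal sketch under that assumption; the learning rates and the toy "views" are illustrative, not taken from the paper.

```python
def train_trace(weights, views, alpha=0.1, eta=0.6):
    """Trace learning rule for one output neuron:
         y(t)    = w . x(t)                              postsynaptic firing
         ybar(t) = (1 - eta) * y(t) + eta * ybar(t-1)    decaying memory trace
         dw      = alpha * ybar(t) * x(t)
    Because ybar persists across successive views, different transforms of
    the same object become bound to the same weight vector."""
    ybar = 0.0
    for x in views:
        y = sum(w * xi for w, xi in zip(weights, x))
        ybar = (1 - eta) * y + eta * ybar
        weights = [w + alpha * ybar * xi for w, xi in zip(weights, x)]
    return weights

# Two successive "views" of one object activate overlapping input lines;
# input line 3 never fires, so its weight stays at baseline.
views = [[1.0, 1.0, 0.0, 0.0], [0.0, 1.0, 1.0, 0.0]]
w = train_trace([0.1, 0.1, 0.1, 0.1], views)
print([round(v, 3) for v in w])
```

    The shared input (line 1) gains the most weight because it is reinforced by the trace across both views, which is how invariance to view, position and size can be learned from temporal continuity alone.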

  18. High Performance Visualization using Query-Driven Visualization and Analytics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bethel, E. Wes; Campbell, Scott; Dart, Eli

    2006-06-15

    Query-driven visualization and analytics is a unique approach to high-performance visualization that offers new capabilities for knowledge discovery and hypothesis testing. These new capabilities, akin to finding needles in haystacks, result from combining technologies from the fields of scientific visualization and scientific data management. This approach is crucial for rapid data analysis and visualization in the petascale regime. This article describes how query-driven visualization is applied to a hero-sized network traffic analysis problem.

  19. Visual memory and sustained attention impairment in youths with autism spectrum disorders.

    PubMed

    Chien, Y-L; Gau, S S-F; Shang, C-Y; Chiu, Y-N; Tsai, W-C; Wu, Y-Y

    2015-08-01

    An uneven neurocognitive profile is a hallmark of autism spectrum disorder (ASD). Studies focusing on visual memory performance in ASD have shown controversial results. We investigated visual memory and sustained attention in youths with ASD and typically developing (TD) youths. We recruited 143 pairs of youths with ASD (males 93.7%; mean age 13.1, s.d. 3.5 years) and age- and sex-matched TD youths. The ASD group consisted of 67 youths with autistic disorder (autism) and 76 with Asperger's disorder (AS) based on the DSM-IV criteria. They were assessed using the Cambridge Neuropsychological Test Automated Battery, covering visual memory [spatial recognition memory (SRM), delayed matching to sample (DMS), paired associates learning (PAL)] and sustained attention [rapid visual information processing (RVP)]. Youths with ASD performed significantly worse than TD youths on most of the tasks; the significance disappeared in the superior intelligence quotient (IQ) subgroup. Response latency on the tasks did not differ between the ASD and TD groups. Age had significant main effects on the SRM, DMS, RVP and part of the PAL tasks, and interacted with diagnosis in DMS and RVP performance. There was no significant difference between autism and AS on the visual tasks. Our findings imply that youths with ASD have a wide range of visual memory and sustained attention impairments, moderated by age and IQ, which supports temporal and frontal lobe dysfunction in ASD. The lack of difference between autism and AS implies that visual memory and sustained attention cannot distinguish these two ASD subtypes, which supports the DSM-5 ASD criteria.

  20. The human factor in mining reclamation

    USGS Publications Warehouse

    Arbogast, Belinda F.; Knepper, Daniel H.; Langer, William H.

    2000-01-01

    Rapid urbanization of the landscape results in less space available for wildlife habitat, agriculture, and recreation. Mineral resources (especially nonmetallic construction materials) become unrecoverable due to inaccessibility caused by development. This report both describes mine sites with serious problems and draws attention to thoughtful reclamation projects for better future management. It presents information from selected sites in terms of their history, landform, design approach, and visual discernment. Examples from Colorado are included to introduce the broader issue of regions soundly developing mining sites, permitting the best utilization of natural resources, and respecting the landscape.

  1. Graphical function mapping as a new way to explore cause-and-effect chains

    USGS Publications Warehouse

    Evans, Mary Anne

    2016-01-01

    Graphical function mapping provides a simple method for improving communication within interdisciplinary research teams and between scientists and nonscientists. This article introduces graphical function mapping using two examples and discusses its usefulness. Function mapping projects the outcome of one function into another to show the combined effect. Using this mathematical property in a simpler, even cartoon-like, graphical way allows the rapid combination of multiple information sources (models, empirical data, expert judgment, and guesses) in an intuitive visual to promote further discussion, scenario development, and clear communication.
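    Mathematically, the graphical technique rests on ordinary function composition: the outcome of one function is projected into the next, giving the combined effect. A minimal sketch; the ecological chain and the numbers are invented for illustration and are not from the article.

```python
def compose(f, g):
    """Project the outcome of f into g: the combined effect g(f(x))."""
    return lambda x: g(f(x))

# Hypothetical cause-and-effect chain: nutrient load -> algal mass -> oxygen.
algal_mass = lambda nutrients: 2.0 * nutrients      # expert judgment
oxygen     = lambda mass: max(0.0, 10.0 - mass)     # empirical fit

effect = compose(algal_mass, oxygen)
print(effect(3.0))
```

    Each link in the chain can come from a different source (a model, empirical data, expert judgment or a guess), which is exactly what makes the graphical version useful for interdisciplinary discussion.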

  2. Dynamic Encoding of Face Information in the Human Fusiform Gyrus

    PubMed Central

    Ghuman, Avniel Singh; Brunet, Nicolas M.; Li, Yuanning; Konecky, Roma O.; Pyles, John A.; Walls, Shawn A.; Destefino, Vincent; Wang, Wei; Richardson, R. Mark

    2014-01-01

    Humans’ ability to rapidly and accurately detect, identify, and classify faces under variable conditions derives from a network of brain regions highly tuned to face information. The fusiform face area (FFA) is thought to be a computational hub for face processing, however temporal dynamics of face information processing in FFA remains unclear. Here we use multivariate pattern classification to decode the temporal dynamics of expression-invariant face information processing using electrodes placed directly upon FFA in humans. Early FFA activity (50-75 ms) contained information regarding whether participants were viewing a face. Activity between 200-500 ms contained expression-invariant information about which of 70 faces participants were viewing along with the individual differences in facial features and their configurations. Long-lasting (500+ ms) broadband gamma frequency activity predicted task performance. These results elucidate the dynamic computational role FFA plays in multiple face processing stages and indicate what information is used in performing these visual analyses. PMID:25482825

  3. Dynamic encoding of face information in the human fusiform gyrus.

    PubMed

    Ghuman, Avniel Singh; Brunet, Nicolas M; Li, Yuanning; Konecky, Roma O; Pyles, John A; Walls, Shawn A; Destefino, Vincent; Wang, Wei; Richardson, R Mark

    2014-12-08

    Humans' ability to rapidly and accurately detect, identify and classify faces under variable conditions derives from a network of brain regions highly tuned to face information. The fusiform face area (FFA) is thought to be a computational hub for face processing; however, temporal dynamics of face information processing in FFA remains unclear. Here we use multivariate pattern classification to decode the temporal dynamics of expression-invariant face information processing using electrodes placed directly on FFA in humans. Early FFA activity (50-75 ms) contained information regarding whether participants were viewing a face. Activity between 200 and 500 ms contained expression-invariant information about which of 70 faces participants were viewing along with the individual differences in facial features and their configurations. Long-lasting (500+ms) broadband gamma frequency activity predicted task performance. These results elucidate the dynamic computational role FFA plays in multiple face processing stages and indicate what information is used in performing these visual analyses.

  4. Robust visual tracking using a contextual boosting approach

    NASA Astrophysics Data System (ADS)

    Jiang, Wanyue; Wang, Yin; Wang, Daobo

    2018-03-01

    In recent years, detection-based image trackers have gained ground rapidly, thanks to their capacity to incorporate a variety of image features. Nevertheless, tracking performance may be compromised if background regions are mislabeled as foreground during training. To resolve this problem, we propose an online visual tracking algorithm designed to improve training-label accuracy in the learning phase. In the proposed method, superpixels are used as samples, and their ambiguous labels are reassigned in accordance with both prior estimation and contextual information. The location and scale of the target are usually determined by a confidence map, which tends to shrink because background regions are inevitably incorporated into the bounding box. To address this dilemma, we propose a cross-projection scheme that projects the confidence map for target detection. Moreover, the performance of the proposed tracker can be further improved by adding rigid-structured information. The proposed method is evaluated on the OTB and VOT2016 benchmarks. Compared with other trackers, the results appear to be competitive.

  5. Longitudinal Patent Analysis for Nanoscale Science and Engineering: Country, Institution and Technology Field

    NASA Astrophysics Data System (ADS)

    Huang, Zan; Chen, Hsinchun; Yip, Alan; Ng, Gavin; Guo, Fei; Chen, Zhi-Kai; Roco, Mihail C.

    2003-08-01

    Nanoscale science and engineering (NSE) and related areas have seen rapid growth in recent years. The speed and scope of development in the field have made it essential for researchers to be informed on the progress across different laboratories, companies, industries and countries. In this project, we experimented with several analysis and visualization techniques on NSE-related United States patent documents to support various knowledge tasks. This paper presents results on the basic analysis of nanotechnology patents between 1976 and 2002, content map analysis and citation network analysis. The data have been obtained on individual countries, institutions and technology fields. The top 10 countries with the largest number of nanotechnology patents are the United States, Japan, France, the United Kingdom, Taiwan, Korea, the Netherlands, Switzerland, Italy and Australia. The fastest growth in the last 5 years has been in chemical and pharmaceutical fields, followed by semiconductor devices. The results demonstrate the potential of information-based discovery and visualization technologies to capture knowledge about nanotechnology performance, transfer of knowledge and trends of development through analysis of the patent documents.

  6. Gene Graphics: a genomic neighborhood data visualization web application.

    PubMed

    Harrison, Katherine J; Crécy-Lagard, Valérie de; Zallot, Rémi

    2018-04-15

    The examination of gene neighborhoods is an integral part of comparative genomics, but no tools are available to produce publication-quality graphics of gene clusters. Gene Graphics is a straightforward web application for creating such visuals. Supported inputs include National Center for Biotechnology Information gene and protein identifiers with automatic fetching of neighboring information, GenBank files and data extracted from the SEED database. Gene representations can be customized for many parameters, including gene and genome names, colors and sizes. Gene attributes can be copied and pasted for rapid and user-friendly customization of homologous genes between species. In addition to Portable Network Graphics and Scalable Vector Graphics, produced representations can be exported as Tagged Image File Format or Encapsulated PostScript, formats that are standard for publication. Hands-on tutorials with real-life examples inspired by publications are available for training. Gene Graphics is freely available at https://katlabs.cc/genegraphics/ and source code is hosted at https://github.com/katlabs/genegraphics. katherinejh@ufl.edu or remizallot@ufl.edu. Supplementary data are available at Bioinformatics online.

  7. Infant information processing and family history of specific language impairment: converging evidence for RAP deficits from two paradigms

    PubMed Central

    Choudhury, Naseem; Leppanen, Paavo H.T.; Leevers, Hilary J.; Benasich, April A.

    2007-01-01

    An infant’s ability to process auditory signals presented in rapid succession (i.e. rapid auditory processing [RAP] abilities) has been shown to predict differences in language outcomes in toddlers and preschool children. Early deficits in RAP abilities may serve as a behavioral marker for language-based learning disabilities. The purpose of this study is to determine whether performance on infant information processing measures designed to tap RAP and global processing skills differs as a function of family history of specific language impairment (SLI) and/or the particular demand characteristics of the paradigm used. Seventeen 6- to 9-month-old infants from families with a history of specific language impairment (FH+) and 29 control infants (FH−) participated in this study. Infants’ performance on two different RAP paradigms (head-turn procedure [HT] and auditory-visual habituation/recognition memory [AVH/RM]) and on a global processing task (visual habituation/recognition memory [VH/RM]) was assessed at 6 and 9 months. Toddler language and cognitive skills were evaluated at 12 and 16 months. A number of significant group differences were seen: FH+ infants showed significantly poorer discrimination of fast-rate stimuli on both RAP tasks, took longer to habituate on both habituation/recognition memory measures, and had lower novelty preference scores on the visual habituation/recognition memory task. Infants’ performance on the two RAP measures provided independent but converging contributions to outcome. Thus, different mechanisms appear to underlie performance on operantly conditioned tasks as compared to habituation/recognition memory paradigms. Further, infant RAP abilities predicted 12- and 16-month language scores above and beyond family history of SLI. The results of this study provide additional support for the validity of infant RAP abilities as a behavioral marker for later language outcome. Finally, this is the first study to use a battery of infant tasks to demonstrate multi-modal processing deficits in infants at risk for SLI. PMID:17286846

  8. Comparison of capture-recapture and visual count indices of prairie dog densities in black-footed ferret habitat

    USGS Publications Warehouse

    Fagerstone, Kathleen A.; Biggins, Dean E.

    1986-01-01

    Black-footed ferrets (Mustela nigripes) are dependent on prairie dogs (Cynomys spp.) for food and on their burrows for shelter and rearing young. A stable prairie dog population may therefore be the most important factor determining the survival of ferrets. A rapid method of determining prairie dog density would be useful for assessing prairie dog density in colonies currently occupied by ferrets and for selecting prairie dog colonies in other areas for ferret translocation. This study showed that visual counts can provide a rapid density estimate. Visual counts of white-tailed prairie dogs (Cynomys leucurus) were significantly correlated (r = 0.95) with mark-recapture population density estimates on two study areas near Meeteetse, Wyoming. Suggestions are given for use of visual counts.
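
    The agreement between visual counts and capture-recapture estimates is summarized by a correlation coefficient; a minimal sketch of that computation follows (the per-plot counts are made up for illustration, not the study's Meeteetse data).

```python
import numpy as np

def pearson_r(x, y):
    """Pearson product-moment correlation between two density index series."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xm, ym = x - x.mean(), y - y.mean()
    return float((xm @ ym) / np.sqrt((xm @ xm) * (ym @ ym)))

# hypothetical per-plot visual counts vs. mark-recapture density estimates
visual = [12, 18, 25, 31, 40]
recapture = [15, 20, 27, 35, 43]
r = pearson_r(visual, recapture)  # close to 1 when the two indices track each other
```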

  9. Comparison of visual microscopic and computer-automated fluorescence detection of rabies virus neutralizing antibodies.

    PubMed

    Péharpré, D; Cliquet, F; Sagné, E; Renders, C; Costy, F; Aubert, M

    1999-07-01

    The rapid fluorescent focus inhibition test (RFFIT) and the fluorescent antibody virus neutralization test (FAVNT) are both diagnostic tests for determining levels of rabies neutralizing antibodies. An automated method for determining fluorescence has been implemented to reduce the work time required for fluorescent visual microscopic observations. The automated method offers several advantages over conventional visual observation, such as the ability to rapidly test many samples. The antibody titers obtained with automated techniques were similar to those obtained with both the RFFIT (n = 165, r = 0.93, P < 0.001) and the FAVNT (n = 52, r = 0.99, P < 0.001).

  10. The contribution of visual and vestibular information to spatial orientation by 6- to 14-month-old infants and adults.

    PubMed

    Bremner, J Gavin; Hatton, Fran; Foster, Kirsty A; Mason, Uschi

    2011-09-01

    Although there is much research on infants' ability to orient in space, little is known regarding the information they use to do so. This research uses a rotating room to evaluate the relative contribution of visual and vestibular information to location of a target following bodily rotation. Adults responded precisely on the basis of visual flow information. Seven-month-olds responded mostly on the basis of visual flow, whereas 9-month-olds responded mostly on the basis of vestibular information, and 12-month-olds responded mostly on the basis of visual information. Unlike adults, infants of all ages showed partial influence by both modalities. Additionally, 7-month-olds were capable of using vestibular information when there was no visual information for movement or stability, and 9-month-olds still relied on vestibular information when visual information was enhanced. These results are discussed in the context of neuroscientific evidence regarding visual-vestibular interaction, and in relation to possible changes in reliance on visual and vestibular information following acquisition of locomotion. © 2011 Blackwell Publishing Ltd.

  11. LastQuake: a comprehensive strategy for rapid engagement of earthquake eyewitnesses, massive crowdsourcing and risk reduction

    NASA Astrophysics Data System (ADS)

    Bossu, R.; Roussel, F.; Mazet-Roux, G.; Steed, R.; Frobert, L.

    2015-12-01

    LastQuake is a smartphone app, browser add-on and the most sophisticated Twitter robot (quakebot) for earthquakes currently in operation. It fulfills eyewitnesses' needs by offering information on felt earthquakes and their effects within tens of seconds of their occurrence. Combined with an active presence on Facebook, Pinterest and websites, this proves to be a very efficient engagement strategy; for example, the app was installed thousands of times after the Gorkha earthquake in Nepal. Language barriers have been erased by using visual communication; for example, felt reports are collected through a set of cartoons representing different shaking levels. Within 3 weeks of the magnitude 7.8 Gorkha earthquake, 7,000 felt reports with thousands of comments were collected for the mainshock and tens of its aftershocks, as well as 100 informative geo-located pictures. The QuakeBot was essential in making us identifiable and in interacting with those affected. LastQuake is also a seismic risk reduction tool because of the rapid information it provides: when no information is available after a felt earthquake, the public blocks emergency lines while trying to find the cause of the shaking, crowds form, potentially leading to unpredictable crowd movement, and rumors spread. In its next release, LastQuake will also guide people immediately after a shaking through a number of pop-up cartoons illustrating "do/don't do" items (go to open places, do not phone emergency services unless people are injured…) to encourage appropriate behavior. LastQuake's app design is simple and intuitive and has a global audience. It benefited from a crowdfunding campaign (and the support of the Fondation MAIF), and further improvements have been planned after an online feedback campaign organized in early June with the Gorkha earthquake eyewitnesses.

  12. Early Detection of Clinically Significant Prostate Cancer Using Ultrasonic Acoustic Radiation Force Impulse (ARFI) Imaging

    DTIC Science & Technology

    2017-10-01

    Toolkit for rapid 3D visualization and image volume interpretation, followed by automated transducer positioning in a user-selected image plane for... Toolkit (IGSTK) to enable rapid 3D visualization and image volume interpretation followed by automated transducer positioning in the user-selected... careers in science, technology, and the humanities. What do you plan to do during the next reporting period to accomplish the goals? If this

  13. Selection Difficulty and Interitem Competition Are Independent Factors in Rapid Visual Stream Perception

    ERIC Educational Resources Information Center

    Kawahara, Jun-ichiro; Enns, James T.

    2009-01-01

    When observers try to identify successive targets in a visual stream at a rate of 100 ms per item, accuracy for the 2nd target is impaired for intertarget lags of 100-500 ms. Yet, when the same stream is presented more rapidly (e.g., 50 ms per item), this pattern reverses and a 1st-target deficit is obtained. M. C. Potter, A. Staub, and D. H.…

  14. EzMol: A Web Server Wizard for the Rapid Visualization and Image Production of Protein and Nucleic Acid Structures.

    PubMed

    Reynolds, Christopher R; Islam, Suhail A; Sternberg, Michael J E

    2018-01-31

    EzMol is a molecular visualization Web server in the form of a software wizard, located at http://www.sbg.bio.ic.ac.uk/ezmol/. It is designed for easy and rapid image manipulation and display of protein molecules, and is intended for users who need to quickly produce high-resolution images of protein molecules but do not have the time or inclination to use a software molecular visualization system. EzMol allows the upload of molecular structure files in PDB format to generate a Web page including a representation of the structure that the user can manipulate. EzMol provides intuitive options for chain display, adjusting the color/transparency of residues, side chains and protein surfaces, and for adding labels to residues. The final adjusted protein image can then be downloaded as a high-resolution image. There are a range of applications for rapid protein display, including the illustration of specific areas of a protein structure and the rapid prototyping of images. Copyright © 2018. Published by Elsevier Ltd.

  15. Visual Processing Deficits in Children with Slow RAN Performance

    ERIC Educational Resources Information Center

    Stainthorp, Rhona; Stuart, Morag; Powell, Daisy; Quinlan, Philip; Garwood, Holly

    2010-01-01

    Two groups of 8- to 10-year-olds differing in rapid automatized naming speed but matched for age, verbal and nonverbal ability, phonological awareness, phonological memory, and visual acuity participated in four experiments investigating early visual processing. As low RAN children had significantly slower simple reaction times (SRT) this was…

  16. Youth with Visual Impairments: Experiences in General Physical Education

    ERIC Educational Resources Information Center

    Lieberman, Lauren J.; Robinson, Barbara L.; Rollheiser, Heidi

    2006-01-01

    The rapid increase in the number of students with visual impairments currently being educated in inclusive general physical education makes it important that physical education instructors know how best to serve them. Assessment of the experiences of students with visual impairments during general physical education classes, knowledge of students'…

  17. The Creation of a global telemedical information society.

    PubMed

    Marsh, A

    1998-04-01

    Healthcare is a major candidate for improvement in any vision of the kinds of 'information highways' and 'information societies' that are now being visualized. The medical information management market is one of the largest and fastest growing segments of the healthcare device industry, with expected revenue of US$21 billion by the year 2000. Telemedicine currently accounts for only a small segment but is expanding rapidly: in the USA, more than 60% of federal telemedicine projects were initiated in the last 2 years. The concept of telemedicine captures much of what is developing in terms of technology implementations, especially when combined with the growth of the Internet and the World Wide Web (WWW). It is foreseen that the WWW will become the most important communication medium of any future information society. If the development of such a society is to be on a global scale, it should not be allowed to develop in an ad hoc manner. For this reason, the Euromed Project has identified 20 building blocks, resulting in 39 steps requiring multi-disciplinary collaborations. Because the organization of information is critical, especially in healthcare, the Euromed Project has also introduced a new (global) standard called 'Virtual Medical Worlds', which provides the potential to organize existing medical information and lays the foundations for its integration into future medical information systems. Virtual Medical Worlds, based on 3D reconstructed medical models, uses the WWW as a navigational medium to remotely access multimedia medical information systems. The visualization and manipulation of hyper-graphical 3D 'body/organ' templates and patient-specific 3D/4D/VR models is an attempt to define an information infrastructure in an emerging WWW-based telemedical information society.

  18. The influence of selective attention to auditory and visual speech on the integration of audiovisual speech information.

    PubMed

    Buchan, Julie N; Munhall, Kevin G

    2011-01-01

    Conflicting visual speech information can influence the perception of acoustic speech, causing an illusory percept of a sound not present in the actual acoustic speech (the McGurk effect). We examined whether participants can voluntarily selectively attend to either the auditory or visual modality by instructing participants to pay attention to the information in one modality and to ignore competing information from the other modality. We also examined how performance under these instructions was affected by weakening the influence of the visual information by manipulating the temporal offset between the audio and video channels (experiment 1), and the spatial frequency information present in the video (experiment 2). Gaze behaviour was also monitored to examine whether attentional instructions influenced the gathering of visual information. While task instructions did have an influence on the observed integration of auditory and visual speech information, participants were unable to completely ignore conflicting information, particularly information from the visual stream. Manipulating temporal offset had a more pronounced interaction with task instructions than manipulating the amount of visual information. Participants' gaze behaviour suggests that the attended modality influences the gathering of visual information in audiovisual speech perception.

  19. Integrating Spherical Panoramas and Maps for Visualization of Cultural Heritage Objects Using Virtual Reality Technology.

    PubMed

    Koeva, Mila; Luleva, Mila; Maldjanski, Plamen

    2017-04-11

    Development and virtual representation of 3D models of Cultural Heritage (CH) objects has triggered great interest over the past decade. The main reason for this is the rapid development in the fields of photogrammetry and remote sensing, laser scanning, and computer vision. The advantages of using 3D models for restoration, preservation, and documentation of valuable historical and architectural objects have been demonstrated numerous times by scientists in the field. Moreover, 3D model visualization in virtual reality has been recognized as an efficient, fast, and easy way of representing a variety of objects worldwide for present-day users, who have stringent requirements and high expectations. However, the main focus of recent research is the visual, geometric, and textural characteristics of a single concrete object, while integration of large numbers of models with additional information, such as historical overview, detailed description, and location, is missing. Such integrated information can be beneficial not only for tourism but also for accurate documentation. For that reason, we demonstrate in this paper an integration of high-resolution spherical panoramas, a variety of maps, GNSS, sound, video, and text information for representation of numerous cultural heritage objects. These are then displayed in a web-based portal with an intuitive interface. The users have the opportunity to choose freely from the provided information and decide for themselves what is interesting to visit. Based on the created web application, we provide suggestions and guidelines for similar studies. We selected objects located in Bulgaria, a country with thousands of years of history and cultural heritage dating back to ancient civilizations. The methods used in this research are applicable to any type of spherical or cylindrical images and can be easily followed and applied in various domains. After a visual and metric assessment of the panoramas and the evaluation of the web portal, we conclude that this novel approach is a very effective, fast, informative, and accurate way to present, disseminate, and document cultural heritage objects.

  20. [Telemedicine in dermatological practice: teledermatology].

    PubMed

    Danis, Judit; Forczek, Erzsébet; Bari, Ferenc

    2016-03-06

    Technological advances in the fields of information and telecommunication technologies have affected the health care system in recent decades and led to the emergence of a new discipline: telemedicine. The appearance and rise of the internet and smartphones induced rapid progress in telemedicine, with new applications and mobile devices, including many for medical purposes, released constantly. Parallel to these changes in the technical fields, the medical literature on telemedicine has grown rapidly. Due to its visual nature, dermatology is ideally suited to benefit from this new technology, and teledermatology has become one of the most dynamically evolving fields of telemedicine. Teledermatology is not yet routinely practiced in Hungary; it promises to make health care better, cheaper and faster, but we must take note of the experience and problems encountered in teledermatologic applications so far, which are summarized in this review.

  1. Annotation and visualization of endogenous retroviral sequences using the Distributed Annotation System (DAS) and eBioX

    PubMed Central

    Martínez Barrio, Álvaro; Lagercrantz, Erik; Sperber, Göran O; Blomberg, Jonas; Bongcam-Rudloff, Erik

    2009-01-01

    Background The Distributed Annotation System (DAS) is a widely used network protocol for sharing biological information. The distributed aspects of the protocol enable the use of various reference and annotation servers for connecting biological sequence data to pertinent annotations, in order to depict an integrated view of the data for the final user. Results An annotation server has been devised to provide information about the endogenous retroviruses detected and annotated by a specialized in silico tool called RetroTector. We describe the procedure to implement the DAS 1.5 protocol commands necessary for constructing the DAS annotation server, using our server to exemplify those steps. Data distribution is kept separate from visualization, which is carried out by eBioX, an easy-to-use open-source program incorporating multiple bioinformatics utilities. Some well-characterized endogenous retroviruses are shown in two different DAS clients. A rapid analysis of areas free from retroviral insertions could be facilitated by our annotations. Conclusion The DAS protocol has proven advantageous in the distribution of endogenous retrovirus data. The distributed nature of the protocol also aids in combining annotation and visualization along a genome, enhancing the understanding of ERV contribution to its evolution. Reference and annotation servers are used conjointly by eBioX to provide visualization of ERV annotations as well as other data sources. Our DAS data source can be found in the central public DAS service repository. PMID:19534743

  2. A new pattern associative memory model for image recognition based on Hebb rules and dot product

    NASA Astrophysics Data System (ADS)

    Gao, Mingyue; Deng, Limiao; Wang, Yanjiang

    2018-04-01

    A great number of associative memory models have been proposed in the last few years to realize information storage and retrieval inspired by the human brain. However, there is still much room for improvement in those models. In this paper, we extend a binary pattern associative memory model to accomplish real-world image recognition. The learning process is based on the fundamental Hebb rules, and retrieval is implemented by a normalized dot product operation. Our proposed model not only fulfills rapid memory storage and retrieval for visual information but also supports incremental learning without destroying previously learned information. Experimental results demonstrate that our model outperforms the existing Self-Organizing Incremental Neural Network (SOINN) and Back Propagation Neural Network (BPNN) in recognition accuracy and time efficiency.
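
    The two ingredients the abstract names, Hebb-style storage and retrieval by normalized dot product (cosine similarity), can be sketched as follows. The class name, interface, and patterns are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

class AssociativeMemory:
    """Sketch: store patterns incrementally; recall by normalized dot product."""

    def __init__(self):
        self.patterns = []

    def store(self, pattern):
        # Incremental storage: a new pattern is appended without retraining,
        # so previously learned patterns are left untouched
        self.patterns.append(np.asarray(pattern, dtype=float))

    def retrieve(self, probe):
        probe = np.asarray(probe, dtype=float)
        # Normalized dot product (cosine similarity) against every stored pattern
        sims = [float(p @ probe / (np.linalg.norm(p) * np.linalg.norm(probe)))
                for p in self.patterns]
        best = int(np.argmax(sims))
        return best, sims[best]
```

    A noisy probe retrieves the index of the most similar stored pattern, and storing further patterns later does not disturb earlier ones, which is the incremental-learning property the abstract emphasizes.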

  3. VAST Challenge 2016: Streaming Visual Analytics

    DTIC Science & Technology

    2016-10-25

    understand rapidly evolving situations. To support such tasks, visual analytics solutions must move well beyond systems that simply provide real-time...received. Mini-Challenge 1: Design Challenge Mini-Challenge 1 focused on systems to support security and operational analytics at the Euybia...Challenge 1 was to solicit novel approaches for streaming visual analytics that push the boundaries for what constitutes a visual analytics system , and to

  4. Model system for plant cell biology: GFP imaging in living onion epidermal cells

    NASA Technical Reports Server (NTRS)

    Scott, A.; Wyatt, S.; Tsou, P. L.; Robertson, D.; Allen, N. S.

    1999-01-01

    The ability to visualize organelle localization and dynamics is very useful in studying cellular physiological events. Until recently, this has been accomplished using a variety of staining methods. However, staining can give inaccurate information due to nonspecific staining, diffusion of the stain or through toxic effects. The ability to target green fluorescent protein (GFP) to various organelles allows for specific labeling of organelles in vivo. The disadvantages of GFP thus far have been the time and money involved in developing stable transformants or maintaining cell cultures for transient expression. In this paper, we present a rapid transient expression system using onion epidermal peels. We have localized GFP to various cellular compartments (including the cell wall) to illustrate the utility of this method and to visualize dynamics of these compartments. The onion epidermis has large, living, transparent cells in a monolayer, making them ideal for visualizing GFP. This method is easy and inexpensive, and it allows for testing of new GFP fusion proteins in a living tissue to determine deleterious effects and the ability to express before stable transformants are attempted.

  5. Dashboard visualizations: Supporting real-time throughput decision-making.

    PubMed

    Franklin, Amy; Gantela, Swaroop; Shifarraw, Salsawit; Johnson, Todd R; Robinson, David J; King, Brent R; Mehta, Amit M; Maddow, Charles L; Hoot, Nathan R; Nguyen, Vickie; Rubio, Adriana; Zhang, Jiajie; Okafor, Nnaemeka G

    2017-07-01

    Providing timely and effective care in the emergency department (ED) requires the management of individual patients as well as the flow and demands of the entire department. Strategic changes to work processes, such as adding a flow coordination nurse or a physician in triage, have demonstrated improvements in throughput times. However, such global strategic changes do not address the real-time, often opportunistic workflow decisions of individual clinicians in the ED. We believe that real-time representation of the status of the entire emergency department, and of each patient within it, through information visualizations will better support in-the-moment clinical decision-making and provide for rapid intervention to improve ED flow. This notion is based on previous work where we found that clinicians' workflow decisions were often based on an in-the-moment local perspective, rather than a global perspective. Here, we discuss the challenges of designing and implementing visualizations for the ED through a discussion of the development of our prototype Throughput Dashboard and the potential it holds for supporting real-time decision-making. Copyright © 2017. Published by Elsevier Inc.

  6. Understanding interfirm relationships in business ecosystems with interactive visualization.

    PubMed

    Basole, Rahul C; Clear, Trustin; Hu, Mengdie; Mehrotra, Harshit; Stasko, John

    2013-12-01

    Business ecosystems are characterized by large, complex, and global networks of firms, often from many different market segments, all collaborating, partnering, and competing to create and deliver new products and services. Given the rapidly increasing scale, complexity, and rate of change of business ecosystems, as well as economic and competitive pressures, analysts are faced with the formidable task of quickly understanding the fundamental characteristics of these interfirm networks. Existing tools, however, are predominantly query- or list-centric with limited interactive, exploratory capabilities. Guided by a field study of corporate analysts, we have designed and implemented dotlink360, an interactive visualization system that provides capabilities to gain systemic insight into the compositional, temporal, and connective characteristics of business ecosystems. dotlink360 consists of novel, multiple connected views enabling the analyst to explore, discover, and understand interfirm networks for a focal firm, specific market segments or countries, and the entire business ecosystem. System evaluation by a small group of prototypical users shows supporting evidence of the benefits of our approach. This design study contributes to the relatively unexplored, but promising area of exploratory information visualization in market research and business strategy.

  7. Distinct roles of visual, parietal, and frontal motor cortices in memory-guided sensorimotor decisions

    PubMed Central

    Goard, Michael J; Pho, Gerald N; Woodson, Jonathan; Sur, Mriganka

    2016-01-01

    Mapping specific sensory features to future motor actions is a crucial capability of mammalian nervous systems. We investigated the role of visual (V1), posterior parietal (PPC), and frontal motor (fMC) cortices for sensorimotor mapping in mice during performance of a memory-guided visual discrimination task. Large-scale calcium imaging revealed that V1, PPC, and fMC neurons exhibited heterogeneous responses spanning all task epochs (stimulus, delay, response). Population analyses demonstrated unique encoding of stimulus identity and behavioral choice information across regions, with V1 encoding stimulus, fMC encoding choice even early in the trial, and PPC multiplexing the two variables. Optogenetic inhibition during behavior revealed that all regions were necessary during the stimulus epoch, but only fMC was required during the delay and response epochs. Stimulus identity can thus be rapidly transformed into behavioral choice, requiring V1, PPC, and fMC during the transformation period, but only fMC for maintaining the choice in memory prior to execution. DOI: http://dx.doi.org/10.7554/eLife.13764.001 PMID:27490481

  8. The neural bases of spatial frequency processing during scene perception

    PubMed Central

    Kauffmann, Louise; Ramanoël, Stephen; Peyrin, Carole

    2014-01-01

Theories on visual perception agree that scenes are processed in terms of spatial frequencies. Low spatial frequencies (LSF) carry coarse information whereas high spatial frequencies (HSF) carry fine details of the scene. However, how and where spatial frequencies are processed within the brain remain unresolved questions. The present review addresses these issues and aims to identify the cerebral regions differentially involved in low and high spatial frequency processing, and to clarify their attributes during scene perception. Results from a number of behavioral and neuroimaging studies suggest that spatial frequency processing is lateralized in both hemispheres, with the right and left hemispheres predominantly involved in the categorization of LSF and HSF scenes, respectively. There is also evidence that spatial frequency processing is retinotopically mapped in the visual cortex. HSF scenes (as opposed to LSF) activate occipital areas in relation to foveal representations, while categorization of LSF scenes (as opposed to HSF) activates occipital areas in relation to more peripheral representations. Concomitantly, a number of studies have demonstrated that LSF information may reach high-order areas rapidly, allowing an initial coarse parsing of the visual scene, which could then be sent back through feedback into the occipito-temporal cortex to guide finer HSF-based analysis. Finally, the review addresses spatial frequency processing within scene-selective areas of the occipito-temporal cortex. PMID:24847226
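The LSF/HSF distinction at the heart of this review can be made concrete with a standard low-pass/high-pass split of an image's Fourier spectrum. A minimal NumPy sketch (the cutoff radius of 8 cycles/image is an arbitrary illustration, not a value from the literature):

```python
import numpy as np

def split_spatial_frequencies(image, cutoff):
    """Split a grayscale image into low- (LSF) and high- (HSF)
    spatial-frequency components; the two parts sum back to the image.

    `cutoff` is a radius in cycles per image: frequencies at or below it
    go to the LSF image, everything else to the HSF image.
    """
    spectrum = np.fft.fft2(image)
    # frequency grids in cycles/image along each axis
    fy = np.fft.fftfreq(image.shape[0]) * image.shape[0]
    fx = np.fft.fftfreq(image.shape[1]) * image.shape[1]
    radius = np.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)
    low_mask = radius <= cutoff
    lsf = np.real(np.fft.ifft2(spectrum * low_mask))
    hsf = np.real(np.fft.ifft2(spectrum * ~low_mask))
    return lsf, hsf

rng = np.random.default_rng(0)
img = rng.standard_normal((64, 64))
lsf, hsf = split_spatial_frequencies(img, cutoff=8)
```

Because the two masks partition the spectrum exactly, `lsf + hsf` reconstructs the original image.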

  9. Biographer: web-based editing and rendering of SBGN compliant biochemical networks

    PubMed Central

    Krause, Falko; Schulz, Marvin; Ripkens, Ben; Flöttmann, Max; Krantz, Marcus; Klipp, Edda; Handorf, Thomas

    2013-01-01

Motivation: The rapid accumulation of knowledge in the field of Systems Biology during the past years requires advanced, but simple-to-use, methods for the visualization of information in a structured and easily comprehensible manner. Results: We have developed biographer, a web-based renderer and editor for reaction networks, which can be integrated as a library into tools dealing with network-related information. Our software enables visualizations based on the emerging standard Systems Biology Graphical Notation. It is able to import networks encoded in various formats such as SBML, SBGN-ML and jSBGN, a custom lightweight exchange format. The core package is implemented in HTML5, CSS and JavaScript and can be used within any kind of web-based project. It features interactive graph-editing tools and automatic graph layout algorithms. In addition, we provide a standalone graph editor and a web server, which contains enhanced features like web services for the import and export of models and visualizations in different formats. Availability: The biographer tool can be used at and downloaded from the web page http://biographer.biologie.hu-berlin.de/. The different software packages, including a server-independent version as well as a web server for Windows and Linux based systems, are available at http://code.google.com/p/biographer/ under the open-source license LGPL. Contact: edda.klipp@biologie.hu-berlin.de or handorf@physik.hu-berlin.de PMID:23574737

  10. Rapid feature-driven changes in the attentional window.

    PubMed

    Leonard, Carly J; Lopez-Calderon, Javier; Kreither, Johanna; Luck, Steven J

    2013-07-01

    Spatial attention must adjust around an object of interest in a manner that reflects the object's size on the retina as well as the proximity of distracting objects, a process often guided by nonspatial features. This study used ERPs to investigate how quickly the size of this type of "attentional window" can adjust around a fixated target object defined by its color and whether this variety of attention influences the feedforward flow of subsequent information through the visual system. The task involved attending either to a circular region at fixation or to a surrounding annulus region, depending on which region contained an attended color. The region containing the attended color varied randomly from trial to trial, so the spatial distribution of attention had to be adjusted on each trial. We measured the initial sensory ERP response elicited by an irrelevant probe stimulus that appeared in one of the two regions at different times after task display onset. This allowed us to measure the amount of time required to adjust spatial attention on the basis of the location of the task-relevant feature. We found that the probe-elicited sensory response was larger when the probe occurred within the region of the attended dots, and this effect required a delay of approximately 175 msec between the onset of the task display and the onset of the probe. Thus, the window of attention is rapidly adjusted around the point of fixation in a manner that reflects the spatial extent of a task-relevant stimulus, leading to changes in the feedforward flow of subsequent information through the visual system.

  11. What whiteboards in a trauma center operating suite can teach us about emergency department communication.

    PubMed

    Xiao, Yan; Schenkel, Stephen; Faraj, Samer; Mackenzie, Colin F; Moss, Jacqueline

    2007-10-01

Highly reliable, efficient collaborative work relies on excellent communication. We seek to understand how a traditional whiteboard is used as a versatile information artifact to support communication in rapid-paced, highly dynamic collaborative work. The similar communicative demands of the trauma operating suite and an emergency department (ED) make the findings applicable to both settings. We took photographs and observed staff's interaction with a whiteboard in a 6-bed surgical suite dedicated to trauma service. We analyzed the integral role of artifacts in cognitive activities, such as when workers configure and manage visual spaces to simplify their cognitive tasks. We further identified characteristics of the whiteboard as a communicative information artifact in supporting coordination in fast-paced environments. We identified 8 ways in which the whiteboard was used by physicians, nurses, and other personnel to support collaborative work: task management, team attention management, task status tracking, task articulation, resource planning and tracking, synchronous and asynchronous communication, multidisciplinary problem solving and negotiation, and socialization and team building. The whiteboard was highly communicative because of its location and installation method, high interactivity and usability, high expressiveness, and ability to visualize transition points to support work handoffs. Traditional information artifacts such as whiteboards play significant roles in supporting collaborative work. How these artifacts are used provides insights into complicated information needs of teamwork in highly dynamic, high-risk settings such as an ED.

  12. The surprisingly high human efficiency at learning to recognize faces

    PubMed Central

    Peterson, Matthew F.; Abbey, Craig K.; Eckstein, Miguel P.

    2009-01-01

    We investigated the ability of humans to optimize face recognition performance through rapid learning of individual relevant features. We created artificial faces with discriminating visual information heavily concentrated in single features (nose, eyes, chin or mouth). In each of 2500 learning blocks a feature was randomly selected and retained over the course of four trials, during which observers identified randomly sampled, noisy face images. Observers learned the discriminating feature through indirect feedback, leading to large performance gains. Performance was compared to a learning Bayesian ideal observer, resulting in unexpectedly high learning compared to previous studies with simpler stimuli. We explore various explanations and conclude that the higher learning measured with faces cannot be driven by adaptive eye movement strategies but can be mostly accounted for by suboptimalities in human face discrimination when observers are uncertain about the discriminating feature. We show that an initial bias of humans to use specific features to perform the task even though they are informed that each of four features is equally likely to be the discriminatory feature would lead to seemingly supra-optimal learning. We also examine the possibility of inefficient human integration of visual information across the spatially distributed facial features. Together, the results suggest that humans can show large performance improvement effects in discriminating faces as they learn to identify the feature containing the discriminatory information. PMID:19000918

  13. The contribution of visual information to the perception of speech in noise with and without informative temporal fine structure

    PubMed Central

    Stacey, Paula C.; Kitterick, Pádraig T.; Morris, Saffron D.; Sumner, Christian J.

    2017-01-01

    Understanding what is said in demanding listening situations is assisted greatly by looking at the face of a talker. Previous studies have observed that normal-hearing listeners can benefit from this visual information when a talker's voice is presented in background noise. These benefits have also been observed in quiet listening conditions in cochlear-implant users, whose device does not convey the informative temporal fine structure cues in speech, and when normal-hearing individuals listen to speech processed to remove these informative temporal fine structure cues. The current study (1) characterised the benefits of visual information when listening in background noise; and (2) used sine-wave vocoding to compare the size of the visual benefit when speech is presented with or without informative temporal fine structure. The accuracy with which normal-hearing individuals reported words in spoken sentences was assessed across three experiments. The availability of visual information and informative temporal fine structure cues was varied within and across the experiments. The results showed that visual benefit was observed using open- and closed-set tests of speech perception. The size of the benefit increased when informative temporal fine structure cues were removed. This finding suggests that visual information may play an important role in the ability of cochlear-implant users to understand speech in many everyday situations. Models of audio-visual integration were able to account for the additional benefit of visual information when speech was degraded and suggested that auditory and visual information was being integrated in a similar way in all conditions. The modelling results were consistent with the notion that audio-visual benefit is derived from the optimal combination of auditory and visual sensory cues. PMID:27085797
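The "optimal combination of auditory and visual sensory cues" mentioned above usually refers to maximum-likelihood integration, in which each cue is weighted by its inverse variance. A small sketch under that standard model (the numbers are illustrative only, not values from this study):

```python
def combine_cues(mu_a, var_a, mu_v, var_v):
    """Inverse-variance-weighted (maximum-likelihood) cue combination.

    The combined variance is never larger than either single-cue
    variance, which is one way to formalize audio-visual benefit.
    """
    w_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_v)
    mu = w_a * mu_a + (1.0 - w_a) * mu_v
    var = 1.0 / (1.0 / var_a + 1.0 / var_v)
    return mu, var

# visual cue four times more reliable than the auditory one:
mu, var = combine_cues(mu_a=0.0, var_a=4.0, mu_v=1.0, var_v=1.0)
```

The combined estimate sits closer to the more reliable (visual) cue, and its variance is below both single-cue variances.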

  14. iCAVE: an open source tool for visualizing biomolecular networks in 3D, stereoscopic 3D and immersive 3D

    PubMed Central

    Liluashvili, Vaja; Kalayci, Selim; Fluder, Eugene; Wilson, Manda; Gabow, Aaron

    2017-01-01

Visualizations of biomolecular networks assist in systems-level data exploration in many cellular processes. Data generated from high-throughput experiments increasingly inform these networks, yet current tools do not adequately scale with concomitant increase in their size and complexity. We present an open source software platform, interactome-CAVE (iCAVE), for visualizing large and complex biomolecular interaction networks in 3D. Users can explore networks (i) in 3D using a desktop, (ii) in stereoscopic 3D using 3D-vision glasses and a desktop, or (iii) in immersive 3D within a CAVE environment. iCAVE introduces 3D extensions of known 2D network layout, clustering, and edge-bundling algorithms, as well as new 3D network layout algorithms. Furthermore, users can simultaneously query several built-in databases within iCAVE for network generation or visualize their own networks (e.g., disease, drug, protein, metabolite). iCAVE has a modular structure that allows rapid development by addition of algorithms, datasets, or features without affecting other parts of the code. Overall, iCAVE is the first freely available open source tool that enables 3D (optionally stereoscopic or immersive) visualizations of complex, dense, or multi-layered biomolecular networks. While primarily designed for researchers utilizing biomolecular networks, iCAVE can assist researchers in any field. PMID:28814063
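The abstract mentions 3D extensions of known 2D layout algorithms. iCAVE's actual algorithms are not described here, but the general idea of a 3D force-directed layout can be sketched in a few lines of NumPy (a toy Fruchterman-Reingold-style scheme, not iCAVE's implementation):

```python
import numpy as np

def layout_3d(n_nodes, edges, iters=200, seed=0):
    """Toy 3D force-directed layout: all-pairs repulsion plus spring
    attraction along edges, with a shrinking step-size cap ("cooling")."""
    rng = np.random.default_rng(seed)
    pos = rng.standard_normal((n_nodes, 3))
    k = 1.0                               # ideal edge length
    for step in range(iters):
        # pairwise repulsion with magnitude k^2 / distance
        delta = pos[:, None, :] - pos[None, :, :]
        dist = np.linalg.norm(delta, axis=-1)
        np.fill_diagonal(dist, np.inf)    # no self-repulsion
        disp = (delta / dist[..., None] * (k * k / dist)[..., None]).sum(axis=1)
        for i, j in edges:                # spring attraction ~ distance^2 / k
            d = pos[i] - pos[j]
            dlen = np.linalg.norm(d) + 1e-9
            f = d / dlen * (dlen * dlen / k)
            disp[i] -= f
            disp[j] += f
        t = 0.1 * (1.0 - step / iters)    # cooling schedule
        lengths = np.linalg.norm(disp, axis=1, keepdims=True) + 1e-9
        pos = pos + disp / lengths * np.minimum(lengths, t)
    return pos

pos = layout_3d(4, [(0, 1), (1, 2), (2, 0), (2, 3)])
```

The returned (n_nodes, 3) coordinates could then be handed to any 3D renderer; real tools add edge bundling and clustering on top of such a layout.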

  15. The effects of visual stimulation and selective visual attention on rhythmic neuronal synchronization in macaque area V4.

    PubMed

    Fries, Pascal; Womelsdorf, Thilo; Oostenveld, Robert; Desimone, Robert

    2008-04-30

    Selective attention lends relevant sensory input priority access to higher-level brain areas and ultimately to behavior. Recent studies have suggested that those neurons in visual areas that are activated by an attended stimulus engage in enhanced gamma-band (30-70 Hz) synchronization compared with neurons activated by a distracter. Such precise synchronization could enhance the postsynaptic impact of cells carrying behaviorally relevant information. Previous studies have used the local field potential (LFP) power spectrum or spike-LFP coherence (SFC) to indirectly estimate spike synchronization. Here, we directly demonstrate zero-phase gamma-band coherence among spike trains of V4 neurons. This synchronization was particularly evident during visual stimulation and enhanced by selective attention, thus confirming the pattern inferred from LFP power and SFC. We therefore investigated the time course of LFP gamma-band power and found rapid dynamics consistent with interactions of top-down spatial and feature attention with bottom-up saliency. In addition to the modulation of synchronization during visual stimulation, selective attention significantly changed the prestimulus pattern of synchronization. Attention inside the receptive field of the recorded neuronal population enhanced gamma-band synchronization and strongly reduced alpha-band (9-11 Hz) synchronization in the prestimulus period. These results lend further support for a functional role of rhythmic neuronal synchronization in attentional stimulus selection.

  16. iCAVE: an open source tool for visualizing biomolecular networks in 3D, stereoscopic 3D and immersive 3D.

    PubMed

    Liluashvili, Vaja; Kalayci, Selim; Fluder, Eugene; Wilson, Manda; Gabow, Aaron; Gümüs, Zeynep H

    2017-08-01

Visualizations of biomolecular networks assist in systems-level data exploration in many cellular processes. Data generated from high-throughput experiments increasingly inform these networks, yet current tools do not adequately scale with concomitant increase in their size and complexity. We present an open source software platform, interactome-CAVE (iCAVE), for visualizing large and complex biomolecular interaction networks in 3D. Users can explore networks (i) in 3D using a desktop, (ii) in stereoscopic 3D using 3D-vision glasses and a desktop, or (iii) in immersive 3D within a CAVE environment. iCAVE introduces 3D extensions of known 2D network layout, clustering, and edge-bundling algorithms, as well as new 3D network layout algorithms. Furthermore, users can simultaneously query several built-in databases within iCAVE for network generation or visualize their own networks (e.g., disease, drug, protein, metabolite). iCAVE has a modular structure that allows rapid development by addition of algorithms, datasets, or features without affecting other parts of the code. Overall, iCAVE is the first freely available open source tool that enables 3D (optionally stereoscopic or immersive) visualizations of complex, dense, or multi-layered biomolecular networks. While primarily designed for researchers utilizing biomolecular networks, iCAVE can assist researchers in any field. © The Authors 2017. Published by Oxford University Press.

  17. MassImager: A software for interactive and in-depth analysis of mass spectrometry imaging data.

    PubMed

    He, Jiuming; Huang, Luojiao; Tian, Runtao; Li, Tiegang; Sun, Chenglong; Song, Xiaowei; Lv, Yiwei; Luo, Zhigang; Li, Xin; Abliz, Zeper

    2018-07-26

Mass spectrometry imaging (MSI) has become a powerful tool to probe molecular events in biological tissue. However, one of the biggest remaining challenges is the lack of easy-to-use data processing software for discovering the underlying biological information in complicated and huge MSI datasets. Here, MassImager, a user-friendly and full-featured MSI software package comprising three subsystems (Solution, Visualization and Intelligence), is developed, focusing on interactive visualization, in-situ biomarker discovery and artificial intelligence-assisted pathological diagnosis. Simplified data preprocessing, together with high-throughput MSI data exchange and serialization, guarantees quick reconstruction of ion images and rapid analysis of datasets of dozens of gigabytes. It also offers diverse self-defined operations for visual processing, including multiple ion visualization, multiple channel superposition, image normalization, visual resolution enhancement and image filtering. Regions-of-interest analysis can be performed precisely through interactive visualization linking ion images and mass spectra, as well as an overlaid optical image guide, to directly identify region-specific biomarkers. Moreover, automatic pattern recognition can be achieved immediately upon supervised or unsupervised multivariate statistical modeling. Clear discrimination between cancer tissue and adjacent tissue within an MSI dataset can be seen in the generated pattern image, showing great potential for visual in-situ biomarker discovery and artificial intelligence-assisted pathological diagnosis of cancer. All the features are integrated together in MassImager to provide a deep MSI processing solution at the in-situ metabolomics level for biomarker discovery and future clinical pathological diagnosis. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
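The "reconstruction of ion images" step is straightforward to sketch: for each pixel's spectrum, sum the intensity in a narrow m/z window and optionally normalize by total ion current. A minimal sketch (not MassImager's code; the data layout and tolerance are assumptions for illustration):

```python
import numpy as np

def ion_image(coords, spectra, target_mz, tol, shape, tic_normalize=True):
    """Build an ion image from per-pixel mass spectra.

    coords: list of (row, col) pixel positions
    spectra: list of (mz_array, intensity_array) pairs, one per pixel
    The image value at a pixel is the summed intensity within
    target_mz +/- tol, optionally divided by the pixel's total ion
    current (TIC) to correct for overall signal differences.
    """
    img = np.zeros(shape)
    for (r, c), (mz, inten) in zip(coords, spectra):
        window = np.abs(mz - target_mz) <= tol
        value = inten[window].sum()
        if tic_normalize:
            tic = inten.sum()
            value = value / tic if tic > 0 else 0.0
        img[r, c] = value
    return img

# two toy pixels with different relative abundance of the m/z 100 ion
coords = [(0, 0), (0, 1)]
spectra = [(np.array([100.0, 200.0]), np.array([8.0, 2.0])),
           (np.array([100.0, 200.0]), np.array([1.0, 9.0]))]
img = ion_image(coords, spectra, target_mz=100.0, tol=0.25, shape=(1, 2))
```

TIC normalization is one of the simplest of the normalization options such software typically offers; per-pixel median or reference-ion normalization are common alternatives.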

  18. Analysis of Human Mobility Based on Cellular Data

    NASA Astrophysics Data System (ADS)

    Arifiansyah, F.; Saptawati, G. A. P.

    2017-01-01

Nowadays not only adults but even teenagers and children have their own mobile phones, an indication that the mobile phone has become an important part of everyday life. Accordingly, the amount of cellular data has also increased rapidly. Cellular data is defined as data that records communication among mobile phone users. It is easy to obtain because telecommunications companies already record it for their billing systems. Billing data keeps a log of each user's cellular activity, from which we can obtain information about communication between users. Through data visualization, interesting patterns can be seen in the raw cellular data, giving users prior knowledge for data analysis. Cellular data can be processed using data mining to discover human mobility patterns in the existing data. In this paper, we use frequent pattern mining and association rule mining to observe the relations between attributes in cellular data and then visualize them. We used the Weka toolkit for finding the rules in the data mining stage. In general, cellular data can provide supporting information for decision making and serve as a data source for the solutions and information needed by decision makers.
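Frequent pattern mining and association rules of the kind the authors ran in Weka can be illustrated with a small, self-contained sketch (exhaustive support counting rather than Weka's optimized Apriori; the toy "call" transactions are invented for illustration):

```python
from itertools import combinations

def frequent_itemsets(transactions, min_support):
    """Naive frequent-itemset mining by exhaustive support counting."""
    items = sorted({i for t in transactions for i in t})
    n = len(transactions)
    result = {}
    for size in range(1, len(items) + 1):
        found_any = False
        for combo in combinations(items, size):
            support = sum(set(combo) <= t for t in transactions) / n
            if support >= min_support:
                result[combo] = support
                found_any = True
        if not found_any:  # Apriori property: no larger itemset can be frequent
            break
    return result

def association_rules(itemsets, min_confidence):
    """Derive X -> Y rules (with confidence) from frequent itemsets."""
    rules = []
    for itemset, support in itemsets.items():
        for k in range(1, len(itemset)):
            for lhs in combinations(itemset, k):
                conf = support / itemsets[lhs]  # lhs is frequent by Apriori
                if conf >= min_confidence:
                    rhs = tuple(i for i in itemset if i not in lhs)
                    rules.append((lhs, rhs, conf))
    return rules

# toy transactions: which contacts appear together in a user's call records
calls = [{"A", "B"}, {"A", "B", "C"}, {"A", "C"}, {"B", "C"}]
sets = frequent_itemsets(calls, min_support=0.5)
rules = association_rules(sets, min_confidence=0.6)
```

Each rule `(lhs, rhs, conf)` reads "transactions containing lhs also contain rhs with probability conf", the same output shape Weka's associator produces.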

  19. Visualizing Mars Using Virtual Reality: A State of the Art Mapping Technique Used on Mars Pathfinder

    NASA Technical Reports Server (NTRS)

    Stoker, C.; Zbinden, E.; Blackmon, T.; Nguyen, L.

    1999-01-01

We describe an interactive terrain visualization system which rapidly generates and interactively displays photorealistic three-dimensional (3-D) models produced from stereo images. This product, first demonstrated in Mars Pathfinder, is interactive, 3-D, and can be viewed in an immersive display which qualifies it for the name Virtual Reality (VR). The use of this technology on Mars Pathfinder was the first use of VR for geologic analysis. A primary benefit of using VR to display geologic information is that it provides an improved perception of depth and spatial layout of the remote site. The VR aspect of the display allows an operator to move freely in the environment, unconstrained by the physical limitations of the perspective from which the data were acquired. Virtual Reality offers a way to archive and retrieve information in a way that is intuitively obvious. Combining VR models with stereo display systems can give the user a sense of presence at the remote location. The capability to interactively perform measurements from within the VR model offers unprecedented ease in performing operations that are normally time consuming and difficult using other techniques. Thus, Virtual Reality can be a powerful cartographic tool. Additional information is contained in the original extended abstract.
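Models like these are built from stereo images, where depth recovery rests on the standard pinhole relation Z = f · B / d (focal length times baseline over disparity). A minimal sketch (the camera numbers are illustrative, not Pathfinder's):

```python
def depth_from_disparity(disparity_px, focal_length_px, baseline_m):
    """Depth (in meters) of a scene point from stereo disparity, using the
    standard pinhole relation Z = f * B / d (f in pixels, baseline in m)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_m / disparity_px

# e.g. a 10-pixel disparity with f = 1000 px and a 15 cm baseline:
z = depth_from_disparity(10.0, 1000.0, 0.15)
```

Applying this per matched pixel pair yields the depth map from which a textured 3-D terrain mesh can be triangulated.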

  20. Anticipation in Real-world Scenes: The Role of Visual Context and Visual Memory

    ERIC Educational Resources Information Center

    Coco, Moreno I.; Keller, Frank; Malcolm, George L.

    2016-01-01

    The human sentence processor is able to make rapid predictions about upcoming linguistic input. For example, upon hearing the verb eat, anticipatory eye-movements are launched toward edible objects in a visual scene (Altmann & Kamide, 1999). However, the cognitive mechanisms that underlie anticipation remain to be elucidated in ecologically…

  1. Metabolic Mapping of the Brain's Response to Visual Stimulation: Studies in Humans.

    ERIC Educational Resources Information Center

    Phelps, Michael E.; Kuhl, David E.

    1981-01-01

    Studies demonstrate increasing glucose metabolic rates in human primary (PVC) and association (AVC) visual cortex as complexity of visual scenes increase. AVC increased more rapidly with scene complexity than PVC and increased local metabolic activities above control subject with eyes closed; indicates wide range and metabolic reserve of visual…

  2. Dynamic Visualizations: How Attraction, Motivation and Communication Affect Streaming Video Tutorial Implementation

    ERIC Educational Resources Information Center

    Boger, Claire

    2011-01-01

    The rapid advancement in the capabilities of computer technologies has made it easier to design and deploy dynamic visualizations in web-based learning environments; yet, the implementation of these dynamic visuals has been met with mixed results. While many guidelines exist to assist instructional designers in the design and application of…

  3. Images in Language: Metaphors and Metamorphoses. Visual Learning. Volume 1

    ERIC Educational Resources Information Center

    Benedek, Andras, Ed.; Nyiri, Kristof, Ed.

    2011-01-01

    Learning and teaching are faced with radically new challenges in today's rapidly changing world and its deeply transformed communicational environment. We are living in an era of images. Contemporary visual technology--film, video, interactive digital media--is promoting but also demanding a new approach to education: the age of visual learning…

  4. Exploring the Integration of Data Mining and Data Visualization

    ERIC Educational Resources Information Center

    Zhang, Yi

    2011-01-01

    Due to the rapid advances in computing and sensing technologies, enormous amounts of data are being generated everyday in various applications. The integration of data mining and data visualization has been widely used to analyze these massive and complex data sets to discover hidden patterns. For both data mining and visualization to be…

  5. Visual Imagery for Letters and Words. Final Report.

    ERIC Educational Resources Information Center

    Weber, Robert J.

    In a series of six experiments, undergraduate college students visually imagined letters or words and then classified as rapidly as possible the imagined letters for some physical property such as vertical height. This procedure allowed for a preliminary assessment of the temporal parameters of visual imagination. The results delineate a number of…

  6. SIMPLE: a sequential immunoperoxidase labeling and erasing method.

    PubMed

    Glass, George; Papin, Jason A; Mandell, James W

    2009-10-01

    The ability to simultaneously visualize expression of multiple antigens in cells and tissues can provide powerful insights into cellular and organismal biology. However, standard methods are limited to the use of just two or three simultaneous probes and have not been widely adopted for routine use in paraffin-embedded tissue. We have developed a novel approach called sequential immunoperoxidase labeling and erasing (SIMPLE) that enables the simultaneous visualization of at least five markers within a single tissue section. Utilizing the alcohol-soluble peroxidase substrate 3-amino-9-ethylcarbazole, combined with a rapid non-destructive method for antibody-antigen dissociation, we demonstrate the ability to erase the results of a single immunohistochemical stain while preserving tissue antigenicity for repeated rounds of labeling. SIMPLE is greatly facilitated by the use of a whole-slide scanner, which can capture the results of each sequential stain without any information loss.

  7. Sub-diffraction nano manipulation using STED AFM.

    PubMed

    Chacko, Jenu Varghese; Canale, Claudio; Harke, Benjamin; Diaspro, Alberto

    2013-01-01

In the last two decades, nano manipulation has been recognized as a potential tool of scientific interest, especially in nanotechnology and nano-robotics. Contemporary (super-resolution) optical microscopy techniques have also reached nanometer-scale resolution, and hence combining super-resolution imaging with nano manipulation inevitably gives a new perspective to the scenario. Here we demonstrate how the specificity and rapid determination of structures provided by a stimulated emission depletion (STED) microscope can aid another microscopic tool with the capability of mechanical manoeuvring, such as an atomic force microscope (AFM), to obtain topological information or to target nano-scaled materials. We also give proof of principle of how high-resolution real-time visualization can improve nano manipulation capability within a dense sample, and how STED-AFM is an optimal combination for this job. With this evidence, this article points to future precise nano dissections and maybe even to a nano-snooker game with an AFM tip and fluorospheres.

  8. Exceptional preservation of eye structure in arthropod visual predators from the Middle Jurassic

    PubMed Central

    Vannier, Jean; Schoenemann, Brigitte; Gillot, Thomas; Charbonnier, Sylvain; Clarkson, Euan

    2016-01-01

Vision has revolutionized the way animals explore their environment and interact with each other and rapidly became a major driving force in animal evolution. However, direct evidence of how ancient animals could perceive their environment is extremely difficult to obtain because internal eye structures are almost never fossilized. Here, we reconstruct with unprecedented resolution the three-dimensional structure of the huge compound eye of a 160-million-year-old thylacocephalan arthropod from the La Voulte exceptional fossil biota in SE France. This arthropod had about 18,000 lenses on each eye, which is a record among extinct and extant arthropods and is surpassed only by modern dragonflies. Combined information about its eyes, internal organs and gut contents obtained by X-ray microtomography leads to the conclusion that this thylacocephalan arthropod was a visual hunter probably adapted to illuminated environments, thus contradicting the hypothesis that La Voulte was a deep-water environment. PMID:26785293

  9. MRMer, an interactive open source and cross-platform system for data extraction and visualization of multiple reaction monitoring experiments.

    PubMed

    Martin, Daniel B; Holzman, Ted; May, Damon; Peterson, Amelia; Eastham, Ashley; Eng, Jimmy; McIntosh, Martin

    2008-11-01

Multiple reaction monitoring (MRM) mass spectrometry identifies and quantifies specific peptides in a complex mixture with very high sensitivity and speed and thus has promise for the high throughput screening of clinical samples for candidate biomarkers. We have developed an interactive software platform, called MRMer, for managing highly complex MRM-MS experiments, including quantitative analyses using heavy/light isotopic peptide pairs. MRMer parses and extracts information from MS files encoded in the platform-independent mzXML data format. It extracts and infers precursor-product ion transition pairings, computes integrated ion intensities, and permits rapid visual curation for analyses exceeding 1000 precursor-product pairs. Results can be easily output for quantitative comparison of consecutive runs. Additionally, MRMer incorporates features that permit quantitative analysis of experiments involving heavy and light isotopic peptide pairs. MRMer is open source and provided under the Apache 2.0 license.
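The "integrated ion intensities" step can be sketched independently of MRMer itself: for each precursor-to-product transition, sum product-ion intensity across all scans whose precursor matches. A toy sketch (the data layout and tolerances are assumptions for illustration, not MRMer's API):

```python
def integrate_transitions(scans, transitions, mz_tol=0.5):
    """Sum product-ion intensity over all scans for each transition.

    scans: list of (precursor_mz, product_mzs, intensities) tuples,
           one per acquired MRM scan
    transitions: list of (precursor_mz, product_mz) pairs to quantify
    """
    totals = {t: 0.0 for t in transitions}
    for precursor, product_mzs, intensities in scans:
        for (q1, q3) in transitions:
            if abs(precursor - q1) <= mz_tol:
                totals[(q1, q3)] += sum(
                    i for m, i in zip(product_mzs, intensities)
                    if abs(m - q3) <= mz_tol)
    return totals

# two toy scans of the same precursor; light and heavy product ions
scans = [(500.3, [600.4, 700.5], [10.0, 5.0]),
         (500.3, [600.4], [20.0])]
light = (500.3, 600.4)
heavy = (500.3, 700.5)
totals = integrate_transitions(scans, [light, heavy])
ratio = totals[light] / totals[heavy]  # light/heavy quantitation ratio
```

The light/heavy ratio computed this way is the basis of isotope-pair quantitation described in the abstract.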

  10. Buildup of spatial information over time and across eye-movements.

    PubMed

    Zimmermann, Eckart; Morrone, M Concetta; Burr, David C

    2014-12-15

To interact rapidly and effectively with our environment, our brain needs access to a neural representation of the spatial layout of the external world. However, the construction of such a map poses major challenges, as the images on our retinae depend on where the eyes are looking, and shift each time we move our eyes, head and body to explore the world. Research from many laboratories including our own suggests that the visual system does compute spatial maps that are anchored to real-world coordinates. However, the construction of these maps takes time (up to 500 ms) and also requires attentional resources. We discuss research investigating how retinotopic reference frames are transformed into spatiotopic reference frames, and how this transformation takes time to complete. These results have implications for theories about visual space coordinates and particularly for the current debate about the existence of spatiotopic representations. Copyright © 2014 Elsevier B.V. All rights reserved.

  11. Hyperspectral imaging for non-contact analysis of forensic traces.

    PubMed

    Edelman, G J; Gaston, E; van Leeuwen, T G; Cullen, P J; Aalders, M C G

    2012-11-30

    Hyperspectral imaging (HSI) integrates conventional imaging and spectroscopy, to obtain both spatial and spectral information from a specimen. This technique enables investigators to analyze the chemical composition of traces and simultaneously visualize their spatial distribution. HSI offers significant potential for the detection, visualization, identification and age estimation of forensic traces. The rapid, non-destructive and non-contact features of HSI mark its suitability as an analytical tool for forensic science. This paper provides an overview of the principles, instrumentation and analytical techniques involved in hyperspectral imaging. We describe recent advances in HSI technology motivating forensic science applications, e.g. the development of portable and fast image acquisition systems. Reported forensic science applications are reviewed. Challenges are addressed, such as the analysis of traces on backgrounds encountered in casework, concluded by a summary of possible future applications. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  12. Exceptional preservation of eye structure in arthropod visual predators from the Middle Jurassic.

    PubMed

    Vannier, Jean; Schoenemann, Brigitte; Gillot, Thomas; Charbonnier, Sylvain; Clarkson, Euan

    2016-01-19

    Vision has revolutionized the way animals explore their environment and interact with each other, and rapidly became a major driving force in animal evolution. However, direct evidence of how ancient animals could perceive their environment is extremely difficult to obtain because internal eye structures are almost never fossilized. Here, we reconstruct with unprecedented resolution the three-dimensional structure of the huge compound eye of a 160-million-year-old thylacocephalan arthropod from the La Voulte exceptional fossil biota in SE France. This arthropod had about 18,000 lenses on each eye, which is a record among extinct and extant arthropods and is surpassed only by modern dragonflies. Combined information about its eyes, internal organs and gut contents obtained by X-ray microtomography leads to the conclusion that this thylacocephalan arthropod was a visual hunter probably adapted to illuminated environments, thus contradicting the hypothesis that La Voulte was a deep-water environment.

  13. Mixture Model and MDSDCA for Textual Data

    NASA Astrophysics Data System (ADS)

    Allouti, Faryel; Nadif, Mohamed; Hoai An, Le Thi; Otjacques, Benoît

    E-mailing has become an essential component of cooperation in business. Consequently, the large number of messages manually produced or automatically generated can rapidly cause information overflow for users. Many research projects have examined this issue, but surprisingly few have tackled the problem of the files attached to e-mails, which in many cases contain a substantial part of the semantics of the message. This paper considers this specific topic and focuses on the problem of clustering and visualization of attached files. Relying on the multinomial mixture model, we used the Classification EM algorithm (CEM) to cluster the set of files, and MDSDCA to visualize the obtained classes of documents. Like the Multidimensional Scaling method, the MDSDCA algorithm, based on the Difference of Convex functions, optimizes the stress criterion. As MDSDCA is iterative, we propose an initialization approach to avoid starting with random values. Experiments were carried out on both simulated and real textual data.
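    This record names the Classification EM (CEM) algorithm applied to a multinomial mixture of documents. As a minimal illustrative sketch only (the function name, Laplace smoothing, and deterministic initialization are our assumptions, not details from the paper), hard-assignment CEM on a term-count matrix might look like:

```python
import numpy as np

def cem_multinomial(counts, k, n_iter=100):
    """Classification EM for a multinomial mixture model.

    counts : (n_docs, n_words) term-count matrix
    k      : number of clusters
    Returns a hard cluster label for each document.
    """
    n, v = counts.shape
    labels = np.arange(n) % k                    # deterministic initial partition
    for _ in range(n_iter):
        # M-step: per-cluster word probabilities (Laplace smoothing) and priors
        theta = np.ones((k, v))
        pi = np.zeros(k)
        for c in range(k):
            members = counts[labels == c]
            theta[c] += members.sum(axis=0)
            pi[c] = max(len(members), 1)         # guard against empty clusters
        theta /= theta.sum(axis=1, keepdims=True)
        pi /= pi.sum()
        # C-step: reassign each document to its most likely cluster
        log_lik = counts @ np.log(theta).T + np.log(pi)
        new_labels = log_lik.argmax(axis=1)
        if np.array_equal(new_labels, labels):   # converged
            break
        labels = new_labels
    return labels
```

    On two well-separated groups of documents (e.g. one using only words 0-1, the other only words 2-3), the hard assignments recover the groups within a few iterations.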

  14. [Eccentricity-dependent influence of amodal completion on visual search].

    PubMed

    Shirama, Aya; Ishiguchi, Akira

    2009-06-01

    Does amodal completion occur homogeneously across the visual field? Rensink and Enns (1998) found that visual search for efficiently-detected fragments became inefficient when observers perceived the fragments as a partially-occluded version of a distractor due to a rapid completion process. We examined the effect of target eccentricity in Rensink and Enns's tasks and a few additional tasks by magnifying the stimuli in the peripheral visual field to compensate for the loss of spatial resolution (M-scaling; Rovamo & Virsu, 1979). We found that amodal completion disrupted the efficient search for the salient fragments (i.e., target) even when the target was presented at high eccentricity (within 17 deg). In addition, the configuration effect of the fragments, which produced amodal completion, increased with eccentricity while the same target was detected efficiently at the lowest eccentricity. This eccentricity effect is different from a previously-reported eccentricity effect where M-scaling was effective (Carrasco & Frieder, 1997). These findings indicate that the visual system has a basis for rapid completion across the visual field, but the stimulus representations constructed through amodal completion have eccentricity-dependent properties.

  15. The Comparison of Visual Working Memory Representations with Perceptual Inputs

    PubMed Central

    Hyun, Joo-seok; Woodman, Geoffrey F.; Vogel, Edward K.; Hollingworth, Andrew

    2008-01-01

    The human visual system can notice differences between memories of previous visual inputs and perceptions of new visual inputs, but the comparison process that detects these differences has not been well characterized. This study tests the hypothesis that differences between the memory of a stimulus array and the perception of a new array are detected in a manner that is analogous to the detection of simple features in visual search tasks. That is, just as the presence of a task-relevant feature in visual search can be detected in parallel, triggering a rapid shift of attention to the object containing the feature, the presence of a memory-percept difference along a task-relevant dimension can be detected in parallel, triggering a rapid shift of attention to the changed object. Supporting evidence was obtained in a series of experiments that examined manual reaction times, saccadic reaction times, and event-related potential latencies. However, these experiments also demonstrated that a slow, limited-capacity process must occur before the observer can make a manual change-detection response. PMID:19653755

  16. Profiling Oman education data using data visualization technique

    NASA Astrophysics Data System (ADS)

    Alalawi, Sultan Juma Sultan; Shaharanee, Izwan Nizal Mohd; Jamil, Jastini Mohd

    2016-10-01

    This research presents an innovative data visualization technique to understand and visualize the information in Oman's education data generated from the Ministry of Education Oman "Educational Portal". The Ministry of Education in the Sultanate of Oman maintains huge databases containing massive amounts of information. The volume of data in these databases increases yearly as more students, teachers and employees are entered into them. The task of discovering and analyzing these vast volumes of data becomes increasingly difficult. Information visualization and data mining offer better ways of dealing with large volumes of information. In this paper, an innovative information visualization technique is developed to visualize the complex multidimensional educational data. Microsoft Excel Dashboard, Visual Basic for Applications (VBA) and Pivot Table are utilized to visualize the data. Findings from the summarization of the data are presented, and it is argued that information visualization can help related stakeholders become aware of hidden and interesting information in the large amounts of data accumulating in their educational portal.

  17. Memory reactivation during rapid eye movement sleep promotes its generalization and integration in cortical stores.

    PubMed

    Sterpenich, Virginie; Schmidt, Christina; Albouy, Geneviève; Matarazzo, Luca; Vanhaudenhuyse, Audrey; Boveroux, Pierre; Degueldre, Christian; Leclercq, Yves; Balteau, Evelyne; Collette, Fabienne; Luxen, André; Phillips, Christophe; Maquet, Pierre

    2014-06-01

    Memory reactivation appears to be a fundamental process in memory consolidation. In this study we tested the influence of memory reactivation during rapid eye movement (REM) sleep on memory performance and brain responses at retrieval in healthy human participants. Fifty-six healthy subjects (28 women and 28 men, age [mean ± standard deviation]: 21.6 ± 2.2 y) participated in this functional magnetic resonance imaging (fMRI) study. Auditory cues were associated with pictures of faces during their encoding. These memory cues delivered during REM sleep enhanced subsequent accurate recollections but also false recognitions. These results suggest that reactivated memories interacted with semantically related representations, and induced new creative associations, which subsequently reduced the distinction between new and previously encoded exemplars. Cues had no effect if presented during stage 2 sleep, or if they were not associated with faces during encoding. Functional magnetic resonance imaging revealed that following exposure to conditioned cues during REM sleep, responses to faces during retrieval were enhanced both in a visual area and in a cortical region of multisensory (auditory-visual) convergence. These results show that reactivating memories during REM sleep enhances cortical responses during retrieval, suggesting the integration of recent memories within cortical circuits, favoring the generalization and schematization of the information.

  18. Longitudinal trajectories of the representation and access to phonological information in bilingual children with specific language impairment.

    PubMed

    Buil-Legaz, Lucia; Aguilar-Mediavilla, Eva; Adrover-Roig, Daniel

    2016-10-01

    Language development in children with Specific Language Impairment (SLI) is still poorly understood, especially if the children are bilingual. This study describes the longitudinal trajectory of several linguistic abilities in bilingual children with SLI relative to bilingual control children matched by age and socioeconomic status. A set of measures of non-word repetition, sentence repetition, phonological awareness, rapid automatic naming and verbal fluency was collected at three time points, from 6 to 12 years of age, using a prospective longitudinal design. Results revealed that, at all ages, children with SLI obtained lower values in measures of sentence repetition, non-word repetition, phonological fluency and phonological awareness (without visual cues) when compared to typically-developing children. Other measures, such as rapid automatic naming, improved over time: differences at 6 years of age did not persist at later testing points. Other linguistic measures, such as phonological awareness (with visual cues) and semantic fluency, were equivalent between the two groups across time. Children with SLI manifest persistent difficulties in tasks that involve manipulating segments of words and maintaining verbal units active in phonological working memory, while other abilities, such as access to underlying phonological representations, are unaffected.

  19. Visualization of Space-Time Ambiguities to be Explored by NASA GEC Mission with a Critique of Synthesized Measurements for Different GEC Mission Scenarios

    NASA Technical Reports Server (NTRS)

    Sojka, Jan J.

    2003-01-01

    The grant supported research addressing the question of how the NASA Solar Terrestrial Probes (STP) mission Geospace Electrodynamic Connections (GEC) will resolve space-time structures and collect sufficient information to solve the coupled thermosphere-ionosphere-magnetosphere dynamics and electrodynamics. The approach adopted was to develop a model of the ionosphere-thermosphere (I-T), with high resolution in both space and time, over the altitudes relevant to GEC, especially the deep-dipping phase. This I-T model was driven by a high-resolution model of magnetospheric-ionospheric (M-I) coupling electrodynamics. Such a model contains all the key parameters to be measured by GEC instrumentation, which in turn are the parameters required to resolve present-day problems in describing the energy and momentum coupling between the ionosphere-magnetosphere and ionosphere-thermosphere. This model database has been successfully created for one geophysical condition: winter, solar maximum, disturbed geophysical conditions (specifically a substorm). Using this data set, visualizations (movies) were created to contrast the dynamics of the different measurable parameters: specifically, the rapidly varying magnetospheric electric field and auroral electron precipitation versus the slowly varying ionospheric F-region electron density and the rapidly responding E-region density.

  20. Generating Evidence for Program Planning: Rapid Assessment of Avoidable Blindness in Bangladesh.

    PubMed

    Muhit, Mohammad; Wadud, Zakia; Islam, Johurul; Khair, Zareen; Shamanna, B R; Jung, Jenny; Khandaker, Gulam

    2016-06-01

    There is a lack of data on the prevalence and causes of blindness in Bangladesh, which is important to plan effective eye health programs and advocate support services to achieve the goals of Vision 2020. We conducted a rapid assessment of avoidable blindness (RAAB) in 8 districts of Bangladesh (January 2010 - December 2012) to establish the prevalence and causes of blindness. People aged ≥50 years were selected, and eligible participants had visual acuity (VA) measured. Ocular examinations were performed in those with VA<6/18. Additional information was collected for those who had or had not undergone cataract surgery to understand service barriers and quality of service. In total, 21,596 people were examined, of which 471 (2.2%, 95% confidence interval, CI, 2.0-2.4%) were blind. The primary cause of blindness was cataract (75.8%). The majority of blindness (86.2%) was avoidable. Cataract and refractive error were the primary causes of severe visual impairment (73.6%) and moderate visual impairment (63.6%), respectively. Cataract surgical coverage for blind persons was 69.3% (males 76.6%, females 64.3%, P<0.001). The magnitude of blindness among people aged ≥50 years was estimated to be 563,200 people (95% CI 512,000-614,400), of whom 426,342 had un-operated cataract. In Bangladesh, the majority of blindness (86.2%) among people aged ≥50 years was avoidable, and cataract was the most important cause of avoidable blindness. Improving cataract surgical services and refraction services would be the most important step towards the elimination of avoidable blindness in Bangladesh.
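    The prevalence and confidence interval quoted in this record can be checked with a standard normal-approximation calculation (a sketch of the arithmetic only; the survey's actual design-adjusted analysis may differ, since cluster-sampled RAAB surveys normally widen the interval with a design effect):

```python
import math

def prevalence_ci(cases, n, z=1.96):
    """Point prevalence with a normal-approximation 95% confidence interval."""
    p = cases / n
    half = z * math.sqrt(p * (1 - p) / n)   # half-width of the interval
    return p, p - half, p + half
```

    With cases=471 and n=21,596 this gives 2.2% with a 95% CI of 2.0-2.4%, matching the figures in the abstract.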

  1. Brain activation for reading and listening comprehension: An fMRI study of modality effects and individual differences in language comprehension

    PubMed Central

    Buchweitz, Augusto; Mason, Robert A.; Tomitch, Lêda M. B.; Just, Marcel Adam

    2010-01-01

    The study compared the brain activation patterns associated with the comprehension of written and spoken Portuguese sentences. An fMRI study measured brain activity while participants read and listened to sentences about general world knowledge. Participants had to decide if the sentences were true or false. To mirror the transient nature of spoken sentences, visual input was presented in rapid serial visual presentation format. The results showed a common core of amodal left inferior frontal and middle temporal gyri activation, as well as modality-specific brain activation associated with listening and reading comprehension. Reading comprehension was associated with more left-lateralized activation and with left inferior occipital cortex (including fusiform gyrus) activation. Listening comprehension was associated with extensive bilateral temporal cortex activation and more overall activation of the whole cortex. Results also showed individual differences in brain activation for reading comprehension. Readers with lower working memory capacity showed more activation of right-hemisphere areas (spillover of activation) and more activation in the prefrontal cortex, potentially associated with greater demand placed on executive control processes. Readers with higher working memory capacity showed more activation in a frontal-posterior network of areas (left angular and precentral gyri, and right inferior frontal gyrus). The activation of this network may be associated with phonological rehearsal of linguistic information when reading text presented in rapid serial visual format. The study demonstrates the modality fingerprints for language comprehension and indicates how readers with lower and higher working memory capacity deal with text presented in rapid serial visual format. PMID:21526132

  2. How humans use visual optic flow to regulate stepping during walking.

    PubMed

    Salinas, Mandy M; Wilken, Jason M; Dingwell, Jonathan B

    2017-09-01

    Humans use visual optic flow to regulate average walking speed. Among the many possible strategies available, healthy humans walking on motorized treadmills allow fluctuations in stride length (L_n) and stride time (T_n) to persist across multiple consecutive strides, but rapidly correct deviations in stride speed (S_n = L_n/T_n) at each successive stride, n. Several experiments verified this stepping strategy when participants walked with no optic flow. This study determined how removing or systematically altering optic flow influenced people's stride-to-stride stepping control strategies. Participants walked on a treadmill with a virtual reality (VR) scene projected onto a 3 m tall, 180° semi-cylindrical screen in front of the treadmill. Five conditions were tested: blank screen ("BLANK"), static scene ("STATIC"), or moving scene with optic flow speed slower than ("SLOW"), matched to ("MATCH"), or faster than ("FAST") walking speed. Participants took shorter and faster strides and demonstrated increased stepping variability during the BLANK condition compared to the other conditions. Thus, when visual information was removed, individuals appeared to walk more cautiously. Optic flow influenced both how quickly humans corrected stride speed deviations and how successful they were at enacting this strategy to maintain an approximately constant speed at each stride. These results were consistent with Weber's law: healthy adults corrected stride speed deviations more rapidly in the no-optic-flow condition (the lower-intensity stimulus) than in contexts with non-zero optic flow. These results demonstrate how the temporal characteristics of optic flow influence the ability to correct speed fluctuations during walking. Copyright © 2017 Elsevier B.V. All rights reserved.
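    The stepping strategy this record describes — fluctuations in stride length and stride time persist, but deviations in stride speed S_n = L_n/T_n are corrected at each stride — can be caricatured in a toy simulation. Every parameter below (initial stride, noise levels, correction gain) is invented for illustration and is not from the study:

```python
import numpy as np

def simulate_strides(n=1000, target_speed=1.25, alpha=0.9, seed=1):
    """Toy stride-to-stride controller: stride length drifts freely
    (a random walk), while stride time is chosen each stride to pull
    speed most of the way back toward the target."""
    rng = np.random.default_rng(seed)
    L = np.empty(n)
    T = np.empty(n)
    L[0], T[0] = 0.70, 0.56                       # initial stride (m, s)
    for i in range(1, n):
        # stride-length fluctuations persist (random walk, floored)
        L[i] = max(0.3, L[i - 1] + rng.normal(0, 0.01))
        # correct a fraction alpha of the previous speed deviation
        prev_dev = L[i - 1] / T[i - 1] - target_speed
        desired = target_speed + (1 - alpha) * prev_dev
        T[i] = L[i] / desired + rng.normal(0, 0.005)
    return L, T, L / T                            # S_n = L_n / T_n
```

    Running this shows the qualitative signature from the abstract: stride length wanders widely over the trial, while stride speed stays tightly clustered around the target.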

  3. Value associations of irrelevant stimuli modify rapid visual orienting.

    PubMed

    Rutherford, Helena J V; O'Brien, Jennifer L; Raymond, Jane E

    2010-08-01

    In familiar environments, goal-directed visual behavior is often performed in the presence of objects with strong, but task-irrelevant, reward or punishment associations that are acquired through prior, unrelated experience. In a two-phase experiment, we asked whether such stimuli could affect speeded visual orienting in a classic visual orienting paradigm. First, participants learned to associate faces with monetary gains, losses, or no outcomes. These faces then served as brief, peripheral, uninformative cues in an explicitly unrewarded, unpunished, speeded, target localization task. Cues preceded targets by either 100 or 1,500 msec and appeared at either the same or a different location. Regardless of interval, reward-associated cues slowed responding at cued locations, as compared with equally familiar punishment-associated or no-value cues, and had no effect when targets were presented at uncued locations. This localized effect of reward-associated cues is consistent with adaptive models of inhibition of return and suggests rapid, low-level effects of motivation on visual processing.

  4. Endogenous spatial attention: evidence for intact functioning in adults with autism

    PubMed Central

    Grubb, Michael A.; Behrmann, Marlene; Egan, Ryan; Minshew, Nancy J.; Carrasco, Marisa; Heeger, David J.

    2012-01-01

    Lay Abstract Attention allows us to selectively process the vast amount of information with which we are confronted. Focusing on a certain location of the visual scene (visual spatial attention) enables the prioritization of some aspects of information while ignoring others. Rapid manipulation of the attention field (i.e., the location and spread of visual spatial attention) is a critical aspect of human cognition, and previous research on spatial attention in individuals with autism spectrum disorders (ASD) has produced inconsistent results. In a series of three experiments, we evaluated claims in the literature that individuals with ASD exhibit a deficit in voluntarily controlling the deployment and size of the spatial attention field. We measured how well participants perform a visual discrimination task (accuracy) and how quickly they do so (reaction time), with and without spatial uncertainty (i.e., the lack of predictability concerning the spatial position of the upcoming stimulus). We found that high-functioning adults with autism exhibited slower reaction times overall with spatial uncertainty, but the effects of attention on performance accuracies and reaction times were indistinguishable between individuals with autism and typically developing individuals, in all three experiments. These results provide evidence of intact endogenous spatial attention function in high-functioning adults with ASD, suggesting that atypical endogenous spatial attention cannot be a latent characteristic of autism in general. Scientific Abstract Rapid manipulation of the attention field (i.e., the location and spread of visual spatial attention) is a critical aspect of human cognition, and previous research on spatial attention in individuals with autism spectrum disorders (ASD) has produced inconsistent results. In a series of three psychophysical experiments, we evaluated claims in the literature that individuals with ASD exhibit a deficit in voluntarily controlling the deployment and size of the spatial attention field. We measured the spatial distribution of performance accuracies and reaction times to quantify the sizes and locations of the attention field, with and without spatial uncertainty (i.e., the lack of predictability concerning the spatial position of the upcoming stimulus). We found that high-functioning adults with autism exhibited slower reaction times overall with spatial uncertainty, but the effects of attention on performance accuracies and reaction times were indistinguishable between individuals with autism and typically developing individuals, in all three experiments. These results provide evidence of intact endogenous spatial attention function in high-functioning adults with ASD, suggesting that atypical endogenous attention cannot be a latent characteristic of autism in general. PMID:23427075

  5. Virtual Reality as a Medium for Sensorimotor Adaptation Training and Spaceflight Countermeasures

    NASA Technical Reports Server (NTRS)

    Madansingh, S.; Bloomberg, J. J.

    2014-01-01

    Astronauts experience a profound sensorimotor adaptation during transition to and from the microgravity environment of space. With the upcoming shift to extra-long duration missions (upwards of 1 year) aboard the International Space Station, the immediate risks to astronauts during these transitory periods become more important than ever to understand and prepare for. Recent advances in virtual reality technology enable everyday adoption of these tools for entertainment and use in training. Embedding an individual in a virtual environment (VE) allows the ability to change the perception of visual flow, elicit automatic motor behavior and produce sensorimotor adaptation, not unlike those required during long duration microgravity exposure. The overall goal of this study is to determine the feasibility of present head mounted display technology (HMD) to produce reliable visual flow information and the expected adaptation associated with virtual environment manipulation to be used in future sensorimotor adaptability countermeasures. To further understand the influence of visual flow on gait adaptation during treadmill walking, a series of discordant visual flow manipulations in a virtual environment are proposed. Six healthy participants (3 male and 3 female) will observe visual flow information via HMD (Oculus Rift DK2) while walking on an instrumented treadmill at their preferred walking speed. Participants will be immersed in a series of VE's resembling infinite hallways with different visual characteristics: an office hallway, a hallway with pillars and the hallway of a fictional spacecraft. Participants will perform three trials of 10 min. each, which include walking on the treadmill while receiving congruent or incongruent visual information via the HMD. In the first trial, participants will experience congruent visual information (baseline) where the hallway is perceived to move at the same rate as their walking speed. 
The final two trials will be randomized among participants where the hallway is perceived to move at either half (0.5x) or twice (2.0x) their preferred walking speed. Participants will remain on the treadmill between trials and will not be warned of the upcoming change to visual flow to minimize preparatory adjustments. Stride length, step frequency and dual-support time will be quantified during each trial. We hypothesize that participants will experience a rapid modification in gait performance during periods of adaptive change, expressed as a decrease in step length, an increase in step frequency and an increase in dual-support time, followed by a period of adaptation where these movement parameters will return to near-baseline levels. As stride length, step frequency and dual support times return to baseline values, an adaptation time constant will be derived to establish individual time-to-adapt (TTA). HMD technology represents a paradigm shift in sensorimotor adaptation training where gait adaptability can be stressed using off-the-shelf consumer products and minimal experimental equipment, allowing for greater training flexibility in astronaut and terrestrial applications alike.
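    The proposed "time-to-adapt" analysis amounts to fitting an exponential return-to-baseline to each gait parameter and reading off its time constant. A minimal log-linear version of that fit (our sketch of the idea, not the authors' analysis pipeline; it assumes the deviation series is strictly positive-decaying):

```python
import numpy as np

def adaptation_time_constant(deviation, dt=1.0):
    """Fit deviation(t) ~ a * exp(-t / tau) by least squares on the log
    of the series; tau is the time for the deviation to fall by 1/e."""
    t = np.arange(len(deviation)) * dt
    slope, _ = np.polyfit(t, np.log(np.abs(deviation)), 1)
    return -1.0 / slope
```

    For a noiseless deviation series decaying with a 5-stride time constant, the fit recovers tau = 5 exactly; on real stride data one would fit each parameter (stride length, step frequency, dual-support time) separately.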

  6. Visualizing the Fundamental Physics of Rapid Earth Penetration Using Transparent Soils

    DTIC Science & Technology

    2015-03-01

    Technical report DTRA-TR-14-80, "Visualizing the Fundamental Physics of Rapid Earth Penetration Using Transparent Soils." Approved for public release. [Only report-documentation-page boilerplate survives in this record; no abstract is recoverable.]

  7. 32 CFR 811.8 - Forms prescribed and availability of publications.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... FORCE SALES AND SERVICES RELEASE, DISSEMINATION, AND SALE OF VISUAL INFORMATION MATERIALS § 811.8 Forms prescribed and availability of publications. (a) AF Form 833, Visual Information Request, AF Form 1340, Visual Information Support Center Workload Report, DD Form 1995, Visual Information (VI) Production...

  8. Visualizing the spinal neuronal dynamics of locomotion

    NASA Astrophysics Data System (ADS)

    Subramanian, Kalpathi R.; Bashor, D. P.; Miller, M. T.; Foster, J. A.

    2004-06-01

    Modern imaging and simulation techniques have enhanced system-level understanding of neural function. In this article, we present an application of interactive visualization to understanding neuronal dynamics causing locomotion of a single hip joint, based on pattern generator output of the spinal cord. Our earlier work visualized cell-level responses of multiple neuronal populations. However, the spatial relationships were abstract, making communication with colleagues difficult. We propose two approaches to overcome this: (1) building a 3D anatomical model of the spinal cord with neurons distributed inside, animated by the simulation and (2) adding limb movements predicted by neuronal activity. The new system was tested using a cat walking central pattern generator driving a pair of opposed spinal motoneuron pools. Output of opposing motoneuron pools was combined into a single metric, called "Net Neural Drive", which generated angular limb movement in proportion to its magnitude. Net neural drive constitutes a new description of limb movement control. The combination of spatial and temporal information in the visualizations elegantly conveys the neural activity of the output elements (motoneurons), as well as the resulting movement. The new system encompasses five biological levels of organization from ion channels to observed behavior. The system is easily scalable, and provides an efficient interactive platform for rapid hypothesis testing.
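    "Net Neural Drive" as defined in this record — the outputs of two opposing motoneuron pools combined into a single metric that generates angular limb movement in proportion to its magnitude — can be sketched as follows (the function name, gain, units, and starting angle are our invention for illustration):

```python
import numpy as np

def limb_angle_from_pools(flexor_rate, extensor_rate, gain=0.5, theta0=90.0):
    """Combine opposing motoneuron-pool firing rates into a net neural
    drive, then integrate it into a joint-angle trajectory (degrees)."""
    net_drive = np.asarray(flexor_rate, float) - np.asarray(extensor_rate, float)
    theta = theta0 + gain * np.cumsum(net_drive)   # angle moves with net drive
    return net_drive, theta
```

    With a burst of flexor activity followed by an equal extensor burst, the joint angle swings away from and back to its starting value, mimicking one step cycle.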

  9. Exploiting the User: Adapting Personas for Use in Security Visualization Design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stoll, Jennifer C.; McColgin, David W.; Gregory, Michelle L.

    It has long been noted that visual representations of complex information can facilitate rapid understanding of data [citation], even with respect to ComSec applications [citation]. Recognizing that visualizations can increase usability in ComSec applications, [Zurko, Sasse] have argued that there is a need to create more usable security visualizations (VisSec). However, usability of applications generally falls into the domain of Human Computer Interaction (HCI), which typically relies on heavy-weight user-centered design (UCD) processes. For example, the UCD process can involve many prototype iterations, or an ethnographic field study that can take months to complete. The problem is that VisSec projects generally do not have the resources to perform ethnographic field studies, or to employ complex UCD methods. They often run on tight deadlines and budgets that cannot afford standard UCD methods. In order to help resolve the conflict of needing more usable designs in ComSec while lacking the resources to employ complex UCD methods, in this paper we offer a stripped-down, lighter-weight version of a UCD process which can help with capturing user requirements. The approach we use is personas, a user-requirements capture method arising out of the Participatory Design philosophy [Grudin02].

  10. Proprioceptive feedback determines visuomotor gain in Drosophila

    PubMed Central

    Bartussek, Jan; Lehmann, Fritz-Olaf

    2016-01-01

    Multisensory integration is a prerequisite for effective locomotor control in most animals. In particular, the impressive aerial performance of insects relies on rapid and precise integration of multiple sensory modalities that provide feedback on different time scales. In flies, continuous visual signalling from the compound eyes is fused with phasic proprioceptive feedback to ensure precise neural activation of wing steering muscles (WSM) within narrow temporal phase bands of the stroke cycle. This phase-locked activation relies on mechanoreceptors distributed over the wings and gyroscopic halteres. Here we investigate the visual steering performance of tethered flying fruit flies with reduced haltere and wing feedback signalling. Using a flight simulator, we evaluated visual object fixation behaviour, optomotor altitude control and saccadic escape reflexes. The behavioural assays show an antagonistic effect of wing and haltere signalling on visuomotor gain during flight. Compared with controls, suppression of haltere feedback attenuates while suppression of wing feedback enhances the animal's wing steering range. Our results suggest that the generation of motor commands owing to visual perception is dynamically controlled by proprioception. We outline a potential physiological mechanism based on the biomechanical properties of WSM and sensory integration processes at the level of motoneurons. Collectively, the findings contribute to our general understanding of how moving animals integrate sensory information with dynamically changing temporal structure. PMID:26909184

  11. Information efficiency in visual communication

    NASA Astrophysics Data System (ADS)

    Alter-Gartenberg, Rachel; Rahman, Zia-ur

    1993-08-01

    This paper evaluates the quantization process in the context of the end-to-end performance of the visual-communication channel. Results show that the trade-off between data transmission and visual quality revolves around the information in the acquired signal, not around its energy. Improved information efficiency is gained by frequency dependent quantization that maintains the information capacity of the channel and reduces the entropy of the encoded signal. Restorations with energy bit-allocation lose both in sharpness and clarity relative to restorations with information bit-allocation. Thus, quantization with information bit-allocation is preferred for high information efficiency and visual quality in optimized visual communication.
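    The contrast this record draws between energy and information bit-allocation can be illustrated schematically: distribute the quantizer's bit budget across frequency bands in proportion to each band's information capacity, log2(1 + SNR), rather than its energy. The sketch below is our illustration of that idea, not the authors' actual procedure:

```python
import numpy as np

def information_bit_allocation(signal_psd, noise_psd, total_bits):
    """Distribute an integer bit budget over frequency bands in
    proportion to each band's capacity log2(1 + SNR)."""
    capacity = np.log2(1.0 + signal_psd / noise_psd)   # bits/band bound
    bits = np.floor(capacity / capacity.sum() * total_bits).astype(int)
    # hand any leftover bits to the highest-capacity bands
    for i in np.argsort(-capacity)[: total_bits - bits.sum()]:
        bits[i] += 1
    return bits
```

    Because capacity grows only logarithmically with SNR, a band with 100x the energy of another receives far fewer than 100x the bits — the qualitative difference from energy-based allocation.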

  12. Information efficiency in visual communication

    NASA Technical Reports Server (NTRS)

    Alter-Gartenberg, Rachel; Rahman, Zia-Ur

    1993-01-01

    This paper evaluates the quantization process in the context of the end-to-end performance of the visual-communication channel. Results show that the trade-off between data transmission and visual quality revolves around the information in the acquired signal, not around its energy. Improved information efficiency is gained by frequency dependent quantization that maintains the information capacity of the channel and reduces the entropy of the encoded signal. Restorations with energy bit-allocation lose both in sharpness and clarity relative to restorations with information bit-allocation. Thus, quantization with information bit-allocation is preferred for high information efficiency and visual quality in optimized visual communication.

  13. CuGene as a tool to view and explore genomic data

    NASA Astrophysics Data System (ADS)

    Haponiuk, Michał; Pawełkowicz, Magdalena; Przybecki, Zbigniew; Nowak, Robert M.

    2017-08-01

    Integrated CuGene is an easy-to-use, open-source, on-line tool that can be used to browse, analyze, and query genomic data and annotations. It places annotation tracks beneath genome coordinate positions, allowing rapid visual correlation of different types of information. It also allows users to upload and display their own experimental results or annotation sets. An important functionality of the application is the ability to find similarities between sequences by applying four algorithms of differing accuracy. The presented tool was tested on real genomic data and is extensively used by the Polish Consortium of Cucumber Genome Sequencing.

  14. Hypnagogic and hypnopompic hallucinations during amitriptyline treatment.

    PubMed

    Hemmingsen, R; Rafaelsen, O J

    1980-10-01

    Four cases of hypnagogic or hypnopompic visual hallucinations in patients during amitriptyline treatment are reported. The hallucinations were clearly delineated, projected into outer objective space, and were for a short time experienced as real. The patients rapidly realized the unreality of the "sights", probably because they regained the full critical judgement and coherent thinking of a non-psychotic awake individual. There may be a relation between the effects of amitriptyline on the brain, the changed pattern of sleep, and the clinical recovery. Patients should be informed about the benign character of this type of hallucinatory phenomenon so that treatment is not terminated prematurely.

  15. 32 CFR 811.3 - Official requests for visual information productions or materials.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... THE AIR FORCE SALES AND SERVICES RELEASE, DISSEMINATION, AND SALE OF VISUAL INFORMATION MATERIALS § 811.3 Official requests for visual information productions or materials. (a) Send official Air Force... 32 National Defense 6 2010-07-01 2010-07-01 false Official requests for visual information...

  16. 32 CFR 811.4 - Selling visual information materials.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... SERVICES RELEASE, DISSEMINATION, AND SALE OF VISUAL INFORMATION MATERIALS § 811.4 Selling visual information materials. (a) Air Force VI activities cannot sell materials. (b) HQ AFCIC/ITSM may approve the... 32 National Defense 6 2010-07-01 2010-07-01 false Selling visual information materials. 811.4...

  17. The Use of Uas for Rapid 3d Mapping in Geomatics Education

    NASA Astrophysics Data System (ADS)

    Teo, Tee-Ann; Tian-Yuan Shih, Peter; Yu, Sz-Cheng; Tsai, Fuan

    2016-06-01

    With the development of technology, UAS has become an advanced means of supporting rapid mapping for disaster response. The aim of this study is to develop educational modules for UAS data processing in rapid 3D mapping. The modules designed for this study focus on UAV data processing with freely available or trial software for educational purposes. The key modules include orientation modelling, 3D point cloud generation, image georeferencing and visualization. The orientation modelling module adopts VisualSFM to determine the projection matrix for each image station. In addition, approximate ground control points are measured from OpenStreetMap for absolute orientation. The second module uses SURE and the orientation files from the previous module for 3D point cloud generation. Ground point selection and digital terrain model generation can then be achieved with LAStools. The third module stitches individual rectified images into a mosaic image using Microsoft ICE (Image Composite Editor). The last module visualizes and measures the generated dense point clouds in CloudCompare. These comprehensive UAS processing modules allow students to gain the skills to process and deliver UAS photogrammetric products in rapid 3D mapping, and to apply the photogrammetric products for analysis in practice.
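The absolute-orientation step mentioned above (aligning the SfM model to approximate ground control points) is commonly solved as a least-squares similarity transform. The sketch below uses the standard Umeyama/Procrustes estimate; it is a generic illustration of the technique, not the exact procedure of the course modules, and the synthetic points are assumed for the demonstration:

```python
import numpy as np

def similarity_transform(src, dst):
    """Least-squares scale/rotation/translation (Umeyama) mapping src -> dst.

    src, dst: (N, 3) arrays of corresponding points (e.g. SfM model
    coordinates and approximate ground control coordinates)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    cov = dst_c.T @ src_c / len(src)
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0           # guard against a reflection solution
    R = U @ S @ Vt
    scale = np.trace(np.diag(D) @ S) / src_c.var(axis=0).sum()
    t = mu_d - scale * R @ mu_s
    return scale, R, t

# Synthetic check: a model scaled by 2 and shifted should be recovered exactly.
rng = np.random.default_rng(0)
model = rng.normal(size=(6, 3))                       # hypothetical SfM points
ground = 2.0 * model + np.array([100.0, 200.0, 5.0])  # hypothetical GCPs
s, R, t = similarity_transform(model, ground)
print(round(float(s), 3), np.round(t, 2))
```

Because the control points come from OpenStreetMap and are only approximate, the residuals of this fit also give a quick quality check on the georeferencing.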

  18. JUICE: a data management system that facilitates the analysis of large volumes of information in an EST project workflow.

    PubMed

    Latorre, Mariano; Silva, Herman; Saba, Juan; Guziolowski, Carito; Vizoso, Paula; Martinez, Veronica; Maldonado, Jonathan; Morales, Andrea; Caroca, Rodrigo; Cambiazo, Veronica; Campos-Vargas, Reinaldo; Gonzalez, Mauricio; Orellana, Ariel; Retamales, Julio; Meisel, Lee A

    2006-11-23

    Expressed sequence tag (EST) analyses provide a rapid and economical means to identify candidate genes that may be involved in a particular biological process. These ESTs are useful in many Functional Genomics studies. However, the large quantity and complexity of the data generated during an EST sequencing project can make the analysis of this information a daunting task. In an attempt to make this task friendlier, we have developed JUICE, an open source data management system (Apache + PHP + MySQL on Linux), which enables the user to easily upload, organize, visualize and search the different types of data generated in an EST project pipeline. In contrast to other systems, the JUICE data management system allows a branched pipeline to be established, modified and expanded, during the course of an EST project. The web interfaces and tools in JUICE enable the users to visualize the information in a graphical, user-friendly manner. The user may browse or search for sequences and/or sequence information within all the branches of the pipeline. The user can search using terms associated with the sequence name, annotation or other characteristics stored in JUICE and associated with sequences or sequence groups. Groups of sequences can be created by the user, stored in a clipboard and/or downloaded for further analyses. Different user profiles restrict the access of each user depending upon their role in the project. The user may have access exclusively to visualize sequence information, access to annotate sequences and sequence information, or administrative access. JUICE is an open source data management system that has been developed to aid users in organizing and analyzing the large amount of data generated in an EST Project workflow. JUICE has been used in one of the first functional genomics projects in Chile, entitled "Functional Genomics in nectarines: Platform to potentiate the competitiveness of Chile in fruit exportation". 
However, due to its ability to organize and visualize data from external pipelines, JUICE is a flexible data management system that should be useful for other EST/Genome projects. The JUICE data management system is released under the Open Source GNU Lesser General Public License (LGPL). JUICE may be downloaded from http://genoma.unab.cl/juice_system/ or http://www.genomavegetal.cl/juice_system/.

  19. JUICE: a data management system that facilitates the analysis of large volumes of information in an EST project workflow

    PubMed Central

    Latorre, Mariano; Silva, Herman; Saba, Juan; Guziolowski, Carito; Vizoso, Paula; Martinez, Veronica; Maldonado, Jonathan; Morales, Andrea; Caroca, Rodrigo; Cambiazo, Veronica; Campos-Vargas, Reinaldo; Gonzalez, Mauricio; Orellana, Ariel; Retamales, Julio; Meisel, Lee A

    2006-01-01

    Background Expressed sequence tag (EST) analyses provide a rapid and economical means to identify candidate genes that may be involved in a particular biological process. These ESTs are useful in many Functional Genomics studies. However, the large quantity and complexity of the data generated during an EST sequencing project can make the analysis of this information a daunting task. Results In an attempt to make this task friendlier, we have developed JUICE, an open source data management system (Apache + PHP + MySQL on Linux), which enables the user to easily upload, organize, visualize and search the different types of data generated in an EST project pipeline. In contrast to other systems, the JUICE data management system allows a branched pipeline to be established, modified and expanded, during the course of an EST project. The web interfaces and tools in JUICE enable the users to visualize the information in a graphical, user-friendly manner. The user may browse or search for sequences and/or sequence information within all the branches of the pipeline. The user can search using terms associated with the sequence name, annotation or other characteristics stored in JUICE and associated with sequences or sequence groups. Groups of sequences can be created by the user, stored in a clipboard and/or downloaded for further analyses. Different user profiles restrict the access of each user depending upon their role in the project. The user may have access exclusively to visualize sequence information, access to annotate sequences and sequence information, or administrative access. Conclusion JUICE is an open source data management system that has been developed to aid users in organizing and analyzing the large amount of data generated in an EST Project workflow. JUICE has been used in one of the first functional genomics projects in Chile, entitled "Functional Genomics in nectarines: Platform to potentiate the competitiveness of Chile in fruit exportation". 
However, due to its ability to organize and visualize data from external pipelines, JUICE is a flexible data management system that should be useful for other EST/Genome projects. The JUICE data management system is released under the Open Source GNU Lesser General Public License (LGPL). JUICE may be downloaded from http://genoma.unab.cl/juice_system/ or http://www.genomavegetal.cl/juice_system/. PMID:17123449

  20. Kids, Take a Look at This! Visual Literacy Skills in the School Curriculum

    ERIC Educational Resources Information Center

    Vermeersch, Lode; Vandenbroucke, Anneloes

    2015-01-01

    Although the paradigm of visual literacy (VL) is rapidly emerging, the construct itself still lacks operational specificity. Based on a semiotic understanding of visual culture as an ongoing process of "making meaning", we present in this study a skill-based classification of VL, differentiating four sets of VL skills: perception;…

  1. Neural evidence reveals the rapid effects of reward history on selective attention.

    PubMed

    MacLean, Mary H; Giesbrecht, Barry

    2015-05-05

    Selective attention is often framed as being primarily driven by two factors: task-relevance and physical salience. However, factors like selection and reward history, which are neither currently task-relevant nor physically salient, can reliably and persistently influence visual selective attention. The current study investigated the nature of the persistent effects of irrelevant, physically non-salient, reward-associated features. These features affected one of the earliest reliable neural indicators of visual selective attention in humans, the P1 event-related potential, measured one week after the reward associations were learned. However, the effects of reward history were moderated by current task demands. The modulation of visually evoked activity supports the hypothesis that reward history influences the innate salience of reward associated features, such that even when no longer relevant, nor physically salient, these features have a rapid, persistent, and robust effect on early visual selective attention. Copyright © 2015 Elsevier B.V. All rights reserved.

  2. Light Video Game Play is Associated with Enhanced Visual Processing of Rapid Serial Visual Presentation Targets.

    PubMed

    Howard, Christina J; Wilding, Robert; Guest, Duncan

    2017-02-01

    There is mixed evidence that video game players (VGPs) may demonstrate better performance in perceptual and attentional tasks than non-VGPs (NVGPs). The rapid serial visual presentation task is one such case, where observers respond to two successive targets embedded within a stream of serially presented items. We tested light VGPs (LVGPs) and NVGPs on this task. LVGPs were better at correctly identifying second targets whether or not they were also attempting to respond to the first target. This performance benefit seen for LVGPs suggests enhanced visual processing for briefly presented stimuli even with only very moderate game play. Observers were less accurate at discriminating the orientation of a second target within the stream if it occurred shortly after presentation of the first target; that is to say, they were subject to the attentional blink (AB). We find no evidence for any reduction in AB in LVGPs compared with NVGPs.

  3. Neural mechanism for sensing fast motion in dim light.

    PubMed

    Li, Ran; Wang, Yi

    2013-11-07

    Luminance is a fundamental property of visual scenes. A population of neurons in primary visual cortex (V1) is sensitive to uniform luminance. In natural vision, however, the retinal image often changes rapidly. Consequently, the luminance signals that visual cells receive are transiently varying. How V1 neurons respond to such luminance changes is unknown. By applying large static uniform stimuli or grating stimuli alternating at 25 Hz that resemble the rapid luminance changes in the environment, we show that approximately 40% of V1 cells responded to rapid luminance changes of uniform stimuli. Most of them strongly preferred luminance decrements. Importantly, when tested with drifting gratings, the preferred speeds of these cells were significantly higher than those of cells responsive to static grating stimuli but not to uniform stimuli. This responsiveness can be accounted for by their preferences for low spatial frequencies and high temporal frequencies. These luminance-sensitive cells subserve the detection of fast motion under conditions of dim illumination.

  4. VisGets: coordinated visualizations for web-based information exploration and discovery.

    PubMed

    Dörk, Marian; Carpendale, Sheelagh; Collins, Christopher; Williamson, Carey

    2008-01-01

    In common Web-based search interfaces, it can be difficult to formulate queries that simultaneously combine temporal, spatial, and topical data filters. We investigate how coordinated visualizations can enhance search and exploration of information on the World Wide Web by easing the formulation of these types of queries. Drawing from visual information seeking and exploratory search, we introduce VisGets--interactive query visualizations of Web-based information that operate with online information within a Web browser. VisGets provide the information seeker with visual overviews of Web resources and offer a way to visually filter the data. Our goal is to facilitate the construction of dynamic search queries that combine filters from more than one data dimension. We present a prototype information exploration system featuring three linked VisGets (temporal, spatial, and topical), and used it to visually explore news items from online RSS feeds.
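The conjunction of temporal, spatial, and topical filters that VisGets coordinates can be sketched in a few lines. The items, field names, and helper below are hypothetical, intended only to illustrate the kind of three-dimensional dynamic query the paper describes:

```python
from datetime import date

# Hypothetical news items, each carrying temporal, spatial, and topical facets.
items = [
    {"title": "Flood warning", "date": date(2008, 5, 2),
     "lat": 51.0, "lon": -114.1, "tags": {"weather", "flood"}},
    {"title": "Election recap", "date": date(2008, 5, 3),
     "lat": 45.4, "lon": -75.7, "tags": {"politics"}},
    {"title": "Storm damage", "date": date(2008, 6, 9),
     "lat": 51.1, "lon": -114.0, "tags": {"weather"}},
]

def visget_filter(items, start, end, bbox, tag):
    """Conjunction of one temporal, one spatial, and one topical filter."""
    lat_min, lat_max, lon_min, lon_max = bbox
    return [it for it in items
            if start <= it["date"] <= end
            and lat_min <= it["lat"] <= lat_max
            and lon_min <= it["lon"] <= lon_max
            and tag in it["tags"]]

hits = visget_filter(items, date(2008, 5, 1), date(2008, 5, 31),
                     (50.0, 52.0, -115.0, -113.0), "weather")
print([it["title"] for it in hits])   # ['Flood warning']
```

In the actual system each filter is driven interactively by its own visualization (timeline, map, tag cloud), with the result set updated as any one of them changes.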

  5. Efficient visual grasping alignment for cylinders

    NASA Technical Reports Server (NTRS)

    Nicewarner, Keith E.; Kelley, Robert B.

    1992-01-01

    Monocular information from a gripper-mounted camera is used to servo the robot gripper to grasp a cylinder. The fundamental concept for rapid pose estimation is to reduce the amount of information that needs to be processed during each vision update interval. The grasping procedure is divided into four phases: learn, recognition, alignment, and approach. In the learn phase, a cylinder is placed in the gripper and the pose estimate is stored and later used as the servo target. This is performed once as a calibration step. The recognition phase verifies the presence of a cylinder in the camera field of view. An initial pose estimate is computed and uncluttered scan regions are selected. The radius of the cylinder is estimated by moving the robot a fixed distance toward the cylinder and observing the change in the image. The alignment phase processes only the scan regions obtained previously. Rapid pose estimates are used to align the robot with the cylinder at a fixed distance from it. The relative motion of the cylinder is used to generate an extrapolated pose-based trajectory for the robot controller. The approach phase guides the robot gripper to a grasping position. The cylinder can be grasped with a minimal reaction force and torque when only rough global pose information is initially available.
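The radius-estimation step described above (moving a known distance toward the cylinder and observing the change in apparent size) follows from pinhole-camera similar triangles. The sketch below is a generic reconstruction of that idea, not the paper's exact implementation; the helper name, focal length, and motion values are assumed for illustration:

```python
def estimate_range_and_radius(w1, w2, delta, f):
    """Estimate cylinder range Z and radius R from the change in apparent
    image width after advancing `delta` toward it (pinhole model).

    w1, w2: image widths (pixels) before/after the move; f: focal length
    in pixels. From w1 = 2*f*R/Z and w2 = 2*f*R/(Z - delta):
        Z = delta * w2 / (w2 - w1),   R = w1 * Z / (2 * f)."""
    Z = delta * w2 / (w2 - w1)
    R = w1 * Z / (2.0 * f)
    return Z, R

# Hypothetical numbers: a 5 cm radius cylinder 1 m away, f = 500 px.
# Before the move the image width is 50 px; after advancing 0.2 m it is 62.5 px.
Z, R = estimate_range_and_radius(50.0, 62.5, 0.2, 500.0)
print(f"Z = {Z} m, R = {R} m")   # Z = 1.0 m, R = 0.05 m
```

The same two-view geometry explains why only rough global pose information is needed initially: one deliberate motion of known length recovers absolute scale.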

  6. Efficient visual grasping alignment for cylinders

    NASA Technical Reports Server (NTRS)

    Nicewarner, Keith E.; Kelley, Robert B.

    1991-01-01

    Monocular information from a gripper-mounted camera is used to servo the robot gripper to grasp a cylinder. The fundamental concept for rapid pose estimation is to reduce the amount of information that needs to be processed during each vision update interval. The grasping procedure is divided into four phases: learn, recognition, alignment, and approach. In the learn phase, a cylinder is placed in the gripper and the pose estimate is stored and later used as the servo target. This is performed once as a calibration step. The recognition phase verifies the presence of a cylinder in the camera field of view. An initial pose estimate is computed and uncluttered scan regions are selected. The radius of the cylinder is estimated by moving the robot a fixed distance toward the cylinder and observing the change in the image. The alignment phase processes only the scan regions obtained previously. Rapid pose estimates are used to align the robot with the cylinder at a fixed distance from it. The relative motion of the cylinder is used to generate an extrapolated pose-based trajectory for the robot controller. The approach phase guides the robot gripper to a grasping position. The cylinder can be grasped with a minimal reaction force and torque when only rough global pose information is initially available.

  7. Age-equivalent top-down modulation during cross-modal selective attention.

    PubMed

    Guerreiro, Maria J S; Anguera, Joaquin A; Mishra, Jyoti; Van Gerven, Pascal W M; Gazzaley, Adam

    2014-12-01

    Selective attention involves top-down modulation of sensory cortical areas, such that responses to relevant information are enhanced whereas responses to irrelevant information are suppressed. Suppression of irrelevant information, unlike enhancement of relevant information, has been shown to be deficient in aging. Although these attentional mechanisms have been well characterized within the visual modality, little is known about these mechanisms when attention is selectively allocated across sensory modalities. The present EEG study addressed this issue by testing younger and older participants in three different tasks: Participants attended to the visual modality and ignored the auditory modality, attended to the auditory modality and ignored the visual modality, or passively perceived information presented through either modality. We found overall modulation of visual and auditory processing during cross-modal selective attention in both age groups. Top-down modulation of visual processing was observed as a trend toward enhancement of visual information in the setting of auditory distraction, but no significant suppression of visual distraction when auditory information was relevant. Top-down modulation of auditory processing, on the other hand, was observed as suppression of auditory distraction when visual stimuli were relevant, but no significant enhancement of auditory information in the setting of visual distraction. In addition, greater visual enhancement was associated with better recognition of relevant visual information, and greater auditory distractor suppression was associated with a better ability to ignore auditory distraction. There were no age differences in these effects, suggesting that when relevant and irrelevant information are presented through different sensory modalities, selective attention remains intact in older age.

  8. Constructing and Reading Visual Information: Visual Literacy for Library and Information Science Education

    ERIC Educational Resources Information Center

    Ma, Yan

    2015-01-01

    This article examines visual literacy education and research for library and information science profession to educate the information professionals who will be able to execute and implement the ACRL (Association of College and Research Libraries) Visual Literacy Competency Standards successfully. It is a continuing call for inclusion of visual…

  9. Habitual wearers of colored lenses adapt more rapidly to the color changes the lenses produce.

    PubMed

    Engel, Stephen A; Wilkins, Arnold J; Mand, Shivraj; Helwig, Nathaniel E; Allen, Peter M

    2016-08-01

    The visual system continuously adapts to the environment, allowing it to perform optimally in a changing visual world. One large change occurs every time one takes off or puts on a pair of spectacles. It would be advantageous for the visual system to learn to adapt particularly rapidly to such large, commonly occurring events, but whether it can do so remains unknown. Here, we tested whether people who routinely wear spectacles with colored lenses increase how rapidly they adapt to the color shifts their lenses produce. Adaptation to a global color shift causes the appearance of a test color to change. We measured changes in the color that appeared "unique yellow", that is neither reddish nor greenish, as subjects donned and removed their spectacles. Nine habitual wearers and nine age-matched control subjects judged the color of a small monochromatic test light presented with a large, uniform, whitish surround every 5s. Red lenses shifted unique yellow to more reddish colors (longer wavelengths), and greenish lenses shifted it to more greenish colors (shorter wavelengths), consistent with adaptation "normalizing" the appearance of the world. In controls, the time course of this adaptation contained a large, rapid component and a smaller gradual one, in agreement with prior results. Critically, in habitual wearers the rapid component was significantly larger, and the gradual component significantly smaller than in controls. The total amount of adaptation was also larger in habitual wearers than in controls. These data suggest strongly that the visual system adapts with increasing rapidity and strength as environments are encountered repeatedly over time. An additional unexpected finding was that baseline unique yellow shifted in a direction opposite to that produced by the habitually worn lenses. 
Overall, our results represent one of the first formal reports that adjusting to putting on or taking off spectacles becomes easier over time, and may have important implications for clinical management. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
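A time course with a large rapid component and a smaller gradual one, as described above, is often modeled as a sum of two exponentials. The sketch below uses assumed amplitudes and time constants, not the study's fitted values; in this framing, the finding for habitual wearers corresponds to a larger fast amplitude and a smaller slow amplitude than in controls:

```python
import math

def unique_yellow_shift(t, a_fast=3.0, tau_fast=10.0, a_slow=1.0, tau_slow=300.0):
    """Shift of unique yellow t seconds after donning the lenses, modeled as
    a fast plus a slow exponential component. All amplitudes (arbitrary nm)
    and time constants are illustrative assumptions, not the study's fits."""
    return (a_fast * (1.0 - math.exp(-t / tau_fast))
            + a_slow * (1.0 - math.exp(-t / tau_slow)))

for t in (5, 30, 300):
    print(f"t = {t:3d} s   shift = {unique_yellow_shift(t):.2f}")
```

The total asymptotic adaptation in this model is simply `a_fast + a_slow`, matching the observation that both the rapid component and the overall amount of adaptation were larger in habitual wearers.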

  10. Technology-based management of environmental organizations using an Environmental Management Information System (EMIS): Design and development

    NASA Astrophysics Data System (ADS)

    Kouziokas, Georgios N.

    2016-01-01

    The adoption of Information and Communication Technologies (ICT) in environmental management has become a significant demand nowadays with the rapid growth of environmental information. This paper presents a prototype Environmental Management Information System (EMIS) that was developed to provide a systematic way of managing environmental data and human resources of an environmental organization. The system was designed using programming languages, a Database Management System (DBMS) and other technologies and programming tools and combines information from the relational database in order to achieve the principal goals of the environmental organization. The developed application can be used to store and elaborate information regarding: human resources data, environmental projects, observations, reports, data about the protected species, environmental measurements of pollutant factors or other kinds of analytical measurements and also the financial data of the organization. Furthermore, the system supports the visualization of spatial data structures by using geographic information systems (GIS) and web mapping technologies. This paper describes this prototype software application, its structure, its functions and how this system can be utilized to facilitate technology-based environmental management and decision-making process.

  11. Transfer Error and Correction Approach in Mobile Network

    NASA Astrophysics Data System (ADS)

    Xiao-kai, Wu; Yong-jin, Shi; Da-jin, Chen; Bing-he, Ma; Qi-li, Zhou

    With the development of information technology and social progress, human demand for information has become increasingly diverse: people want to communicate easily, quickly and flexibly, wherever and whenever, via voice, data, images and video. Because visual information gives people a direct and vivid impression, image and video transmission has received widespread attention. With the emergence of third-generation mobile communication systems and the rapid development of IP networks, video communication is becoming a principal service of wireless communications. However, real wireless and IP channels introduce errors, such as those produced by multipath fading in wireless channels and by packet loss in IP networks. Because channel bandwidth is limited, video data must be heavily compressed, and compressed data are very sensitive to transmission errors, so channel errors cause a serious decline in image quality.

  12. Oceans of Data : the Australian Ocean Data Network

    NASA Astrophysics Data System (ADS)

    Proctor, R.; Blain, P.; Mancini, S.

    2012-04-01

    The Australian Integrated Marine Observing System (IMOS, www.imos.org.au) is a research infrastructure project to establish an enduring marine observing system for Australian oceanic waters and shelf seas (in total, 4% of the world's oceans). Marine data and information are the main products and data management is therefore a central element to the project's success. A single integrative framework for data and information management has been developed which allows discovery and access of the data by scientists, managers and the public, based on standards and interoperability. All data is freely available. This information infrastructure has been further developed to form the Australian Ocean Data Network (AODN, www.aodn.org.au) which is rapidly becoming the 'one-stop-shop' for marine data in Australia. In response to requests from users, new features have recently been added to data discovery, visualization, and data access which move the AODN closer towards providing full integration of multi-disciplinary data.

  13. Research on robot mobile obstacle avoidance control based on visual information

    NASA Astrophysics Data System (ADS)

    Jin, Jiang

    2018-03-01

    Enabling robots to detect obstacles and avoid them has long been a key topic in robot control research. In this paper, a scheme of visual information acquisition is proposed: by interpreting visual input, the visual information is transformed into an information source for path processing. When obstacles are encountered along the established route, the algorithm adjusts the trajectory in real time, achieving intelligent control of the mobile robot. Simulation results show that, through the integration of visual sensing information, obstacle information is fully obtained while the real-time performance and accuracy of the robot's motion control are guaranteed.

  14. Face processing in different brain areas, and critical band masking.

    PubMed

    Rolls, Edmund T

    2008-09-01

    Neurophysiological evidence is described showing that some neurons in the macaque inferior temporal visual cortex have responses that are invariant with respect to the position, size, view, and spatial frequency of faces and objects, and that these neurons show rapid processing and rapid learning. Critical band spatial frequency masking is shown to be a property of these face-selective neurons and of the human visual perception of faces. Which face or object is present is encoded using a distributed representation in which each neuron conveys independent information in its firing rate, with little information evident in the relative time of firing of different neurons. This ensemble encoding has the advantages of maximizing the information in the representation useful for discrimination between stimuli using a simple weighted sum of the neuronal firing by the receiving neurons, generalization, and graceful degradation. These invariant representations are ideally suited to provide the inputs to brain regions such as the orbitofrontal cortex and amygdala that learn the reinforcement associations of an individual's face, for then the learning, and the appropriate social and emotional responses generalize to other views of the same face. A theory is described of how such invariant representations may be produced by self-organizing learning in a hierarchically organized set of visual cortical areas with convergent connectivity. The theory utilizes either temporal or spatial continuity with an associative synaptic modification rule. Another population of neurons in the cortex in the superior temporal sulcus encodes other aspects of faces such as face expression, eye-gaze, face view, and whether the head is moving. These neurons thus provide important additional inputs to parts of the brain such as the orbitofrontal cortex and amygdala that are involved in social communication and emotional behaviour. 
Outputs of these systems reach the amygdala, in which face-selective neurons are found, and also the orbitofrontal cortex, in which some neurons are tuned to face identity and others to face expression. In humans, activation of the orbitofrontal cortex is found when a change of face expression acts as a social signal that behaviour should change; and damage to the human orbitofrontal and pregenual cingulate cortex can impair face and voice expression identification, and also the reversal of emotional behaviour that normally occurs when reinforcers are reversed.

  15. Harvest: an open platform for developing web-based biomedical data discovery and reporting applications.

    PubMed

    Pennington, Jeffrey W; Ruth, Byron; Italia, Michael J; Miller, Jeffrey; Wrazien, Stacey; Loutrel, Jennifer G; Crenshaw, E Bryan; White, Peter S

    2014-01-01

    Biomedical researchers share a common challenge of making complex data understandable and accessible as they seek inherent relationships between attributes in disparate data types. Data discovery in this context is limited by a lack of query systems that efficiently show relationships between individual variables, but without the need to navigate underlying data models. We have addressed this need by developing Harvest, an open-source framework of modular components, and using it for the rapid development and deployment of custom data discovery software applications. Harvest incorporates visualizations of highly dimensional data in a web-based interface that promotes rapid exploration and export of any type of biomedical information, without exposing researchers to underlying data models. We evaluated Harvest with two cases: clinical data from pediatric cardiology and demonstration data from the OpenMRS project. Harvest's architecture and public open-source code offer a set of rapid application development tools to build data discovery applications for domain-specific biomedical data repositories. All resources, including the OpenMRS demonstration, can be found at http://harvest.research.chop.edu.

  16. Harvest: an open platform for developing web-based biomedical data discovery and reporting applications

    PubMed Central

    Pennington, Jeffrey W; Ruth, Byron; Italia, Michael J; Miller, Jeffrey; Wrazien, Stacey; Loutrel, Jennifer G; Crenshaw, E Bryan; White, Peter S

    2014-01-01

    Biomedical researchers share a common challenge of making complex data understandable and accessible as they seek inherent relationships between attributes in disparate data types. Data discovery in this context is limited by a lack of query systems that efficiently show relationships between individual variables, but without the need to navigate underlying data models. We have addressed this need by developing Harvest, an open-source framework of modular components, and using it for the rapid development and deployment of custom data discovery software applications. Harvest incorporates visualizations of highly dimensional data in a web-based interface that promotes rapid exploration and export of any type of biomedical information, without exposing researchers to underlying data models. We evaluated Harvest with two cases: clinical data from pediatric cardiology and demonstration data from the OpenMRS project. Harvest's architecture and public open-source code offer a set of rapid application development tools to build data discovery applications for domain-specific biomedical data repositories. All resources, including the OpenMRS demonstration, can be found at http://harvest.research.chop.edu PMID:24131510

  17. Synthesizing 3D Surfaces from Parameterized Strip Charts

    NASA Technical Reports Server (NTRS)

    Robinson, Peter I.; Gomez, Julian; Morehouse, Michael; Gawdiak, Yuri

    2004-01-01

    We believe 3D information visualization has the power to unlock new levels of productivity in the monitoring and control of complex processes. Our goal is to provide visual methods to allow for rapid human insight into systems consisting of thousands to millions of parameters. We explore this hypothesis in two complex domains: NASA program management and NASA International Space Station (ISS) spacecraft computer operations. We seek to extend a common form of visualization called the strip chart from 2D to 3D. A strip chart can display the time series progression of a parameter and allows for trends and events to be identified. Strip charts can be overlaid when multiple parameters need to be visualized in order to correlate their events. When many parameters are involved, the direct overlaying of strip charts can become confusing and may not fully utilize the graphing area to convey the relationships between the parameters. We provide a solution to this problem by generating 3D surfaces from parameterized strip charts. The 3D surface utilizes significantly more screen area to illustrate the differences in the parameters and the overlaid strip charts, and it can rapidly be scanned by humans to gain insight. The selection of the third dimension must be a parallel or parameterized homogeneous resource in the target domain, defined using a finite, ordered, enumerated type, and not a heterogeneous type. We demonstrate our concepts with examples from the NASA program management domain (assessing the state of many plans) and the computers of the ISS (assessing the state of many computers). We identify 2D strip charts in each domain and show how to construct the corresponding 3D surfaces. The user can navigate the surface, zooming in on regions of interest, setting a mark, and drilling down to source documents from which the data points have been derived. We close by discussing design issues, related work, and implementation challenges.
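The surface construction described above amounts to stacking one strip chart per value of the enumerated third axis into a height field. A minimal sketch, with hypothetical series names and values (not from the NASA domains):

```python
# Each strip chart is one time series; stacking them along a third,
# finite, ordered, enumerated axis (here, hypothetical computer IDs)
# yields a height field surface[i][t] that a 3D renderer can display.
series = {
    "node-1": [1.0, 1.2, 1.1, 1.4],
    "node-2": [0.9, 1.0, 1.3, 1.5],
    "node-3": [1.1, 1.1, 1.0, 1.2],
}
order = sorted(series)                    # the ordered, enumerated axis
surface = [series[name] for name in order]  # rows: axis value; cols: time
```

The key constraint from the abstract is that the third axis must be homogeneous (all rows measure the same kind of quantity), or the heights are not comparable across rows.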

  18. Immediate effects of anticipatory coarticulation in spoken-word recognition

    PubMed Central

    Salverda, Anne Pier; Kleinschmidt, Dave; Tanenhaus, Michael K.

    2014-01-01

    Two visual-world experiments examined listeners’ use of pre-word-onset anticipatory coarticulation in spoken-word recognition. Experiment 1 established the shortest lag with which information in the speech signal influences eye-movement control, using stimuli such as “The … ladder is the target”. With a neutral token of the definite article preceding the target word, saccades to the referent were not more likely than saccades to an unrelated distractor until 200–240 ms after the onset of the target word. In Experiment 2, utterances contained definite articles carrying natural anticipatory coarticulation pertaining to the onset of the target word (“The ladder … is the target”). A simple Gaussian classifier was able to predict the initial sound of the upcoming target word from formant information from the first few pitch periods of the article’s vowel. With these stimuli, effects of speech on eye-movement control began about 70 ms earlier than in Experiment 1, suggesting rapid use of anticipatory coarticulation. The results are interpreted as support for “data explanation” approaches to spoken-word recognition. Methodological implications for visual-world studies are also discussed. PMID:24511179
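The classifier analysis can be sketched as a diagonal-Gaussian (naive Bayes) model over formant measurements. All numbers below are synthetic illustrations, not the study's stimuli, and the study's actual classifier details may differ:

```python
import math

def fit_gaussian(samples):
    # Per-dimension mean and variance (diagonal Gaussian).
    n, dims = len(samples), len(samples[0])
    means = [sum(s[d] for s in samples) / n for d in range(dims)]
    varis = [sum((s[d] - means[d]) ** 2 for s in samples) / n for d in range(dims)]
    return means, varis

def log_likelihood(x, means, varis):
    return sum(-0.5 * math.log(2 * math.pi * v) - (xi - m) ** 2 / (2 * v)
               for xi, m, v in zip(x, means, varis))

# Synthetic (F1, F2) values in Hz for the article's vowel, grouped by the
# initial consonant of the upcoming word; values are invented.
training = {
    "l": [(420, 1650), (430, 1700), (410, 1620), (440, 1680)],
    "b": [(480, 1200), (470, 1150), (490, 1250), (460, 1180)],
}
models = {label: fit_gaussian(samps) for label, samps in training.items()}

def predict(x):
    # Pick the class whose Gaussian assigns the new formant pair
    # the highest log-likelihood.
    return max(models, key=lambda lab: log_likelihood(x, *models[lab]))
```

With enough coarticulatory separation between classes, even this toy model classifies held-out vowels correctly, which is the point the abstract makes: the information is present in the first few pitch periods.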

  19. Working memory is not fixed-capacity: More active storage capacity for real-world objects than for simple stimuli

    PubMed Central

    Brady, Timothy F.; Störmer, Viola S.; Alvarez, George A.

    2016-01-01

    Visual working memory is the cognitive system that holds visual information active to make it resistant to interference from new perceptual input. Information about simple stimuli—colors and orientations—is encoded into working memory rapidly: In under 100 ms, working memory “fills up,” revealing a stark capacity limit. However, for real-world objects, the same behavioral limits do not hold: With increasing encoding time, people store more real-world objects and do so with more detail. This boost in performance for real-world objects is generally assumed to reflect the use of a separate episodic long-term memory system, rather than working memory. Here we show that this behavioral increase in capacity with real-world objects is not solely due to the use of separate episodic long-term memory systems. In particular, we show that this increase is a result of active storage in working memory, as shown by directly measuring neural activity during the delay period of a working memory task using EEG. These data challenge fixed-capacity working memory models and demonstrate that working memory and its capacity limitations are dependent upon our existing knowledge. PMID:27325767
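The capacity limit referred to here is conventionally estimated in change-detection tasks with Cowan's K. The formula is standard in the visual working memory literature, though it is not stated in this abstract:

```python
def cowan_k(set_size, hit_rate, false_alarm_rate):
    # Cowan's K: estimated number of items held in visual working memory,
    # from single-probe change-detection performance.
    return set_size * (hit_rate - false_alarm_rate)

# E.g., with 6 items, 80% hits and 20% false alarms:
k_est = cowan_k(6, 0.8, 0.2)  # 3.6 items
```

A "fixed-capacity" model predicts that K plateaus at the same value regardless of encoding time; the finding above is that for real-world objects it keeps rising.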

  20. Working memory is not fixed-capacity: More active storage capacity for real-world objects than for simple stimuli.

    PubMed

    Brady, Timothy F; Störmer, Viola S; Alvarez, George A

    2016-07-05

    Visual working memory is the cognitive system that holds visual information active to make it resistant to interference from new perceptual input. Information about simple stimuli (colors and orientations) is encoded into working memory rapidly: In under 100 ms, working memory "fills up," revealing a stark capacity limit. However, for real-world objects, the same behavioral limits do not hold: With increasing encoding time, people store more real-world objects and do so with more detail. This boost in performance for real-world objects is generally assumed to reflect the use of a separate episodic long-term memory system, rather than working memory. Here we show that this behavioral increase in capacity with real-world objects is not solely due to the use of separate episodic long-term memory systems. In particular, we show that this increase is a result of active storage in working memory, as shown by directly measuring neural activity during the delay period of a working memory task using EEG. These data challenge fixed-capacity working memory models and demonstrate that working memory and its capacity limitations are dependent upon our existing knowledge.

  1. Methods for Automated Identification of Informative Behaviors in Natural Bioptic Driving

    PubMed Central

    Luo, Gang; Peli, Eli

    2012-01-01

    In some developed countries, visually impaired people may legally drive when wearing bioptic telescopes. To address the controversial safety issue of this practice, we have developed a low-cost in-car recording system that can be installed in study participants’ own vehicles to record their daily driving activities. We also developed a set of automated techniques for identifying informative behaviors, to facilitate efficient manual review of the important segments submerged in the vast amount of uncontrolled data. Here we present the methods and quantitative detection results for six types of driving maneuvers and behaviors that are important for bioptic driving: bioptic telescope use, turns, curves, intersections, weaving, and rapid stops. The testing data were collected from one normally sighted and two visually impaired subjects across multiple days. The detection rates ranged from 82% to 100%, and the false discovery rates ranged from 0% to 13%. In addition, two human observers were able to interpret about 80% of targets viewed through the telescope. These results indicate that, with appropriate data processing, the low-cost system is able to provide reliable data for natural bioptic driving studies. PMID:22514200
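For reference, the two reported metrics can be computed from detection counts as follows. The counts are hypothetical, chosen only to land inside the reported ranges:

```python
def detection_rate(true_positives, total_events):
    # Fraction of manually verified maneuvers that the automated method flagged.
    return true_positives / total_events

def false_discovery_rate(true_positives, false_positives):
    # Fraction of automated detections that were not real maneuvers.
    detections = true_positives + false_positives
    return false_positives / detections if detections else 0.0

# Hypothetical counts for one behavior type (not the study's actual data):
tp, fp, total = 41, 6, 50
print(f"detection rate: {detection_rate(tp, total):.0%}")        # 82%
print(f"false discovery rate: {false_discovery_rate(tp, fp):.1%}")
```

Note that the two metrics have different denominators: detection rate is relative to ground-truth events, while false discovery rate is relative to the system's own detections.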

  2. GenomeGems: evaluation of genetic variability from deep sequencing data

    PubMed Central

    2012-01-01

    Background Detection of disease-causing mutations using Deep Sequencing technologies poses great challenges. In particular, organizing the great number of sequences generated so that mutations which might be biologically relevant are easily identified is a difficult task. Yet only limited automated, accessible tools exist for this assignment. Findings We developed GenomeGems to fill this need by enabling the user to view and compare Single Nucleotide Polymorphisms (SNPs) from multiple datasets and to load the data onto the UCSC Genome Browser for an expanded and familiar visualization. As such, via automatic, clear and accessible presentation of processed Deep Sequencing data, our tool aims to facilitate ranking of genomic SNP calls. GenomeGems runs on a local Personal Computer (PC) and is freely available at http://www.tau.ac.il/~nshomron/GenomeGems. Conclusions GenomeGems enables researchers to identify potential disease-causing SNPs in an efficient manner. This enables rapid turnover of information and leads to further experimental SNP validation. The tool allows the user to compare and visualize SNPs from multiple experiments and to easily load SNP data onto the UCSC Genome Browser for further detailed information. PMID:22748151
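Comparing SNP calls across datasets, as described above, reduces at its core to set operations over (position, variant) records. A hedged sketch with invented calls; this is not GenomeGems code and does not reflect its file formats:

```python
# Two runs' SNP calls, keyed by (chromosome, position); values are the
# observed substitution. All records are invented for illustration.
run_a = {("chr1", 10439): "A>C", ("chr2", 7730): "G>T", ("chr7", 912): "C>T"}
run_b = {("chr1", 10439): "A>C", ("chr7", 912): "C>T", ("chrX", 55): "T>A"}

# SNPs called identically in both runs rank higher in a comparison view:
# concordant calls are less likely to be sequencing artifacts.
shared = {pos: alt for pos, alt in run_a.items() if run_b.get(pos) == alt}
```

A real tool would additionally carry quality scores and coverage per call, and export the shared set in a browser-loadable format such as BED.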

  3. PeptideDepot: flexible relational database for visual analysis of quantitative proteomic data and integration of existing protein information.

    PubMed

    Yu, Kebing; Salomon, Arthur R

    2009-12-01

    Recently, dramatic progress has been achieved in expanding the sensitivity, resolution, mass accuracy, and scan rate of mass spectrometers able to fragment and identify peptides through MS/MS. Unfortunately, this enhanced ability to acquire proteomic data has not been accompanied by a concomitant increase in the availability of flexible tools allowing users to rapidly assimilate, explore, and analyze this data and adapt to various experimental workflows with minimal user intervention. Here we fill this critical gap by providing a flexible relational database called PeptideDepot for organization of expansive proteomic data sets, collation of proteomic data with available protein information resources, and visual comparison of multiple quantitative proteomic experiments. Our software design, built upon the synergistic combination of a MySQL database for safe warehousing of proteomic data with a FileMaker-driven graphical user interface for flexible adaptation to diverse workflows, enables proteomic end-users to directly tailor the presentation of proteomic data to the unique analysis requirements of the individual proteomics lab. PeptideDepot may be deployed as an independent software tool or integrated directly with our high throughput autonomous proteomic pipeline used in the automated acquisition and post-acquisition analysis of proteomic data.

  4. Integrating Spherical Panoramas and Maps for Visualization of Cultural Heritage Objects Using Virtual Reality Technology

    PubMed Central

    Koeva, Mila; Luleva, Mila; Maldjanski, Plamen

    2017-01-01

    Development and virtual representation of 3D models of Cultural Heritage (CH) objects has triggered great interest over the past decade. The main reason for this is the rapid development in the fields of photogrammetry and remote sensing, laser scanning, and computer vision. The advantages of using 3D models for restoration, preservation, and documentation of valuable historical and architectural objects have been numerously demonstrated by scientists in the field. Moreover, 3D model visualization in virtual reality has been recognized as an efficient, fast, and easy way of representing a variety of objects worldwide for present-day users, who have stringent requirements and high expectations. However, the main focus of recent research is the visual, geometric, and textural characteristics of a single concrete object, while integration of large numbers of models with additional information—such as historical overview, detailed description, and location—are missing. Such integrated information can be beneficial, not only for tourism but also for accurate documentation. For that reason, we demonstrate in this paper an integration of high-resolution spherical panoramas, a variety of maps, GNSS, sound, video, and text information for representation of numerous cultural heritage objects. These are then displayed in a web-based portal with an intuitive interface. The users have the opportunity to choose freely from the provided information, and decide for themselves what is interesting to visit. Based on the created web application, we provide suggestions and guidelines for similar studies. We selected objects located in Bulgaria—a country with thousands of years of history and cultural heritage dating back to ancient civilizations. The methods used in this research are applicable for any type of spherical or cylindrical images and can be easily followed and applied in various domains. After a visual and metric assessment of the panoramas and the evaluation of the web-portal, we conclude that this novel approach is a very effective, fast, informative, and accurate way to present, disseminate, and document cultural heritage objects. PMID:28398230

  5. Spatial-frequency requirements for reading revisited

    PubMed Central

    Kwon, MiYoung; Legge, Gordon E.

    2012-01-01

    Blur is one of many visual factors that can limit reading in both normal and low vision. Legge et al. [Legge, G. E., Pelli, D. G., Rubin, G. S., & Schleske, M. M. (1985). Psychophysics of reading. I. Normal vision. Vision Research, 25, 239–252.] measured reading speed for text that was low-pass filtered with a range of cutoff spatial frequencies. Above 2 cycles per letter (CPL) reading speed was constant at its maximum level, but decreased rapidly for lower cutoff frequencies. It remains unknown why the critical cutoff for reading speed is near 2 CPL. The goal of the current study was to ask whether the spatial-frequency requirement for rapid reading is related to the effects of cutoff frequency on letter recognition and the size of the visual span. Visual span profiles were measured by asking subjects to recognize letters in trigrams (random strings of three letters) flashed for 150 ms at varying letter positions left and right of the fixation point. Reading speed was measured with Rapid Serial Visual Presentation (RSVP). The size of the visual span and reading speed were measured for low-pass filtered stimuli with cutoff frequencies from 0.8 to 8 CPL. Low-pass letter recognition data, obtained under similar testing conditions, were available from our previous study (Kwon & Legge, 2011). We found that the spatial-frequency requirement for reading is very similar to the spatial-frequency requirements for the size of the visual span and single letter recognition. The critical cutoff frequencies for reading speed, the size of the visual span and a contrast-invariant measure of letter recognition were all near 1.4 CPL, which is lower than the previous estimate of 2 CPL for reading speed. Although correlational in nature, these results are consistent with the hypothesis that the size of the visual span is closely linked to reading speed. PMID:22521659
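The low-pass filtering manipulation can be sketched with a hard radial cutoff in the Fourier domain, converting cycles per letter (CPL) to cycles per pixel via an assumed letter width. This is an illustration of the general technique, not the exact filter used by Legge et al.:

```python
import numpy as np

def lowpass_cpl(image, cutoff_cpl, letter_width_px):
    # Convert a cutoff in cycles per letter (CPL) to cycles per pixel,
    # then zero out all Fourier components above that radial frequency.
    cutoff_cpp = cutoff_cpl / letter_width_px   # cycles per pixel
    F = np.fft.fft2(image)
    fy = np.fft.fftfreq(image.shape[0])[:, None]
    fx = np.fft.fftfreq(image.shape[1])[None, :]
    radius = np.sqrt(fx ** 2 + fy ** 2)
    return np.real(np.fft.ifft2(F * (radius <= cutoff_cpp)))

# Example: a 64x64 "text" image whose letters are assumed 16 px wide;
# a 2 CPL cutoff then keeps frequencies up to 0.125 cycles/px.
rng = np.random.default_rng(0)
img = rng.random((64, 64))
blurred = lowpass_cpl(img, 2.0, 16)
```

The hard cutoff preserves the image mean (the DC component) while discarding all detail finer than the chosen CPL, which is the dimension varied across conditions in the study.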

  6. WHIDE—a web tool for visual data mining colocation patterns in multivariate bioimages

    PubMed Central

    Kölling, Jan; Langenkämper, Daniel; Abouna, Sylvie; Khan, Michael; Nattkemper, Tim W.

    2012-01-01

    Motivation: Bioimaging techniques rapidly develop toward higher resolution and dimension. The increase in dimension is achieved by different techniques such as multitag fluorescence imaging, Matrix Assisted Laser Desorption / Ionization (MALDI) imaging or Raman imaging, which record for each pixel an N-dimensional intensity array, representing local abundances of molecules, residues or interaction patterns. The analysis of such multivariate bioimages (MBIs) calls for new approaches to support users in the analysis of both feature domains: space (i.e. sample morphology) and molecular colocation or interaction. In this article, we present our approach WHIDE (Web-based Hyperbolic Image Data Explorer) that combines principles from computational learning, dimension reduction and visualization in a free web application. Results: We applied WHIDE to a set of MBIs recorded using the multitag fluorescence imaging Toponome Imaging System. The MBIs show fields of view in tissue sections from a colon cancer study, in which we compare tissue from normal/healthy colon with tissue classified as tumor. Our results show that WHIDE efficiently reduces the complexity of the data by mapping each pixel to a cluster, referred to as a Molecular Co-Expression Phenotype, and provides a structural basis for a sophisticated multimodal visualization, which combines topology preserving pseudocoloring with information visualization. The wide range of WHIDE's applicability is demonstrated with examples from toponome imaging, high content screens and MALDI imaging (shown in the Supplementary Material). Availability and implementation: The WHIDE tool can be accessed via the BioIMAX website http://ani.cebitec.uni-bielefeld.de/BioIMAX/; Login: whidetestuser; Password: whidetest. Supplementary information: Supplementary data are available at Bioinformatics online. Contact: tim.nattkemper@uni-bielefeld.de PMID:22390938
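WHIDE itself uses a hyperbolic self-organizing map; as a stand-in, plain k-means over pixel intensity vectors illustrates the core idea of mapping each pixel to a cluster label (an MCEP-style pseudocolor map). The pixel data below are invented:

```python
def dist2(a, b):
    # Squared Euclidean distance between two intensity vectors.
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(pixels, k, iters=20):
    # Deterministic init for illustration: pick k pixels spread
    # across the input order, then iterate assign/update.
    centers = [pixels[i * (len(pixels) - 1) // (k - 1)] for i in range(k)]
    labels = [0] * len(pixels)
    for _ in range(iters):
        labels = [min(range(k), key=lambda c: dist2(p, centers[c]))
                  for p in pixels]
        for c in range(k):
            members = [p for p, lab in zip(pixels, labels) if lab == c]
            if members:
                centers[c] = tuple(sum(v) / len(members) for v in zip(*members))
    return labels

# Each "pixel" is an N-dimensional intensity vector (3 channels here);
# the two groups mimic two molecular co-expression phenotypes.
pixels = [(0.1, 0.2, 0.1), (0.0, 0.1, 0.2), (0.2, 0.0, 0.1),
          (0.9, 1.0, 0.8), (1.0, 0.9, 1.0), (0.8, 1.0, 0.9)]
labels = kmeans(pixels, k=2)
```

Rendering each cluster label as a color recovers the "pseudocoloring" view: spatially contiguous pixels with similar N-dimensional profiles receive the same color.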

  7. Emerging Object Representations in the Visual System Predict Reaction Times for Categorization

    PubMed Central

    Ritchie, J. Brendan; Tovar, David A.; Carlson, Thomas A.

    2015-01-01

    Recognizing an object takes just a fraction of a second, less than the blink of an eye. Applying multivariate pattern analysis, or “brain decoding”, methods to magnetoencephalography (MEG) data has allowed researchers to characterize, in high temporal resolution, the emerging representation of object categories that underlie our capacity for rapid recognition. Shortly after stimulus onset, object exemplars cluster by category in a high-dimensional activation space in the brain. In this emerging activation space, the decodability of exemplar category varies over time, reflecting the brain’s transformation of visual inputs into coherent category representations. How do these emerging representations relate to categorization behavior? Recently it has been proposed that the distance of an exemplar representation from a categorical boundary in an activation space is critical for perceptual decision-making, and that reaction times should therefore correlate with distance from the boundary. The predictions of this distance hypothesis have been borne out in human inferior temporal cortex (IT), an area of the brain crucial for the representation of object categories. When viewed in the context of a time varying neural signal, the optimal time to “read out” category information is when category representations in the brain are most decodable. Here, we show that the distance from a decision boundary through activation space, as measured using MEG decoding methods, correlates with reaction times for visual categorization during the period of peak decodability. Our results suggest that the brain begins to read out information about exemplar category at the optimal time for use in choice behaviour, and support the hypothesis that the structure of the representation for objects in the visual system is partially constitutive of the decision process in recognition. PMID:26107634
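The distance hypothesis can be sketched with a linear read-out: an exemplar's distance from the decision hyperplane is correlated with its reaction time. Weights, activation patterns, and RTs below are invented for illustration; the study used MEG decoding, not this toy classifier:

```python
import math

def signed_distance(x, w, b):
    # Distance of an activation-pattern vector x from the
    # decision hyperplane w . x + b = 0.
    norm = math.sqrt(sum(wi * wi for wi in w))
    return (sum(wi * xi for wi, xi in zip(w, x)) + b) / norm

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sx = math.sqrt(sum((a - mx) ** 2 for a in xs))
    sy = math.sqrt(sum((b - my) ** 2 for b in ys))
    return cov / (sx * sy)

# Hypothetical trials: exemplars far from the boundary get fast RTs.
patterns = [(2.0, 1.0), (1.5, 0.5), (0.6, 0.2), (0.3, 0.1)]
w, b = (1.0, 1.0), 0.0
dists = [abs(signed_distance(p, w, b)) for p in patterns]
rts = [380, 420, 510, 560]   # ms, invented
r = pearson(dists, rts)      # expect a strong negative correlation
```

The hypothesis predicts exactly this sign: larger distance from the boundary means a more confident, hence faster, categorization.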

  8. Visual Working Memory Supports the Inhibition of Previously Processed Information: Evidence from Preview Search

    ERIC Educational Resources Information Center

    Al-Aidroos, Naseem; Emrich, Stephen M.; Ferber, Susanne; Pratt, Jay

    2012-01-01

    In four experiments we assessed whether visual working memory (VWM) maintains a record of previously processed visual information, allowing old information to be inhibited, and new information to be prioritized. Specifically, we evaluated whether VWM contributes to the inhibition (i.e., visual marking) of previewed distractors in a preview search.…

  9. 32 CFR 813.1 - Purpose of the visual information documentation (VIDOC) program.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    32 CFR § 813.1 (Title 32, National Defense; Department of Defense; Department of the Air Force; Sales and Services; Visual Information Documentation Program): Purpose of the visual information documentation (VIDOC) program.

  10. Emotional Effects in Visual Information Processing

    DTIC Science & Technology

    2009-10-24

    Report under contract FA4869-08-0004 (AOARD 074018), October 24, 2009. The objective of this research project was to investigate how emotion influences visual information processing and the neural correlates of these effects.

  11. Automated objective characterization of visual field defects in 3D

    NASA Technical Reports Server (NTRS)

    Fink, Wolfgang (Inventor)

    2006-01-01

    A method and apparatus for electronically performing a visual field test for a patient. A visual field test pattern is displayed to the patient on an electronic display device and the patient's responses to the visual field test pattern are recorded. A visual field representation is generated from the patient's responses. The visual field representation is then used as an input into a variety of automated diagnostic processes. In one process, the visual field representation is used to generate a statistical description of the rapidity of change of a patient's visual field at the boundary of a visual field defect. In another process, the area of a visual field defect is calculated using the visual field representation. In another process, the visual field representation is used to generate a statistical description of the volume of a patient's visual field defect.
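The two automated measures described (defect area and the rapidity of change at the defect boundary) can be sketched over a coarse sensitivity grid. The grid values, threshold, and cell size below are hypothetical:

```python
def defect_cells(field, threshold):
    # Grid cells whose sensitivity (dB) falls below a defect threshold.
    return [(r, c) for r, row in enumerate(field)
                   for c, v in enumerate(row) if v < threshold]

def defect_area(field, threshold, cell_area_deg2):
    # Defect area = number of defect cells times the area each cell covers.
    return len(defect_cells(field, threshold)) * cell_area_deg2

def boundary_slopes(field, threshold):
    # Sensitivity drop across each edge between a normal and a defect cell:
    # a crude stand-in for the "rapidity of change" at the defect boundary.
    slopes, rows, cols = [], len(field), len(field[0])
    for r in range(rows):
        for c in range(cols):
            for dr, dc in ((0, 1), (1, 0)):
                rr, cc = r + dr, c + dc
                if rr < rows and cc < cols:
                    a, b = field[r][c], field[rr][cc]
                    if (a < threshold) != (b < threshold):
                        slopes.append(abs(a - b))
    return slopes

# Hypothetical 4x4 sensitivity map (dB) with a central scotoma:
field = [
    [30, 30, 30, 30],
    [30,  5,  5, 30],
    [30,  5,  5, 30],
    [30, 30, 30, 30],
]
area = defect_area(field, 10, cell_area_deg2=36)  # assuming 6x6 deg cells
slopes = boundary_slopes(field, 10)
```

A statistical description of `slopes` (mean, max, spread) then summarizes how abruptly sensitivity falls off at the defect boundary, which is the quantity the patent describes.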

  12. Effects of Length of Retention Interval on Proactive Interference in Short-Term Visual Memory

    ERIC Educational Resources Information Center

    Meudell, Peter R.

    1977-01-01

    These experiments show two things: (a) In visual memory, long-term interference on a current item from items previously stored only seems to occur when the current item's retention interval is relatively long, and (b) the visual code appears to decay rapidly, reaching asymptote within 3 seconds of input in the presence of an interpolated task.…

  13. SimITK: visual programming of the ITK image-processing library within Simulink.

    PubMed

    Dickinson, Andrew W L; Abolmaesumi, Purang; Gobbi, David G; Mousavi, Parvin

    2014-04-01

    The Insight Segmentation and Registration Toolkit (ITK) is a software library used for image analysis, visualization, and image-guided surgery applications. ITK is a collection of C++ classes, which presents a steep learning curve for users without appropriate C++ programming experience. To remove the programming complexities and facilitate rapid prototyping, an implementation of ITK within a higher-level visual programming environment is presented: SimITK. ITK functionalities are automatically wrapped into "blocks" within Simulink, the visual programming environment of MATLAB, where these blocks can be connected to form workflows: visual schematics that closely represent the structure of a C++ program. The heavily templated C++ nature of ITK does not facilitate direct interaction between Simulink and ITK; an intermediary is required to convert the respective data types and allow intercommunication. As such, a SimITK "Virtual Block" has been developed that serves as a wrapper around an ITK class and is capable of resolving the ITK data types to native Simulink data types. Part of the challenge surrounding this implementation involves automatically capturing and storing the pertinent class information, which must be refined from its initial state before being reflected in the final block representation. The primary result of the SimITK wrapping procedure is a set of Simulink block libraries. From these libraries, blocks are selected and interconnected to demonstrate two examples: a 3D segmentation workflow and a 3D multimodal registration workflow. Compared to their pure-code equivalents, the workflows highlight ITK usability through an alternative visual interpretation of the code that abstracts away potentially confusing technicalities.

  14. The Limits of Shape Recognition following Late Emergence from Blindness.

    PubMed

    McKyton, Ayelet; Ben-Zion, Itay; Doron, Ravid; Zohary, Ehud

    2015-09-21

    Visual object recognition develops during the first years of life. But what if one is deprived of vision during early post-natal development? Shape information is extracted using both low-level cues (e.g., intensity- or color-based contours) and more complex algorithms that are largely based on inference assumptions (e.g., illumination is from above, objects are often partially occluded). Previous studies, testing visual acuity using a 2D shape-identification task (Lea symbols), indicate that contour-based shape recognition can improve with visual experience, even after years of visual deprivation from birth. We hypothesized that this may generalize to other low-level cues (shape, size, and color), but not to mid-level functions (e.g., 3D shape from shading) that might require prior visual knowledge. To that end, we studied a unique group of subjects in Ethiopia that suffered from an early manifestation of dense bilateral cataracts and were surgically treated only years later. Our results suggest that the newly sighted rapidly acquire the ability to recognize an odd element within an array, on the basis of color, size, or shape differences. However, they are generally unable to find the odd shape on the basis of illusory contours, shading, or occlusion relationships. Little recovery of these mid-level functions is seen within 1 year post-operation. We find that visual performance using low-level cues is relatively robust to prolonged deprivation from birth. However, the use of pictorial depth cues to infer 3D structure from the 2D retinal image is highly susceptible to early and prolonged visual deprivation. Copyright © 2015 Elsevier Ltd. All rights reserved.

  15. Web-GIS-based SARS epidemic situation visualization

    NASA Astrophysics Data System (ADS)

    Lu, Xiaolin

    2004-03-01

    In order to research, statistically analyze, and broadcast information about the SARS epidemic situation according to its spatial position, this paper proposed a unified global visualization information platform for the SARS epidemic situation based on Web-GIS and scientific visualization technology. To set up the platform, the architecture of a Web-GIS-based interoperable information system is adopted, enabling the public to visually report SARS virus information to health care centers using web visualization technology. A GIS Java applet is used to visualize the relationship between spatial graphical data and virus distribution, and other web-based graphics, such as curves, bars, maps, and multi-dimensional figures, are used to visualize how the SARS virus varies with time, patient numbers, and location. The platform is designed to display SARS information in real time, visually simulate the real epidemic situation, and offer analysis tools that support decision-making by health departments and policy-making government departments in preventing the SARS epidemic. It could be used to analyze the state of the epidemic through a visual graphics interface, isolate the areas around virus sources, and control the epidemic within the shortest possible time. It could be applied to SARS-prevention systems for information broadcasting, data management, statistical analysis, and decision support.

  16. Rapid prototyping of soil moisture estimates using the NASA Land Information System

    NASA Astrophysics Data System (ADS)

    Anantharaj, V.; Mostovoy, G.; Li, B.; Peters-Lidard, C.; Houser, P.; Moorhead, R.; Kumar, S.

    2007-12-01

    The Land Information System (LIS), developed at the NASA Goddard Space Flight Center, is a functional Land Data Assimilation System (LDAS) that incorporates a suite of land models in an interoperable computational framework. LIS has been integrated into a computational Rapid Prototyping Capabilities (RPC) infrastructure. LIS consists of a core, a number of community land models, data servers, and visualization systems, integrated in a high-performance computing environment. The land surface models (LSMs) in LIS incorporate surface and atmospheric parameters of temperature, snow/water, vegetation, albedo, soil conditions, topography, and radiation. Many of these parameters are available from in-situ observations, numerical model analysis, and NASA, NOAA, and other remote sensing satellite platforms at various spatial and temporal resolutions. The computational resources available to LIS via the RPC infrastructure support e-Science experiments involving global modeling of land-atmosphere studies at 1-km spatial resolution as well as regional studies at finer resolutions. The Noah land surface model, available within LIS, is being used to rapidly prototype soil moisture estimates in order to evaluate the viability of other science applications for decision-making purposes. For example, LIS has been used to further extend the utility of the USDA Soil Climate Analysis Network of in-situ soil moisture observations. In addition, LIS also supports data assimilation capabilities that are used to assimilate remotely sensed soil moisture retrievals from the AMSR-E instrument onboard the Aqua satellite. The rapid prototyping of soil moisture estimates using LIS and their applications will be illustrated during the presentation.

  17. Feature-Based Memory-Driven Attentional Capture: Visual Working Memory Content Affects Visual Attention

    ERIC Educational Resources Information Center

    Olivers, Christian N. L.; Meijer, Frank; Theeuwes, Jan

    2006-01-01

    In 7 experiments, the authors explored whether visual attention (the ability to select relevant visual information) and visual working memory (the ability to retain relevant visual information) share the same content representations. The presence of singleton distractors interfered more strongly with a visual search task when it was accompanied by…

  18. A comparison of the central nervous system effects of haloperidol, chlorpromazine and sulpiride in normal volunteers.

    PubMed Central

    McClelland, G R; Cooper, S M; Pilgrim, A J

    1990-01-01

    1. Twelve healthy male volunteers participated on four experimental occasions, during each of which they were dosed with one of the following: chlorpromazine (50 mg), haloperidol (3 mg), sulpiride (400 mg) or placebo. Drugs were allocated to subjects in a double-blind, crossover fashion. 2. The subjects' mood state, psychometric performance and electroencephalogram (EEG) were assessed pre-dose, and at 2, 4, 6, 8, 24 and 48 h post-dose. Mood states were assessed using 16 visual analogue scales, and psychomotor performance was measured using the following tests: elapsed time estimation, tapping rate, choice reaction times, a rapid information processing task, flash fusion threshold, a manipulative motor task, digit span, body sway and tremor. 3. Chlorpromazine and haloperidol significantly reduced subjective ratings of 'alertness' and 'contentedness', and haloperidol significantly reduced feelings of 'calmness'. Sulpiride did not significantly affect any of the visual analogue scales. 4. All three anti-psychotic drugs had similar EEG effects, with peak effect 2 to 4 h post-dose. The profile was characterised by an increase in the proportion of slow wave activity (delta and theta) as well as decreased alpha (8-14 Hz) and faster (beta) wave activity. 5. Chlorpromazine reduced tapping rate and increased choice reaction movement times. Haloperidol reduced the flash fusion threshold frequency at 6 h post-dose. Sulpiride prolonged the duration of the manipulative motor task, particularly at 48 h post-dose. 6. All three anti-psychotic drugs impaired performance on the rapid information processing task. Chlorpromazine significantly reduced the number of correct letter pair identifications at 2, 4 and 6 h post-dose, haloperidol at 4, 6, 8, 24 and 48 h post-dose, and sulpiride at 24 h post-dose. (ABSTRACT TRUNCATED AT 250 WORDS) PMID:2288826

  19. Functional Activation during the Rapid Visual Information Processing Task in a Middle Aged Cohort: An fMRI Study.

    PubMed

    Neale, Chris; Johnston, Patrick; Hughes, Matthew; Scholey, Andrew

    2015-01-01

    The Rapid Visual Information Processing (RVIP) task, a serial discrimination task in which performance is believed to reflect sustained attention capabilities, is widely used in behavioural research and increasingly in neuroimaging studies. To date, functional neuroimaging research into the RVIP has used block analyses, reflecting the sustained processing involved in the task but not necessarily the transient processes associated with individual trial performance. Furthermore, this research has been limited to young cohorts. This study assessed the behavioural and functional magnetic resonance imaging (fMRI) outcomes of the RVIP task using both block and event-related analyses in a healthy middle-aged cohort (mean age = 53.56 years, n = 16). The results show that the version of the RVIP used here is sensitive to changes in attentional demand, with participants achieving a 43% hit rate in the experimental task compared with 96% accuracy in the control task. As shown by previous research, the block analysis revealed increased activation in a network of frontal, parietal, occipital and cerebellar regions. The event-related analysis showed a similar network of activation but omitted regions involved in processing the task as a whole (as shown in the block analysis), such as occipital areas and the thalamus, providing an indication of a network of regions involved in correct trial performance. Frontal (superior and inferior frontal gyri), parietal (precuneus, inferior parietal lobe) and cerebellar regions were active in both the block and event-related analyses, suggesting their importance in sustained attention/vigilance. These networks and the differences between them are discussed in detail, as well as implications for future research in middle-aged cohorts.
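
    The block versus event-related distinction in this record comes down to the shape of the stimulus time course convolved with a haemodynamic response function (HRF): a sustained boxcar for a block, impulses at trial onsets for events. The sketch below is a hedged, toy illustration, not the study's pipeline; the gamma-shaped HRF, timings and durations are made-up assumptions.

```python
# Toy contrast of block vs event-related fMRI regressors (illustrative only).
# The HRF shape and all onset times below are assumed, not from the study.
import math

def hrf(t):
    """Crude gamma-shaped haemodynamic response function (arbitrary units)."""
    return (t ** 5) * math.exp(-t) / 120.0 if t >= 0 else 0.0

def convolve(stimulus, dt=1.0, length=32):
    """Convolve a stimulus time course with the HRF, truncated at `length` s."""
    out = []
    for i in range(len(stimulus)):
        acc = 0.0
        for j in range(max(0, i - length), i + 1):
            acc += stimulus[j] * hrf((i - j) * dt)
        out.append(acc)
    return out

n = 60
# Block design: one sustained 20-s task epoch (boxcar)
block = [1.0 if 10 <= t < 30 else 0.0 for t in range(n)]
# Event-related design: impulses only at (hypothetical) correct-trial onsets
events = [1.0 if t in (12, 18, 25) else 0.0 for t in range(n)]

block_regressor = convolve(block)
event_regressor = convolve(events)
```

    The block regressor rises to a sustained plateau, while the event regressor shows transient bumps time-locked to individual trials; fitting both lets an analysis separate sustained task engagement from trial-by-trial responses.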

  20. iMeteo: a web-based weather visualization tool

    NASA Astrophysics Data System (ADS)

    Tuni San-Martín, Max; San-Martín, Daniel; Cofiño, Antonio S.

    2010-05-01

    iMeteo is a web-based weather visualization tool. Designed with an extensible J2EE architecture, it can display information from heterogeneous data sources such as gridded data from numerical models (in NetCDF format) or databases of local predictions. All this information is presented in a user-friendly way: users can choose the specific tool used to display the data (maps, graphs, information tables) and customize it for desired locations. *Modular Display System* Visualization is achieved through a set of mini-tools called widgets. Users can add widgets at will and arrange them around the screen with a simple drag-and-drop movement. Widgets come in various types and each can be configured separately, forming a highly configurable system. The "Map" is the most complex widget, since it can show several variables simultaneously (either gridded or point-based) through a layered display. Other useful widgets are the "Histogram", which generates a graph with the frequency characteristics of a variable, and the "Timeline", which shows the time evolution of a variable at a given location in an interactive way. *Customization and security* Following current trends in web development, users can easily customize the way data is displayed. Because the client side is programmed with technologies such as AJAX, interaction with the application feels similar to a desktop application, with rapid response times. Registered users can also save their settings in the database, allowing access to their particular setup from any system with an Internet connection. Particular emphasis is placed on application security: the administrator can define a set of user profiles, which may have associated restrictions on access to certain data sources, geographic areas or time intervals.
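
    The frequency counting a "Histogram"-style widget performs can be illustrated with a minimal binning sketch. This is a hedged illustration in Python (iMeteo itself is a J2EE/AJAX application); the function name, bin edges and temperature values are made up.

```python
# Minimal sketch of the binning a "Histogram"-style widget performs.
# Not iMeteo code; the data and bin layout here are invented examples.

def histogram(values, n_bins, lo, hi):
    """Count how many values fall into each of n_bins equal-width bins."""
    width = (hi - lo) / n_bins
    counts = [0] * n_bins
    for v in values:
        if lo <= v < hi:
            counts[int((v - lo) / width)] += 1
        elif v == hi:              # include the upper edge in the last bin
            counts[-1] += 1
    return counts

# e.g. daily temperatures binned into 5 bins between 0 and 25 degrees
temps = [3.2, 7.8, 12.1, 14.9, 15.0, 21.4, 24.9, 25.0]
counts = histogram(temps, n_bins=5, lo=0.0, hi=25.0)
```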
