Sample records for the query "represent visual input"

  1. ERGONOMICS ABSTRACTS 48347-48982.

    ERIC Educational Resources Information Center

    Ministry of Technology, London (England). Warren Spring Lab.

In this collection of ergonomics abstracts and annotations, the following areas of concern are represented: general references; methods, facilities, and equipment relating to ergonomics; systems of man and machines; visual, auditory, and other sensory inputs and processes (including speech and intelligibility); input channels; body measurements;…

  2. A Multifactor Approach to Research in Instructional Technology.

    ERIC Educational Resources Information Center

    Ragan, Tillman J.

In a field such as instructional design, explanations of educational outcomes must necessarily consider multiple input variables. To adequately understand the contribution made by the independent variables, it is helpful to have a visual conception of how the input variables interrelate. Two-variable models are adequately represented by a two…

  3. Visual cortex responses reflect temporal structure of continuous quasi-rhythmic sensory stimulation.

    PubMed

    Keitel, Christian; Thut, Gregor; Gross, Joachim

    2017-02-01

Neural processing of dynamic continuous visual input, and cognitive influences thereon, are frequently studied in paradigms employing strictly rhythmic stimulation. However, the temporal structure of natural stimuli is hardly ever fully rhythmic but possesses certain spectral bandwidths (e.g. lip movements in speech, gestures). Examining periodic brain responses elicited by strictly rhythmic stimulation might thus represent ideal, yet isolated cases. Here, we tested how the visual system reflects quasi-rhythmic stimulation with frequencies continuously varying within ranges of the classical theta (4-7 Hz), alpha (8-13 Hz) and beta (14-20 Hz) bands using EEG. Our findings substantiate a systematic and sustained neural phase-locking to stimulation in all three frequency ranges. Further, we found that allocation of spatial attention enhances EEG-stimulus locking to theta- and alpha-band stimulation. Our results bridge recent findings regarding phase locking ("entrainment") to quasi-rhythmic visual input and "frequency-tagging" experiments employing strictly rhythmic stimulation. We propose that sustained EEG-stimulus locking can be considered a continuous neural signature of processing dynamic sensory input in early visual cortices. Accordingly, EEG-stimulus locking serves to trace the temporal evolution of rhythmic as well as quasi-rhythmic visual input and is subject to attentional bias.

  4. Primary Visual Cortex Represents the Difference Between Past and Present

    PubMed Central

    Nortmann, Nora; Rekauzke, Sascha; Onat, Selim; König, Peter; Jancke, Dirk

    2015-01-01

The visual system is confronted with rapidly changing stimuli in everyday life. It is not well understood how information in such a stream of input is updated within the brain. We performed voltage-sensitive dye imaging across the primary visual cortex (V1) to capture responses to sequences of natural scene contours. We presented vertically and horizontally filtered natural images, and their superpositions, at 10 or 33 Hz. At low frequency, the encoding was found to represent not the currently presented images, but differences in orientation between consecutive images. This was in sharp contrast to more rapid sequences, for which we found an ongoing representation of current input, consistent with earlier studies. Our finding that, for slower image sequences, V1 no longer reports actual features but represents their relative difference in time counteracts the view that the first cortical processing stage must always transfer complete information. Instead, we show its capacity for change detection, with a new emphasis on the role of automatic computation evolving in the 100-ms range, inevitably affecting information transmission further downstream. PMID:24343889

  5. Modeling and Analysis of Information Product Maps

    ERIC Educational Resources Information Center

    Heien, Christopher Harris

    2012-01-01

    Information Product Maps are visual diagrams used to represent the inputs, processing, and outputs of data within an Information Manufacturing System. A data unit, drawn as an edge, symbolizes a grouping of raw data as it travels through this system. Processes, drawn as vertices, transform each data unit input into various forms prior to delivery…

  6. Temporal precision in the visual pathway through the interplay of excitation and stimulus-driven suppression.

    PubMed

    Butts, Daniel A; Weng, Chong; Jin, Jianzhong; Alonso, Jose-Manuel; Paninski, Liam

    2011-08-03

Visual neurons can respond with extremely precise temporal patterning to visual stimuli that change on much slower time scales. Here, we investigate how the precise timing of cat thalamic spike trains, which can have timing as precise as 1 ms, is related to the stimulus, in the context of both artificial noise and natural visual stimuli. Using a nonlinear modeling framework applied to extracellular data, we demonstrate that the precise timing of thalamic spike trains can be explained by the interplay between an excitatory input and a delayed suppressive input that resembles inhibition, such that neuronal responses occur only in brief windows where excitation exceeds suppression. The resulting description of thalamic computation resembles earlier models of contrast adaptation, suggesting a more general role for mechanisms of contrast adaptation in visual processing. Thus, we describe a more complex computation underlying thalamic responses to artificial and natural stimuli that has implications for understanding how visual information is represented in the early stages of visual processing.

  7. Removing Visual Bias in Filament Identification: A New Goodness-of-fit Measure

    NASA Astrophysics Data System (ADS)

    Green, C.-E.; Cunningham, M. R.; Dawson, J. R.; Jones, P. A.; Novak, G.; Fissel, L. M.

    2017-05-01

Different combinations of input parameters to filament identification algorithms, such as disperse and filfinder, produce numerous different output skeletons. The skeletons are a one-pixel-wide representation of the filamentary structure in the original input image. However, these output skeletons may not necessarily be a good representation of that structure. Furthermore, one skeleton may not be as good a representation as another. Previously, there has been no mathematical “goodness-of-fit” measure to compare output skeletons to the input image. Thus far this has been assessed visually, introducing visual bias. We propose the application of the mean structural similarity index (MSSIM) as a mathematical goodness-of-fit measure. We describe the use of the MSSIM to find the output skeletons that are the most mathematically similar to the original input image (the optimum, or “best,” skeletons) for a given algorithm, and independently of the algorithm. This measure makes possible systematic parameter studies, aimed at finding the subset of input parameter values returning optimum skeletons. It can also be applied to the output of non-skeleton-based filament identification algorithms, such as the Hessian matrix method. The MSSIM removes the need to visually examine thousands of output skeletons, and eliminates the visual bias, subjectivity, and limited reproducibility inherent in that process, representing a major improvement upon existing techniques. Importantly, it also allows further automation in the post-processing of output skeletons, which is crucial in this era of “big data.”
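The MSSIM proposed here is a standard image-comparison measure; a minimal numpy sketch of the idea (non-overlapping windows and illustrative constants, not the authors' implementation) might look like:

```python
import numpy as np

def mssim(img, skel, win=7, L=1.0):
    """Mean structural similarity index over non-overlapping windows.
    Simplified sketch: reference SSIM implementations use Gaussian-weighted
    sliding windows. Inputs are equally sized arrays with values in [0, L]."""
    C1, C2 = (0.01 * L) ** 2, (0.03 * L) ** 2      # standard stability constants
    h, w = img.shape
    scores = []
    for i in range(0, h - win + 1, win):
        for j in range(0, w - win + 1, win):
            a = img[i:i + win, j:j + win].astype(float)
            b = skel[i:i + win, j:j + win].astype(float)
            mu_a, mu_b = a.mean(), b.mean()
            cov = ((a - mu_a) * (b - mu_b)).mean()
            s = ((2 * mu_a * mu_b + C1) * (2 * cov + C2)) / (
                (mu_a ** 2 + mu_b ** 2 + C1) * (a.var() + b.var() + C2))
            scores.append(s)
    return float(np.mean(scores))
```

Ranking candidate skeletons would then be `max(skeletons, key=lambda s: mssim(image, s))`; published implementations (e.g. scikit-image's `structural_similarity`) refine the windowing but follow the same formula.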

  8. Deletion of Ten-m3 Induces the Formation of Eye Dominance Domains in Mouse Visual Cortex

    PubMed Central

    Merlin, Sam; Horng, Sam; Marotte, Lauren R.; Sur, Mriganka; Sawatari, Atomu

    2013-01-01

    The visual system is characterized by precise retinotopic mapping of each eye, together with exquisitely matched binocular projections. In many species, the inputs that represent the eyes are segregated into ocular dominance columns in primary visual cortex (V1), whereas in rodents, this does not occur. Ten-m3, a member of the Ten-m/Odz/Teneurin family, regulates axonal guidance in the retinogeniculate pathway. Significantly, ipsilateral projections are expanded in the dorsal lateral geniculate nucleus and are not aligned with contralateral projections in Ten-m3 knockout (KO) mice. Here, we demonstrate the impact of altered retinogeniculate mapping on the organization and function of V1. Transneuronal tracing and c-fos immunohistochemistry demonstrate that the subcortical expansion of ipsilateral input is conveyed to V1 in Ten-m3 KOs: Ipsilateral inputs are widely distributed across V1 and are interdigitated with contralateral inputs into eye dominance domains. Segregation is confirmed by optical imaging of intrinsic signals. Single-unit recording shows ipsilateral, and contralateral inputs are mismatched at the level of single V1 neurons, and binocular stimulation leads to functional suppression of these cells. These findings indicate that the medial expansion of the binocular zone together with an interocular mismatch is sufficient to induce novel structural features, such as eye dominance domains in rodent visual cortex. PMID:22499796

  9. A quantitative comparison of the hemispheric, areal, and laminar origins of sensory and motor cortical projections to the superior colliculus of the cat.

    PubMed

    Butler, Blake E; Chabot, Nicole; Lomber, Stephen G

    2016-09-01

The superior colliculus (SC) is a midbrain structure central to orienting behaviors. The organization of descending projections from sensory cortices to the SC has garnered much attention; however, rarely have projections from multiple modalities been quantified and contrasted, allowing for meaningful conclusions within a single species. Here, we examine corticotectal projections from visual, auditory, somatosensory, motor, and limbic cortices via retrograde pathway tracers injected throughout the superficial and deep layers of the cat SC. As anticipated, the majority of cortical inputs to the SC originate in the visual cortex. In fact, each field implicated in visual orienting behavior makes a substantial projection. Conversely, only one area of the auditory orienting system, the auditory field of the anterior ectosylvian sulcus (fAES), and no area involved in somatosensory orienting, shows significant corticotectal inputs. Although small relative to visual inputs, the projection from the fAES is of particular interest, as it represents the only bilateral cortical input to the SC. This detailed, quantitative study allows for comparison across modalities in an animal that serves as a useful model for both auditory and visual perception. Moreover, the differences in patterns of corticotectal projections between modalities inform the ways in which orienting systems are modulated by cortical feedback. J. Comp. Neurol. 524:2623-2642, 2016. © 2016 Wiley Periodicals, Inc.

  10. An egalitarian network model for the emergence of simple and complex cells in visual cortex

    PubMed Central

    Tao, Louis; Shelley, Michael; McLaughlin, David; Shapley, Robert

    2004-01-01

    We explain how simple and complex cells arise in a large-scale neuronal network model of the primary visual cortex of the macaque. Our model consists of ≈4,000 integrate-and-fire, conductance-based point neurons, representing the cells in a small, 1-mm2 patch of an input layer of the primary visual cortex. In the model the local connections are isotropic and nonspecific, and convergent input from the lateral geniculate nucleus confers cortical cells with orientation and spatial phase preference. The balance between lateral connections and lateral geniculate nucleus drive determines whether individual neurons in this recurrent circuit are simple or complex. The model reproduces qualitatively the experimentally observed distributions of both extracellular and intracellular measures of simple and complex response. PMID:14695891
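The conductance-based integrate-and-fire point neuron such networks are built from can be sketched in a few lines. The normalized reversal potentials below follow a convention common in this modeling literature, but every number here is illustrative rather than taken from the paper:

```python
import numpy as np

def run_cbif(g_exc, g_inh, dt=1e-4, tau=0.02,
             v_rest=0.0, v_thresh=1.0, v_exc=14/3, v_inh=-2/3):
    """Conductance-based integrate-and-fire neuron in normalized units.
    Membrane equation: dv/dt = -(v - v_rest)/tau - g_e(v - V_E) - g_i(v - V_I),
    integrated with forward Euler; v resets to rest on crossing threshold.
    g_exc/g_inh are arrays of synaptic conductances, one value per time step."""
    v, spike_times = v_rest, []
    for i in range(g_exc.size):
        dv = (-(v - v_rest) / tau
              - g_exc[i] * (v - v_exc)
              - g_inh[i] * (v - v_inh))
        v += dt * dv
        if v >= v_thresh:
            spike_times.append(i * dt)
            v = v_rest
    return spike_times
```

In the model described above, what varies between cells is how `g_exc` is split between feedforward (lateral geniculate) drive and recurrent cortical input; that balance is what determines simple versus complex behavior.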

  11. Link between orientation and retinotopic maps in primary visual cortex

    PubMed Central

    Paik, Se-Bum; Ringach, Dario L.

    2012-01-01

Maps representing the preference of neurons for the location and orientation of a stimulus on the visual field are a hallmark of primary visual cortex. It is not yet known how these maps develop and what role they play in visual processing. One hypothesis postulates that orientation maps are initially seeded by the spatial interference of ON- and OFF-center retinal receptive field mosaics. Here we show that such a mechanism predicts a link between the layout of orientation preferences around singularities of different signs and the cardinal axes of the retinotopic map. Moreover, we confirm that the predicted relationship holds in tree shrew primary visual cortex. These findings provide additional support for the notion that spatially structured input from the retina may provide a blueprint for the early development of cortical maps and receptive fields. More broadly, this raises the possibility that spatially structured input from the periphery may shape the organization of primary sensory cortex of other modalities as well. PMID:22509015

  12. Nonlinear circuits for naturalistic visual motion estimation

    PubMed Central

    Fitzgerald, James E; Clark, Damon A

    2015-01-01

    Many animals use visual signals to estimate motion. Canonical models suppose that animals estimate motion by cross-correlating pairs of spatiotemporally separated visual signals, but recent experiments indicate that humans and flies perceive motion from higher-order correlations that signify motion in natural environments. Here we show how biologically plausible processing motifs in neural circuits could be tuned to extract this information. We emphasize how known aspects of Drosophila's visual circuitry could embody this tuning and predict fly behavior. We find that segregating motion signals into ON/OFF channels can enhance estimation accuracy by accounting for natural light/dark asymmetries. Furthermore, a diversity of inputs to motion detecting neurons can provide access to more complex higher-order correlations. Collectively, these results illustrate how non-canonical computations improve motion estimation with naturalistic inputs. This argues that the complexity of the fly's motion computations, implemented in its elaborate circuits, represents a valuable feature of its visual motion estimator. DOI: http://dx.doi.org/10.7554/eLife.09123.001 PMID:26499494
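The canonical pairwise cross-correlation scheme the abstract refers to is a Hassenstein-Reichardt-style correlator; a toy sketch follows (the delay, sign convention, and input format are my assumptions, not details from the paper):

```python
import numpy as np

def hrc_response(left, right, delay=1):
    """Hassenstein-Reichardt-style correlator: each photoreceptor signal is
    multiplied by a delayed copy of its neighbour, and the two mirror-
    symmetric arms are subtracted. Positive output indicates left-to-right
    motion under this sketch's sign convention."""
    left = np.asarray(left, dtype=float)
    right = np.asarray(right, dtype=float)
    d_left, d_right = np.roll(left, delay), np.roll(right, delay)
    d_left[:delay] = d_right[:delay] = 0.0       # no wrap-around at the start
    return float(np.sum(d_left * right - left * d_right))
```

As the abstract notes, the fly's actual computation departs from this pairwise baseline, splitting signals into ON/OFF channels and exploiting higher-order correlations; the sketch only shows the canonical starting point.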

  13. The Maryland Large-Scale Integrated Neurocognitive Architecture

    DTIC Science & Technology

    2008-03-01

Visual input enters the network through the lateral geniculate nucleus (LGN) and is passed forward through visual brain regions (V1, V2, and V4…). University of Maryland; sponsored by the Defense Advanced Research Projects Agency (DARPA Order No. V029). Approved for public release.

  14. Absence of Visual Input Results in the Disruption of Grid Cell Firing in the Mouse.

    PubMed

    Chen, Guifen; Manson, Daniel; Cacucci, Francesca; Wills, Thomas Joseph

    2016-09-12

Grid cells are spatially modulated neurons within the medial entorhinal cortex whose firing fields are arranged at the vertices of tessellating equilateral triangles [1]. The exquisite periodicity of their firing has led to the suggestion that they represent a path integration signal, tracking the organism's position by integrating speed and direction of movement [2-10]. External sensory inputs are required to reset any errors that the path integrator would inevitably accumulate. Here we probe the nature of the external sensory inputs required to sustain grid firing, by recording grid cells as mice explore familiar environments in complete darkness. The absence of visual cues results in a significant disruption of grid cell firing patterns, even when the quality of the directional information provided by head direction cells is largely preserved. Darkness alters the expression of velocity signaling within the entorhinal cortex, with changes evident in grid cell firing rate and the local field potential theta frequency. Short-term (<1.5 s) spike timing relationships between grid cell pairs are preserved in the dark, indicating that network patterns of excitatory and inhibitory coupling between grid cells exist independently of visual input and of spatially periodic firing. However, we find no evidence of preserved hexagonal symmetry in the spatial firing of single grid cells at comparable short timescales. Taken together, these results demonstrate that visual input is required to sustain grid cell periodicity and stability in mice and suggest that grid cells in mice cannot perform accurate path integration in the absence of reliable visual cues.
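The path-integration account in [2-10] (position tracked by integrating speed and direction, with errors accumulating unless external cues reset them) can be illustrated with a toy dead-reckoning sketch; this is not a grid-cell model, and every number in it is invented:

```python
import numpy as np

def integrate_path(speed, heading):
    """Dead-reckoning position estimate: accumulate per-step displacement
    vectors computed from speed and heading samples."""
    return np.cumsum(speed * np.cos(heading)), np.cumsum(speed * np.sin(heading))

rng = np.random.default_rng(0)
n = 2000
speed = np.abs(rng.normal(0.05, 0.01, n))       # illustrative run speeds
heading = np.cumsum(rng.normal(0.0, 0.1, n))    # smoothly wandering true heading

x_true, y_true = integrate_path(speed, heading)
# In "darkness" the heading signal is noisy and never reset by vision:
x_est, y_est = integrate_path(speed, heading + rng.normal(0.0, 0.2, n))
err = np.hypot(x_est - x_true, y_est - y_true)  # accumulating positional error
```

With noiseless inputs the reconstruction is exact; with noisy, uncorrected heading the positional error drifts, which is why external sensory resets are needed.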

  15. Visual Memories Bypass Normalization.

    PubMed

    Bloem, Ilona M; Watanabe, Yurika L; Kibbe, Melissa M; Ling, Sam

    2018-05-01

How distinct are visual memory representations from visual perception? Although evidence suggests that briefly remembered stimuli are represented within early visual cortices, the degree to which these memory traces resemble true visual representations remains something of a mystery. Here, we tested whether both visual memory and perception succumb to a seemingly ubiquitous neural computation: normalization. Observers were asked to remember the contrast of visual stimuli, which were pitted against each other to promote normalization either in perception or in visual memory. Our results revealed robust normalization between visual representations in perception, yet no signature of normalization occurring between working memory stores: neither between representations in memory nor between memory representations and visual inputs. These results provide unique insight into the nature of visual memory representations, illustrating that visual memory representations follow a different set of computational rules, bypassing normalization, a canonical visual computation.

  16. Visual Memories Bypass Normalization

    PubMed Central

    Bloem, Ilona M.; Watanabe, Yurika L.; Kibbe, Melissa M.; Ling, Sam

    2018-01-01

    How distinct are visual memory representations from visual perception? Although evidence suggests that briefly remembered stimuli are represented within early visual cortices, the degree to which these memory traces resemble true visual representations remains something of a mystery. Here, we tested whether both visual memory and perception succumb to a seemingly ubiquitous neural computation: normalization. Observers were asked to remember the contrast of visual stimuli, which were pitted against each other to promote normalization either in perception or in visual memory. Our results revealed robust normalization between visual representations in perception, yet no signature of normalization occurring between working memory stores—neither between representations in memory nor between memory representations and visual inputs. These results provide unique insight into the nature of visual memory representations, illustrating that visual memory representations follow a different set of computational rules, bypassing normalization, a canonical visual computation. PMID:29596038

  17. Visual Perceptual Echo Reflects Learning of Regularities in Rapid Luminance Sequences.

    PubMed

    Chang, Acer Y-C; Schwartzman, David J; VanRullen, Rufin; Kanai, Ryota; Seth, Anil K

    2017-08-30

A novel neural signature of active visual processing has recently been described in the form of the "perceptual echo", in which the cross-correlation between a sequence of randomly fluctuating luminance values and occipital electrophysiological signals exhibits a long-lasting periodic (∼100 ms cycle) reverberation of the input stimulus (VanRullen and Macdonald, 2012). As yet, however, the mechanisms underlying the perceptual echo and its function remain unknown. Reasoning that natural visual signals often contain temporally predictable, though nonperiodic features, we hypothesized that the perceptual echo may reflect a periodic process associated with regularity learning. To test this hypothesis, we presented subjects with successive repetitions of a rapid nonperiodic luminance sequence, and examined the effects on the perceptual echo, finding that echo amplitude linearly increased with the number of presentations of a given luminance sequence. These data suggest that the perceptual echo reflects a neural signature of regularity learning. Furthermore, when a set of repeated sequences was followed by a sequence with inverted luminance polarities, the echo amplitude decreased to the same level evoked by a novel stimulus sequence. Crucially, when the original stimulus sequence was re-presented, the echo amplitude returned to a level consistent with the number of presentations of this sequence, indicating that the visual system retained sequence-specific information, for many seconds, even in the presence of intervening visual input. Altogether, our results reveal a previously undiscovered regularity learning mechanism within the human visual system, reflected by the perceptual echo. SIGNIFICANCE STATEMENT How the brain encodes and learns fast-changing but nonperiodic visual input remains unknown, even though such visual input characterizes natural scenes. We investigated whether the phenomenon of "perceptual echo" might index such learning. 
The perceptual echo is a long-lasting reverberation between a rapidly changing visual input and evoked neural activity, apparent in cross-correlations between occipital EEG and stimulus sequences, peaking in the alpha (∼10 Hz) range. We indeed found that the perceptual echo is enhanced by repeatedly presenting the same visual sequence, indicating that the human visual system can rapidly and automatically learn regularities embedded within fast-changing dynamic sequences. These results point to a previously undiscovered regularity learning mechanism, operating at a rate defined by the alpha frequency.
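The stimulus-EEG cross-correlation that reveals the echo can be illustrated with a toy forward simulation, in which an assumed decaying 10 Hz kernel stands in for the brain's response (sampling rate, kernel shape, and noise level are all invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 160                                   # sampling rate in Hz (illustrative)
n = 8 * fs                                 # 8 s of stimulation
lum = rng.standard_normal(n)               # randomly fluctuating luminance sequence

# Assumed "echo": the stimulus reverberates in the EEG as a decaying
# ~10 Hz oscillation, buried in broadband noise.
kt = np.arange(fs) / fs                    # 1 s of kernel support
echo_kernel = np.exp(-kt / 0.3) * np.cos(2 * np.pi * 10 * kt)
eeg = np.convolve(lum, echo_kernel)[:n] + 0.5 * rng.standard_normal(n)

# Cross-correlating stimulus with EEG at positive lags recovers the echo shape.
lags = np.arange(echo_kernel.size)
xcorr = np.array([np.dot(lum[: n - k], eeg[k:]) for k in lags]) / n
```

Because the luminance sequence is white, the cross-correlation at lag k estimates the response kernel at lag k, so `xcorr` traces out the reverberation the paper measures.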

  18. Processing Stages Underlying Word Recognition in the Anteroventral Temporal Lobe

    PubMed Central

    Halgren, Eric; Wang, Chunmao; Schomer, Donald L.; Knake, Susanne; Marinkovic, Ksenija; Wu, Julian; Ulbert, Istvan

    2006-01-01

    The anteroventral temporal lobe integrates visual, lexical, semantic and mnestic aspects of word-processing, through its reciprocal connections with the ventral visual stream, language areas, and the hippocampal formation. We used linear microelectrode arrays to probe population synaptic currents and neuronal firing in different cortical layers of the anteroventral temporal lobe, during semantic judgments with implicit priming, and overt word recognition. Since different extrinsic and associative inputs preferentially target different cortical layers, this method can help reveal the sequence and nature of local processing stages at a higher resolution than was previously possible. The initial response in inferotemporal and perirhinal cortices is a brief current sink beginning at ~120ms, and peaking at ~170ms. Localization of this initial sink to middle layers suggests that it represents feedforward input from lower visual areas, and simultaneously increased firing implies that it represents excitatory synaptic currents. Until ~800ms, the main focus of transmembrane current sinks alternates between middle and superficial layers, with the superficial focus becoming increasingly dominant after ~550ms. Since superficial layers are the target of local and feedback associative inputs, this suggests an alternation in predominant synaptic input between feedforward and feedback modes. Word repetition does not affect the initial perirhinal and inferotemporal middle layer sink, but does decrease later activity. Entorhinal activity begins later (~200ms), with greater apparent excitatory postsynaptic currents and multiunit activity in neocortically-projecting than hippocampal-projecting layers. In contrast to perirhinal and entorhinal responses, entorhinal responses are larger to repeated words during memory retrieval. 
These results identify a sequence of physiological activation, beginning with a sharp activation from lower level visual areas carrying specific information to middle layers. This is followed by feedback and associative interactions involving upper cortical layers, which are abbreviated to repeated words. Following bottom-up and associative stages, top-down recollective processes may be driven by entorhinal cortex. Word processing involves a systematic sequence of fast feedforward information transfer from visual areas to anteroventral temporal cortex, followed by prolonged interactions of this feedforward information with local associations, and feedback mnestic information from the medial temporal lobe. PMID:16488158

  19. Learning the Gestalt rule of collinearity from object motion.

    PubMed

    Prodöhl, Carsten; Würtz, Rolf P; von der Malsburg, Christoph

    2003-08-01

    The Gestalt principle of collinearity (and curvilinearity) is widely regarded as being mediated by the long-range connection structure in primary visual cortex. We review the neurophysiological and psychophysical literature to argue that these connections are developed from visual experience after birth, relying on coherent object motion. We then present a neural network model that learns these connections in an unsupervised Hebbian fashion with input from real camera sequences. The model uses spatiotemporal retinal filtering, which is very sensitive to changes in the visual input. We show that it is crucial for successful learning to use the correlation of the transient responses instead of the sustained ones. As a consequence, learning works best with video sequences of moving objects. The model addresses a special case of the fundamental question of what represents the necessary a priori knowledge the brain is equipped with at birth so that the self-organized process of structuring by experience can be successful.
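A minimal Hebbian learner of the kind described, driven by transient rather than sustained responses, might look like the following sketch (the difference-based "transient" filter, learning rate, and normalization are my assumptions, not the paper's spatiotemporal retinal filtering):

```python
import numpy as np

def transient_response(frames):
    """Crude transient filter: respond to frame-to-frame change rather than
    sustained luminance (the property the learning depends on)."""
    return np.abs(np.diff(frames, axis=0))

def hebbian_update(w, responses, lr=0.05):
    """Unsupervised Hebbian step: co-active units strengthen their link;
    weights are renormalized so they stay bounded."""
    for r in responses:
        w += lr * np.outer(r, r)           # co-activity strengthens connections
    np.fill_diagonal(w, 0.0)               # no self-connections
    return w / max(w.max(), 1e-12)

# An object sweeping across an 8-pixel "retina": pixel t lights up at time t.
frames = np.eye(8)
w = hebbian_update(np.zeros((8, 8)), transient_response(frames))
```

After the sweep, only neighbouring pixels (collinear along the motion path) end up strongly linked, which is the seed of the long-range connection structure the model learns from coherent object motion.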

  20. Attention, Intention, and Priority in the Parietal Lobe

    PubMed Central

    Bisley, James W.; Goldberg, Michael E.

    2013-01-01

    For many years there has been a debate about the role of the parietal lobe in the generation of behavior. Does it generate movement plans (intention) or choose objects in the environment for further processing? To answer this, we focus on the lateral intraparietal area (LIP), an area that has been shown to play independent roles in target selection for saccades and the generation of visual attention. Based on results from a variety of tasks, we propose that LIP acts as a priority map in which objects are represented by activity proportional to their behavioral priority. We present evidence to show that the priority map combines bottom-up inputs like a rapid visual response with an array of top-down signals like a saccade plan. The spatial location representing the peak of the map is used by the oculomotor system to target saccades and by the visual system to guide visual attention. PMID:20192813

21. On the Perception of Probable Things

    PubMed Central

    Albright, Thomas D.

    2012-01-01

Perception is influenced both by the immediate pattern of sensory inputs and by memories acquired through prior experiences with the world. Throughout much of its illustrious history, however, study of the cellular basis of perception has focused on neuronal structures and events that underlie the detection and discrimination of sensory stimuli. Relatively little attention has been paid to the means by which memories interact with incoming sensory signals. Building upon recent neurophysiological/behavioral studies of the cortical substrates of visual associative memory, I propose a specific functional process by which stored information about the world supplements sensory inputs to yield neuronal signals that can account for visual perceptual experience. This perspective represents a significant shift in the way we think about the cellular bases of perception. PMID:22542178

22. Adult Visual Cortical Plasticity

    PubMed Central

    Gilbert, Charles D.; Li, Wu

    2012-01-01

    The visual cortex has the capacity for experience dependent change, or cortical plasticity, that is retained throughout life. Plasticity is invoked for encoding information during perceptual learning, by internally representing the regularities of the visual environment, which is useful for facilitating intermediate level vision - contour integration and surface segmentation. The same mechanisms have adaptive value for functional recovery after CNS damage, such as that associated with stroke or neurodegenerative disease. A common feature to plasticity in primary visual cortex (V1) is an association field that links contour elements across the visual field. The circuitry underlying the association field includes a plexus of long range horizontal connections formed by cortical pyramidal cells. These connections undergo rapid and exuberant sprouting and pruning in response to removal of sensory input, which can account for the topographic reorganization following retinal lesions. Similar alterations in cortical circuitry may be involved in perceptual learning, and the changes observed in V1 may be representative of how learned information is encoded throughout the cerebral cortex. PMID:22841310

23. Modeling the impact of common noise inputs on the network activity of retinal ganglion cells

    PubMed Central

    Ahmadian, Yashar; Shlens, Jonathon; Pillow, Jonathan W.; Kulkarni, Jayant; Litke, Alan M.; Chichilnisky, E. J.; Simoncelli, Eero; Paninski, Liam

    2013-01-01

    Synchronized spontaneous firing among retinal ganglion cells (RGCs), on timescales faster than visual responses, has been reported in many studies. Two candidate mechanisms of synchronized firing include direct coupling and shared noisy inputs. In neighboring parasol cells of primate retina, which exhibit rapid synchronized firing that has been studied extensively, recent experimental work indicates that direct electrical or synaptic coupling is weak, but shared synaptic input in the absence of modulated stimuli is strong. However, previous modeling efforts have not accounted for this aspect of firing in the parasol cell population. Here we develop a new model that incorporates the effects of common noise, and apply it to analyze the light responses and synchronized firing of a large, densely-sampled network of over 250 simultaneously recorded parasol cells. We use a generalized linear model in which the spike rate in each cell is determined by the linear combination of the spatio-temporally filtered visual input, the temporally filtered prior spikes of that cell, and unobserved sources representing common noise. The model accurately captures the statistical structure of the spike trains and the encoding of the visual stimulus, without the direct coupling assumption present in previous modeling work. Finally, we examined the problem of decoding the visual stimulus from the spike train given the estimated parameters. The common-noise model produces Bayesian decoding performance as accurate as that of a model with direct coupling, but with significantly more robustness to spike timing perturbations. PMID:22203465
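The generalized linear model described, in which each cell's spike rate combines filtered stimulus, spike history, and an unobserved common-noise source, can be caricatured as follows (Bernoulli spiking per bin; all parameters are toy values, nothing here is fitted to data):

```python
import numpy as np

def glm_spikes(stim_drive, noise_drive, rng, bias=-2.0, hist_w=-2.0):
    """Toy generalized linear model: the log spike probability in each time
    bin is a linear combination of (already filtered) stimulus drive, the
    previous bin's spike (a crude history/refractory term), and an
    unobserved noise drive. Spiking is Bernoulli per bin."""
    spikes = np.zeros(stim_drive.size)
    for i in range(stim_drive.size):
        history = hist_w * spikes[i - 1] if i > 0 else 0.0
        p = min(np.exp(bias + stim_drive[i] + noise_drive[i] + history), 1.0)
        spikes[i] = rng.random() < p
    return spikes

rng = np.random.default_rng(1)
n = 5000
stim = 0.5 * rng.standard_normal(n)     # stands in for spatio-temporally filtered input
shared = rng.standard_normal(n)         # common-noise source, shared across cells

cell_a = glm_spikes(stim, shared, rng)
cell_b = glm_spikes(stim, shared, rng)                  # same stimulus AND same noise
cell_c = glm_spikes(stim, rng.standard_normal(n), rng)  # independent noise
```

Cells sharing the noise source synchronize more strongly than cells sharing only the stimulus, which is how the model produces rapid correlated firing without any direct-coupling term.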

  4. A Connectionist Simulation of Attention and Vector Comparison: The Need for Serial Processing in Parallel Hardware

    DTIC Science & Technology

    1991-01-01

    visual and three-layer connectionist network, in that the input layer of memory processing is serial, and is likely to represent each module is... University Press. ...Selective attention gates visual processing in the extrastriate cortex. Science, 229:782-784. Treisman, A.M. (1985). Preattentive... AD-A242 225 A CONNECTIONIST SIMULATION OF ATTENTION AND VECTOR COMPARISON: THE NEED FOR SERIAL PROCESSING IN PARALLEL HARDWARE Technical Report AIP

  5. Visual Receptive Field Heterogeneity and Functional Connectivity of Adjacent Neurons in Primate Frontoparietal Association Cortices.

    PubMed

    Viswanathan, Pooja; Nieder, Andreas

    2017-09-13

    The basic organization principles of the primary visual cortex (V1) are commonly assumed to also hold in the association cortex such that neurons within a cortical column share functional connectivity patterns and represent the same region of the visual field. We mapped the visual receptive fields (RFs) of neurons recorded at the same electrode in the ventral intraparietal area (VIP) and the lateral prefrontal cortex (PFC) of rhesus monkeys. We report that the spatial characteristics of visual RFs between adjacent neurons differed considerably, with increasing heterogeneity from VIP to PFC. In addition to RF incongruences, we found differential functional connectivity between putative inhibitory interneurons and pyramidal cells in PFC and VIP. These findings suggest that local RF topography vanishes with hierarchical distance from visual cortical input and argue for increasingly modified functional microcircuits in noncanonical association cortices that contrast V1. SIGNIFICANCE STATEMENT Our visual field is thought to be represented faithfully by the early visual brain areas; all the information from a certain region of the visual field is conveyed to neurons situated close together within a functionally defined cortical column. We examined this principle in the association areas, PFC, and ventral intraparietal area of rhesus monkeys and found that adjacent neurons represent markedly different areas of the visual field. This is the first demonstration of such noncanonical organization of these brain areas. Copyright © 2017 the authors 0270-6474/17/378919-10$15.00/0.

  6. Learning spatially coherent properties of the visual world in connectionist networks

    NASA Astrophysics Data System (ADS)

    Becker, Suzanna; Hinton, Geoffrey E.

    1991-10-01

    In the unsupervised learning paradigm, a network of neuron-like units is presented with an ensemble of input patterns from a structured environment, such as the visual world, and learns to represent the regularities in that input. The major goal in developing unsupervised learning algorithms is to find objective functions that characterize the quality of the network's representation without explicitly specifying the desired outputs of any of the units. The sort of objective functions considered cause a unit to become tuned to spatially coherent features of visual images (such as texture, depth, shading, and surface orientation), by learning to predict the outputs of other units which have spatially adjacent receptive fields. Simulations show that using an information-theoretic algorithm called IMAX, a network can be trained to represent depth by observing random dot stereograms of surfaces with continuously varying disparities. Once a layer of depth-tuned units has developed, subsequent layers are trained to perform surface interpolation of curved surfaces, by learning to predict the depth of one image region based on depth measurements in surrounding regions. An extension of the basic model allows a population of competing neurons to learn a distributed code for disparity, which naturally gives rise to a representation of discontinuities.
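    The objective driving this kind of learning can be illustrated with the variance-ratio form associated with IMAX, 0.5 * log(Var(a + b) / Var(a - b)), which under Gaussian assumptions tracks the mutual information between two units' outputs. The sketch below uses synthetic signals with invented noise levels; it only demonstrates the objective's behavior, not the full network or stereogram training.

```python
import numpy as np

rng = np.random.default_rng(1)

def imax_objective(a, b):
    # Variance-ratio form of the IMAX objective:
    # 0.5 * log(Var(a + b) / Var(a - b)).
    # Large when the two outputs agree up to independent noise, i.e.
    # when both units have extracted the same underlying property.
    return 0.5 * np.log(np.var(a + b) / np.var(a - b))

def objective_for(coherent, n=10000, noise_sd=0.3):
    # Two units viewing adjacent image patches. If the patches share a
    # spatially coherent property (e.g. depth), both outputs track it;
    # otherwise the second unit sees unrelated structure.
    s = rng.standard_normal(n)                    # coherent property
    a = s + noise_sd * rng.standard_normal(n)
    if coherent:
        b = s + noise_sd * rng.standard_normal(n)
    else:
        b = rng.standard_normal(n)
    return float(imax_objective(a, b))
```

    Maximizing this quantity by gradient ascent, rather than merely evaluating it, is what tunes units to coherent features such as disparity.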

  7. Visual tracking using neuromorphic asynchronous event-based cameras.

    PubMed

    Ni, Zhenjiang; Ieng, Sio-Hoi; Posch, Christoph; Régnier, Stéphane; Benosman, Ryad

    2015-04-01

    This letter presents a novel, computationally efficient and robust pattern-tracking method based on time-encoded, frame-free visual data. Recent interdisciplinary developments, combining inputs from engineering and biology, have yielded a novel type of camera that encodes visual information into a continuous stream of asynchronous, temporal events. These events encode temporal contrast and intensity locally in space and time. We show that the sparse yet accurately timed information is well suited as a computational input for object tracking. In this letter, visual data processing is performed for each incoming event at the time it arrives. The method provides a continuous and iterative estimation of the geometric transformation between the model and the events representing the tracked object. It can handle isometry, similarities, and affine distortions and allows for unprecedented real-time performance at equivalent frame rates in the kilohertz range on a standard PC. Furthermore, by using the dimension of time that is currently underexploited by most artificial vision systems, the method we present is able to solve ambiguous cases of object occlusions that classical frame-based techniques handle poorly.
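    The per-event update loop can be sketched for the simplest transform class, pure translation. The function name, learning rate, and point model are all invented for illustration; the letter's method additionally estimates isometries, similarities, and affine maps, but the structure (each incoming event nudges the current transform estimate toward its nearest model point) is the same.

```python
import numpy as np

def track_events(model_pts, events, lr=0.1):
    # Event-by-event estimate of a 2-D translation between a point
    # model and the incoming event stream: each event pulls the
    # transform toward its nearest transformed model point, so the
    # estimate is refined continuously as events arrive.
    t = np.zeros(2)
    for e in events:
        shifted = model_pts + t
        j = int(np.argmin(np.sum((shifted - e) ** 2, axis=1)))
        t += lr * (e - shifted[j])
    return t

# Square-shaped model; events generated by a true shift of (3, -2),
# streamed repeatedly as an asynchronous camera would emit them.
model = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
true_shift = np.array([3.0, -2.0])
events = np.tile(model + true_shift, (50, 1))   # stream of 200 events
```

    Because every event triggers a small update, the estimate converges geometrically toward the true shift without ever forming a frame.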

  8. The case from animal studies for balanced binocular treatment strategies for human amblyopia.

    PubMed

    Mitchell, Donald E; Duffy, Kevin R

    2014-03-01

    Although amblyopia typically manifests itself as a monocular condition, its origin has long been linked to unbalanced neural signals from the two eyes during early postnatal development, a view confirmed by studies conducted on animal models in the last 50 years. Despite recognition of its binocular origin, treatment of amblyopia continues to be dominated by a period of patching of the non-amblyopic eye that necessarily hinders binocular co-operation. This review summarizes evidence from three lines of investigation conducted on an animal model of deprivation amblyopia to support the thesis that treatment of amblyopia should instead focus upon procedures that promote and enhance binocular co-operation. First, experiments with mixed daily visual experience in which episodes of abnormal visual input were pitted against normal binocular exposure revealed that short exposures of the latter offset much longer periods of abnormal input to allow normal development of visual acuity in both eyes. Second, experiments on the use of part-time patching revealed that purposeful introduction of episodes of binocular vision each day could be very beneficial. Periods of binocular exposure that represented 30-50% of the daily visual exposure, combined with daily occlusion of the non-amblyopic eye, could allow recovery of normal vision in the amblyopic eye. Third, very recent experiments demonstrate that a short 10-day period of total darkness can promote very fast and complete recovery of visual acuity in the amblyopic eye of kittens and may represent an example of a class of artificial environments that have similar beneficial effects. Finally, an approach is described to allow timing of events in kitten and human visual system development to be scaled to optimize the ages for therapeutic interventions. © 2014 The Authors Ophthalmic & Physiological Optics © 2014 The College of Optometrists.

  9. 15 CFR 784.3 - Scope and conduct of complementary access.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... presence of a U.S. Government Host Team. No information of direct national security significance shall be... location accessed, the IAEA Team may: (i) Perform visual observation of parts or areas of the location; (ii... unless the Host Team leader, after receiving input from representatives of the location and consulting...

  10. 15 CFR 784.3 - Scope and conduct of complementary access.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... presence of a U.S. Government Host Team. No information of direct national security significance shall be... location accessed, the IAEA Team may: (i) Perform visual observation of parts or areas of the location; (ii... unless the Host Team leader, after receiving input from representatives of the location and consulting...

  11. 15 CFR 784.3 - Scope and conduct of complementary access.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... presence of a U.S. Government Host Team. No information of direct national security significance shall be... location accessed, the IAEA Team may: (i) Perform visual observation of parts or areas of the location; (ii... unless the Host Team leader, after receiving input from representatives of the location and consulting...

  12. A Picture Is Worth a Thousand Words: Applying Image-Based Learning to Course Design

    ERIC Educational Resources Information Center

    Whitley, Cameron T.

    2013-01-01

    Although images are often used in the classroom to communicate difficult concepts, students have little input into their selection and application. This approach can create a passive experience for students and represents a missed opportunity for instructors to engage participation. By applying concepts found in visual sociology to techniques…

  13. Visual Working Memory Enhances the Neural Response to Matching Visual Input.

    PubMed

    Gayet, Surya; Guggenmos, Matthias; Christophel, Thomas B; Haynes, John-Dylan; Paffen, Chris L E; Van der Stigchel, Stefan; Sterzer, Philipp

    2017-07-12

    Visual working memory (VWM) is used to maintain visual information available for subsequent goal-directed behavior. The content of VWM has been shown to affect the behavioral response to concurrent visual input, suggesting that visual representations originating from VWM and from sensory input draw upon a shared neural substrate (i.e., a sensory recruitment stance on VWM storage). Here, we hypothesized that visual information maintained in VWM would enhance the neural response to concurrent visual input that matches the content of VWM. To test this hypothesis, we measured fMRI BOLD responses to task-irrelevant stimuli acquired from 15 human participants (three males) performing a concurrent delayed match-to-sample task. In this task, observers were sequentially presented with two shape stimuli and a retro-cue indicating which of the two shapes should be memorized for subsequent recognition. During the retention interval, a task-irrelevant shape (the probe) was briefly presented in the peripheral visual field, which could either match or mismatch the shape category of the memorized stimulus. We show that this probe stimulus elicited a stronger BOLD response, and allowed for increased shape-classification performance, when it matched rather than mismatched the concurrently memorized content, despite identical visual stimulation. Our results demonstrate that VWM enhances the neural response to concurrent visual input in a content-specific way. This finding is consistent with the view that neural populations involved in sensory processing are recruited for VWM storage, and it provides a common explanation for a plethora of behavioral studies in which VWM-matching visual input elicits a stronger behavioral and perceptual response. SIGNIFICANCE STATEMENT Humans heavily rely on visual information to interact with their environment and frequently must memorize such information for later use. 
Visual working memory allows for maintaining such visual information in the mind's eye after termination of its retinal input. It is hypothesized that information maintained in visual working memory relies on the same neural populations that process visual input. Accordingly, the content of visual working memory is known to affect our conscious perception of concurrent visual input. Here, we demonstrate for the first time that visual input elicits an enhanced neural response when it matches the content of visual working memory, both in terms of signal strength and information content. Copyright © 2017 the authors 0270-6474/17/376638-10$15.00/0.

  14. Self-Organizing Hidden Markov Model Map (SOHMMM): Biological Sequence Clustering and Cluster Visualization.

    PubMed

    Ferles, Christos; Beaufort, William-Scott; Ferle, Vanessa

    2017-01-01

    The present study devises mapping methodologies and projection techniques that visualize and demonstrate biological sequence data clustering results. The Sequence Data Density Display (SDDD) and Sequence Likelihood Projection (SLP) visualizations represent the input symbolic sequences in a lower-dimensional space in such a way that the clusters and relations of data elements are depicted graphically. Both operate in combination/synergy with the Self-Organizing Hidden Markov Model Map (SOHMMM). The resulting unified framework can analyze raw sequence data automatically and directly, with little or even no prior information or domain knowledge.

  15. The emergence of polychronization and feature binding in a spiking neural network model of the primate ventral visual system.

    PubMed

    Eguchi, Akihiro; Isbister, James B; Ahmad, Nasir; Stringer, Simon

    2018-07-01

    We present a hierarchical neural network model, in which subpopulations of neurons develop fixed and regularly repeating temporal chains of spikes (polychronization), which respond specifically to randomized Poisson spike trains representing the input training images. The performance is improved by including top-down and lateral synaptic connections, as well as introducing multiple synaptic contacts between each pair of pre- and postsynaptic neurons, with different synaptic contacts having different axonal delays. Spike-timing-dependent plasticity thus allows the model to select the most effective axonal transmission delay between neurons. Furthermore, neurons representing the binding relationship between low-level and high-level visual features emerge through visually guided learning. This begins to provide a way forward to solving the classic feature binding problem in visual neuroscience and leads to a new hypothesis concerning how information about visual features at every spatial scale may be projected upward through successive neuronal layers. We name this hypothetical upward projection of information the "holographic principle." (PsycINFO Database Record (c) 2018 APA, all rights reserved).
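    The delay-selection mechanism described above can be illustrated with a toy model: one pre/post pair connected by several synaptic contacts, each with its own axonal delay, trained by repeated pairing under spike-timing-dependent plasticity. The STDP window parameters and the soft-bound weight rule are generic textbook choices, not the paper's; the sketch only shows how STDP favors the contact whose delayed arrival best precedes the postsynaptic spike.

```python
import numpy as np

def stdp_dw(dt, a_plus=0.1, a_minus=0.12, tau=20.0):
    # Additive STDP window; dt = t_post - t_arrival (ms). Arrival just
    # before the postsynaptic spike gives the largest potentiation.
    return a_plus * np.exp(-dt / tau) if dt > 0 else -a_minus * np.exp(dt / tau)

def select_delay(pre_t, post_t, delays, epochs=100):
    # Multiple synaptic contacts between one pre/post pair, each with
    # its own axonal delay. Repeated pairing with soft-bounded STDP
    # strengthens the contact whose arrival time best precedes the
    # postsynaptic spike and prunes contacts arriving after it.
    w = np.full(len(delays), 0.5)
    for _ in range(epochs):
        for i, d in enumerate(delays):
            dw = stdp_dw(post_t - (pre_t + d))
            w[i] += dw * (1.0 - w[i]) if dw > 0 else dw * w[i]
    return w
```

    For a presynaptic spike at 0 ms and a postsynaptic spike at 10 ms, a 9 ms delay (arrival 1 ms before the spike) wins the competition, while a 15 ms delay (arrival after the spike) is depressed toward zero.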

  16. Neural organization and visual processing in the anterior optic tubercle of the honeybee brain.

    PubMed

    Mota, Theo; Yamagata, Nobuhiro; Giurfa, Martin; Gronenberg, Wulfila; Sandoz, Jean-Christophe

    2011-08-10

    The honeybee Apis mellifera represents a valuable model for studying the neural segregation and integration of visual information. Vision in honeybees has been extensively studied at the behavioral level and, to a lesser degree, at the physiological level using intracellular electrophysiological recordings of single neurons. However, our knowledge of visual processing in honeybees is still limited by the lack of functional studies of visual processing at the circuit level. Here we contribute to filling this gap by providing a neuroanatomical and neurophysiological characterization at the circuit level of a practically unstudied visual area of the bee brain, the anterior optic tubercle (AOTu). First, we analyzed the internal organization and neuronal connections of the AOTu. Second, we established a novel protocol for performing optophysiological recordings of visual circuit activity in the honeybee brain and studied the responses of AOTu interneurons during stimulation of distinct eye regions. Our neuroanatomical data show an intricate compartmentalization and connectivity of the AOTu, revealing a dorsoventral segregation of the visual input to the AOTu. Light stimuli presented in different parts of the visual field (dorsal, lateral, or ventral) induce distinct patterns of activation in AOTu output interneurons, retaining to some extent the dorsoventral input segregation revealed by our neuroanatomical data. In particular, activity patterns evoked by dorsal and ventral eye stimulation are clearly segregated into distinct AOTu subunits. Our results therefore suggest an involvement of the AOTu in the processing of dorsoventrally segregated visual information in the honeybee brain.

  17. Aging and Visual Attention

    PubMed Central

    Madden, David J.

    2007-01-01

    Older adults are often slower and less accurate than are younger adults in performing visual-search tasks, suggesting an age-related decline in attentional functioning. Age-related decline in attention, however, is not entirely pervasive. Visual search that is based on the observer’s expectations (i.e., top-down attention) is relatively preserved as a function of adult age. Neuroimaging research suggests that age-related decline occurs in the structure and function of brain regions mediating the visual sensory input, whereas activation of regions in the frontal and parietal lobes is often greater for older adults than for younger adults. This increased activation may represent an age-related increase in the role of top-down attention during visual tasks. To obtain a more complete account of age-related decline and preservation of visual attention, current research is beginning to explore the relation of neuroimaging measures of brain structure and function to behavioral measures of visual attention. PMID:18080001

  18. Flexible Coding of Visual Working Memory Representations during Distraction.

    PubMed

    Lorenc, Elizabeth S; Sreenivasan, Kartik K; Nee, Derek E; Vandenbroucke, Annelinde R E; D'Esposito, Mark

    2018-06-06

    Visual working memory (VWM) recruits a broad network of brain regions, including prefrontal, parietal, and visual cortices. Recent evidence supports a "sensory recruitment" model of VWM, whereby precise visual details are maintained in the same stimulus-selective regions responsible for perception. A key question in evaluating the sensory recruitment model is how VWM representations persist through distracting visual input, given that the early visual areas that putatively represent VWM content are susceptible to interference from visual stimulation.To address this question, we used a functional magnetic resonance imaging inverted encoding model approach to quantitatively assess the effect of distractors on VWM representations in early visual cortex and the intraparietal sulcus (IPS), another region previously implicated in the storage of VWM information. This approach allowed us to reconstruct VWM representations for orientation, both before and after visual interference, and to examine whether oriented distractors systematically biased these representations. In our human participants (both male and female), we found that orientation information was maintained simultaneously in early visual areas and IPS in anticipation of possible distraction, and these representations persisted in the absence of distraction. Importantly, early visual representations were susceptible to interference; VWM orientations reconstructed from visual cortex were significantly biased toward distractors, corresponding to a small attractive bias in behavior. In contrast, IPS representations did not show such a bias. These results provide quantitative insight into the effect of interference on VWM representations, and they suggest a dynamic tradeoff between visual and parietal regions that allows flexible adaptation to task demands in service of VWM. 
SIGNIFICANCE STATEMENT Despite considerable evidence that stimulus-selective visual regions maintain precise visual information in working memory, it remains unclear how these representations persist through subsequent input. Here, we used quantitative model-based fMRI analyses to reconstruct the contents of working memory and examine the effects of distracting input. Although representations in the early visual areas were systematically biased by distractors, those in the intraparietal sulcus appeared distractor-resistant. In contrast, early visual representations were most reliable in the absence of distraction. These results demonstrate the dynamic, adaptive nature of visual working memory processes, and provide quantitative insight into the ways in which representations can be affected by interference. Further, they suggest that current models of working memory should be revised to incorporate this flexibility. Copyright © 2018 the authors 0270-6474/18/385267-10$15.00/0.
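    The inverted-encoding-model workflow this study relies on can be sketched with synthetic data. The channel basis, voxel count, and noise level below are all hypothetical, and the code follows the general IEM recipe (fit a linear channels-to-voxels mapping on training trials, then invert it on test data) rather than the authors' exact pipeline.

```python
import numpy as np

rng = np.random.default_rng(2)

def channel_resp(theta_deg, centers_deg, p=6):
    # Orientation (0-180 deg) doubled onto the full circle, then a
    # rectified raised-cosine tuning curve per channel.
    d = np.deg2rad(2.0 * (np.asarray(theta_deg, float)[:, None]
                          - np.asarray(centers_deg, float)[None, :]))
    return np.maximum(np.cos(d), 0.0) ** p

centers = np.arange(0.0, 180.0, 30.0)              # 6 hypothetical channels
train_th = rng.uniform(0.0, 180.0, 200)
C_train = channel_resp(train_th, centers)          # trials x channels
W_true = rng.standard_normal((50, 6))              # hypothetical voxel weights
B_train = C_train @ W_true.T + 0.1 * rng.standard_normal((200, 50))

# Training: estimate voxel-by-channel weights from known stimuli.
W_hat = np.linalg.lstsq(C_train, B_train, rcond=None)[0].T    # voxels x channels

# Inversion: reconstruct the channel profile for a new 45-degree trial.
B_test = channel_resp(np.array([45.0]), centers) @ W_true.T
C_hat = np.linalg.lstsq(W_hat, B_test.T, rcond=None)[0].ravel()

# Read out the represented orientation with a population vector.
z = np.sum(C_hat * np.exp(1j * np.deg2rad(2.0 * centers)))
decoded = (np.degrees(np.angle(z)) / 2.0) % 180.0
```

    A distractor bias of the kind reported for early visual cortex would show up here as a systematic shift of `decoded` away from the memorized orientation toward the distractor's.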

  19. Sensory and Postural Input in the Occurrence of a Gender Difference in Orienting Liquid Surfaces

    ERIC Educational Resources Information Center

    Robert, Michele; Longpre, Sophie

    2005-01-01

    In the water-level task, both spatial skill and physical knowledge contribute to representing the surface of a liquid as horizontal irrespective of the container's tilt. Under the standard visual format of the task, men systematically surpass women at drawing correct water lines in outlines of tilted containers. The present exploratory experiments…

  20. Bottom-up and Top-down Input Augment the Variability of Cortical Neurons

    PubMed Central

    Nassi, Jonathan J.; Kreiman, Gabriel; Born, Richard T.

    2016-01-01

    SUMMARY Neurons in the cerebral cortex respond inconsistently to a repeated sensory stimulus, yet they underlie our stable sensory experiences. Although the nature of this variability is unknown, its ubiquity has encouraged the general view that each cell produces random spike patterns that noisily represent its response rate. In contrast, here we show that reversibly inactivating distant sources of either bottom-up or top-down input to cortical visual areas in the alert primate reduces both the spike train irregularity and the trial-to-trial variability of single neurons. A simple model in which a fraction of the pre-synaptic input is silenced can reproduce this reduction in variability, provided that there exist temporal correlations primarily within, but not between, excitatory and inhibitory input pools. A large component of the variability of cortical neurons may therefore arise from synchronous input produced by signals arriving from multiple sources. PMID:27427459

  1. Primary Visual Cortex as a Saliency Map: A Parameter-Free Prediction and Its Test by Behavioral Data

    PubMed Central

    Zhaoping, Li; Zhe, Li

    2015-01-01

    It has been hypothesized that neural activities in the primary visual cortex (V1) represent a saliency map of the visual field to exogenously guide attention. This hypothesis has so far provided only qualitative predictions and their confirmations. We report this hypothesis’ first quantitative prediction, derived without free parameters, and its confirmation by human behavioral data. The hypothesis provides a direct link between V1 neural responses to a visual location and the saliency of that location to guide attention exogenously. In a visual input containing many bars, one of them saliently different from all the other bars which are identical to each other, saliency at the singleton’s location can be measured by the shortness of the reaction time in a visual search for singletons. The hypothesis predicts quantitatively the whole distribution of the reaction times to find a singleton unique in color, orientation, and motion direction from the reaction times to find other types of singletons. The prediction matches human reaction time data. A requirement for this successful prediction is a data-motivated assumption that V1 lacks neurons tuned simultaneously to color, orientation, and motion direction of visual inputs. Since evidence suggests that extrastriate cortices do have such neurons, we discuss the possibility that the extrastriate cortices play no role in guiding exogenous attention so that they can be devoted to other functions like visual decoding and endogenous attention. PMID:26441341
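    The logic of the parameter-free prediction can be sketched as a race model: if no V1 neuron is tuned jointly to color, orientation, and motion, the saliency signal for a triple-feature singleton is whichever component signal arrives first, so predicted reaction times are minima over independently resampled component RTs. The RT distributions below are synthetic and purely illustrative; the paper derives the full predicted distribution from measured data.

```python
import numpy as np

rng = np.random.default_rng(3)

def race_model_rts(rt_c, rt_o, rt_m, n=10000):
    # Race-model prediction for the triple-feature singleton: sample
    # one RT from each single-feature condition independently and take
    # the minimum, repeated n times to build the predicted distribution.
    c = rng.choice(rt_c, n)
    o = rng.choice(rt_o, n)
    m = rng.choice(rt_m, n)
    return np.minimum(np.minimum(c, o), m)

# Synthetic reaction-time samples (ms) for the three single-feature
# singleton conditions; the distributions are made up for illustration.
rt_c = 300.0 + 150.0 * rng.lognormal(0.0, 0.4, 400)
rt_o = 320.0 + 150.0 * rng.lognormal(0.0, 0.4, 400)
rt_m = 340.0 + 150.0 * rng.lognormal(0.0, 0.4, 400)
predicted = race_model_rts(rt_c, rt_o, rt_m)
```

    The predicted mean RT is necessarily faster than any single-feature condition, which is the qualitative signature the behavioral data are tested against.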

  2. UPIOM: a new tool of MFA and its application to the flow of iron and steel associated with car production.

    PubMed

    Nakamura, Shinichiro; Kondo, Yasushi; Matsubae, Kazuyo; Nakajima, Kenichi; Nagasaka, Tetsuya

    2011-02-01

    Identification of the flow of materials and substances associated with a product system provides useful information for Life Cycle Analysis (LCA), and contributes to extending the scope of complementarity between LCA and Materials Flow Analysis/Substances Flow Analysis (MFA/SFA), the two major tools of industrial ecology. This paper proposes a new methodology based on input-output analysis for identifying the physical input-output flow of individual materials that is associated with the production of a unit of a given product, the unit physical input-output by materials (UPIOM). While the Sankey diagram has been a standard tool for the visualization of MFA/SFA, as the flows under consideration grow more complex, which will be the case when economy-wide intersectoral flows of materials are involved, the Sankey diagram may become too complex for effective visualization. An alternative way to visually represent material flows is proposed that makes use of triangulation of the flow matrix based on degrees of fabrication. The proposed methodology is applied to the flow of pig iron and iron and steel scrap associated with the production of a passenger car in Japan. Its usefulness in identifying a specific MFA pattern from the original IO table is demonstrated.
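    The Leontief quantity model underlying this kind of unit accounting can be sketched on a toy three-sector chain (ore -> pig iron -> car). All coefficients below are invented; the actual UPIOM works with material-specific physical tables, but the core computation is the same: total outputs induced by one unit of final demand, and the intersectoral flows consistent with them.

```python
import numpy as np

def unit_flows(A, final_demand):
    # Leontief quantity model: total sectoral outputs x = (I - A)^-1 f
    # induced by a unit final demand f, and the intersectoral flow
    # matrix Z[i, j] = A[i, j] * x[j] (input from sector i absorbed by
    # sector j) consistent with that demand.
    I = np.eye(A.shape[0])
    x = np.linalg.solve(I - A, final_demand)
    Z = A * x[None, :]
    return x, Z

# Toy technical-coefficient matrix for an ore -> pig iron -> car chain.
# A is triangular when sectors are ordered by degree of fabrication,
# which is the ordering the proposed visualization exploits.
A = np.array([[0.0, 0.6, 0.0],   # ore used per unit of pig iron
              [0.0, 0.0, 0.8],   # pig iron used per unit of car
              [0.0, 0.0, 0.0]])
f = np.array([0.0, 0.0, 1.0])    # final demand: one car
x, Z = unit_flows(A, f)
```

    Producing one car induces 0.8 units of pig iron and 0.48 units of ore, and Z records the per-car physical flows between sectors, the quantities a UPIOM-style diagram visualizes.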

  3. Contextual modulation of primary visual cortex by auditory signals.

    PubMed

    Petro, L S; Paton, A T; Muckli, L

    2017-02-19

    Early visual cortex receives non-feedforward input from lateral and top-down connections (Muckli & Petro 2013 Curr. Opin. Neurobiol. 23, 195-201. (doi:10.1016/j.conb.2013.01.020)), including long-range projections from auditory areas. Early visual cortex can code for high-level auditory information, with neural patterns representing natural sound stimulation (Vetter et al. 2014 Curr. Biol. 24, 1256-1262. (doi:10.1016/j.cub.2014.04.020)). We discuss a number of questions arising from these findings. What is the adaptive function of bimodal representations in visual cortex? What type of information projects from auditory to visual cortex? What are the anatomical constraints of auditory information in V1, for example, periphery versus fovea, superficial versus deep cortical layers? Is there a putative neural mechanism we can infer from human neuroimaging data and recent theoretical accounts of cortex? We also present data showing we can read out high-level auditory information from the activation patterns of early visual cortex even when visual cortex receives simple visual stimulation, suggesting independent channels for visual and auditory signals in V1. We speculate which cellular mechanisms allow V1 to be contextually modulated by auditory input to facilitate perception, cognition and behaviour. Beyond cortical feedback that facilitates perception, we argue that there is also feedback serving counterfactual processing during imagery, dreaming and mind wandering, which is not relevant for immediate perception but for behaviour and cognition over a longer time frame. This article is part of the themed issue 'Auditory and visual scene analysis'. © 2017 The Authors.

  4. Contextual modulation of primary visual cortex by auditory signals

    PubMed Central

    Paton, A. T.

    2017-01-01

    Early visual cortex receives non-feedforward input from lateral and top-down connections (Muckli & Petro 2013 Curr. Opin. Neurobiol. 23, 195–201. (doi:10.1016/j.conb.2013.01.020)), including long-range projections from auditory areas. Early visual cortex can code for high-level auditory information, with neural patterns representing natural sound stimulation (Vetter et al. 2014 Curr. Biol. 24, 1256–1262. (doi:10.1016/j.cub.2014.04.020)). We discuss a number of questions arising from these findings. What is the adaptive function of bimodal representations in visual cortex? What type of information projects from auditory to visual cortex? What are the anatomical constraints of auditory information in V1, for example, periphery versus fovea, superficial versus deep cortical layers? Is there a putative neural mechanism we can infer from human neuroimaging data and recent theoretical accounts of cortex? We also present data showing we can read out high-level auditory information from the activation patterns of early visual cortex even when visual cortex receives simple visual stimulation, suggesting independent channels for visual and auditory signals in V1. We speculate which cellular mechanisms allow V1 to be contextually modulated by auditory input to facilitate perception, cognition and behaviour. Beyond cortical feedback that facilitates perception, we argue that there is also feedback serving counterfactual processing during imagery, dreaming and mind wandering, which is not relevant for immediate perception but for behaviour and cognition over a longer time frame. This article is part of the themed issue ‘Auditory and visual scene analysis’. PMID:28044015

  5. A hexagonal orthogonal-oriented pyramid as a model of image representation in visual cortex

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Ahumada, Albert J., Jr.

    1989-01-01

    Retinal ganglion cells represent the visual image with a spatial code, in which each cell conveys information about a small region in the image. In contrast, cells of the primary visual cortex use a hybrid space-frequency code in which each cell conveys information about a region that is local in space, spatial frequency, and orientation. A mathematical model for this transformation is described. The hexagonal orthogonal-oriented quadrature pyramid (HOP) transform, which operates on a hexagonal input lattice, uses basis functions that are orthogonal, self-similar, and localized in space, spatial frequency, orientation, and phase. The basis functions, which are generated from seven basic types through a recursive process, form an image code of the pyramid type. The seven basis functions, six bandpass and one low-pass, occupy a point and a hexagon of six nearest neighbors on a hexagonal lattice. The six bandpass basis functions consist of three with even symmetry, and three with odd symmetry. At the lowest level, the inputs are image samples. At each higher level, the input lattice is provided by the low-pass coefficients computed at the previous level. At each level, the output is subsampled in such a way as to yield a new hexagonal lattice with a spacing square root of 7 larger than the previous level, so that the number of coefficients is reduced by a factor of seven at each level. In the biological model, the input lattice is the retinal ganglion cell array. The resulting scheme provides a compact, efficient code of the image and generates receptive fields that resemble those of the primary visual cortex.
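    The factor-of-seven pyramid bookkeeping described above can be sketched directly. This only tracks coefficient counts across levels (it does not implement the hexagonal basis functions themselves), and the function name is invented.

```python
def hop_coefficient_counts(n_samples, n_levels):
    # Bookkeeping for a HOP-style pyramid: each level keeps 1/7 of its
    # input lattice (sqrt(7) wider spacing), emits 6 oriented bandpass
    # coefficients per kept node, and passes 1 low-pass coefficient per
    # kept node up to the next level. Returns the per-level bandpass
    # counts plus the residual low-pass count at the top.
    counts = []
    n = n_samples
    for _ in range(n_levels):
        n //= 7               # subsampling: 1/7 as many lattice nodes
        counts.append(6 * n)  # six bandpass outputs per kept node
    counts.append(n)          # residual low-pass at the top level
    return counts
```

    For an input of 7^4 = 2401 samples and four levels, the counts are [2058, 294, 42, 6] bandpass coefficients plus 1 low-pass, summing to exactly 2401: the orthogonal pyramid is critically sampled, representing the image with as many coefficients as input samples.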

  6. Supranormal orientation selectivity of visual neurons in orientation-restricted animals.

    PubMed

    Sasaki, Kota S; Kimura, Rui; Ninomiya, Taihei; Tabuchi, Yuka; Tanaka, Hiroki; Fukui, Masayuki; Asada, Yusuke C; Arai, Toshiya; Inagaki, Mikio; Nakazono, Takayuki; Baba, Mika; Kato, Daisuke; Nishimoto, Shinji; Sanada, Takahisa M; Tani, Toshiki; Imamura, Kazuyuki; Tanaka, Shigeru; Ohzawa, Izumi

    2015-11-16

    Altered sensory experience in early life often leads to remarkable adaptations so that humans and animals can make the best use of the available information in a particular environment. By restricting visual input to a limited range of orientations in young animals, this investigation shows that stimulus selectivity, e.g., the sharpness of tuning of single neurons in the primary visual cortex, is modified to match a particular environment. Specifically, neurons tuned to an experienced orientation in orientation-restricted animals show sharper orientation tuning than neurons in normal animals, whereas the opposite was true for neurons tuned to non-experienced orientations. This sharpened tuning appears to be due to elongated receptive fields. Our results demonstrate that restricted sensory experiences can sculpt the supranormal functions of single neurons tailored for a particular environment. The above findings, in addition to the minimal population response to orientations close to the experienced one, agree with the predictions of a sparse coding hypothesis in which information is represented efficiently by a small number of activated neurons. This suggests that early brain areas adopt an efficient strategy for coding information even when animals are raised in a severely limited visual environment where sensory inputs have an unnatural statistical structure.

  7. Supranormal orientation selectivity of visual neurons in orientation-restricted animals

    PubMed Central

    Sasaki, Kota S.; Kimura, Rui; Ninomiya, Taihei; Tabuchi, Yuka; Tanaka, Hiroki; Fukui, Masayuki; Asada, Yusuke C.; Arai, Toshiya; Inagaki, Mikio; Nakazono, Takayuki; Baba, Mika; Kato, Daisuke; Nishimoto, Shinji; Sanada, Takahisa M.; Tani, Toshiki; Imamura, Kazuyuki; Tanaka, Shigeru; Ohzawa, Izumi

    2015-01-01

    Altered sensory experience in early life often leads to remarkable adaptations so that humans and animals can make the best use of the available information in a particular environment. By restricting visual input to a limited range of orientations in young animals, this investigation shows that stimulus selectivity, e.g., the sharpness of tuning of single neurons in the primary visual cortex, is modified to match a particular environment. Specifically, neurons tuned to an experienced orientation in orientation-restricted animals show sharper orientation tuning than neurons in normal animals, whereas the opposite was true for neurons tuned to non-experienced orientations. This sharpened tuning appears to be due to elongated receptive fields. Our results demonstrate that restricted sensory experiences can sculpt the supranormal functions of single neurons tailored for a particular environment. The above findings, in addition to the minimal population response to orientations close to the experienced one, agree with the predictions of a sparse coding hypothesis in which information is represented efficiently by a small number of activated neurons. This suggests that early brain areas adopt an efficient strategy for coding information even when animals are raised in a severely limited visual environment where sensory inputs have an unnatural statistical structure. PMID:26567927

  8. The differential effects of acute right- vs. left-sided vestibular failure on brain metabolism.

    PubMed

    Becker-Bense, Sandra; Dieterich, Marianne; Buchholz, Hans-Georg; Bartenstein, Peter; Schreckenberger, Mathias; Brandt, Thomas

    2014-07-01

    The human vestibular system is represented in the brain bilaterally, but it has functional asymmetries, i.e., a dominance of ipsilateral pathways and of the right hemisphere in right-handers. To determine if acute right- or left-sided unilateral vestibular neuritis (VN) is associated with differential patterns of brain metabolism in areas representing the vestibular network and the visual-vestibular interaction, patients with acute VN (right n = 9; left n = 13) underwent resting state (18)F-FDG PET once in the acute phase and once 3 months later after central vestibular compensation. The contrast acute vs. chronic phase showed signal differences in contralateral vestibular areas and the inverse contrast in visual cortex areas, both more pronounced in VN right. In VN left additional regions were found in the cerebellar hemispheres and vermis bilaterally, accentuated in severe cases. In general, signal changes appeared more pronounced in patients with more severe vestibular deficits. Acute phase PET data of patients compared to that of age-matched healthy controls disclosed similarities to these patterns, thus permitting the interpretation that the signal changes in vestibular temporo-parietal areas reflect signal increases, and in visual areas, signal decreases. These data imply that brain activity in the acute phase of right- and left-sided VN exhibits different compensatory patterns, i.e., the dominant ascending input is shifted from the ipsilateral to the contralateral pathways, presumably due to the missing ipsilateral vestibular input. The visual-vestibular interaction patterns were preserved, but were of different prominence in each hemisphere and more pronounced in patients with right-sided failure and more severe vestibular deficits.

  9. The Comparison of Visual Working Memory Representations with Perceptual Inputs

    ERIC Educational Resources Information Center

    Hyun, Joo-seok; Woodman, Geoffrey F.; Vogel, Edward K.; Hollingworth, Andrew; Luck, Steven J.

    2009-01-01

    The human visual system can notice differences between memories of previous visual inputs and perceptions of new visual inputs, but the comparison process that detects these differences has not been well characterized. In this study, the authors tested the hypothesis that differences between the memory of a stimulus array and the perception of a…

  10. Resting-State Retinotopic Organization in the Absence of Retinal Input and Visual Experience

    PubMed Central

    Binda, Paola; Benson, Noah C.; Bridge, Holly; Watkins, Kate E.

    2015-01-01

    Early visual areas have neuronal receptive fields that form a sampling mosaic of visual space, resulting in a series of retinotopic maps in which the same region of space is represented in multiple visual areas. It is not clear to what extent the development and maintenance of this retinotopic organization in humans depend on retinal waves and/or visual experience. We examined the corticocortical receptive field organization of resting-state BOLD data in normally sighted, early blind, and anophthalmic (in which both eyes fail to develop) individuals and found that resting-state correlations between V1 and V2/V3 were retinotopically organized for all subject groups. These results show that the gross retinotopic pattern of resting-state connectivity across V1-V3 requires neither retinal waves nor visual experience to develop and persist into adulthood. SIGNIFICANCE STATEMENT Evidence from resting-state BOLD data suggests that the connections between early visual areas develop and are maintained even in the absence of retinal waves and visual experience. PMID:26354906

  11. Strategies for Effectively Visualizing a 3D Flow Using Volume Line Integral Convolution

    NASA Technical Reports Server (NTRS)

    Interrante, Victoria; Grosch, Chester

    1997-01-01

    This paper discusses strategies for effectively portraying 3D flow using volume line integral convolution. Issues include defining an appropriate input texture, clarifying the distinct identities and relative depths of the advected texture elements, and selectively highlighting regions of interest in both the input and output volumes. Apart from offering insights into the greater potential of 3D LIC as a method for effectively representing flow in a volume, a principal contribution of this work is the suggestion of a technique for generating and rendering 3D visibility-impeding 'halos' that can help to intuitively indicate the presence of depth discontinuities between contiguous elements in a projection and thereby clarify the 3D spatial organization of elements in the flow. The proposed techniques are applied to the visualization of a hot, supersonic, laminar jet exiting into a colder, subsonic coflow.
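
    As a minimal illustration of the LIC principle behind the techniques above (each output pixel averages an input noise texture along the local streamline), the Python sketch below handles only the degenerate case of a uniform horizontal flow, where streamlines are simply rows; the paper's 3D volume LIC with visibility-impeding halos is far more involved.

```python
import numpy as np

def lic_uniform_horizontal(noise, half_len=3):
    """Line integral convolution for a uniform horizontal vector field:
    each output pixel is the mean of the noise texture over a short
    streamline segment, which here is just a row-aligned window."""
    h, w = noise.shape
    out = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            lo, hi = max(0, x - half_len), min(w, x + half_len + 1)
            out[y, x] = noise[y, lo:hi].mean()
    return out

rng = np.random.default_rng(1)
tex = rng.random((8, 64))                 # white-noise input texture
streaked = lic_uniform_horizontal(tex)    # streaks emerge along the flow
```

Variation along the flow direction is smoothed away while variation across it survives; that anisotropy is what makes the flow direction visible in the output.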

  12. Multiple Concurrent Visual-Motor Mappings: Implications for Models of Adaptation

    NASA Technical Reports Server (NTRS)

    Cunningham, H. A.; Welch, Robert B.

    1994-01-01

    Previous research on adaptation to visual-motor rearrangement suggests that the central nervous system represents accurately only 1 visual-motor mapping at a time. This idea was examined in 3 experiments where subjects tracked a moving target under repeated alternations between 2 initially interfering mappings (the 'normal' mapping characteristic of computer input devices and a 108° rotation of the normal mapping). Alternation between the 2 mappings led to significant reduction in error under the rotated mapping and significant reduction in the adaptation aftereffect ordinarily caused by switching between mappings. Color as a discriminative cue, interference versus decay in adaptation aftereffect, and intermanual transfer were also examined. The results reveal a capacity for multiple concurrent visual-motor mappings, possibly controlled by a parametric process near the motor output stage of processing.
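
    A rotated mapping of this kind is easy to sketch: the cursor displacement is the hand displacement passed through a 2D rotation matrix. The snippet below is a generic illustration (the function name and angle default are ours, not the study's software):

```python
import math

def rotate_mapping(dx, dy, degrees=108.0):
    """Map an input-device displacement (dx, dy) to a cursor displacement
    rotated by `degrees`, as in a rotated visual-motor mapping."""
    theta = math.radians(degrees)
    return (dx * math.cos(theta) - dy * math.sin(theta),
            dx * math.sin(theta) + dy * math.cos(theta))

# Under the rotated mapping, a purely rightward hand movement drives the
# cursor up and to the left (cos 108° < 0, sin 108° > 0).
rx, ry = rotate_mapping(1.0, 0.0)
```

Applying the inverse rotation (`degrees=-108.0`) recovers the original displacement, which is the sense in which the two mappings interfere when alternated.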

  13. Visual and proprioceptive interaction in patients with bilateral vestibular loss☆

    PubMed Central

    Cutfield, Nicholas J.; Scott, Gregory; Waldman, Adam D.; Sharp, David J.; Bronstein, Adolfo M.

    2014-01-01

    Following bilateral vestibular loss (BVL) patients gradually adapt to the loss of vestibular input and rely more on other sensory inputs. Here we examine changes in the way proprioceptive and visual inputs interact. We used functional magnetic resonance imaging (fMRI) to investigate visual responses in the context of varying levels of proprioceptive input in 12 BVL subjects and 15 normal controls. A novel metal-free vibrator was developed to allow vibrotactile neck proprioceptive input to be delivered in the MRI system. A high level (100 Hz) and low level (30 Hz) control stimulus was applied over the left splenius capitis; only the high frequency stimulus generates a significant proprioceptive stimulus. The neck stimulus was applied in combination with static and moving (optokinetic) visual stimuli, in a factorial fMRI experimental design. We found that high level neck proprioceptive input had more cortical effect on brain activity in the BVL patients. This included a reduction in visual motion responses during high levels of proprioceptive input and differential activation in the midline cerebellum. In early visual cortical areas, the effect of high proprioceptive input was present for both visual conditions but in lateral visual areas, including V5/MT, the effect was only seen in the context of visual motion stimulation. The finding of a cortical visuo-proprioceptive interaction in BVL patients is consistent with behavioural data indicating that, in BVL patients, neck afferents partly replace vestibular input during the CNS-mediated compensatory process. An fMRI cervico-visual interaction may thus substitute the known visuo-vestibular interaction reported in normal subject fMRI studies. The results provide evidence for a cortical mechanism of adaptation to vestibular failure, in the form of an enhanced proprioceptive influence on visual processing. 
The results may provide the basis for a cortical mechanism involved in proprioceptive substitution of vestibular function in BVL patients. PMID:25061564

  14. Visual and proprioceptive interaction in patients with bilateral vestibular loss.

    PubMed

    Cutfield, Nicholas J; Scott, Gregory; Waldman, Adam D; Sharp, David J; Bronstein, Adolfo M

    2014-01-01

    Following bilateral vestibular loss (BVL) patients gradually adapt to the loss of vestibular input and rely more on other sensory inputs. Here we examine changes in the way proprioceptive and visual inputs interact. We used functional magnetic resonance imaging (fMRI) to investigate visual responses in the context of varying levels of proprioceptive input in 12 BVL subjects and 15 normal controls. A novel metal-free vibrator was developed to allow vibrotactile neck proprioceptive input to be delivered in the MRI system. A high level (100 Hz) and low level (30 Hz) control stimulus was applied over the left splenius capitis; only the high frequency stimulus generates a significant proprioceptive stimulus. The neck stimulus was applied in combination with static and moving (optokinetic) visual stimuli, in a factorial fMRI experimental design. We found that high level neck proprioceptive input had more cortical effect on brain activity in the BVL patients. This included a reduction in visual motion responses during high levels of proprioceptive input and differential activation in the midline cerebellum. In early visual cortical areas, the effect of high proprioceptive input was present for both visual conditions but in lateral visual areas, including V5/MT, the effect was only seen in the context of visual motion stimulation. The finding of a cortical visuo-proprioceptive interaction in BVL patients is consistent with behavioural data indicating that, in BVL patients, neck afferents partly replace vestibular input during the CNS-mediated compensatory process. An fMRI cervico-visual interaction may thus substitute the known visuo-vestibular interaction reported in normal subject fMRI studies. The results provide evidence for a cortical mechanism of adaptation to vestibular failure, in the form of an enhanced proprioceptive influence on visual processing. 
The results may provide the basis for a cortical mechanism involved in proprioceptive substitution of vestibular function in BVL patients.

  15. Coupling Visualization, Simulation, and Deep Learning for Ensemble Steering of Complex Energy Models: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Potter, Kristin C; Brunhart-Lupo, Nicholas J; Bush, Brian W

    We have developed a framework for the exploration, design, and planning of energy systems that combines interactive visualization with machine-learning based approximations of simulations through a general-purpose dataflow API. Our system provides a visual interface allowing users to explore an ensemble of energy simulations representing a subset of the complex input parameter space, and spawn new simulations to 'fill in' input regions corresponding to new energy system scenarios. Unfortunately, many energy simulations are far too slow to provide interactive responses. To support interactive feedback, we are developing reduced-form models via machine learning techniques, which provide statistically sound estimates of the full simulations at a fraction of the computational cost and which are used as proxies for the full-form models. Fast computation and an agile dataflow enhance the engagement with energy simulations, and allow researchers to better allocate computational resources to capture informative relationships within the system and provide a low-cost method for validating and quality-checking large-scale modeling efforts.
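
    The proxy idea can be sketched with a stand-in for the slow simulation and a simple fitted model. The snippet below uses a polynomial fit purely for illustration (the authors use machine-learned reduced-form models), and every name in it is hypothetical:

```python
import numpy as np

def slow_simulation(x):
    """Hypothetical stand-in for an expensive energy-system simulation."""
    return 3.0 * x**2 + 2.0 * x + 1.0

# Run a small ensemble of full-form simulations over the input range...
xs = np.linspace(0.0, 10.0, 11)
ys = np.array([slow_simulation(x) for x in xs])

# ...then fit a cheap reduced-form model to use as a proxy.
proxy = np.poly1d(np.polyfit(xs, ys, deg=2))

# Interactive queries now cost a polynomial evaluation, not a full run.
estimate = float(proxy(4.5))
```

In the real framework the proxy must also report its statistical soundness; here the fit is exact only because the toy "simulation" happens to be quadratic.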

  16. Grid Visualization Tool

    NASA Technical Reports Server (NTRS)

    Chouinard, Caroline; Fisher, Forest; Estlin, Tara; Gaines, Daniel; Schaffer, Steven

    2005-01-01

    The Grid Visualization Tool (GVT) is a computer program for displaying the path of a mobile robotic explorer (rover) on a terrain map. The GVT reads a map-data file in either portable graymap (PGM) or portable pixmap (PPM) format, representing a gray-scale or color map image, respectively. The GVT also accepts input from path-planning and activity-planning software. From these inputs, the GVT generates a map overlaid with one or more rover path(s), waypoints, locations of targets to be explored, and/or target-status information (indicating success or failure in exploring each target). The display can also indicate different types of paths or path segments, such as the path actually traveled versus a planned path or the path traveled to the present position versus planned future movement along a path. The program provides for updating of the display in real time to facilitate visualization of progress. The size of the display and the map scale can be changed as desired by the user. The GVT was written in the C++ language using the Open Graphics Library (OpenGL) software. It has been compiled for both Sun Solaris and Linux operating systems.
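
    The PGM map format the GVT reads is simple enough to parse in a few lines. The sketch below handles only the ASCII ('P2') variant and is a Python illustration, not the GVT's own C++/OpenGL loader:

```python
def read_ascii_pgm(text):
    """Parse an ASCII ('P2') PGM image into (width, height, maxval, rows)."""
    tokens = []
    for line in text.splitlines():
        tokens.extend(line.split('#', 1)[0].split())  # strip PGM comments
    if tokens[0] != 'P2':
        raise ValueError('not an ASCII PGM file')
    width, height, maxval = (int(t) for t in tokens[1:4])
    pixels = [int(t) for t in tokens[4:4 + width * height]]
    rows = [pixels[r * width:(r + 1) * width] for r in range(height)]
    return width, height, maxval, rows

sample = "P2\n# toy 2x2 terrain map\n2 2\n255\n0 64\n128 255\n"
w, h, mx, img = read_ascii_pgm(sample)   # img == [[0, 64], [128, 255]]
```

A binary ('P5') PGM or a PPM color map differs only in the magic number and pixel encoding; the header layout is the same.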

  17. Frequency-band signatures of visual responses to naturalistic input in ferret primary visual cortex during free viewing.

    PubMed

    Sellers, Kristin K; Bennett, Davis V; Fröhlich, Flavio

    2015-02-19

    Neuronal firing responses in visual cortex reflect the statistics of visual input and emerge from the interaction with endogenous network dynamics. Artificial visual stimuli presented to animals in which the network dynamics were constrained by anesthetic agents or trained behavioral tasks have provided fundamental understanding of how individual neurons in primary visual cortex respond to input. In contrast, very little is known about the mesoscale network dynamics and their relationship to microscopic spiking activity in the awake animal during free viewing of naturalistic visual input. To address this gap in knowledge, we recorded local field potential (LFP) and multiunit activity (MUA) simultaneously in all layers of primary visual cortex (V1) of awake, freely viewing ferrets presented with naturalistic visual input (nature movie clips). We found that naturalistic visual stimuli modulated the entire oscillation spectrum; low frequency oscillations were mostly suppressed whereas higher frequency oscillations were enhanced. On average across all cortical layers, stimulus-induced change in delta and alpha power negatively correlated with the MUA responses, whereas sensory-evoked increases in gamma power positively correlated with MUA responses. The time-course of the band-limited power in these frequency bands provided evidence for a model in which naturalistic visual input switched V1 between two distinct, endogenously present activity states defined by the power of low (delta, alpha) and high (gamma) frequency oscillatory activity. Therefore, the two mesoscale activity states delineated in this study may define the degree of engagement of the circuit with the processing of sensory input. Copyright © 2014 Elsevier B.V. All rights reserved.
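
    Band-limited power of the kind analyzed here can be computed generically from a signal's spectrum. The sketch below is a textbook FFT-based calculation on synthetic data, not the study's analysis pipeline:

```python
import numpy as np

def band_power(signal, fs, f_lo, f_hi):
    """Summed FFT power of `signal` (sampled at `fs` Hz) within [f_lo, f_hi]."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return psd[mask].sum()

# Synthetic "LFP": a strong 3 Hz (delta) component plus a weak 50 Hz
# (gamma) component, sampled for 1 s at 1 kHz.
fs = 1000.0
t = np.arange(0, 1.0, 1.0 / fs)
lfp = np.sin(2 * np.pi * 3 * t) + 0.2 * np.sin(2 * np.pi * 50 * t)

delta = band_power(lfp, fs, 1, 4)
gamma = band_power(lfp, fs, 30, 80)   # delta dominates in this toy signal
```

Tracking such band power over time (e.g. in sliding windows) yields the band-limited power time courses that the abstract uses to separate the two activity states.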

  18. Resilience to the contralateral visual field bias as a window into object representations

    PubMed Central

    Garcea, Frank E.; Kristensen, Stephanie; Almeida, Jorge; Mahon, Bradford Z.

    2016-01-01

    Viewing images of manipulable objects elicits differential blood oxygen level-dependent (BOLD) contrast across parietal and dorsal occipital areas of the human brain that support object-directed reaching, grasping, and complex object manipulation. However, it is unknown which object-selective regions of parietal cortex receive their principal inputs from the ventral object-processing pathway and which receive their inputs from the dorsal object-processing pathway. Parietal areas that receive their inputs from the ventral visual pathway, rather than from the dorsal stream, will have inputs that are already filtered through object categorization and identification processes. This predicts that parietal regions that receive inputs from the ventral visual pathway should exhibit object-selective responses that are resilient to contralateral visual field biases. To test this hypothesis, adult participants viewed images of tools and animals that were presented to the left or right visual fields during functional magnetic resonance imaging (fMRI). We found that the left inferior parietal lobule showed robust tool preferences independently of the visual field in which tool stimuli were presented. In contrast, a region in posterior parietal/dorsal occipital cortex in the right hemisphere exhibited an interaction between visual field and category: tool-preferences were strongest contralateral to the stimulus. These findings suggest that action knowledge accessed in the left inferior parietal lobule operates over inputs that are abstracted from the visual input and contingent on analysis by the ventral visual pathway, consistent with its putative role in supporting object manipulation knowledge. PMID:27160998

  19. Visual gravitational motion and the vestibular system in humans

    PubMed Central

    Lacquaniti, Francesco; Bosco, Gianfranco; Indovina, Iole; La Scaleia, Barbara; Maffei, Vincenzo; Moscatelli, Alessandro; Zago, Myrka

    2013-01-01

    The visual system is poorly sensitive to arbitrary accelerations, but accurately detects the effects of gravity on a target motion. Here we review behavioral and neuroimaging data about the neural mechanisms for dealing with object motion and egomotion under gravity. The results from several experiments show that the visual estimates of a target motion under gravity depend on the combination of a prior of gravity effects with on-line visual signals on target position and velocity. These estimates are affected by vestibular inputs, and are encoded in a visual-vestibular network whose core regions lie within or around the Sylvian fissure, and are represented by the posterior insula/retroinsula/temporo-parietal junction. This network responds both to target motions coherent with gravity and to vestibular caloric stimulation in human fMRI studies. Transient inactivation of the temporo-parietal junction selectively disrupts the interception of targets accelerated by gravity. PMID:24421761

  20. Visual gravitational motion and the vestibular system in humans.

    PubMed

    Lacquaniti, Francesco; Bosco, Gianfranco; Indovina, Iole; La Scaleia, Barbara; Maffei, Vincenzo; Moscatelli, Alessandro; Zago, Myrka

    2013-12-26

    The visual system is poorly sensitive to arbitrary accelerations, but accurately detects the effects of gravity on a target motion. Here we review behavioral and neuroimaging data about the neural mechanisms for dealing with object motion and egomotion under gravity. The results from several experiments show that the visual estimates of a target motion under gravity depend on the combination of a prior of gravity effects with on-line visual signals on target position and velocity. These estimates are affected by vestibular inputs, and are encoded in a visual-vestibular network whose core regions lie within or around the Sylvian fissure, and are represented by the posterior insula/retroinsula/temporo-parietal junction. This network responds both to target motions coherent with gravity and to vestibular caloric stimulation in human fMRI studies. Transient inactivation of the temporo-parietal junction selectively disrupts the interception of targets accelerated by gravity.

  1. Virtual endoscopy using spherical QuickTime-VR panorama views.

    PubMed

    Tiede, Ulf; von Sternberg-Gospos, Norman; Steiner, Paul; Höhne, Karl Heinz

    2002-01-01

    Virtual endoscopy needs some precomputation of the data (segmentation, path finding) before the diagnostic process can take place. We propose a method that precomputes multinode spherical panorama movies using QuickTime VR. This technique allows almost the same navigation and visualization capabilities as a real endoscopic procedure, achieves a significant reduction of interaction input, and the movie represents a document of the procedure.

  2. LSD alters eyes-closed functional connectivity within the early visual cortex in a retinotopic fashion.

    PubMed

    Roseman, Leor; Sereno, Martin I; Leech, Robert; Kaelen, Mendel; Orban, Csaba; McGonigle, John; Feilding, Amanda; Nutt, David J; Carhart-Harris, Robin L

    2016-08-01

    The question of how spatially organized activity in the visual cortex behaves during eyes-closed, lysergic acid diethylamide (LSD)-induced "psychedelic imagery" (e.g., visions of geometric patterns and more complex phenomena) has never been empirically addressed, although it has been proposed that under psychedelics, with eyes-closed, the brain may function "as if" there is visual input when there is none. In this work, resting-state functional connectivity (RSFC) data was analyzed from 10 healthy subjects under the influence of LSD and, separately, placebo. It was suspected that eyes-closed psychedelic imagery might involve transient local retinotopic activation, of the sort typically associated with visual stimulation. To test this, it was hypothesized that, under LSD, patches of the visual cortex with congruent retinotopic representations would show greater RSFC than incongruent patches. Using a retinotopic localizer performed during a nondrug baseline condition, nonadjacent patches of V1 and V3 that represent the vertical or the horizontal meridians of the visual field were identified. Subsequently, RSFC between V1 and V3 was measured with respect to these a priori identified patches. Consistent with our prior hypothesis, the difference between RSFC of patches with congruent retinotopic specificity (horizontal-horizontal and vertical-vertical) and those with incongruent specificity (horizontal-vertical and vertical-horizontal) increased significantly under LSD relative to placebo, suggesting that activity within the visual cortex becomes more dependent on its intrinsic retinotopic organization in the drug condition. This result may indicate that under LSD, with eyes-closed, the early visual system behaves as if it were seeing spatially localized visual inputs. Hum Brain Mapp 37:3031-3040, 2016. © 2016 Wiley Periodicals, Inc.
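
    The congruency contrast at the heart of this analysis reduces to comparing correlations between patch time series. The sketch below reproduces only that logic, on synthetic signals; all names and numbers are illustrative, not the study's data or code:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200                            # time points per synthetic series

def noisy(base):
    """A patch time series: shared retinotopic signal plus private noise."""
    return base + 0.8 * rng.standard_normal(n)

shared_h = rng.standard_normal(n)  # signal common to "horizontal" patches
shared_v = rng.standard_normal(n)  # signal common to "vertical" patches
v1_h, v3_h = noisy(shared_h), noisy(shared_h)
v1_v, v3_v = noisy(shared_v), noisy(shared_v)

def r(a, b):
    return np.corrcoef(a, b)[0, 1]

congruent = (r(v1_h, v3_h) + r(v1_v, v3_v)) / 2    # H-H and V-V pairings
incongruent = (r(v1_h, v3_v) + r(v1_v, v3_h)) / 2  # H-V and V-H pairings
contrast = congruent - incongruent  # positive when connectivity is retinotopic
```

The study's finding corresponds to this contrast growing under LSD relative to placebo.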

  3. Asymmetric temporal integration of layer 4 and layer 2/3 inputs in visual cortex.

    PubMed

    Hang, Giao B; Dan, Yang

    2011-01-01

    Neocortical neurons in vivo receive concurrent synaptic inputs from multiple sources, including feedforward, horizontal, and feedback pathways. Layer 2/3 of the visual cortex receives feedforward input from layer 4 and horizontal input from layer 2/3. Firing of the pyramidal neurons, which carries the output to higher cortical areas, depends critically on the interaction of these pathways. Here we examined synaptic integration of inputs from layer 4 and layer 2/3 in rat visual cortical slices. We found that the integration is sublinear and temporally asymmetric, with larger responses if layer 2/3 input preceded layer 4 input. The sublinearity depended on inhibition, and the asymmetry was largely attributable to the difference between the two inhibitory inputs. Interestingly, the asymmetric integration was specific to pyramidal neurons, and it strongly affected their spiking output. Thus via cortical inhibition, the temporal order of activation of layer 2/3 and layer 4 pathways can exert powerful control of cortical output during visual processing.

  4. Effect of Power Point Enhanced Teaching (Visual Input) on Iranian Intermediate EFL Learners' Listening Comprehension Ability

    ERIC Educational Resources Information Center

    Sehati, Samira; Khodabandehlou, Morteza

    2017-01-01

    The present investigation was an attempt to study on the effect of power point enhanced teaching (visual input) on Iranian Intermediate EFL learners' listening comprehension ability. To that end, a null hypothesis was formulated as power point enhanced teaching (visual input) has no effect on Iranian Intermediate EFL learners' listening…

  5. Visual masking and the dynamics of human perception, cognition, and consciousness A century of progress, a contemporary synthesis, and future directions.

    PubMed

    Ansorge, Ulrich; Francis, Gregory; Herzog, Michael H; Oğmen, Haluk

    2008-07-15

    The 1990s, the "decade of the brain," witnessed major advances in the study of visual perception, cognition, and consciousness. Impressive techniques in neurophysiology, neuroanatomy, neuropsychology, electrophysiology, psychophysics and brain-imaging were developed to address how the nervous system transforms and represents visual inputs. Many of these advances have dealt with the steady-state properties of processing. To complement this "steady-state approach," more recent research emphasized the importance of dynamic aspects of visual processing. Visual masking has been a paradigm of choice for more than a century when it comes to the study of dynamic vision. A recent workshop (http://lpsy.epfl.ch/VMworkshop/), held in Delmenhorst, Germany, brought together an international group of researchers to present state-of-the-art research on dynamic visual processing with a focus on visual masking. This special issue presents peer-reviewed contributions by the workshop participants and provides a contemporary synthesis of how visual masking can inform the dynamics of human perception, cognition, and consciousness.

  6. Visual masking and the dynamics of human perception, cognition, and consciousness A century of progress, a contemporary synthesis, and future directions

    PubMed Central

    Ansorge, Ulrich; Francis, Gregory; Herzog, Michael H.; Öğmen, Haluk

    2008-01-01

    The 1990s, the “decade of the brain,” witnessed major advances in the study of visual perception, cognition, and consciousness. Impressive techniques in neurophysiology, neuroanatomy, neuropsychology, electrophysiology, psychophysics and brain-imaging were developed to address how the nervous system transforms and represents visual inputs. Many of these advances have dealt with the steady-state properties of processing. To complement this “steady-state approach,” more recent research emphasized the importance of dynamic aspects of visual processing. Visual masking has been a paradigm of choice for more than a century when it comes to the study of dynamic vision. A recent workshop (http://lpsy.epfl.ch/VMworkshop/), held in Delmenhorst, Germany, brought together an international group of researchers to present state-of-the-art research on dynamic visual processing with a focus on visual masking. This special issue presents peer-reviewed contributions by the workshop participants and provides a contemporary synthesis of how visual masking can inform the dynamics of human perception, cognition, and consciousness. PMID:20517493

  7. Acousto-Optical Method of Encoding and Visualization of Underwater Space

    DTIC Science & Technology

    2014-01-27

    neurons which are mathematically described as coupled nonlinear oscillators that are slightly unstable. They have a property called 'Self-Referential'... a self-regulating process which is represented by Equation (5) in the ensuing description. [0083] The input/output circuitry 64 outputs signals that... In other words, self-correcting dynamics of the Na and Ca ions in the membranes are closely related to the sensing and the flopping of motion actuators

  8. Ray-based approach to integrated 3D visual communication

    NASA Astrophysics Data System (ADS)

    Naemura, Takeshi; Harashima, Hiroshi

    2001-02-01

    For a high sense of reality in next-generation communications, it is very important to realize three-dimensional (3D) spatial media, instead of existing 2D image media. In order to comprehensively deal with a variety of 3D visual data formats, the authors first introduce the concept of "Integrated 3D Visual Communication," which reflects the necessity of developing a neutral representation method independent of input/output systems. Then, the following discussions are concentrated on the ray-based approach to this concept, in which any visual sensation is considered to be derived from a set of light rays. This approach is a simple and straightforward solution to the problem of how to represent 3D space, which is an issue shared by various fields including 3D image communications, computer graphics, and virtual reality. This paper mainly presents several developments of this approach, including some efficient methods of representing ray data, a real-time video-based rendering system, an interactive rendering system based on integral photography, a concept of virtual object surface for the compression of a tremendous amount of data, and a light ray capturing system using a telecentric lens. Experimental results demonstrate the effectiveness of the proposed techniques.

  9. Comparison of Text-Based and Visual-Based Programming Input Methods for First-Time Learners

    ERIC Educational Resources Information Center

    Saito, Daisuke; Washizaki, Hironori; Fukazawa, Yoshiaki

    2017-01-01

    Aim/Purpose: When learning to program, both text-based and visual-based input methods are common. However, it is unclear which method is more appropriate for first-time learners (first learners). Background: The differences in the learning effect between text-based and visual-based input methods for first learners are compared using a…

  10. Graphing trillions of triangles.

    PubMed

    Burkhardt, Paul

    2017-07-01

    The increasing size of Big Data is often heralded but how data are transformed and represented is also profoundly important to knowledge discovery, and this is exemplified in Big Graph analytics. Much attention has been placed on the scale of the input graph but the product of a graph algorithm can be many times larger than the input. This is true for many graph problems, such as listing all triangles in a graph. Enabling scalable graph exploration for Big Graphs requires new approaches to algorithms, architectures, and visual analytics. A brief tutorial is given to aid the argument for thoughtful representation of data in the context of graph analysis. Then a new algebraic method to reduce the arithmetic operations in counting and listing triangles in graphs is introduced. Additionally, a scalable triangle listing algorithm in the MapReduce model will be presented followed by a description of the experiments with that algorithm that led to the current largest and fastest triangle listing benchmarks to date. Finally, a method for identifying triangles in new visual graph exploration technologies is proposed.
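
    Triangle listing itself fits in a few lines over adjacency sets. The sketch below enumerates each triangle once by vertex ordering; it is a baseline illustration, not the algebraic or MapReduce algorithms the abstract describes:

```python
def list_triangles(edges):
    """List each triangle (u, v, w) with u < v < w exactly once."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    triangles = []
    for u in adj:
        for v in adj[u]:
            if v <= u:
                continue
            # Common neighbors w > v close a triangle over edge (u, v).
            for w in adj[u] & adj[v]:
                if w > v:
                    triangles.append((u, v, w))
    return triangles

# The complete graph K4: 6 edges yield 4 triangles. For large dense
# graphs the triangle list can be far larger than the input edge list,
# which is the output-size problem the abstract highlights.
k4 = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
tris = list_triangles(k4)
```

The set-intersection step is where the arithmetic-operation savings of the algebraic method would apply at scale.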

  11. Sensory experience modifies feature map relationships in visual cortex

    PubMed Central

    Cloherty, Shaun L; Hughes, Nicholas J; Hietanen, Markus A; Bhagavatula, Partha S

    2016-01-01

    The extent to which brain structure is influenced by sensory input during development is a critical but controversial question. A paradigmatic system for studying this is the mammalian visual cortex. Maps of orientation preference (OP) and ocular dominance (OD) in the primary visual cortex of ferrets, cats and monkeys can be individually changed by altered visual input. However, the spatial relationship between OP and OD maps has appeared immutable. Using a computational model we predicted that biasing the visual input to orthogonal orientation in the two eyes should cause a shift of OP pinwheels towards the border of OD columns. We then confirmed this prediction by rearing cats wearing orthogonally oriented cylindrical lenses over each eye. Thus, the spatial relationship between OP and OD maps can be modified by visual experience, revealing a previously unknown degree of brain plasticity in response to sensory input. DOI: http://dx.doi.org/10.7554/eLife.13911.001 PMID:27310531

  12. Visual Occlusion Decreases Motion Sickness in a Flight Simulator.

    PubMed

    Ishak, Shaziela; Bubka, Andrea; Bonato, Frederick

    2018-05-01

    Sensory conflict theories of motion sickness (MS) assert that symptoms may result when incoming sensory inputs (e.g., visual and vestibular) contradict each other. Logic suggests that attenuating input from one sense may reduce conflict and hence lessen MS symptoms. In the current study, it was hypothesized that attenuating visual input by blocking light entering the eye would reduce MS symptoms in a motion provocative environment. Participants sat inside an aircraft cockpit mounted onto a motion platform that simultaneously pitched, rolled, and heaved in two conditions. In the occluded condition, participants wore "blackout" goggles and closed their eyes to block light. In the control condition, participants opened their eyes and had full view of the cockpit's interior. Participants completed separate Simulator Sickness Questionnaires before and after each condition. The posttreatment total Simulator Sickness Questionnaires and subscores for nausea, oculomotor, and disorientation in the control condition were significantly higher than those in the occluded condition. These results suggest that under some conditions attenuating visual input may delay the onset of MS or weaken the severity of symptoms. Eliminating visual input may reduce visual/nonvisual sensory conflict by weakening the influence of the visual channel, which is consistent with the sensory conflict theory of MS.

  13. Reading with sounds: sensory substitution selectively activates the visual word form area in the blind.

    PubMed

    Striem-Amit, Ella; Cohen, Laurent; Dehaene, Stanislas; Amedi, Amir

    2012-11-08

    Using a visual-to-auditory sensory-substitution algorithm, congenitally fully blind adults were taught to read and recognize complex images using "soundscapes"--sounds topographically representing images. fMRI was used to examine key questions regarding the visual word form area (VWFA): its selectivity for letters over other visual categories without visual experience, its feature tolerance for reading in a novel sensory modality, and its plasticity for scripts learned in adulthood. The blind activated the VWFA specifically and selectively during the processing of letter soundscapes relative to both textures and visually complex object categories and relative to mental imagery and semantic-content controls. Further, VWFA recruitment for reading soundscapes emerged after 2 hr of training in a blind adult on a novel script. Therefore, the VWFA shows category selectivity regardless of input sensory modality, visual experience, and long-term familiarity or expertise with the script. The VWFA may perform a flexible task-specific rather than sensory-specific computation, possibly linking letter shapes to phonology. Copyright © 2012 Elsevier Inc. All rights reserved.

  14. Stream-related preferences of inputs to the superior colliculus from areas of dorsal and ventral streams of mouse visual cortex.

    PubMed

    Wang, Quanxin; Burkhalter, Andreas

    2013-01-23

    Previous studies of intracortical connections in mouse visual cortex have revealed two subnetworks that resemble the dorsal and ventral streams in primates. Although calcium imaging studies have shown that many areas of the ventral stream have high spatial acuity whereas areas of the dorsal stream are highly sensitive for transient visual stimuli, there are some functional inconsistencies that challenge a simple grouping into "what/perception" and "where/action" streams known in primates. The superior colliculus (SC) is a major center for processing of multimodal sensory information and the motor control of orienting the eyes, head, and body. Visual processing is performed in superficial layers, whereas premotor activity is generated in deep layers of the SC. Because the SC is known to receive input from visual cortex, we asked whether the projections from 10 visual areas of the dorsal and ventral streams terminate in differential depth profiles within the SC. We found that inputs from primary visual cortex are by far the strongest. Projections from the ventral stream were substantially weaker, whereas the sparsest input originated from areas of the dorsal stream. Importantly, we found that ventral stream inputs terminated in superficial layers, whereas dorsal stream inputs tended to be patchy and either projected equally to superficial and deep layers or strongly preferred deep layers. The results suggest that the anatomically defined ventral and dorsal streams contain areas that belong to distinct functional systems, specialized for the processing of visual information and visually guided action, respectively.

  15. Load-Dependent Increases in Delay-Period Alpha-Band Power Track the Gating of Task-Irrelevant Inputs to Working Memory.

    PubMed

    Heinz, Andrew J; Johnson, Jeffrey S

    2017-01-01

    Studies exploring the role of neural oscillations in cognition have revealed sustained increases in alpha-band power (ABP) during the delay period of verbal and visual working memory (VWM) tasks. There have been various proposals regarding the functional significance of such increases, including the inhibition of task-irrelevant cortical areas as well as the active retention of information in VWM. The present study examines the role of delay-period ABP in mediating the effects of interference arising from on-going visual processing during a concurrent VWM task. Specifically, we reasoned that, if set-size dependent increases in ABP represent the gating out of on-going task-irrelevant visual inputs, they should be predictive with respect to some modulation in visual evoked potentials resulting from a task-irrelevant delay period probe stimulus. In order to investigate this possibility, we recorded the electroencephalogram while subjects performed a change detection task requiring the retention of two or four novel shapes. On a portion of trials, a novel, task-irrelevant bilateral checkerboard probe was presented mid-way through the delay. Analyses focused on examining correlations between set-size dependent increases in ABP and changes in the magnitude of the P1, N1 and P3a components of the probe-evoked response and how such increases might be related to behavior. Results revealed that increased delay-period ABP was associated with changes in the amplitude of the N1 and P3a event-related potential (ERP) components, and with load-dependent changes in capacity when the probe was presented during the delay. We conclude that load-dependent increases in ABP likely play a role in supporting short-term retention by gating task-irrelevant sensory inputs and suppressing potential sources of disruptive interference.

  17. Sensory convergence in the parieto-insular vestibular cortex

    PubMed Central

    Shinder, Michael E.

    2014-01-01

    Vestibular signals are pervasive throughout the central nervous system, including the cortex, where they likely play different roles than they do in the better studied brainstem. Little is known about the parieto-insular vestibular cortex (PIVC), an area of the cortex with prominent vestibular inputs. Neural activity was recorded in the PIVC of rhesus macaques during combinations of head, body, and visual target rotations. Activity of many PIVC neurons was correlated with the motion of the head in space (vestibular), the twist of the neck (proprioceptive), and the motion of a visual target, but was not associated with eye movement. PIVC neurons responded most commonly to more than one stimulus, and responses to combined movements could often be approximated by a combination of the individual sensitivities to head, neck, and target motion. The pattern of visual, vestibular, and somatic sensitivities on PIVC neurons displayed a continuous range, with some cells strongly responding to one or two of the stimulus modalities while other cells responded to any type of motion equivalently. The PIVC contains multisensory convergence of self-motion cues with external visual object motion information, such that neurons do not represent a specific transformation of any one sensory input. Instead, the PIVC neuron population may define the movement of head, body, and external visual objects in space and relative to one another. This comparison of self and external movement is consistent with insular cortex functions related to monitoring and explains many disparate findings of previous studies. PMID:24671533

  18. Haptic over visual information in the distribution of visual attention after tool-use in near and far space.

    PubMed

    Park, George D; Reed, Catherine L

    2015-10-01

    Despite attentional prioritization for grasping space near the hands, tool-use appears to transfer attentional bias to the tool's end/functional part. The contributions of haptic and visual inputs to attentional distribution along a tool were investigated as a function of tool-use in near (Experiment 1) and far (Experiment 2) space. Visual attention was assessed with a 50/50, go/no-go, target discrimination task, while a tool was held next to targets appearing near the tool-occupied hand or tool-end. Target response times (RTs) and sensitivity (d-prime) were measured at target locations, before and after functional tool practice for three conditions: (1) open-tool: tool-end visible (visual + haptic inputs), (2) hidden-tool: tool-end visually obscured (haptic input only), and (3) short-tool: stick missing tool's length/end (control condition: hand occupied but no visual/haptic input). In near space, both open- and hidden-tool groups showed a tool-end, attentional bias (faster RTs toward tool-end) before practice; after practice, RTs near the hand improved. In far space, the open-tool group showed no bias before practice; after practice, target RTs near the tool-end improved. However, the hidden-tool group showed a consistent tool-end bias despite practice. Lack of short-tool group results suggested that hidden-tool group results were specific to haptic inputs. In conclusion, (1) allocation of visual attention along a tool due to tool practice differs in near and far space, and (2) visual attention is drawn toward the tool's end even when visually obscured, suggesting haptic input provides sufficient information for directing attention along the tool.

  19. TESSIM: a simulator for the Athena-X-IFU

    NASA Astrophysics Data System (ADS)

    Wilms, J.; Smith, S. J.; Peille, P.; Ceballos, M. T.; Cobo, B.; Dauser, T.; Brand, T.; den Hartog, R. H.; Bandler, S. R.; de Plaa, J.; den Herder, J.-W. A.

    2016-07-01

    We present the design of tessim, a simulator for the physics of transition edge sensors developed in the framework of the Athena end-to-end simulation effort. Designed to represent the general behavior of transition edge sensors and to provide input for engineering and science studies for Athena, tessim implements a numerical solution of the linearized equations describing these devices. The simulation includes a model for the relevant noise sources and several implementations of possible trigger algorithms. Input and output of the software are standard FITS files which can be visualized and processed using standard X-ray astronomical tool packages. Tessim is freely available as part of the SIXTE package (http://www.sternwarte.uni-erlangen.de/research/sixte/).
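
    The linearized TES equations themselves are not given in this abstract. As an illustration only, the sketch below shows the generic shape of such a solver: forward-Euler integration of a linear two-variable system dx/dt = Ax + b, with placeholder matrices standing in for the electro-thermal coupling of sensor current and temperature (tessim's actual equations, parameters, and noise model are in the paper):

```python
def simulate_linear_system(A, b, x0, dt, steps):
    """Forward-Euler integration of dx/dt = A @ x + b.

    Illustrative skeleton only: A, b, x0 are placeholders, not the
    linearized TES electro-thermal parameters used by tessim.
    """
    x = list(x0)
    trace = [tuple(x)]
    for _ in range(steps):
        dx = [sum(A[i][j] * x[j] for j in range(len(x))) + b[i]
              for i in range(len(x))]
        x = [x[i] + dt * dx[i] for i in range(len(x))]
        trace.append(tuple(x))
    return trace

# Pure exponential decay for a stable diagonal system.
trace = simulate_linear_system(A=[[-1.0, 0.0], [0.0, -1.0]],
                               b=[0.0, 0.0], x0=[1.0, 1.0],
                               dt=0.01, steps=100)
print(round(trace[-1][0], 3))  # 0.366
```

    A production simulator would add the noise sources and trigger stages the abstract mentions on top of such an integration loop.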

  1. Developmental dyslexia and vision

    PubMed Central

    Quercia, Patrick; Feiss, Léonard; Michel, Carine

    2013-01-01

    Developmental dyslexia affects almost 10% of school-aged children and represents a significant public health problem. Its etiology is unknown. The consistent presence of phonological difficulties combined with an inability to manipulate language sounds and the grapheme–phoneme conversion is widely acknowledged. Numerous scientific studies have also documented the presence of eye movement anomalies and deficits of perception of low contrast, low spatial frequency, and high temporal frequency visual information in dyslexics. Anomalies of visual attention with short visual attention spans have also been demonstrated in a large number of cases. Spatial orientation is also affected in dyslexics, who manifest a preference for spatial attention to the right. This asymmetry may be so pronounced that it leads to a veritable neglect of space on the left side. The evaluation of treatments proposed to dyslexics, whether speech-based or oriented towards the visual anomalies, remains fragmentary. The advent of new explanatory theories, notably cerebellar, magnocellular, or proprioceptive, is an incentive for ophthalmologists to enter the world of multimodal cognition, given the importance of the eye’s visual input. PMID:23690677

  2. Sound effects: Multimodal input helps infants find displaced objects.

    PubMed

    Shinskey, Jeanne L

    2017-09-01

    Before 9 months, infants use sound to retrieve a stationary object hidden by darkness but not one hidden by occlusion, suggesting auditory input is more salient in the absence of visual input. This article addresses how audiovisual input affects 10-month-olds' search for displaced objects. In AB tasks, infants who previously retrieved an object at A subsequently fail to find it after it is displaced to B, especially following a delay between hiding and retrieval. Experiment 1 manipulated auditory input by keeping the hidden object audible versus silent, and visual input by presenting the delay in the light versus dark. Infants succeeded more at B with audible than silent objects and, unexpectedly, more after delays in the light than dark. Experiment 2 presented both the delay and search phases in darkness. The unexpected light-dark difference disappeared. Across experiments, the presence of auditory input helped infants find displaced objects, whereas the absence of visual input did not. Sound might help by strengthening object representation, reducing memory load, or focusing attention. This work provides new evidence on when bimodal input aids object processing, corroborates claims that audiovisual processing improves over the first year of life, and contributes to multisensory approaches to studying cognition. Statement of contribution What is already known on this subject Before 9 months, infants use sound to retrieve a stationary object hidden by darkness but not one hidden by occlusion. This suggests they find auditory input more salient in the absence of visual input in simple search tasks. After 9 months, infants' object processing appears more sensitive to multimodal (e.g., audiovisual) input. What does this study add? This study tested how audiovisual input affects 10-month-olds' search for an object displaced in an AB task. Sound helped infants find displaced objects in both the presence and absence of visual input. 
Object processing becomes more sensitive to bimodal input as multisensory functions develop across the first year. © 2016 The British Psychological Society.

  3. Mashups over the Deep Web

    NASA Astrophysics Data System (ADS)

    Hornung, Thomas; Simon, Kai; Lausen, Georg

    Combining information from different Web sources is often a tedious and repetitive process; even simple information requests may require iterating over the result list of one Web query and using each single result as input for a subsequent query. One approach to such chained queries is data-centric mashups, which allow users to visually model the data flow as a graph, where nodes represent data sources and edges the data flow.
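
    The iterate-and-requery pattern the abstract describes can be sketched with stand-in callables in place of real Web sources (all names and data here are invented for illustration):

```python
def run_pipeline(seed_query, first_source, second_source):
    """Chain two queries: each result of the first query becomes the
    input of the second, mirroring the repetitive manual process that
    data-centric mashups automate as a dataflow graph."""
    results = []
    for item in first_source(seed_query):
        results.extend(second_source(item))
    return results

# Stand-ins: a document search followed by a per-result lookup.
papers = lambda q: [f"{q}-paper{i}" for i in (1, 2)]
authors = lambda p: [f"{p}/authorA", f"{p}/authorB"]
print(run_pipeline("mashup", papers, authors))
```

    In a real mashup the two callables would wrap Deep Web query forms, and the chain would be modeled visually as graph edges rather than hard-coded.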

  4. Situation Awareness-Based Agent Transparency

    DTIC Science & Technology

    2014-04-01

    metacognition, the concept of which represents a person’s understanding of what thought processes are going on within oneself (Flavell, 1976). In this...context, metacognition refers to where the ASM provides input as to what types of information it is missing in order for the ASM to perform at its...action or activity that an end user must take. Dowse et al. (2010) found that visuals that were simple, had a clear focus, and reflected familiar life

  5. The primary visual cortex in the neural circuit for visual orienting

    NASA Astrophysics Data System (ADS)

    Zhaoping, Li

    The primary visual cortex (V1) is traditionally viewed as remote from influencing the brain's motor outputs. However, V1 provides the most abundant cortical inputs directly to the sensory layers of the superior colliculus (SC), a midbrain structure that commands visual orienting, such as shifting gaze and turning the head. I will show physiological, anatomical, and behavioral data suggesting that V1 transforms visual input into a saliency map to guide a class of visual orienting that is reflexive or involuntary. In particular, V1 receives a retinotopic map of visual features, such as the orientation, color, and motion direction of local visual inputs; local interactions between V1 neurons perform a local-to-global computation to arrive at a saliency map that highlights conspicuous visual locations by higher V1 responses. The conspicuous locations are usually, but not always, where visual input statistics change. The population of V1 outputs to the SC, which is also retinotopic, enables the SC to locate, by lateral inhibition between SC neurons, the most salient location as the saccadic target. Experimental tests of this hypothesis will be shown. Variations of the neural circuit for visual orienting across animal species, with more or less V1 involvement, will be discussed. Supported by the Gatsby Charitable Foundation.
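
    The hypothesis sketched above has a simple computational caricature: combine retinotopic feature maps by taking a maximum at each location, then select the winner as the saccadic target, with the argmax standing in for SC lateral inhibition. All maps and values below are invented:

```python
def saliency_map(feature_maps):
    """Combine retinotopic feature maps (same shape) into a saliency map
    by taking the maximum response across features at each location --
    a toy version of the V1 saliency hypothesis."""
    rows, cols = len(feature_maps[0]), len(feature_maps[0][0])
    return [[max(fm[r][c] for fm in feature_maps) for c in range(cols)]
            for r in range(rows)]

def saccade_target(sal):
    """Winner-take-all selection (a stand-in for lateral inhibition
    between SC neurons): return the most salient location."""
    best = max((v, (r, c)) for r, row in enumerate(sal)
               for c, v in enumerate(row))
    return best[1]

orientation = [[0.1, 0.2], [0.9, 0.1]]  # one item pops out at (1, 0)
color = [[0.2, 0.1], [0.1, 0.3]]
print(saccade_target(saliency_map([orientation, color])))  # (1, 0)
```

    The biological computation is of course recurrent and continuous; the max/argmax pair only captures the input-output behavior being hypothesized.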

  6. Novel Models of Visual Topographic Map Alignment in the Superior Colliculus

    PubMed Central

    El-Ghazawi, Tarek A.; Triplett, Jason W.

    2016-01-01

    The establishment of precise neuronal connectivity during development is critical for sensing the external environment and informing appropriate behavioral responses. In the visual system, many connections are organized topographically, which preserves the spatial order of the visual scene. The superior colliculus (SC) is a midbrain nucleus that integrates visual inputs from the retina and primary visual cortex (V1) to regulate goal-directed eye movements. In the SC, topographically organized inputs from the retina and V1 must be aligned to facilitate integration. Previously, we showed that retinal input instructs the alignment of V1 inputs in the SC in a manner dependent on spontaneous neuronal activity; however, the mechanism of activity-dependent instruction remains unclear. To begin to address this gap, we developed two novel computational models of visual map alignment in the SC that incorporate distinct activity-dependent components. First, a Correlational Model assumes that V1 inputs achieve alignment with established retinal inputs through simple correlative firing mechanisms. A second Integrational Model assumes that V1 inputs contribute to the firing of SC neurons during alignment. Both models accurately replicate in vivo findings in wild type, transgenic and combination mutant mouse models, suggesting either activity-dependent mechanism is plausible. In silico experiments reveal distinct behaviors in response to weakening retinal drive, providing insight into the nature of the system governing map alignment depending on the activity-dependent strategy utilized. Overall, we describe novel computational frameworks of visual map alignment that accurately model many aspects of the in vivo process and propose experiments to test them. PMID:28027309
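
    The Correlational Model's premise, that V1-to-SC weights strengthen when V1 activity correlates with retinally driven SC firing, can be reduced to a toy Hebbian update. All activity patterns below are invented for the sketch and do not reproduce the paper's models:

```python
def hebbian_alignment(retinal_pref, v1_inputs, lr=0.1, epochs=200):
    """Toy correlational alignment: each SC cell fires under its
    established retinal input, and the V1->SC weight grows for whichever
    V1 input is co-active (here, the one sharing the same retinotopic
    position). Returns each SC cell's strongest V1 input after learning.
    """
    n_sc, n_v1 = len(retinal_pref), len(v1_inputs)
    w = [[0.0] * n_v1 for _ in range(n_sc)]
    for _ in range(epochs):
        for sc in range(n_sc):
            post = 1.0  # SC firing driven by the retinal input
            for v1 in range(n_v1):
                pre = 1.0 if v1_inputs[v1] == retinal_pref[sc] else 0.0
                w[sc][v1] += lr * pre * post  # Hebbian co-activity rule
    return [max(range(n_v1), key=lambda j, i=i: w[i][j])
            for i in range(n_sc)]

# Each SC cell ends up wired to the topographically matching V1 input.
print(hebbian_alignment(retinal_pref=[0, 1, 2], v1_inputs=[0, 1, 2]))
```

    The Integrational Model differs by letting the V1 inputs themselves contribute to `post` during alignment, which is what produces the distinct predictions under weakened retinal drive.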

  7. Influence of Visual Prism Adaptation on Auditory Space Representation.

    PubMed

    Pochopien, Klaudia; Fahle, Manfred

    2017-01-01

    Prisms shifting the visual input sideways produce a mismatch between the visual and felt positions of one's hand. Prism adaptation eliminates this mismatch, realigning hand proprioception with visual input. Whether this realignment concerns exclusively the visuo-(hand)motor system or generalizes to acoustic inputs is controversial. We here show that there is indeed a slight influence of visual adaptation on the perceived direction of acoustic sources. However, this shift in perceived auditory direction can be fully explained by a subconscious head rotation during prism exposure and by changes in arm proprioception. Hence, prism adaptation generalizes to auditory space perception only indirectly.

  8. CoPub: a literature-based keyword enrichment tool for microarray data analysis.

    PubMed

    Frijters, Raoul; Heupers, Bart; van Beek, Pieter; Bouwhuis, Maurice; van Schaik, René; de Vlieg, Jacob; Polman, Jan; Alkema, Wynand

    2008-07-01

    Medline is a rich information source, from which links between genes and keywords describing biological processes, pathways, drugs, pathologies and diseases can be extracted. We developed a publicly available tool called CoPub that uses the information in the Medline database for the biological interpretation of microarray data. CoPub allows batch input of multiple human, mouse or rat genes and produces lists of keywords from several biomedical thesauri that are significantly correlated with the set of input genes. These lists link to Medline abstracts in which the co-occurring input genes and correlated keywords are highlighted. Furthermore, CoPub can graphically visualize differentially expressed genes and over-represented keywords in a network, providing detailed insight in the relationships between genes and keywords, and revealing the most influential genes as highly connected hubs. CoPub is freely accessible at http://services.nbic.nl/cgi-bin/copub/CoPub.pl.
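
    The abstract does not state CoPub's exact statistic; the standard choice for this kind of keyword over-representation analysis is the hypergeometric upper-tail test, sketched below with invented counts:

```python
from math import comb

def enrichment_p(total_genes, keyword_genes, input_genes, overlap):
    """Hypergeometric upper-tail p-value: the chance of drawing at least
    `overlap` keyword-linked genes in a random input set of this size.
    Illustrative only -- CoPub's published scoring may differ."""
    p = 0.0
    for k in range(overlap, min(keyword_genes, input_genes) + 1):
        p += (comb(keyword_genes, k)
              * comb(total_genes - keyword_genes, input_genes - k)
              / comb(total_genes, input_genes))
    return p

# 8 of 10 input genes link to a keyword covering 50 of 1000 genes.
print(enrichment_p(1000, 50, 10, 8) < 1e-6)  # True
```

    Tools like CoPub then rank keywords by such p-values (with multiple-testing correction) before building the gene-keyword network.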

  9. Brain architecture of the Pacific White Shrimp Penaeus vannamei Boone, 1931 (Malacostraca, Dendrobranchiata): correspondence of brain structure and sensory input?

    PubMed

    Meth, Rebecca; Wittfoth, Christin; Harzsch, Steffen

    2017-08-01

    Penaeus vannamei (Dendrobranchiata, Decapoda) is best known as the "Pacific White Shrimp" and is currently the most important crustacean in commercial aquaculture worldwide. Although the neuroanatomy of crustaceans has been well examined in representatives of reptant decapods ("ground-dwelling decapods"), there are only a few studies focusing on shrimps and prawns. In order to obtain insights into the architecture of the brain of P. vannamei, we use neuroanatomical methods including X-ray micro-computed tomography, 3D reconstruction and immunohistochemical staining combined with confocal laser-scanning microscopy and serial sectioning. The brain of P. vannamei exhibits all the prominent neuropils and tracts that characterize the ground pattern of decapod crustaceans. However, the size proportion of some neuropils is salient. The large lateral protocerebrum that comprises the visual neuropils as well as the hemiellipsoid body and medulla terminalis is remarkable. This observation corresponds with the large size of the compound eyes of these animals. In contrast, the remaining median part of the brain is relatively small. It is dominated by the paired antenna 2 neuropils, while the deutocerebral chemosensory lobes play a minor role. Our findings suggest that visual input from the compound eyes and mechanosensory input from the second pair of antennae are major sensory modalities, which this brain processes.

  10. PrimerMapper: high throughput primer design and graphical assembly for PCR and SNP detection

    PubMed Central

    O’Halloran, Damien M.

    2016-01-01

    Primer design represents a widely employed gambit in diverse molecular applications including PCR, sequencing, and probe hybridization. Variations of PCR, including primer walking, allele-specific PCR, and nested PCR provide specialized validation and detection protocols for molecular analyses that often require screening large numbers of DNA fragments. In these cases, automated sequence retrieval and processing become important features, and furthermore, a graphic that provides the user with a visual guide to the distribution of designed primers across targets is most helpful in quickly ascertaining primer coverage. To this end, I describe here PrimerMapper, which provides a comprehensive graphical user interface that designs robust primers from any number of inputted sequences while providing the user with both graphical maps of primer distribution for each inputted sequence and a global assembled map of all inputted sequences with designed primers. PrimerMapper also enables the visualization of graphical maps within a browser and allows the user to draw new primers directly onto the webpage. Other features of PrimerMapper include allele-specific design features for SNP genotyping, a remote BLAST window to NCBI databases, and remote sequence retrieval from GenBank and dbSNP. PrimerMapper is hosted at GitHub and freely available without restriction. PMID:26853558
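
    Primer design tools apply many filters (melting temperature, self-dimers, specificity). As a hedged sketch of just the candidate-windowing and GC-content step, with thresholds and template sequence invented and no relation to PrimerMapper's actual scoring:

```python
def candidate_primers(seq, length=20, gc_min=0.4, gc_max=0.6):
    """Slide a window over a template and keep fixed-length candidates
    with moderate GC content -- one of the most basic primer filters.
    Returns (start_position, primer, gc_fraction) tuples."""
    primers = []
    for i in range(len(seq) - length + 1):
        window = seq[i:i + length]
        gc = (window.count("G") + window.count("C")) / length
        if gc_min <= gc <= gc_max:
            primers.append((i, window, round(gc, 2)))
    return primers

template = "ATGCGTACGTTAGCGCATATCCGGATATGCAATTTT"
for pos, primer, gc in candidate_primers(template):
    print(pos, primer, gc)
```

    Mapping each candidate's `pos` back onto the template is what enables the per-sequence and assembled coverage graphics the abstract describes.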

  11. Learning Complex Grammar in the Virtual Classroom: A Comparison of Processing Instruction, Structured Input, Computerized Visual Input Enhancement, and Traditional Instruction

    ERIC Educational Resources Information Center

    Russell, Victoria

    2012-01-01

    This study investigated the effects of processing instruction (PI) and structured input (SI) on the acquisition of the subjunctive in adjectival clauses by 92 second-semester distance learners of Spanish. Computerized visual input enhancement (VIE) was combined with PI and SI in an attempt to increase the salience of the targeted grammatical form…

  12. Objective visual assessment of antiangiogenic treatment for wet age-related macular degeneration.

    PubMed

    Baseler, Heidi A; Gouws, André; Crossland, Michael D; Leung, Carmen; Tufail, Adnan; Rubin, Gary S; Morland, Antony B

    2011-10-01

    To assess cortical responses in patients undergoing antiangiogenic treatment for wet age-related macular degeneration (AMD) using functional magnetic resonance imaging (fMRI) as an objective, fixation-independent measure of topographic visual function. A patient with bilateral neovascular AMD was scanned using fMRI before and at regular intervals while undergoing treatment with intravitreal antiangiogenic injections (ranibizumab). Blood oxygenation level-dependent signals were measured in the brain while the patient viewed a stimulus consisting of a full-field flickering (6 Hz) white light alternating with a uniform gray background (18 s on and 18 s off). Topographic distribution and magnitude of activation in visual cortex were compared longitudinally throughout the treatment period (<1 year) and with control patients not currently undergoing treatment. Clinical behavioral tests were also administered, including visual acuity, microperimetry, and reading skills. The area of visual cortex activated increased significantly after the first treatment to include more posterior cortex that normally receives inputs from lesioned parts of the retina. Subsequent treatments yielded no significant further increase in activation area. Behavioral measures all generally showed an improvement with treatment but did not always parallel one another. The untreated control patient showed a consistent lack of significant response in the cortex representing retinal lesions. Retinal treatments may not only improve vision but also result in a concomitant improvement in fixation stability. Current clinical behavioral measures (e.g., acuity and perimetry) are largely dependent on fixation stability and therefore cannot separate improvements of visual function from fixation improvements. 
fMRI, which provides an objective and sensitive measure of visual function independent of fixation, reveals a significant increase in visual cortical responses in patients with wet AMD after treatment with antiangiogenic injections. Despite recent evidence that visual cortex degenerates subsequent to retinal lesions, our results indicate that it can remain responsive as its inputs are restored.

  13. Face familiarity promotes stable identity recognition: exploring face perception using serial dependence

    PubMed Central

    Kok, Rebecca; Van der Burg, Erik; Rhodes, Gillian; Alais, David

    2017-01-01

    Studies suggest that familiar faces are processed in a manner distinct from unfamiliar faces and that familiarity with a face confers an advantage in identity recognition. Our visual system seems to capitalize on experience to build stable face representations that are impervious to variation in retinal input that may occur due to changes in lighting, viewpoint, viewing distance, eye movements, etc. Emerging evidence also suggests that, through serial dependence, our visual system maintains a continuous percept of a face's identity from one moment to the next despite these variations in retinal input. This study investigates whether interactions occur between face familiarity and serial dependence. In two experiments, participants used a continuous scale to rate the attractiveness of unfamiliar and familiar faces (either experimentally learned or famous) presented in rapid sequences. Both experiments revealed robust inter-trial effects in which attractiveness ratings for a given face depended on the preceding face's attractiveness. This inter-trial attractiveness effect was most pronounced for unfamiliar faces. Indeed, when participants were familiar with a given face, attractiveness ratings showed significantly less serial dependence. These results represent the first evidence that familiar faces can resist the temporal integration seen in sequential dependencies and highlight the importance of familiarity to visual cognition. PMID:28405355

  14. Pure JavaScript Storyline Layout Algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    This is a JavaScript library for a storyline layout algorithm. Storylines are adept at communicating complex change by encoding time on the x-axis and using the proximity of lines in the y direction to represent interaction between entities. The library in this disclosure takes as input a list of objects containing an id, time, and state. The output is a data structure that can be used to conveniently render a storyline visualization. Most importantly, the library computes the y-coordinates of the entities over time in a way that reduces layout artifacts, including crossings, wiggles, and whitespace. This is accomplished through a multi-objective, multi-stage optimization problem, where the output of one stage produces input and constraints for the next stage.
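The layout pipeline described above can be caricatured in a few lines. The sketch below is a hypothetical, greedy single-stage simplification (not the disclosed library's multi-stage optimizer): entities sharing a state at a timestep are stacked in adjacent y-slots, and groups are ordered by their members' previous positions to limit crossings and wiggles.

```python
# Greedy storyline layout sketch (illustrative only, not the disclosed
# library): input is a list of {id, time, state} events; output maps each
# timestep to the y-slot of every entity present at that time.

def layout_storyline(events):
    """events: list of dicts with 'id', 'time', 'state'.
    Returns {time: {id: y}} with y-slots assigned per timestep."""
    times = sorted({e["time"] for e in events})
    prev_y = {}          # last y-slot seen for each entity
    coords = {}
    for t in times:
        groups = {}      # state -> entity ids present at time t
        for e in events:
            if e["time"] == t:
                groups.setdefault(e["state"], []).append(e["id"])

        def group_key(state):
            # order state groups by mean previous y; new states go last
            ys = [prev_y[i] for i in groups[state] if i in prev_y]
            return sum(ys) / len(ys) if ys else float("inf")

        y, coords[t] = 0, {}
        for state in sorted(groups, key=group_key):
            # within a group, keep entities in their previous relative order
            for ent in sorted(groups[state], key=lambda i: prev_y.get(i, y)):
                coords[t][ent] = prev_y[ent] = y
                y += 1
    return coords

# Two entities start apart and then share a state (a "meeting").
coords = layout_storyline([
    {"id": "a", "time": 0, "state": "meet"},
    {"id": "b", "time": 0, "state": "apart"},
    {"id": "a", "time": 1, "state": "meet"},
    {"id": "b", "time": 1, "state": "meet"},
])
```

Once entities share a state, they occupy adjacent slots, which is the proximity encoding the abstract describes.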

  15. Real time speech formant analyzer and display

    DOEpatents

    Holland, George E.; Struve, Walter S.; Homer, John F.

    1987-01-01

    A speech analyzer for interpretation of sound includes a sound input which converts the sound into a signal representing the sound. The signal is passed through a plurality of frequency pass filters to derive a plurality of frequency formants. These formants are converted to voltage signals by frequency-to-voltage converters and then are prepared for visual display in continuous real time. Parameters from the inputted sound are also derived and displayed. The display may then be interpreted by the user. The preferred embodiment includes a microprocessor which is interfaced with a television set for displaying of the sound formants. The microprocessor software enables the sound analyzer to present a variety of display modes for interpretive and therapeutic uses by the user.

  16. Real time speech formant analyzer and display

    DOEpatents

    Holland, G.E.; Struve, W.S.; Homer, J.F.

    1987-02-03

    A speech analyzer for interpretation of sound includes a sound input which converts the sound into a signal representing the sound. The signal is passed through a plurality of frequency pass filters to derive a plurality of frequency formants. These formants are converted to voltage signals by frequency-to-voltage converters and then are prepared for visual display in continuous real time. Parameters from the inputted sound are also derived and displayed. The display may then be interpreted by the user. The preferred embodiment includes a microprocessor which is interfaced with a television set for displaying of the sound formants. The microprocessor software enables the sound analyzer to present a variety of display modes for interpretive and therapeutic uses by the user. 19 figs.

  17. A simple approach to ignoring irrelevant variables by population decoding based on multisensory neurons

    PubMed Central

    Kim, HyungGoo R.; Pitkow, Xaq; Angelaki, Dora E.

    2016-01-01

    Sensory input reflects events that occur in the environment, but multiple events may be confounded in sensory signals. For example, under many natural viewing conditions, retinal image motion reflects some combination of self-motion and movement of objects in the world. To estimate one stimulus event and ignore others, the brain can perform marginalization operations, but the neural bases of these operations are poorly understood. Using computational modeling, we examine how multisensory signals may be processed to estimate the direction of self-motion (i.e., heading) and to marginalize out effects of object motion. Multisensory neurons represent heading based on both visual and vestibular inputs and come in two basic types: “congruent” and “opposite” cells. Congruent cells have matched heading tuning for visual and vestibular cues and have been linked to perceptual benefits of cue integration during heading discrimination. Opposite cells have mismatched visual and vestibular heading preferences and are ill-suited for cue integration. We show that decoding a mixed population of congruent and opposite cells substantially reduces errors in heading estimation caused by object motion. In addition, we present a general formulation of an optimal linear decoding scheme that approximates marginalization and can be implemented biologically by simple reinforcement learning mechanisms. We also show that neural response correlations induced by task-irrelevant variables may greatly exceed intrinsic noise correlations. Overall, our findings suggest a general computational strategy by which neurons with mismatched tuning for two different sensory cues may be decoded to perform marginalization operations that dissociate possible causes of sensory inputs. PMID:27334948

  18. A special role for binocular visual input during development and as a component of occlusion therapy for treatment of amblyopia.

    PubMed

    Mitchell, Donald E

    2008-01-01

    To review work on animal models of deprivation amblyopia that points to a special role for binocular visual input in the development of spatial vision and as a component of occlusion (patching) therapy for amblyopia. The studies reviewed employ behavioural methods to measure the effects of various early experiential manipulations on the development of the visual acuity of the two eyes. Short periods of concordant binocular input, if continuous, can offset much longer daily periods of monocular deprivation to allow the development of normal visual acuity in both eyes. It appears that the visual system does not weigh all visual input equally in terms of its ability to impact on the development of vision but instead places greater weight on concordant binocular exposure. Experimental models of patching therapy for amblyopia, imposed on animals in which amblyopia had been induced by a prior period of early monocular deprivation, indicate that the benefits of patching therapy may be only temporary and decline rapidly after patching is discontinued. However, when combined with critical amounts of binocular visual input each day, the benefits of patching can be both heightened and made permanent. Taken together with demonstrations of retained binocular connections in the visual cortex of monocularly deprived animals, a strong argument is made for inclusion of specific training of stereoscopic vision for part of the daily periods of binocular exposure that should be incorporated as part of any patching protocol for amblyopia.

  19. Specificity and timescales of cortical adaptation as inferences about natural movie statistics.

    PubMed

    Snow, Michoel; Coen-Cagli, Ruben; Schwartz, Odelia

    2016-10-01

    Adaptation is a phenomenological umbrella term under which a variety of temporal contextual effects are grouped. Previous models have shown that some aspects of visual adaptation reflect optimal processing of dynamic visual inputs, suggesting that adaptation should be tuned to the properties of natural visual inputs. However, the link between natural dynamic inputs and adaptation is poorly understood. Here, we extend a previously developed Bayesian modeling framework for spatial contextual effects to the temporal domain. The model learns temporal statistical regularities of natural movies and links these statistics to adaptation in primary visual cortex via divisive normalization, a ubiquitous neural computation. In particular, the model divisively normalizes the present visual input by the past visual inputs only to the degree that these are inferred to be statistically dependent. We show that this flexible form of normalization reproduces classical findings on how brief adaptation affects neuronal selectivity. Furthermore, prior knowledge acquired by the Bayesian model from natural movies can be modified by prolonged exposure to novel visual stimuli. We show that this updating can explain classical results on contrast adaptation. We also simulate the recent finding that adaptation maintains population homeostasis, namely, a balanced level of activity across a population of neurons with different orientation preferences. Consistent with previous disparate observations, our work further clarifies the influence of stimulus-specific and neuronal-specific normalization signals in adaptation.
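The core computation this model attributes to adaptation, divisive normalization of the present input by past inputs gated by their inferred statistical dependence, can be caricatured in a few lines. This is an illustrative sketch with assumed parameter names and a simple leaky-average history pool; it is not the authors' Bayesian implementation.

```python
# Illustrative caricature (not the authors' model): the present input is
# divided by a history term only to the extent w that past and present
# inputs are inferred to be statistically dependent (w = 0 disables
# adaptation entirely).

def normalized_response(inputs, w, sigma=1.0, tau=0.8):
    """inputs: sequence of non-negative drives; w in [0, 1] is the
    inferred dependence; tau sets the timescale of the adaptation pool."""
    pool, out = 0.0, []
    for x in inputs:
        out.append(x / (sigma + w * pool))   # divisive normalization
        pool = tau * pool + (1 - tau) * x    # leaky average of past input
    return out

# A sustained stimulus is progressively suppressed when the past is
# inferred to be dependent on the present (w = 1) ...
adapted = normalized_response([1.0] * 20, w=1.0)
# ... and left untouched when it is inferred independent (w = 0).
unadapted = normalized_response([1.0] * 20, w=0.0)
```

The contrast between the two runs mirrors the abstract's claim that normalization by the past applies only to the degree that past and present inputs are statistically dependent.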

  20. Specificity and timescales of cortical adaptation as inferences about natural movie statistics

    PubMed Central

    Snow, Michoel; Coen-Cagli, Ruben; Schwartz, Odelia

    2016-01-01

    Adaptation is a phenomenological umbrella term under which a variety of temporal contextual effects are grouped. Previous models have shown that some aspects of visual adaptation reflect optimal processing of dynamic visual inputs, suggesting that adaptation should be tuned to the properties of natural visual inputs. However, the link between natural dynamic inputs and adaptation is poorly understood. Here, we extend a previously developed Bayesian modeling framework for spatial contextual effects to the temporal domain. The model learns temporal statistical regularities of natural movies and links these statistics to adaptation in primary visual cortex via divisive normalization, a ubiquitous neural computation. In particular, the model divisively normalizes the present visual input by the past visual inputs only to the degree that these are inferred to be statistically dependent. We show that this flexible form of normalization reproduces classical findings on how brief adaptation affects neuronal selectivity. Furthermore, prior knowledge acquired by the Bayesian model from natural movies can be modified by prolonged exposure to novel visual stimuli. We show that this updating can explain classical results on contrast adaptation. We also simulate the recent finding that adaptation maintains population homeostasis, namely, a balanced level of activity across a population of neurons with different orientation preferences. Consistent with previous disparate observations, our work further clarifies the influence of stimulus-specific and neuronal-specific normalization signals in adaptation. PMID:27699416

  1. Invariant visual object recognition: a model, with lighting invariance.

    PubMed

    Rolls, Edmund T; Stringer, Simon M

    2006-01-01

    How are invariant representations of objects formed in the visual cortex? We describe a neurophysiological and computational approach which focusses on a feature hierarchy model in which invariant representations can be built by self-organizing learning based on the statistics of the visual input. The model can use temporal continuity in an associative synaptic learning rule with a short-term memory trace, and/or it can use spatial continuity in Continuous Transformation learning. The model of visual processing in the ventral cortical stream can build representations of objects that are invariant with respect to translation, view, and size and, as we show in this paper, lighting. The model has been extended to provide an account of invariant representations in the dorsal visual system of the global motion produced by objects such as looming, rotation, and object-based movement. It has further been extended to incorporate top-down feedback connections to model the control of attention by biased competition in, for example, spatial and object search tasks. The model has also been extended to account for how the visual system can select single objects in complex visual scenes, and how multiple objects can be represented in a scene.

  2. Graphing trillions of triangles

    PubMed Central

    Burkhardt, Paul

    2016-01-01

    The increasing size of Big Data is often heralded but how data are transformed and represented is also profoundly important to knowledge discovery, and this is exemplified in Big Graph analytics. Much attention has been placed on the scale of the input graph but the product of a graph algorithm can be many times larger than the input. This is true for many graph problems, such as listing all triangles in a graph. Enabling scalable graph exploration for Big Graphs requires new approaches to algorithms, architectures, and visual analytics. A brief tutorial is given to aid the argument for thoughtful representation of data in the context of graph analysis. Then a new algebraic method to reduce the arithmetic operations in counting and listing triangles in graphs is introduced. Additionally, a scalable triangle listing algorithm in the MapReduce model will be presented followed by a description of the experiments with that algorithm that led to the current largest and fastest triangle listing benchmarks to date. Finally, a method for identifying triangles in new visual graph exploration technologies is proposed. PMID:28690426
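For concreteness, the sketch below shows a standard degree-ordered triangle-listing baseline of the kind such work builds on; it is not the algebraic method or the MapReduce algorithm described in the abstract. Orienting each edge from the lower- to the higher-ranked endpoint ensures every triangle is emitted exactly once.

```python
# Degree-ordered triangle listing (a common baseline, not the paper's
# algebraic method): orient edges by (degree, id) rank, then intersect
# out-neighbor sets so each triangle is reported exactly once.
from collections import defaultdict

def list_triangles(edges):
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    rank = {v: (len(adj[v]), v) for v in adj}      # degree order, ties by id
    out = {v: {w for w in adj[v] if rank[w] > rank[v]} for v in adj}
    return [(u, v, w)
            for u in out
            for v in out[u]
            for w in out[u] & out[v]]              # common higher neighbors

# A 4-node graph with exactly one triangle (0-1-2) plus a pendant edge.
tris = list_triangles([(0, 1), (1, 2), (0, 2), (2, 3)])
```

The same intersection step is what blows up the output size relative to the input graph: a graph with many high-degree vertices can contain vastly more triangles than edges, which is the scaling problem the abstract highlights.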

  3. PathFinder: reconstruction and dynamic visualization of metabolic pathways.

    PubMed

    Goesmann, Alexander; Haubrock, Martin; Meyer, Folker; Kalinowski, Jörn; Giegerich, Robert

    2002-01-01

    Beyond methods for a gene-wise annotation and analysis of sequenced genomes, new automated methods for functional analysis on a higher level are needed. The identification of realized metabolic pathways provides valuable information on gene expression and regulation. Detection of incomplete pathways helps to improve a constantly evolving genome annotation or discover alternative biochemical pathways. To utilize automated genome analysis on the level of metabolic pathways, new methods for the dynamic representation and visualization of pathways are needed. PathFinder is a tool for the dynamic visualization of metabolic pathways based on annotation data. Pathways are represented as directed acyclic graphs; graph layout algorithms accomplish the dynamic drawing and visualization of the metabolic maps. A more detailed analysis of the input data on the level of biochemical pathways helps to identify genes and detect improper parts of annotations. As a Relational Database Management System (RDBMS)-based internet application, PathFinder reads a list of EC numbers or a given annotation in EMBL or GenBank format and dynamically generates pathway graphs.
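As an illustration of the representation described above (not PathFinder's actual code), a pathway can be held as a directed acyclic graph keyed by EC number and layered topologically for drawing; a gap in a layer then flags a possibly missing annotation. The EC numbers below are a hypothetical glycolysis fragment used only as sample data.

```python
# Illustrative sketch: a metabolic pathway as a DAG of enzyme steps,
# layered by topological order (Kahn's algorithm) for a left-to-right
# drawing. Not PathFinder's implementation.

def topological_layers(dag):
    """dag: {node: set of successor nodes}. Returns a list of layers,
    each layer drawable as one column of the pathway map."""
    indeg = {n: 0 for n in dag}
    for succs in dag.values():
        for s in succs:
            indeg[s] = indeg.get(s, 0) + 1
    layer = [n for n, d in indeg.items() if d == 0]   # pathway entry points
    layers = []
    while layer:
        layers.append(sorted(layer))
        nxt = []
        for n in layer:
            for s in dag.get(n, ()):
                indeg[s] -= 1
                if indeg[s] == 0:
                    nxt.append(s)
        layer = nxt
    return layers

# Hypothetical glycolysis fragment keyed by EC number.
pathway = {
    "2.7.1.1": {"5.3.1.9"},     # hexokinase -> G6P isomerase
    "5.3.1.9": {"2.7.1.11"},    # G6P isomerase -> phosphofructokinase
    "2.7.1.11": set(),
}
layers = topological_layers(pathway)
```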

  4. Contextual modulation and stimulus selectivity in extrastriate cortex.

    PubMed

    Krause, Matthew R; Pack, Christopher C

    2014-11-01

    Contextual modulation is observed throughout the visual system, using techniques ranging from single-neuron recordings to behavioral experiments. Its role in generating feature selectivity within the retina and primary visual cortex has been extensively described in the literature. Here, we describe how similar computations can also elaborate feature selectivity in the extrastriate areas of both the dorsal and ventral streams of the primate visual system. We discuss recent work that makes use of normalization models to test specific roles for contextual modulation in visual cortex function. We suggest that contextual modulation renders neuronal populations more selective for naturalistic stimuli. Specifically, we discuss contextual modulation's role in processing optic flow in areas MT and MST and for representing naturally occurring curvature and contours in areas V4 and IT. We also describe how the circuitry that supports contextual modulation is robust to variations in overall input levels. Finally, we describe how this theory relates to other hypothesized roles for contextual modulation. Copyright © 2014 Elsevier Ltd. All rights reserved.

  5. Hemifield columns co-opt ocular dominance column structure in human achiasma.

    PubMed

    Olman, Cheryl A; Bao, Pinglei; Engel, Stephen A; Grant, Andrea N; Purington, Chris; Qiu, Cheng; Schallmo, Michael-Paul; Tjan, Bosco S

    2018-01-01

    In the absence of an optic chiasm, visual input to the right eye is represented in primary visual cortex (V1) in the right hemisphere, while visual input to the left eye activates V1 in the left hemisphere. Retinotopic mapping in V1 reveals that in each hemisphere left and right visual hemifield representations are overlaid (Hoffmann et al., 2012). To explain how overlapping hemifield representations in V1 do not impair vision, we tested the hypothesis that visual projections from nasal and temporal retina create interdigitated left and right visual hemifield representations in V1, similar to the ocular dominance columns observed in neurotypical subjects (Victor et al., 2000). We used high-resolution fMRI at 7T to measure the spatial distribution of responses to left- and right-hemifield stimulation in one achiasmic subject. T2-weighted 2D spin-echo images were acquired at 0.8 mm isotropic resolution. The left eye was occluded. To the right eye, a presentation of flickering checkerboards alternated between the left and right visual fields in a blocked stimulus design. The participant performed a demanding orientation-discrimination task at fixation. A general linear model was used to estimate the preference of voxels in V1 to left- and right-hemifield stimulation. The spatial distribution of voxels with significant preference for each hemifield showed interdigitated clusters which densely packed V1 in the right hemisphere. The spatial distribution of hemifield-preference voxels in the achiasmic subject was stable between two days of testing and comparable in scale to that of human ocular dominance columns. These results are the first in vivo evidence showing that visual hemifield representations interdigitate in achiasmic V1 following a similar developmental course to that of ocular dominance columns in V1 with intact optic chiasm. Copyright © 2017 Elsevier Inc. All rights reserved.

  6. Processing of Visual Imagery by an Adaptive Model of the Visual System: Its Performance and its Significance. Final Report, June 1969-March 1970.

    ERIC Educational Resources Information Center

    Tallman, Oliver H.

    A digital simulation of a model for the processing of visual images is derived from known aspects of the human visual system. The fundamental principle of computation suggested by a biological model is a transformation that distributes information contained in an input stimulus everywhere in a transform domain. Each sensory input contributes under…

  7. Mechanisms of Neuronal Computation in Mammalian Visual Cortex

    PubMed Central

    Priebe, Nicholas J.; Ferster, David

    2012-01-01

    Orientation selectivity in the primary visual cortex (V1) is a receptive field property that is at once simple enough to make it amenable to experimental and theoretical approaches and yet complex enough to represent a significant transformation in the representation of the visual image. As a result, V1 has become an area of choice for studying cortical computation and its underlying mechanisms. Here we consider the receptive field properties of the simple cells in cat V1—the cells that receive direct input from thalamic relay cells—and explore how these properties, many of which are highly nonlinear, arise. We have found that many receptive field properties of V1 simple cells fall directly out of Hubel and Wiesel’s feedforward model when the model incorporates realistic neuronal and synaptic mechanisms, including threshold, synaptic depression, response variability, and the membrane time constant. PMID:22841306

  8. Beta oscillations define discrete perceptual cycles in the somatosensory domain.

    PubMed

    Baumgarten, Thomas J; Schnitzler, Alfons; Lange, Joachim

    2015-09-29

    Whether seeing a movie, listening to a song, or feeling a breeze on the skin, we coherently experience these stimuli as continuous, seamless percepts. However, there are rare perceptual phenomena that argue against continuous perception but, instead, suggest discrete processing of sensory input. Empirical evidence supporting such a discrete mechanism, however, remains scarce and comes entirely from the visual domain. Here, we demonstrate compelling evidence for discrete perceptual sampling in the somatosensory domain. Using magnetoencephalography (MEG) and a tactile temporal discrimination task in humans, we find that oscillatory alpha- and low beta-band (8-20 Hz) cycles in primary somatosensory cortex represent neurophysiological correlates of discrete perceptual cycles. Our results agree with several theoretical concepts of discrete perceptual sampling and empirical evidence of perceptual cycles in the visual domain. Critically, these results show that discrete perceptual cycles are not domain-specific, and thus restricted to the visual domain, but extend to the somatosensory domain.

  9. Comprehension of Navigation Directions

    NASA Technical Reports Server (NTRS)

    Schneider, Vivian I.; Healy, Alice F.

    2000-01-01

    In an experiment simulating communication between air traffic controllers and pilots, subjects were given navigation instructions varying in length telling them to move in a space represented by grids on a computer screen. The subjects followed the instructions by clicking on the grids in the locations specified. Half of the subjects read the instructions, and half heard them. Half of the subjects in each modality condition repeated back the instructions before following them, and half did not. Performance was worse for the visual than for the auditory modality on the longer messages. Repetition of the instructions generally depressed performance, especially with the longer messages, which required more output than did the shorter messages, and especially with the visual modality, in which phonological recoding from the visual input to the spoken output was necessary. These results are explained in terms of the degrading effects of output interference on memory for instructions.

  10. Delta: a new web-based 3D genome visualization and analysis platform.

    PubMed

    Tang, Bixia; Li, Feifei; Li, Jing; Zhao, Wenming; Zhang, Zhihua

    2018-04-15

    Delta is an integrative visualization and analysis platform to facilitate visually annotating and exploring the 3D physical architecture of genomes. Delta takes Hi-C or ChIA-PET contact matrix as input and predicts the topologically associating domains and chromatin loops in the genome. It then generates a physical 3D model which represents the plausible consensus 3D structure of the genome. Delta features a highly interactive visualization tool which enhances the integration of genome topology/physical structure with extensive genome annotation by juxtaposing the 3D model with diverse genomic assay outputs. Finally, by visually comparing the 3D model of the β-globin gene locus and its annotation, we speculated a plausible transitory interaction pattern in the locus. Experimental evidence was found to support this speculation by literature survey. This served as an example of intuitive hypothesis testing with the help of Delta. Delta is freely accessible from http://delta.big.ac.cn, and the source code is available at https://github.com/zhangzhwlab/delta. zhangzhihua@big.ac.cn. Supplementary data are available at Bioinformatics online.
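Delta's exact domain caller is not described in the abstract, so the sketch below shows a common boundary heuristic for topologically associating domains on a Hi-C contact matrix, the insulation score: average the contacts in a small window straddling the diagonal, and treat local minima as candidate domain boundaries. This is an assumed, generic approach, not necessarily Delta's method.

```python
# Insulation-score sketch for TAD boundary detection on a symmetric
# Hi-C contact matrix (generic heuristic, not necessarily Delta's).

def insulation_score(matrix, window=1):
    """For each interior bin i, average contacts between the `window`
    bins upstream and the `window` bins downstream of i. Low scores
    indicate weak cross-boundary contact, i.e. candidate boundaries."""
    n = len(matrix)
    scores = []
    for i in range(window, n - window):
        vals = [matrix[a][b]
                for a in range(i - window, i)       # upstream block
                for b in range(i, i + window)]      # downstream block
        scores.append(sum(vals) / len(vals))
    return scores

# Toy matrix: two dense 3x3 domains (bins 0-2 and 3-5), no cross-contact.
matrix = [[1 if (a < 3) == (b < 3) else 0 for b in range(6)]
          for a in range(6)]
scores = insulation_score(matrix)
boundary = scores.index(min(scores)) + 1   # +window offset: boundary at bin 3
```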

  11. Higher order visual input to the mushroom bodies in the bee, Bombus impatiens.

    PubMed

    Paulk, Angelique C; Gronenberg, Wulfila

    2008-11-01

    To produce appropriate behaviors based on biologically relevant associations, sensory pathways conveying different modalities are integrated by higher-order central brain structures, such as insect mushroom bodies. To address this function of sensory integration, we characterized the structure and response of optic lobe (OL) neurons projecting to the calyces of the mushroom bodies in bees. Bees are well known for their visual learning and memory capabilities and their brains possess major direct visual input from the optic lobes to the mushroom bodies. To functionally characterize these visual inputs to the mushroom bodies, we recorded intracellularly from neurons in bumblebees (Apidae: Bombus impatiens) and a single neuron in a honeybee (Apidae: Apis mellifera) while presenting color and motion stimuli. All of the mushroom body input neurons were color sensitive while a subset was motion sensitive. Additionally, most of the mushroom body input neurons would respond to the first, but not to subsequent, presentations of repeated stimuli. In general, the medulla or lobula neurons projecting to the calyx signaled specific chromatic, temporal, and motion features of the visual world to the mushroom bodies, which included sensory information required for the biologically relevant associations bees form during foraging tasks.

  12. Fixed base simulator study of an externally blown flap STOL transport airplane during approach and landing

    NASA Technical Reports Server (NTRS)

    Grantham, W. D.; Nguyen, L. T.; Patton, J. M., Jr.; Deal, P. L.; Champine, R. A.; Carter, C. R.

    1972-01-01

    A fixed-base simulator study was conducted to determine the flight characteristics of a representative STOL transport having a high wing and equipped with an external-flow jet flap in combination with four high-bypass-ratio fan-jet engines during the approach and landing. Real-time digital simulation techniques were used. The computer was programmed with equations of motion for six degrees of freedom and the aerodynamic inputs were based on measured wind-tunnel data. A visual display of a STOL airport was provided for simulation of the flare and touchdown characteristics. The primary piloting task was an instrument approach to a breakout at a 200-ft ceiling with a visual landing.

  13. Display nonlinearity in digital image processing for visual communications

    NASA Astrophysics Data System (ADS)

    Peli, Eli

    1992-11-01

    The luminance emitted from a cathode ray tube (CRT) display is a nonlinear function (the gamma function) of the input video signal voltage. In most analog video systems, compensation for this nonlinear transfer function is implemented in the camera amplifiers. When CRT displays are used to present psychophysical stimuli in vision research, the specific display nonlinearity usually is measured and accounted for to ensure that the luminance of each pixel in the synthetic image properly represents the intended value. However, when using digital image processing, the linear analog-to-digital converters store a digital image that is nonlinearly related to the displayed or recorded image. The effect of this nonlinear transformation on a variety of image-processing applications used in visual communications is described.
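The gamma function described above, and the correction a digital pipeline must apply so that a stored pixel value properly represents the intended luminance, can be sketched directly. The exponent 2.2 is a typical assumed CRT value, not one measured from any particular display.

```python
# CRT transfer function sketch: emitted luminance is a power function of
# the normalized input video level (gamma assumed to be 2.2 here), so a
# linear digital value must be gamma-corrected before display.

GAMMA = 2.2   # typical CRT exponent (assumed, not measured)

def crt_luminance(v, v_max=255):
    """Relative luminance (0-1) produced by input video level v."""
    return (v / v_max) ** GAMMA

def gamma_correct(lum):
    """Normalized video level needed to display linear luminance lum."""
    return lum ** (1.0 / GAMMA)

# A mid-range video level yields only about a fifth of full luminance,
# which is why uncorrected linear images look too dark on a CRT.
mid = crt_luminance(128)
```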

  14. Display nonlinearity in digital image processing for visual communications

    NASA Astrophysics Data System (ADS)

    Peli, Eli

    1991-11-01

    The luminance emitted from a cathode ray tube (CRT) display is a nonlinear function (the gamma function) of the input video signal voltage. In most analog video systems, compensation for this nonlinear transfer function is implemented in the camera amplifiers. When CRT displays are used to present psychophysical stimuli in vision research, the specific display nonlinearity usually is measured and accounted for to ensure that the luminance of each pixel in the synthetic image properly represents the intended value. However, when using digital image processing, the linear analog-to-digital converters store a digital image that is nonlinearly related to the displayed or recorded image. This paper describes the effect of this nonlinear transformation on a variety of image-processing applications used in visual communications.

  15. Speaking Math--A Voice Input, Speech Output Calculator for Students with Visual Impairments

    ERIC Educational Resources Information Center

    Bouck, Emily C.; Flanagan, Sara; Joshi, Gauri S.; Sheikh, Waseem; Schleppenbach, Dave

    2011-01-01

    This project explored a newly developed computer-based voice input, speech output (VISO) calculator. Three high school students with visual impairments educated at a state school for the blind and visually impaired participated in the study. The time they took to complete assessments and the average number of attempts per problem were recorded…

  16. Semantic-based crossmodal processing during visual suppression.

    PubMed

    Cox, Dustin; Hong, Sang Wook

    2015-01-01

    To reveal the mechanisms underpinning the influence of auditory input on visual awareness, we examine (1) whether purely semantic-based multisensory integration facilitates the access to visual awareness for familiar visual events, and (2) whether crossmodal semantic priming is the mechanism responsible for the semantic auditory influence on visual awareness. Using continuous flash suppression, we rendered dynamic and familiar visual events (e.g., a video clip of an approaching train) inaccessible to visual awareness. We manipulated the semantic auditory context of the videos by concurrently pairing them with a semantically matching soundtrack (congruent audiovisual condition), a semantically non-matching soundtrack (incongruent audiovisual condition), or with no soundtrack (neutral video-only condition). We found that participants identified the suppressed visual events significantly faster (an earlier breakup of suppression) in the congruent audiovisual condition compared to the incongruent audiovisual condition and video-only condition. However, this facilitatory influence of semantic auditory input was only observed when audiovisual stimulation co-occurred. Our results suggest that the enhanced visual processing with a semantically congruent auditory input occurs due to audiovisual crossmodal processing rather than semantic priming, which may occur even when visual information is not available to visual awareness.

  17. 'spup' - an R package for uncertainty propagation in spatial environmental modelling

    NASA Astrophysics Data System (ADS)

    Sawicka, Kasia; Heuvelink, Gerard

    2016-04-01

    Computer models have become a crucial tool in engineering and environmental sciences for simulating the behaviour of complex static and dynamic systems. However, while many models are deterministic, the uncertainty in their predictions needs to be estimated before they are used for decision support. Currently, advances in uncertainty propagation and assessment have been paralleled by a growing number of software tools for uncertainty analysis, but none has gained recognition for universal applicability, including to case studies with spatial models and spatial model inputs. Due to the growing popularity and applicability of the open source R programming language we undertook a project to develop an R package that facilitates uncertainty propagation analysis in spatial environmental modelling. In particular, the 'spup' package provides functions for examining the uncertainty propagation starting from input data and model parameters, via the environmental model onto model predictions. The functions include uncertainty model specification, stochastic simulation and propagation of uncertainty using Monte Carlo (MC) techniques, as well as several uncertainty visualization functions. Uncertain environmental variables are represented in the package as objects whose attribute values may be uncertain and described by probability distributions. Both numerical and categorical data types are handled. Spatial auto-correlation within an attribute and cross-correlation between attributes are also accommodated. For uncertainty propagation the package has implemented the MC approach with efficient sampling algorithms, i.e. stratified random sampling and Latin hypercube sampling. The design includes facilitation of parallel computing to speed up MC computation. The MC realizations may be used as an input to the environmental models called from R, or externally.
Selected static and interactive visualization methods that are understandable by non-experts with limited background in statistics can be used to summarize and visualize uncertainty about the measured input, model parameters and output of the uncertainty propagation. We demonstrate that the 'spup' package is an effective and easy tool to apply and can be used in multi-disciplinary research and model-based decision support.

  18. 'spup' - an R package for uncertainty propagation analysis in spatial environmental modelling

    NASA Astrophysics Data System (ADS)

    Sawicka, Kasia; Heuvelink, Gerard

    2017-04-01

Computer models have become a crucial tool in engineering and environmental sciences for simulating the behaviour of complex static and dynamic systems. However, even though many models are deterministic, the uncertainty in their predictions needs to be estimated before they are used for decision support. Advances in uncertainty propagation and assessment have been paralleled by a growing number of software tools for uncertainty analysis, but none has gained recognition as universally applicable and able to deal with case studies with spatial models and spatial model inputs. Given the growing popularity and applicability of the open-source R programming language, we undertook a project to develop an R package that facilitates uncertainty propagation analysis in spatial environmental modelling. In particular, the 'spup' package provides functions for examining uncertainty propagation from input data and model parameters, via the environmental model, to model predictions. The functions include uncertainty model specification, stochastic simulation and propagation of uncertainty using Monte Carlo (MC) techniques, as well as several uncertainty visualization functions. Uncertain environmental variables are represented in the package as objects whose attribute values may be uncertain and described by probability distributions. Both numerical and categorical data types are handled. Spatial auto-correlation within an attribute and cross-correlation between attributes are also accommodated. For uncertainty propagation the package implements the MC approach with efficient sampling algorithms, i.e. stratified random sampling and Latin hypercube sampling. The design includes facilitation of parallel computing to speed up MC computation. The MC realizations may be used as input to environmental models called from R, or run externally. 
Selected visualization methods that are understandable by non-experts with limited background in statistics can be used to summarize and visualize uncertainty about the measured input, the model parameters and the output of the uncertainty propagation. We demonstrate that the 'spup' package is an effective and easy-to-apply tool that can be used in multi-disciplinary research and model-based decision support.
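The MC workflow these two records describe can be sketched in a few lines. 'spup' itself is an R package; the following is a minimal Python sketch of the same idea, using Latin hypercube sampling, one of the efficient sampling schemes named above. The toy "environmental model" and the distribution parameters are invented for illustration only.

```python
import numpy as np
from statistics import NormalDist

def latin_hypercube(n, d, rng):
    """Latin hypercube sample of the unit hypercube: one point per
    stratum in each dimension, with strata shuffled independently."""
    u = (rng.random((n, d)) + np.arange(n)[:, None]) / n
    for j in range(d):
        rng.shuffle(u[:, j])  # decouple the strata across dimensions
    return u

# Hypothetical deterministic "environmental model" of two uncertain inputs.
def model(rate, capacity):
    return rate * np.sqrt(capacity)

rng = np.random.default_rng(42)
n = 1000

# 1. Uncertainty model: each input described by a probability distribution,
#    sampled by mapping LHS points through the inverse CDF.
u = latin_hypercube(n, 2, rng)
rate = np.array([NormalDist(2.0, 0.3).inv_cdf(p) for p in u[:, 0]])
cap = np.array([NormalDist(9.0, 1.0).inv_cdf(p) for p in u[:, 1]])

# 2. Propagate: run the model once per MC realization.
out = model(rate, cap)

# 3. Summarize output uncertainty for reporting or visualization.
print(f"mean={out.mean():.2f}, sd={out.std():.2f}, "
      f"90% interval=({np.percentile(out, 5):.2f}, {np.percentile(out, 95):.2f})")
```

In 'spup' the same steps correspond to specifying an uncertainty model, generating MC realizations, and summarizing the propagated output; the sketch only shows the generic structure of that pipeline.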

  19. Visual input enhances selective speech envelope tracking in auditory cortex at a "cocktail party".

    PubMed

    Zion Golumbic, Elana; Cogan, Gregory B; Schroeder, Charles E; Poeppel, David

    2013-01-23

    Our ability to selectively attend to one auditory signal amid competing input streams, epitomized by the "Cocktail Party" problem, continues to stimulate research from various approaches. How this demanding perceptual feat is achieved from a neural systems perspective remains unclear and controversial. It is well established that neural responses to attended stimuli are enhanced compared with responses to ignored ones, but responses to ignored stimuli are nonetheless highly significant, leading to interference in performance. We investigated whether congruent visual input of an attended speaker enhances cortical selectivity in auditory cortex, leading to diminished representation of ignored stimuli. We recorded magnetoencephalographic signals from human participants as they attended to segments of natural continuous speech. Using two complementary methods of quantifying the neural response to speech, we found that viewing a speaker's face enhances the capacity of auditory cortex to track the temporal speech envelope of that speaker. This mechanism was most effective in a Cocktail Party setting, promoting preferential tracking of the attended speaker, whereas without visual input no significant attentional modulation was observed. These neurophysiological results underscore the importance of visual input in resolving perceptual ambiguity in a noisy environment. Since visual cues in speech precede the associated auditory signals, they likely serve a predictive role in facilitating auditory processing of speech, perhaps by directing attentional resources to appropriate points in time when to-be-attended acoustic input is expected to arrive.

  20. Distinct GABAergic targets of feedforward and feedback connections between lower and higher areas of rat visual cortex.

    PubMed

    Gonchar, Yuri; Burkhalter, Andreas

    2003-11-26

Processing of visual information is performed in different cortical areas that are interconnected by feedforward (FF) and feedback (FB) pathways. Although FF and FB inputs are excitatory, their influences on pyramidal neurons also depend on the outputs of GABAergic neurons, which receive FF and FB inputs. Rat visual cortex contains at least three different families of GABAergic neurons that express parvalbumin (PV), calretinin (CR), and somatostatin (SOM) (Gonchar and Burkhalter, 1997). To examine whether pathway-specific inhibition (Shao and Burkhalter, 1996) is attributable to distinct connections with GABAergic neurons, we traced FF and FB inputs to PV, CR, and SOM neurons in layers 1-2/3 of area 17 and the secondary lateromedial area in rat visual cortex. We found that in layer 2/3 at most 2% of FF and FB inputs go to CR and SOM neurons. This contrasts with 12-13% of FF and FB inputs onto layer 2/3 PV neurons. Unlike inputs to layer 2/3, connections to layer 1, which contains CR but lacks SOM and PV somata, are pathway-specific: 21% of FB inputs go to CR neurons, whereas FF inputs to layer 1 and its CR neurons are absent. These findings suggest that FF and FB influences on layer 2/3 pyramidal neurons mainly involve disynaptic connections via PV neurons that control the spike outputs to axons and proximal dendrites. Unlike FF input, FB input additionally makes a disynaptic link via CR neurons, which may influence the excitability of distal pyramidal cell dendrites in layer 1.

  1. Encoding of Target Detection during Visual Search by Single Neurons in the Human Brain.

    PubMed

    Wang, Shuo; Mamelak, Adam N; Adolphs, Ralph; Rutishauser, Ueli

    2018-06-08

Neurons in the primate medial temporal lobe (MTL) respond selectively to visual categories such as faces, contributing to how the brain represents stimulus meaning. However, it remains unknown whether MTL neurons continue to encode stimulus meaning when it changes flexibly as a function of variable task demands imposed by goal-directed behavior. While classically associated with long-term memory, recent lesion and neuroimaging studies show that the MTL also contributes critically to the online guidance of goal-directed behaviors such as visual search. Do such tasks modulate responses of neurons in the MTL, and if so, do their responses mirror bottom-up input from visual cortices or do they reflect more abstract goal-directed properties? To answer these questions, we performed concurrent recordings of eye movements and single neurons in the MTL and medial frontal cortex (MFC) in human neurosurgical patients performing a memory-guided visual search task. We identified a distinct population of target-selective neurons in both the MTL and MFC whose response signaled whether the currently fixated stimulus was a target or distractor. This target-selective response was invariant to visual category and predicted whether a target was detected or missed behaviorally during a given fixation. The response latencies, relative to fixation onset, of MFC target-selective neurons preceded those in the MTL by ∼200 ms, suggesting a frontal origin for the target signal. The human MTL thus represents not only fixed stimulus identity, but also stimulus relevance as specified by top-down, task-dependent goals. Copyright © 2018 Elsevier Ltd. All rights reserved.

  2. Influence of visual inputs on quasi-static standing postural steadiness in individuals with spinal cord injury.

    PubMed

    Lemay, Jean-François; Gagnon, Dany; Duclos, Cyril; Grangeon, Murielle; Gauthier, Cindy; Nadeau, Sylvie

    2013-06-01

Postural steadiness while standing is impaired in individuals with spinal cord injury (SCI) and could potentially be associated with increased reliance on visual inputs. The purpose of this study was to compare individuals with SCI and able-bodied participants on their use of visual inputs to maintain standing postural steadiness. Another aim was to quantify the association between the visual contribution to postural steadiness and a clinical balance scale. Individuals with SCI (n = 15) and able-bodied controls (n = 14) performed quasi-static stance, with eyes open or closed, on force plates for two 45 s trials. Measurements of the centre of pressure (COP) included the root mean square (RMS), mean COP velocity (MV) and COP sway area (SA). Individuals with SCI were also evaluated with the Mini-Balance Evaluation Systems Test (Mini-BESTest), a clinical outcome measure of postural steadiness. Individuals with SCI were significantly less stable than able-bodied controls in both conditions. The Romberg ratios (eyes open/eyes closed) for COP MV and SA were significantly higher for individuals with SCI, indicating a higher contribution of visual inputs to postural steadiness in that population. Romberg ratios for RMS and SA were significantly associated with the Mini-BESTest. This study highlights the contribution of visual inputs in individuals with SCI when maintaining quasi-static standing posture. Copyright © 2012 Elsevier B.V. All rights reserved.
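The COP measures named in this record are straightforward to compute from force-plate time series. The sketch below uses synthetic signals, takes the 95% confidence ellipse as one common definition of sway area, and follows the abstract's parenthetical definition of the Romberg ratio (eyes open over eyes closed; the inverse convention is also widely used).

```python
import numpy as np

def cop_measures(x, y, fs):
    """RMS displacement, mean COP velocity, and 95%-ellipse sway area
    for one quasi-static trial. x, y: COP coordinates (cm); fs: Hz."""
    x = x - x.mean()
    y = y - y.mean()
    rms = np.sqrt(np.mean(x**2 + y**2))
    # Mean velocity: total path length divided by trial duration.
    path = np.sum(np.hypot(np.diff(x), np.diff(y)))
    mv = path / ((len(x) - 1) / fs)
    # Sway area: 95% confidence ellipse from the covariance of (x, y).
    eigvals = np.linalg.eigvalsh(np.cov(x, y))
    area = np.pi * 5.991 * np.sqrt(np.prod(eigvals))  # chi2(2, 0.95) = 5.991
    return rms, mv, area

rng = np.random.default_rng(0)
fs, dur = 100, 45  # 45 s trials, as in the study design; 100 Hz is assumed
eo = rng.normal(0, 0.3, (2, fs * dur))  # synthetic eyes-open COP (steadier)
ec = rng.normal(0, 0.5, (2, fs * dur))  # synthetic eyes-closed COP (less steady)

mv_eo = cop_measures(*eo, fs)[1]
mv_ec = cop_measures(*ec, fs)[1]
romberg_mv = mv_eo / mv_ec  # ratio as defined in the abstract (EO/EC)
print(f"Romberg ratio (MV): {romberg_mv:.2f}")
```

With the noisier eyes-closed signal the EO/EC ratio falls below 1; group differences in such ratios are what the study compares between SCI and able-bodied participants.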

  3. Visual vertigo analogue scale: an assessment questionnaire for visual vertigo.

    PubMed

    Dannenbaum, Elizabeth; Chilingaryan, Gevorg; Fung, Joyce

    2011-01-01

A common symptom in people with vestibulopathy is dizziness induced by dynamic visual input, known as visual vertigo (VV). The goal of this study was to present a novel method of assessing VV using a nine-item analogue scale. Subjects rated the intensity of their dizziness on each item of the Visual Vertigo Analogue Scale (VVAS), each item representing a daily situation that typically induces VV. The questionnaire was completed by participants with vestibulopathy (n=102) and by subjects receiving out-patient orthopaedic physiotherapy (n=102). The Dizziness Handicap Inventory (DHI) was also completed by the vestibulopathy group. Cronbach's alpha indicated that the VVAS is internally consistent and reliable (alpha=0.94). The study also found that VVAS severity scores from the vestibular and non-vestibular populations differed significantly (Wilcoxon-Mann-Whitney test, p < 0.0001). Spearman correlation analysis between DHI and VVAS scores for the participants with vestibulopathy showed a moderate positive correlation between the VVAS score and the total DHI score (r=0.67, p < 0.0001). This study showed that the VVAS may be useful as a quantitative evaluation scale for visual vertigo.
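Cronbach's alpha, the internal-consistency index reported above, can be computed directly from an item-score matrix. A minimal sketch with synthetic nine-item data (the latent-trait-plus-noise model is invented for illustration, not taken from the study):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_subjects, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Synthetic nine-item scale: a shared "severity" trait per subject plus
# item-level noise, mimicking internally consistent responses.
rng = np.random.default_rng(1)
trait = rng.normal(5, 2, size=(102, 1))           # one latent score per subject
scores = trait + rng.normal(0, 1, size=(102, 9))  # nine correlated items
print(f"alpha = {cronbach_alpha(scores):.2f}")
```

Alpha near 1 means the items largely measure the same construct, which is the property the VVAS validation establishes with alpha = 0.94.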

  4. Object-based attentional selection modulates anticipatory alpha oscillations

    PubMed Central

    Knakker, Balázs; Weiss, Béla; Vidnyánszky, Zoltán

    2015-01-01

Visual cortical alpha oscillations are involved in attentional gating of incoming visual information. It has been shown that spatial and feature-based attentional selection result in increased alpha oscillations over the cortical regions representing sensory input originating from the unattended visual field and task-irrelevant visual features, respectively. However, whether attentional gating in the case of object-based selection is also associated with alpha oscillations has not been investigated before. Here we measured anticipatory electroencephalography (EEG) alpha oscillations while participants were cued to attend to foveal face or word stimuli, the processing of which is known to have right and left hemispheric lateralization, respectively. The results revealed that in the case of simultaneously displayed, overlapping face and word stimuli, attending to the words led to increased power of parieto-occipital alpha oscillations over the right hemisphere as compared to when faces were attended. This object category-specific modulation of the hemispheric lateralization of anticipatory alpha oscillations was maintained during sustained attentional selection of sequentially presented face and word stimuli. These results imply that in the case of object-based attentional selection, similarly to spatial and feature-based attention, gating of visual information processing might involve visual cortical alpha oscillations. PMID:25628554

  5. TRIGRS - A Fortran Program for Transient Rainfall Infiltration and Grid-Based Regional Slope-Stability Analysis, Version 2.0

    USGS Publications Warehouse

    Baum, Rex L.; Savage, William Z.; Godt, Jonathan W.

    2008-01-01

The Transient Rainfall Infiltration and Grid-Based Regional Slope-Stability Model (TRIGRS) is a Fortran program designed for modeling the timing and distribution of shallow, rainfall-induced landslides. The program computes transient pore-pressure changes, and attendant changes in the factor of safety, due to rainfall infiltration. The program models rainfall infiltration, resulting from storms that have durations ranging from hours to a few days, using analytical solutions for partial differential equations that represent one-dimensional, vertical flow in isotropic, homogeneous materials for either saturated or unsaturated conditions. Use of step-function series allows the program to represent variable rainfall input, and a simple runoff routing model allows the user to divert excess water from impervious areas onto more permeable downslope areas. The TRIGRS program uses a simple infinite-slope model to compute the factor of safety on a cell-by-cell basis. An approximate formula for effective stress in unsaturated materials aids computation of the factor of safety in unsaturated soils. Horizontal heterogeneity is accounted for by allowing material properties, rainfall, and other input values to vary from cell to cell. This command-line program is used in conjunction with geographic information system (GIS) software to prepare input grids and visualize model results.
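The per-cell infinite-slope computation can be sketched as follows. This uses the commonly cited infinite-slope form with a transient pressure-head term (after Iverson, 2000), which is the family of formulas TRIGRS implements; the soil parameter values are illustrative, not taken from the report.

```python
import math

def factor_of_safety(c, phi_deg, beta_deg, psi, z,
                     gamma_s=22.0, gamma_w=9.81):
    """Infinite-slope factor of safety with a transient pressure head.
    c: cohesion (kPa), phi: friction angle (deg), beta: slope angle (deg),
    psi: pressure head at depth z (m), z: depth (m),
    gamma_s/gamma_w: soil/water unit weights (kN/m^3). Illustrative form."""
    phi = math.radians(phi_deg)
    beta = math.radians(beta_deg)
    frictional = math.tan(phi) / math.tan(beta)
    cohesive = (c - psi * gamma_w * math.tan(phi)) / (
        gamma_s * z * math.sin(beta) * math.cos(beta))
    return frictional + cohesive

# Rising pore pressure (psi) during infiltration lowers FS toward failure.
fs_dry = factor_of_safety(c=4.0, phi_deg=31.0, beta_deg=35.0, psi=0.0, z=2.0)
fs_wet = factor_of_safety(c=4.0, phi_deg=31.0, beta_deg=35.0, psi=1.5, z=2.0)
print(f"FS dry: {fs_dry:.2f}, FS wet: {fs_wet:.2f}")
```

TRIGRS evaluates this kind of expression on every grid cell at every output time, with the pressure head psi supplied by the transient infiltration solution, so that cells crossing FS = 1 mark predicted instability.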

  6. The Evolution and Development of Neural Superposition

    PubMed Central

    Agi, Egemen; Langen, Marion; Altschuler, Steven J.; Wu, Lani F.; Zimmermann, Timo

    2014-01-01

Visual systems have a rich history as model systems for the discovery and understanding of basic principles underlying neuronal connectivity. The compound eyes of insects consist of up to thousands of small unit eyes that are connected by photoreceptor axons to set up a visual map in the brain. The photoreceptor axon terminals thereby represent neighboring points seen in the environment in neighboring synaptic units in the brain. Neural superposition is a special case of such a wiring principle, where photoreceptors from different unit eyes that receive the same input converge upon the same synaptic units in the brain. This wiring principle is remarkable, because each photoreceptor in a single unit eye receives different input and each individual axon, among thousands of others in the brain, must be sorted together with those few axons that have the same input. Key aspects of neural superposition have been described as early as 1907. Since then, neuroscientists and evolutionary and developmental biologists have been fascinated by how such a complicated wiring principle could evolve, how it is genetically encoded, and how it is developmentally realized. In this review article, we will discuss current ideas about the evolutionary origin and developmental program of neural superposition. Our goal is to identify in what way the special case of neural superposition can help us answer more general questions about the evolution and development of genetically “hard-wired” synaptic connectivity in the brain. PMID:24912630

  7. The evolution and development of neural superposition.

    PubMed

    Agi, Egemen; Langen, Marion; Altschuler, Steven J; Wu, Lani F; Zimmermann, Timo; Hiesinger, Peter Robin

    2014-01-01

Visual systems have a rich history as model systems for the discovery and understanding of basic principles underlying neuronal connectivity. The compound eyes of insects consist of up to thousands of small unit eyes that are connected by photoreceptor axons to set up a visual map in the brain. The photoreceptor axon terminals thereby represent neighboring points seen in the environment in neighboring synaptic units in the brain. Neural superposition is a special case of such a wiring principle, where photoreceptors from different unit eyes that receive the same input converge upon the same synaptic units in the brain. This wiring principle is remarkable, because each photoreceptor in a single unit eye receives different input and each individual axon, among thousands of others in the brain, must be sorted together with those few axons that have the same input. Key aspects of neural superposition have been described as early as 1907. Since then, neuroscientists and evolutionary and developmental biologists have been fascinated by how such a complicated wiring principle could evolve, how it is genetically encoded, and how it is developmentally realized. In this review article, we will discuss current ideas about the evolutionary origin and developmental program of neural superposition. Our goal is to identify in what way the special case of neural superposition can help us answer more general questions about the evolution and development of genetically "hard-wired" synaptic connectivity in the brain.

  8. The effect of visual-vestibulosomatosensory conflict induced by virtual reality on postural stability in humans.

    PubMed

    Nishiike, Suetaka; Okazaki, Suzuyo; Watanabe, Hiroshi; Akizuki, Hironori; Imai, Takao; Uno, Atsuhiko; Kitahara, Tadashi; Horii, Arata; Takeda, Noriaki; Inohara, Hidenori

    2013-01-01

In this study, we examined the effects of sensory inputs of visual-vestibulosomatosensory conflict induced by virtual reality (VR) on subjective dizziness, postural stability and the visual dependency of postural control in humans. Eleven healthy young volunteers were immersed in two different VR conditions. In the control condition, subjects walked voluntarily with background images of interactive computer graphics proportionally synchronized to their walking pace. In the visual-vestibulosomatosensory conflict condition, subjects kept still, but the background images that subjects experienced in the control condition were presented. The scores of both Graybiel's and Hamilton's criteria, postural instability and the Romberg ratio were measured before and after the two conditions. After immersion in the conflict condition, both subjective dizziness and objective postural instability were significantly increased, and the Romberg ratio, an index of the visual dependency of postural control, was slightly decreased. These findings suggest that sensory inputs of visual-vestibulosomatosensory conflict induced by VR induced motion sickness, resulting in subjective dizziness and postural instability. They also suggest that adaptation to the conflict condition decreases the contribution of visual inputs to postural control through re-weighting of vestibulosomatosensory inputs. VR may be used as a rehabilitation tool for dizzy patients through its ability to induce sensory re-weighting of postural control.

  9. Top-down processing of symbolic meanings modulates the visual word form area.

    PubMed

    Song, Yiying; Tian, Moqian; Liu, Jia

    2012-08-29

Functional magnetic resonance imaging (fMRI) studies in humans have identified a region in the left middle fusiform gyrus that is consistently activated by written words. This region is called the visual word form area (VWFA). Recently, a hypothesis called the interactive account has been proposed: to effectively analyze the bottom-up visual properties of words, the VWFA receives predictive feedback from higher-order regions engaged in processing the sounds, meanings, or actions associated with words. Furthermore, this top-down influence on the VWFA is independent of stimulus format. To test this hypothesis, we used fMRI to examine whether a symbolic nonword object (e.g., the Eiffel Tower) intended to represent something other than itself (i.e., Paris) could activate the VWFA. We found that scenes associated with symbolic meanings elicited a higher VWFA response than those not associated with symbolic meanings, and that such top-down modulation of the VWFA can be established through short-term associative learning, even across modalities. In addition, the magnitude of the symbolic effect observed in the VWFA was positively correlated across individuals with the subjectively experienced strength of the symbol-referent association. Therefore, the VWFA is likely a neural substrate for the interaction between the top-down processing of symbolic meanings and the analysis of the bottom-up visual properties of sensory inputs, making the VWFA the location where the symbolic meaning of both words and nonword objects is represented.

  10. SNP ID-info: SNP ID searching and visualization platform.

    PubMed

    Yang, Cheng-Hong; Chuang, Li-Yeh; Cheng, Yu-Huei; Wen, Cheng-Hao; Chang, Phei-Lang; Chang, Hsueh-Wei

    2008-09-01

Many association studies report relationships between single nucleotide polymorphisms (SNPs), diseases and cancers without, however, giving a SNP ID. Here, we developed the SNP ID-info freeware, which provides SNP IDs from input genetic and physical genome information. The program's "SNP-ePCR" function generates the full sequence from primer and template inputs. In "SNPosition," the sequence from SNP-ePCR or direct input is matched against SNP IDs from the SNP fasta sequence. The "SNP search" and "SNP fasta" functions accept cytogenetic band, contig position, and keyword inputs. Finally, the SNP ID neighboring environment for the inputs is fully visualized in order of contig position and marked with SNP and flanking hits. The SNP identification problems inherent in NCBI SNP BLAST are also avoided. In conclusion, SNP ID-info provides a visualized SNP ID environment for multiple inputs and assists systematic SNP association studies. The server and user manual are available at http://bio.kuas.edu.tw/snpid-info.

  11. Orientation selectivity and the functional clustering of synaptic inputs in primary visual cortex

    PubMed Central

    Wilson, Daniel E.; Whitney, David E.; Scholl, Benjamin; Fitzpatrick, David

    2016-01-01

The majority of neurons in primary visual cortex are tuned for stimulus orientation, but the factors that account for the range of orientation selectivities exhibited by cortical neurons remain unclear. To address this issue, we used in vivo 2-photon calcium imaging to characterize the orientation tuning and spatial arrangement of synaptic inputs to the dendritic spines of individual pyramidal neurons in layer 2/3 of ferret visual cortex. The summed synaptic input to individual neurons reliably predicted the neuron’s orientation preference, but did not account for differences in orientation selectivity among neurons. These differences reflected a robust input-output nonlinearity that could not be explained by spike threshold alone, and was strongly correlated with the spatial clustering of co-tuned synaptic inputs within the dendritic field. Dendritic branches with more co-tuned synaptic clusters exhibited greater rates of local dendritic calcium events, supporting a prominent role for functional clustering of synaptic inputs in the dendritic nonlinearities that shape orientation selectivity. PMID:27294510

  12. The effect of early visual deprivation on the neural bases of multisensory processing.

    PubMed

    Guerreiro, Maria J S; Putzar, Lisa; Röder, Brigitte

    2015-06-01

    Developmental vision is deemed to be necessary for the maturation of multisensory cortical circuits. Thus far, this has only been investigated in animal studies, which have shown that congenital visual deprivation markedly reduces the capability of neurons to integrate cross-modal inputs. The present study investigated the effect of transient congenital visual deprivation on the neural mechanisms of multisensory processing in humans. We used functional magnetic resonance imaging to compare responses of visual and auditory cortical areas to visual, auditory and audio-visual stimulation in cataract-reversal patients and normally sighted controls. The results showed that cataract-reversal patients, unlike normally sighted controls, did not exhibit multisensory integration in auditory areas. Furthermore, cataract-reversal patients, but not normally sighted controls, exhibited lower visual cortical processing within visual cortex during audio-visual stimulation than during visual stimulation. These results indicate that congenital visual deprivation affects the capability of cortical areas to integrate cross-modal inputs in humans, possibly because visual processing is suppressed during cross-modal stimulation. Arguably, the lack of vision in the first months after birth may result in a reorganization of visual cortex, including the suppression of noisy visual input from the deprived retina in order to reduce interference during auditory processing. © The Author (2015). Published by Oxford University Press on behalf of the Guarantors of Brain. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  13. Target dependence of orientation and direction selectivity of corticocortical projection neurons in the mouse V1

    PubMed Central

    Matsui, Teppei; Ohki, Kenichi

    2013-01-01

Higher order visual areas that receive input from the primary visual cortex (V1) are specialized for processing distinct features of visual information. However, it is still incompletely understood how this functional specialization is acquired. Here we used in vivo two-photon calcium imaging in the mouse visual cortex to investigate whether this functional distinction exists as early as the level of projections from V1 to two higher order visual areas, AL and LM. Specifically, we examined whether the sharpness of orientation and direction selectivity and the optimal spatial and temporal frequencies of projection neurons from V1 to higher order visual areas match those of the target areas. We found that the V1 inputs to higher order visual areas were indeed functionally distinct: AL preferentially received inputs from V1 that were more orientation and direction selective and tuned to lower spatial frequencies than projections from V1 to LM, consistent with the functional differences between AL and LM. The present findings suggest that selective projections from V1 to higher order visual areas initiate parallel processing of sensory information in the visual cortical network. PMID:24068987

  14. Visual Input Enhances Selective Speech Envelope Tracking in Auditory Cortex at a ‘Cocktail Party’

    PubMed Central

    Golumbic, Elana Zion; Cogan, Gregory B.; Schroeder, Charles E.; Poeppel, David

    2013-01-01

    Our ability to selectively attend to one auditory signal amidst competing input streams, epitomized by the ‘Cocktail Party’ problem, continues to stimulate research from various approaches. How this demanding perceptual feat is achieved from a neural systems perspective remains unclear and controversial. It is well established that neural responses to attended stimuli are enhanced compared to responses to ignored ones, but responses to ignored stimuli are nonetheless highly significant, leading to interference in performance. We investigated whether congruent visual input of an attended speaker enhances cortical selectivity in auditory cortex, leading to diminished representation of ignored stimuli. We recorded magnetoencephalographic (MEG) signals from human participants as they attended to segments of natural continuous speech. Using two complementary methods of quantifying the neural response to speech, we found that viewing a speaker’s face enhances the capacity of auditory cortex to track the temporal speech envelope of that speaker. This mechanism was most effective in a ‘Cocktail Party’ setting, promoting preferential tracking of the attended speaker, whereas without visual input no significant attentional modulation was observed. These neurophysiological results underscore the importance of visual input in resolving perceptual ambiguity in a noisy environment. Since visual cues in speech precede the associated auditory signals, they likely serve a predictive role in facilitating auditory processing of speech, perhaps by directing attentional resources to appropriate points in time when to-be-attended acoustic input is expected to arrive. PMID:23345218

  15. Mixed oxidizer hybrid propulsion system optimization under uncertainty using applied response surface methodology and Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Whitehead, James Joshua

The analysis documented herein provides an integrated approach for the conduct of optimization under uncertainty (OUU) using Monte Carlo Simulation (MCS) techniques coupled with response surface-based methods for characterization of mixture-dependent variables. This novel methodology provides an innovative means of conducting optimization studies under uncertainty in propulsion system design. Analytic inputs are based upon empirical regression rate information obtained from design of experiments (DOE) mixture studies utilizing a mixed oxidizer hybrid rocket concept. Hybrid fuel regression rate was selected as the target response variable for optimization under uncertainty, with maximization of regression rate chosen as the driving objective. Characteristic operational conditions and propellant mixture compositions from experimental efforts conducted during previous foundational work were combined with elemental uncertainty estimates as input variables. Response surfaces for mixture-dependent variables and their associated uncertainty levels were developed using quadratic response equations incorporating single and two-factor interactions. These analysis inputs, response surface equations and associated uncertainty contributions were applied to a probabilistic MCS to develop dispersed regression rates as a function of operational and mixture input conditions within the design space. Illustrative case scenarios were developed and assessed using this analytic approach, including fully and partially constrained operational condition sets over all of the design mixture space. In addition, optimization sets were performed across an operationally representative region of operational space and across all investigated mixture combinations. These scenarios were selected as representative examples relevant to propulsion system optimization, particularly for hybrid and solid rocket platforms. 
Ternary diagrams, including contour and surface plots, were developed and utilized to aid in visualization. The concept of Expanded-Durov diagrams was also adopted and adapted to this study to aid in visualization of uncertainty bounds. Regions of maximum regression rate and associated uncertainties were determined for each set of case scenarios. Application of response surface methodology coupled with probabilistic-based MCS allowed for flexible and comprehensive interrogation of mixture and operating design space during optimization cases. Analyses were also conducted to assess sensitivity of uncertainty to variations in key elemental uncertainty estimates. The methodology developed during this research provides an innovative optimization tool for future propulsion design efforts.
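The core coupling this record describes, a fitted quadratic response surface interrogated by probabilistic MCS to produce dispersed responses, can be sketched generically. The surface coefficients, nominal point, and uncertainty magnitudes below are invented placeholders, not the study's fitted values.

```python
import numpy as np

# Hypothetical quadratic response surface for regression rate as a function
# of two coded mixture fractions x1, x2, with single, squared, and
# two-factor interaction terms (coefficients are illustrative only).
def regression_rate(x1, x2):
    return (2.0 + 0.8 * x1 + 0.5 * x2
            - 0.3 * x1**2 - 0.2 * x2**2 + 0.15 * x1 * x2)

rng = np.random.default_rng(7)
n = 10_000

# Probabilistic MCS about a nominal operating point: input uncertainty on
# the mixture fractions plus a response-surface model-error term.
x1 = rng.normal(0.5, 0.05, n)
x2 = rng.normal(0.3, 0.05, n)
eps = rng.normal(0.0, 0.02, n)

rdot = regression_rate(x1, x2) + eps
lo, hi = np.percentile(rdot, [2.5, 97.5])
print(f"dispersed regression rate: mean={rdot.mean():.3f}, "
      f"95% band=({lo:.3f}, {hi:.3f})")
```

Repeating this dispersion analysis across a grid of candidate mixture points, and taking the point with the best mean (or a risk-adjusted percentile), is the essence of the OUU search the dissertation describes; the ternary and Expanded-Durov diagrams then visualize those results over mixture space.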

  16. Sharpening of Hierarchical Visual Feature Representations of Blurred Images.

    PubMed

    Abdelhack, Mohamed; Kamitani, Yukiyasu

    2018-01-01

    The robustness of the visual system lies in its ability to perceive degraded images. This is achieved through interacting bottom-up, recurrent, and top-down pathways that process the visual input in concordance with stored prior information. The interaction mechanism by which they integrate visual input and prior information is still enigmatic. We present a new approach using deep neural network (DNN) representation to reveal the effects of such integration on degraded visual inputs. We transformed measured human brain activity resulting from viewing blurred images to the hierarchical representation space derived from a feedforward DNN. Transformed representations were found to veer toward the original nonblurred image and away from the blurred stimulus image. This indicated deblurring or sharpening in the neural representation, and possibly in our perception. We anticipate these results will help unravel the interplay mechanism between bottom-up, recurrent, and top-down pathways, leading to more comprehensive models of vision.
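    The direction of the reported effect can be captured with a toy metric: if a decoded feature vector correlates more strongly with the original image's features than with the blurred stimulus's features, the representation has "sharpened." The vectors below are synthetic stand-ins (not decoded brain activity), and the 0.7 drift factor is invented:

```python
import numpy as np

rng = np.random.default_rng(1)

def corr(a, b):
    """Pearson correlation between two feature vectors."""
    a, b = a - a.mean(), b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Synthetic DNN-layer feature vectors: the "decoded" vector drifts
# partway from the blurred stimulus back toward the original image.
original = rng.standard_normal(5000)
blurred = original + 1.5 * rng.standard_normal(5000)  # degraded features
decoded = blurred + 0.7 * (original - blurred)        # brain-decoded estimate

# Sharpening: decoded features veer toward the nonblurred original.
sharpening = corr(decoded, original) - corr(decoded, blurred)
print(f"sharpening index: {sharpening:+.3f}")
```

    A positive index is the signature the study reports; with real data the feature vectors would come from a feedforward DNN layer and from brain-activity decoders trained on that layer.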

  17. Using NJOY to Create MCNP ACE Files and Visualize Nuclear Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kahler, Albert Comstock

    We provide lecture materials that describe the input requirements to create various MCNP ACE files (Fast, Thermal, Dosimetry, Photo-nuclear and Photo-atomic) with the NJOY Nuclear Data Processing code system. Input instructions to visualize nuclear data with NJOY are also provided.

  18. Postural response to predictable and nonpredictable visual flow in children and adults.

    PubMed

    Schmuckler, Mark A

    2017-11-01

    Children's (3-5 years) and adults' postural reactions to different conditions of visual flow varying in frequency content were examined using a moving room apparatus. Both groups experienced four conditions of visual input: low-frequency (0.20 Hz) visual oscillations, high-frequency (0.60 Hz) oscillations, multifrequency nonpredictable visual input, and no imposed visual information. Analyses of the frequency content of anterior-posterior (AP) sway revealed that postural reactions to the single-frequency conditions replicated previous findings; children were responsive to low- and high-frequency oscillations, whereas adults were responsive to low-frequency information. Extending previous work, AP sway in response to the nonpredictable condition revealed that both groups were responsive to the different components contained in the multifrequency visual information, although adults retained their selectivity for low-frequency over high-frequency content. These findings are discussed in relation to work examining feedback versus feedforward control of posture, and the reweighting of sensory inputs for postural control, as a function of development and task context. Copyright © 2017 Elsevier Inc. All rights reserved.
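    Responsiveness to each driving frequency can be quantified from the power spectrum of the sway record. The sketch below synthesizes an anterior-posterior sway signal entrained to the study's two stimulus frequencies; the amplitudes, sample rate, and noise level are invented:

```python
import numpy as np

fs, dur = 50.0, 60.0                 # sample rate (Hz), trial length (s)
t = np.arange(0, dur, 1 / fs)
rng = np.random.default_rng(2)

# Synthetic AP sway: responses at the two moving-room frequencies
# plus broadband noise (all amplitudes illustrative).
sway = (0.8 * np.sin(2 * np.pi * 0.20 * t)    # low-frequency response
        + 0.3 * np.sin(2 * np.pi * 0.60 * t)  # high-frequency response
        + 0.2 * rng.standard_normal(t.size))

# Power spectrum of sway; frequency resolution is 1/dur Hz.
freqs = np.fft.rfftfreq(t.size, 1 / fs)
power = np.abs(np.fft.rfft(sway)) ** 2 / t.size

def band_power(f0, half_width=0.05):
    """Summed power in a narrow band around a stimulus frequency."""
    sel = (freqs >= f0 - half_width) & (freqs <= f0 + half_width)
    return float(power[sel].sum())

print(f"power at 0.20 Hz: {band_power(0.20):.1f}")
print(f"power at 0.60 Hz: {band_power(0.60):.1f}")
```

    Comparing band power at the driven frequencies against neighboring bands is the usual way such "responsiveness" is operationalized.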

  19. Visual influence on path integration in darkness indicates a multimodal representation of large-scale space

    PubMed Central

    Tcheang, Lili; Bülthoff, Heinrich H.; Burgess, Neil

    2011-01-01

    Our ability to return to the start of a route recently performed in darkness is thought to reflect path integration of motion-related information. Here we provide evidence that motion-related interoceptive representations (proprioceptive, vestibular, and motor efference copy) combine with visual representations to form a single multimodal representation guiding navigation. We used immersive virtual reality to decouple visual input from motion-related interoception by manipulating the rotation or translation gain of the visual projection. First, participants walked an outbound path with both visual and interoceptive input, and returned to the start in darkness, demonstrating the influences of both visual and interoceptive information in a virtual reality environment. Next, participants adapted to visual rotation gains in the virtual environment, and then performed the path integration task entirely in darkness. Our findings were accurately predicted by a quantitative model in which visual and interoceptive inputs combine into a single multimodal representation guiding navigation, and are incompatible with a model of separate visual and interoceptive influences on action (in which path integration in darkness must rely solely on interoceptive representations). Overall, our findings suggest that a combined multimodal representation guides large-scale navigation, consistent with a role for visual imagery or a cognitive map. PMID:21199934
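    The contrast between the two models can be written down in a few lines. With a visual rotation gain `g`, the single-multimodal-representation account predicts that the remembered turn (and hence the return executed in darkness) blends the visual and interoceptive estimates, whereas separate influences would leave the in-darkness return purely interoceptive. The gain and weight values below are invented, not the study's fitted parameters:

```python
# Sketch of the single-multimodal-representation model (parameters invented).
g = 1.5     # visual rotation gain applied in the virtual environment
w_v = 0.7   # weight given to vision when both cues are available

def turn_estimate(phys_turn_deg):
    """Combined estimate of a turn when vision and interoception disagree."""
    visual = g * phys_turn_deg        # what the eyes report
    interoceptive = phys_turn_deg     # what the body reports
    return w_v * visual + (1 - w_v) * interoceptive

# A 90 deg physical outbound turn: the combined representation remembers
# more than 90 deg, biasing the return performed in darkness; a purely
# interoceptive account would predict no bias (90 deg).
outbound = 90.0
remembered = turn_estimate(outbound)
print(f"remembered turn: {remembered:.1f} deg (interoception alone: {outbound:.1f} deg)")
```

    The study's quantitative model is richer (it covers translation gains and adaptation), but the discriminating prediction has this shape: darkness-only behavior inherits the visual bias only if a single combined representation is stored.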

  20. Multisensory integration across the senses in young and old adults

    PubMed Central

    Mahoney, Jeannette R.; Li, Po Ching Clara; Oh-Park, Mooyeon; Verghese, Joe; Holtzer, Roee

    2011-01-01

    Stimuli are processed concurrently and across multiple sensory inputs. Here we directly compared the effect of multisensory integration (MSI) on reaction time across three paired sensory inputs in eighteen young (M=19.17 yrs) and eighteen old (M=76.44 yrs) individuals. Participants were determined to be non-demented and without any medical or psychiatric conditions that would affect their performance. Participants responded to randomly presented unisensory (auditory, visual, somatosensory) stimuli and three paired sensory inputs consisting of auditory-somatosensory (AS) auditory-visual (AV) and visual-somatosensory (VS) stimuli. Results revealed that reaction time (RT) to all multisensory pairings was significantly faster than those elicited to the constituent unisensory conditions across age groups; findings that could not be accounted for by simple probability summation. Both young and old participants responded the fastest to multisensory pairings containing somatosensory input. Compared to younger adults, older adults demonstrated a significantly greater RT benefit when processing concurrent VS information. In terms of co-activation, older adults demonstrated a significant increase in the magnitude of visual-somatosensory co-activation (i.e., multisensory integration), while younger adults demonstrated a significant increase in the magnitude of auditory-visual and auditory-somatosensory co-activation. This study provides first evidence in support of the facilitative effect of pairing somatosensory with visual stimuli in older adults. PMID:22024545
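    The probability-summation benchmark such studies test against (the race-model bound) can be sketched with simulated reaction times; the distributions below are invented, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20_000

# Hypothetical unisensory reaction-time samples (ms); the means and
# spreads are invented for illustration.
rt_visual = rng.normal(350, 50, n)
rt_somato = rng.normal(330, 50, n)

# Pure probability summation ("race"): on each trial the faster channel wins.
rt_race = np.minimum(rt_visual, rt_somato)

def cdf(rt, t):
    """Empirical probability of responding by time t."""
    return float(np.mean(rt <= t))

# Miller's race-model bound on the multisensory CDF:
# F_VS(t) <= F_V(t) + F_S(t).  Observed multisensory CDFs that exceed
# this bound indicate co-activation rather than a race.
t = 280.0
bound = min(1.0, cdf(rt_visual, t) + cdf(rt_somato, t))
print(f"mean RTs (ms): V {rt_visual.mean():.0f}, S {rt_somato.mean():.0f}, "
      f"race {rt_race.mean():.0f}; race-model bound at {t:.0f} ms: {bound:.3f}")
```

    Note that a race alone already yields faster mean multisensory RTs; the study's co-activation claim rests on CDFs exceeding the bound, not on the speedup itself.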

  1. GUASOM Analysis of the ALHAMBRA Survey

    NASA Astrophysics Data System (ADS)

    Garabato, Daniel; Manteiga, Minia; Dafonte, Carlos; Álvarez, Marco A.

    2017-10-01

    GUASOM is a data mining tool designed for knowledge discovery in large astronomical spectrophotometric archives, developed in the framework of Gaia DPAC (Data Processing and Analysis Consortium). Our tool is based on a type of unsupervised-learning artificial neural network called the self-organizing map (SOM). SOMs permit the grouping and visualization of large amounts of data for which there is no a priori knowledge, and hence they are very useful for analyzing the huge amount of information present in modern spectrophotometric surveys. SOMs are used to organize the information in clusters of objects, as homogeneously as possible according to their spectral energy distributions, and to project them onto a 2D grid where the data structure can be visualized. Each cluster has a representative, called a prototype, which is a virtual pattern that best represents or resembles the set of input patterns belonging to that cluster. Prototypes make it easier to determine the physical nature and properties of the objects populating each cluster. Our algorithm has been tested on the ALHAMBRA survey spectrophotometric observations; here we present our results concerning the survey segmentation, visualization of the data structure, separation between types of objects (stars and galaxies), data homogeneity of neurons, cluster prototypes, redshift distribution and crossmatch with other databases (Simbad).
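    A minimal SOM of the kind described fits in a few dozen lines of NumPy. The toy "spectral energy distributions" below (two classes with opposite spectral slopes), the 6x6 grid, and the learning schedule are all invented for illustration and are far simpler than GUASOM:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy "spectral energy distributions": two object classes with opposite
# spectral slopes, standing in for survey spectrophotometry.
w = np.linspace(0.0, 1.0, 20)
blue = np.array([1.0 - 0.8 * w + 0.05 * rng.standard_normal(20) for _ in range(200)])
red = np.array([0.2 + 0.8 * w + 0.05 * rng.standard_normal(20) for _ in range(200)])
data = np.vstack([blue, red])

# Minimal self-organizing map on a 6x6 grid.
side = 6
weights = rng.random((side * side, 20))
coords = np.array([(i, j) for i in range(side) for j in range(side)], float)

for epoch in range(30):
    lr = 0.5 * (1 - epoch / 30)            # decaying learning rate
    sigma = 3.0 * (1 - epoch / 30) + 0.5   # shrinking neighborhood radius
    for x in rng.permutation(data):
        bmu = np.argmin(((weights - x) ** 2).sum(axis=1))  # best-matching unit
        d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)     # grid distance to BMU
        h = np.exp(-d2 / (2 * sigma**2))                   # neighborhood kernel
        weights += lr * h[:, None] * (x - weights)

# Each unit's weight vector is a cluster "prototype"; well-separated
# classes should map onto disjoint regions of the grid.
bmu_blue = {int(np.argmin(((weights - x) ** 2).sum(axis=1))) for x in blue}
bmu_red = {int(np.argmin(((weights - x) ** 2).sum(axis=1))) for x in red}
print("units shared by the two classes:", len(bmu_blue & bmu_red))
```

    The prototypes (`weights` rows) play the role the abstract describes: virtual patterns resembling the inputs mapped to each unit, which can then be inspected to classify what populates each cluster.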

  2. Assessment of visual disability using the WHO disability assessment scale (WHO-DAS-II): role of gender.

    PubMed

    Badr, H E; Mourad, H

    2009-10-01

    To study the role of gender in coping with disability in young visually impaired students attending two schools for the blind. The WHO Disability Assessment Schedule (WHODAS II), 36-item, interviewer-administered, translated Arabic version, was used. It evaluates six domains of everyday living in the last 30 days. These domains are: understanding and communicating, getting around, self-care, getting along with people, household activities and participation in society. Face-to-face interviews were conducted with 200 students who represented the target population of the study. Binary logistic regression analysis of the scores of the six domains revealed that in all of the domains except getting along with people and coping with school activities, females faced significantly more difficulties in coping with daily life activities than did their male counterparts. Increasing age significantly increased difficulties in coping with school activities. Genetic causes of blindness were associated with increased difficulties. Females face more difficulties in coping with visual disability. Genetic counselling is needed to decrease the prevalence of visual disability. Girls with blindness need additional inputs to help them cope with blindness. Early intervention facilitates dealing with school activities of the visually impaired.

  3. Determining the Effectiveness of Visual Input Enhancement across Multiple Linguistic Cues

    ERIC Educational Resources Information Center

    Comeaux, Ian; McDonald, Janet L.

    2018-01-01

    Visual input enhancement (VIE) increases the salience of grammatical forms, potentially facilitating acquisition through attention mechanisms. Native English speakers were exposed to an artificial language containing four linguistic cues (verb agreement, case marking, animacy, word order), with morphological cues either unmarked, marked in the…

  4. Postural Ataxia in Cerebellar Downbeat Nystagmus: Its Relation to Visual, Proprioceptive and Vestibular Signals and Cerebellar Atrophy.

    PubMed

    Helmchen, Christoph; Kirchhoff, Jan-Birger; Göttlich, Martin; Sprenger, Andreas

    2017-01-01

    The cerebellum integrates proprioceptive, vestibular and visual signals for postural control. Cerebellar patients with downbeat nystagmus (DBN) complain of unsteadiness of stance and gait as well as blurred vision and oscillopsia. The aim of this study was to elucidate the differential roles of visual input, gaze eccentricity, vestibular and proprioceptive input in the postural stability of a large cohort of cerebellar patients with DBN, in comparison to healthy age-matched control subjects. Oculomotor (nystagmus, smooth pursuit eye movements) and postural (postural sway speed) parameters were recorded and related to each other and to volumetric changes of the cerebellum (voxel-based morphometry, SPM). Twenty-seven patients showed larger postural instability in all experimental conditions. Postural sway increased with nystagmus in the eyes-closed condition but not with the eyes open. Romberg's ratio remained stable and was not different from healthy controls. Postural sway did not change with gaze position or graviceptive input. It increased with attenuated proprioceptive input and on tandem stance in both groups, but Romberg's ratio again did not differ. Cerebellar atrophy (vermal lobules VI, VIII) correlated with the severity of impaired smooth pursuit eye movements of DBN patients. Postural ataxia of cerebellar patients with DBN cannot be explained by impaired visual feedback. Despite oscillopsia, visual feedback control of posture seems to be preserved, as postural sway was greatest under visual deprivation. The increase in postural ataxia is neither related to modulations of single components characterizing nystagmus nor to deprivation of single sensory (visual, proprioceptive) inputs usually stabilizing stance. Re-weighting of multisensory signals and/or inappropriate cerebellar motor commands might account for this postural ataxia.

  5. Postural Ataxia in Cerebellar Downbeat Nystagmus: Its Relation to Visual, Proprioceptive and Vestibular Signals and Cerebellar Atrophy

    PubMed Central

    Helmchen, Christoph; Kirchhoff, Jan-Birger; Göttlich, Martin; Sprenger, Andreas

    2017-01-01

    Background The cerebellum integrates proprioceptive, vestibular and visual signals for postural control. Cerebellar patients with downbeat nystagmus (DBN) complain of unsteadiness of stance and gait as well as blurred vision and oscillopsia. Objectives The aim of this study was to elucidate the differential roles of visual input, gaze eccentricity, vestibular and proprioceptive input in the postural stability of a large cohort of cerebellar patients with DBN, in comparison to healthy age-matched control subjects. Methods Oculomotor (nystagmus, smooth pursuit eye movements) and postural (postural sway speed) parameters were recorded and related to each other and to volumetric changes of the cerebellum (voxel-based morphometry, SPM). Results Twenty-seven patients showed larger postural instability in all experimental conditions. Postural sway increased with nystagmus in the eyes-closed condition but not with the eyes open. Romberg’s ratio remained stable and was not different from healthy controls. Postural sway did not change with gaze position or graviceptive input. It increased with attenuated proprioceptive input and on tandem stance in both groups, but Romberg’s ratio again did not differ. Cerebellar atrophy (vermal lobules VI, VIII) correlated with the severity of impaired smooth pursuit eye movements of DBN patients. Conclusions Postural ataxia of cerebellar patients with DBN cannot be explained by impaired visual feedback. Despite oscillopsia, visual feedback control of posture seems to be preserved, as postural sway was greatest under visual deprivation. The increase in postural ataxia is neither related to modulations of single components characterizing nystagmus nor to deprivation of single sensory (visual, proprioceptive) inputs usually stabilizing stance. Re-weighting of multisensory signals and/or inappropriate cerebellar motor commands might account for this postural ataxia. PMID:28056109
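    Romberg's ratio, on which the preserved-visual-feedback argument rests, is simply eyes-closed sway divided by eyes-open sway; a ratio in the normal range despite DBN implies intact visual stabilization. The sway speeds below are simulated for illustration, not the patients' data:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical postural sway speeds (mm/s) for 27 subjects; values are
# simulated for illustration only.
sway_eyes_open = 8.0 + rng.standard_normal(27)
sway_eyes_closed = 12.0 + rng.standard_normal(27)

# Romberg's ratio: how much removing vision destabilizes stance.
romberg = sway_eyes_closed.mean() / sway_eyes_open.mean()
print(f"Romberg's ratio: {romberg:.2f}")
```

    The study's comparison is between this ratio in patients and in healthy controls: elevated absolute sway with an unchanged ratio points away from a visual-feedback deficit.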

  6. Illusory Motion Reproduced by Deep Neural Networks Trained for Prediction

    PubMed Central

    Watanabe, Eiji; Kitaoka, Akiyoshi; Sakamoto, Kiwako; Yasugi, Masaki; Tanaka, Kenta

    2018-01-01

    The cerebral cortex predicts visual motion to adapt human behavior to surrounding objects moving in real time. Although the underlying mechanisms are still unknown, predictive coding is one of the leading theories. Predictive coding assumes that the brain's internal models (which are acquired through learning) predict the visual world at all times and that errors between the prediction and the actual sensory input further refine the internal models. In the past year, deep neural networks based on predictive coding were reported, including a video prediction machine called PredNet. If the theory substantially reproduces the visual information processing of the cerebral cortex, then PredNet can be expected to represent the human visual perception of motion. In this study, PredNet was trained with natural scene videos of the self-motion of the viewer, and the motion prediction ability of the obtained computer model was verified using unlearned videos. We found that the computer model accurately predicted the magnitude and direction of motion of a rotating propeller in unlearned videos. Surprisingly, it also represented the rotational motion for illusion images that were not moving physically, much like human visual perception. While the trained network accurately reproduced the direction of illusory rotation, it did not detect motion components in negative control pictures wherein people do not perceive illusory motion. This research supports the exciting idea that the mechanism assumed by the predictive coding theory is one of the bases of motion illusion generation. Using sensory illusions as indicators of human perception, deep neural networks are expected to contribute significantly to the development of brain research. PMID:29599739

  7. Illusory Motion Reproduced by Deep Neural Networks Trained for Prediction.

    PubMed

    Watanabe, Eiji; Kitaoka, Akiyoshi; Sakamoto, Kiwako; Yasugi, Masaki; Tanaka, Kenta

    2018-01-01

    The cerebral cortex predicts visual motion to adapt human behavior to surrounding objects moving in real time. Although the underlying mechanisms are still unknown, predictive coding is one of the leading theories. Predictive coding assumes that the brain's internal models (which are acquired through learning) predict the visual world at all times and that errors between the prediction and the actual sensory input further refine the internal models. In the past year, deep neural networks based on predictive coding were reported, including a video prediction machine called PredNet. If the theory substantially reproduces the visual information processing of the cerebral cortex, then PredNet can be expected to represent the human visual perception of motion. In this study, PredNet was trained with natural scene videos of the self-motion of the viewer, and the motion prediction ability of the obtained computer model was verified using unlearned videos. We found that the computer model accurately predicted the magnitude and direction of motion of a rotating propeller in unlearned videos. Surprisingly, it also represented the rotational motion for illusion images that were not moving physically, much like human visual perception. While the trained network accurately reproduced the direction of illusory rotation, it did not detect motion components in negative control pictures wherein people do not perceive illusory motion. This research supports the exciting idea that the mechanism assumed by the predictive coding theory is one of the bases of motion illusion generation. Using sensory illusions as indicators of human perception, deep neural networks are expected to contribute significantly to the development of brain research.
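    The predictive-coding loop the abstract relies on (predictions flow down, errors flow up and refine both state and model) can be illustrated with a minimal linear toy, vastly simpler than PredNet; the dimensions and learning rates are invented:

```python
import numpy as np

rng = np.random.default_rng(8)

# Minimal linear predictive-coding sketch (a toy, not PredNet): an
# internal model W predicts the input, and the prediction error refines
# both the latent state r and the model itself.
W = 0.1 * rng.standard_normal((16, 4))  # internal generative model
x = rng.standard_normal(16)             # sensory input
r = np.zeros(4)                         # latent representation

errors = []
for step in range(1000):
    prediction = W @ r                  # top-down prediction
    error = x - prediction              # bottom-up prediction error
    r += 0.1 * (W.T @ error)            # error refines the state estimate...
    W += 0.01 * np.outer(error, r)      # ...and refines the internal model
    errors.append(float(np.linalg.norm(error)))

print(f"prediction error: {errors[0]:.2f} -> {errors[-1]:.2f}")
```

    Both updates descend the squared prediction error, so the error shrinks over iterations; PredNet stacks many such error-computing layers with convolutional and recurrent units.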

  8. Properties of V1 Neurons Tuned to Conjunctions of Visual Features: Application of the V1 Saliency Hypothesis to Visual Search behavior

    PubMed Central

    Zhaoping, Li; Zhe, Li

    2012-01-01

    From a computational theory of V1, we formulate an optimization problem to investigate neural properties in the primary visual cortex (V1) from human reaction times (RTs) in visual search. The theory is the V1 saliency hypothesis that the bottom-up saliency of any visual location is represented by the highest V1 response to it relative to the background responses. The neural properties probed are those associated with the less known V1 neurons tuned simultaneously or conjunctively in two feature dimensions. The visual search is to find a target bar unique in color (C), orientation (O), motion direction (M), or redundantly in combinations of these features (e.g., CO, MO, or CM) among uniform background bars. A feature singleton target is salient because its evoked V1 response largely escapes the iso-feature suppression on responses to the background bars. The responses of the conjunctively tuned cells are manifested in the shortening of the RT for a redundant feature target (e.g., a CO target) from that predicted by a race between the RTs for the two corresponding single feature targets (e.g., C and O targets). Our investigation enables the following testable predictions. Contextual suppression on the response of a CO-tuned or MO-tuned conjunctive cell is weaker when the contextual inputs differ from the direct inputs in both feature dimensions, rather than just one. Additionally, CO-tuned cells and MO-tuned cells are often more active than the single feature tuned cells in response to the redundant feature targets, and this occurs more frequently for the MO-tuned cells such that the MO-tuned cells are no less likely than either the M-tuned or O-tuned neurons to be the most responsive neuron to dictate saliency for an MO target. PMID:22719829

  9. Properties of V1 neurons tuned to conjunctions of visual features: application of the V1 saliency hypothesis to visual search behavior.

    PubMed

    Zhaoping, Li; Zhe, Li

    2012-01-01

    From a computational theory of V1, we formulate an optimization problem to investigate neural properties in the primary visual cortex (V1) from human reaction times (RTs) in visual search. The theory is the V1 saliency hypothesis that the bottom-up saliency of any visual location is represented by the highest V1 response to it relative to the background responses. The neural properties probed are those associated with the less known V1 neurons tuned simultaneously or conjunctively in two feature dimensions. The visual search is to find a target bar unique in color (C), orientation (O), motion direction (M), or redundantly in combinations of these features (e.g., CO, MO, or CM) among uniform background bars. A feature singleton target is salient because its evoked V1 response largely escapes the iso-feature suppression on responses to the background bars. The responses of the conjunctively tuned cells are manifested in the shortening of the RT for a redundant feature target (e.g., a CO target) from that predicted by a race between the RTs for the two corresponding single feature targets (e.g., C and O targets). Our investigation enables the following testable predictions. Contextual suppression on the response of a CO-tuned or MO-tuned conjunctive cell is weaker when the contextual inputs differ from the direct inputs in both feature dimensions, rather than just one. Additionally, CO-tuned cells and MO-tuned cells are often more active than the single feature tuned cells in response to the redundant feature targets, and this occurs more frequently for the MO-tuned cells such that the MO-tuned cells are no less likely than either the M-tuned or O-tuned neurons to be the most responsive neuron to dictate saliency for an MO target.
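    The core saliency rule, the highest response at a location relative to background after iso-feature suppression, can be illustrated with a deterministic toy display; the suppression strength (0.1 per same-feature neighbor) is invented:

```python
import numpy as np

# Toy display: a 5x5 array of bars sharing one orientation, except a
# unique-feature target in the center.
n = 5
feature = np.zeros((n, n), dtype=int)
feature[2, 2] = 1  # target bar with a unique orientation

def v1_response(i, j):
    """Raw response minus iso-feature suppression from like neighbors."""
    suppression = 0.0
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            ni, nj = i + di, j + dj
            if (di, dj) != (0, 0) and 0 <= ni < n and 0 <= nj < n:
                if feature[ni, nj] == feature[i, j]:
                    suppression += 0.1  # invented suppression strength
    return 1.0 - suppression

# Saliency of a location = highest V1 response there relative to background.
saliency = np.array([[v1_response(i, j) for j in range(n)] for i in range(n)])
peak = np.unravel_index(saliency.argmax(), saliency.shape)
print("most salient location:", peak)  # the unique target escapes suppression
```

    A redundant-feature target (e.g., CO) would escape suppression in two feature dimensions at once, which is where the conjunctively tuned cells probed by the study come in.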

  10. Fine and distributed subcellular retinotopy of excitatory inputs to the dendritic tree of a collision-detecting neuron

    PubMed Central

    Zhu, Ying

    2016-01-01

    Individual neurons in several sensory systems receive synaptic inputs organized according to subcellular topographic maps, yet the fine structure of this topographic organization and its relation to dendritic morphology have not been studied in detail. Subcellular topography is expected to play a role in dendritic integration, particularly when dendrites are extended and active. The lobula giant movement detector (LGMD) neuron in the locust visual system is known to receive topographic excitatory inputs on part of its dendritic tree. The LGMD responds preferentially to objects approaching on a collision course and is thought to implement several interesting dendritic computations. To study the fine retinotopic mapping of visual inputs onto the excitatory dendrites of the LGMD, we designed a custom microscope allowing visual stimulation at the native sampling resolution of the locust compound eye while simultaneously performing two-photon calcium imaging on excitatory dendrites. We show that the LGMD receives a distributed, fine retinotopic projection from the eye facets and that adjacent facets activate overlapping portions of the same dendritic branches. We also demonstrate that adjacent retinal inputs most likely make independent synapses on the excitatory dendrites of the LGMD. Finally, we show that the fine topographic mapping can be studied using dynamic visual stimuli. Our results reveal the detailed structure of the dendritic input originating from individual facets on the eye and their relation to that of adjacent facets. The mapping of visual space onto the LGMD's dendrites is expected to have implications for dendritic computation. PMID:27009157

  11. Modeling the Effects of Perceptual Load: Saliency, Competitive Interactions, and Top-Down Biases.

    PubMed

    Neokleous, Kleanthis; Shimi, Andria; Avraamides, Marios N

    2016-01-01

    A computational model of visual selective attention has been implemented to account for experimental findings on the Perceptual Load Theory (PLT) of attention. The model was designed based on existing neurophysiological findings on attentional processes, with the objective of offering an explicit and biologically plausible formulation of PLT. Simulation results verified that the proposed model is capable of capturing the basic pattern of results that support the PLT as well as findings that are considered contradictory to the theory. Importantly, the model is able to reproduce the behavioral results from a dilution experiment, thus providing a way to reconcile PLT with the competing Dilution account. Overall, the model presents a novel account for explaining PLT effects on the basis of the low-level competitive interactions among neurons that represent visual input and the top-down signals that modulate neural activity. The implications of the model concerning the debate on the locus of selective attention as well as the origins of distractor interference in visual displays of varying load are discussed.

  12. Alpha oscillations correlate with the successful inhibition of unattended stimuli.

    PubMed

    Händel, Barbara F; Haarmeier, Thomas; Jensen, Ole

    2011-09-01

    Because the human visual system is continually being bombarded with inputs, it is necessary to have effective mechanisms for filtering out irrelevant information. This is partly achieved by the allocation of attention, allowing the visual system to process relevant input while blocking out irrelevant input. What is the physiological substrate of attentional allocation? It has been proposed that alpha activity reflects functional inhibition. Here we asked if inhibition by alpha oscillations has behavioral consequences for suppressing the perception of unattended input. To this end, we investigated the influence of alpha activity on motion processing in two attentional conditions using magneto-encephalography. The visual stimuli used consisted of two random-dot kinematograms presented simultaneously to the left and right visual hemifields. Subjects were cued to covertly attend the left or right kinematogram. After 1.5 sec, a second cue tested whether subjects could report the direction of coherent motion in the attended (80%) or unattended hemifield (20%). Occipital alpha power was higher contralateral to the unattended side than to the attended side, thus suggesting inhibition of the unattended hemifield. Our key finding is that this alpha lateralization in the 20% invalidly cued trials did correlate with the perception of motion direction: Subjects with pronounced alpha lateralization were worse at detecting motion direction in the unattended hemifield. In contrast, lateralization did not correlate with visual discrimination in the attended visual hemifield. Our findings emphasize the suppressive nature of alpha oscillations and suggest that processing of inputs outside the field of attention is weakened by means of increased alpha activity.
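    The study's key correlation can be sketched with a standard alpha lateralization index computed per subject. All power values and the behavioral relationship below are simulated for illustration; only the index formula is standard:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical per-subject occipital alpha power (arbitrary units)
# contralateral vs. ipsilateral to the unattended hemifield; all values
# are simulated, not MEG data.
n_subjects = 50
contra = 1.0 + 0.5 * rng.random(n_subjects)  # elevated alpha: inhibition
ipsi = 0.8 + 0.3 * rng.random(n_subjects)

# A common alpha lateralization index (ALI).
ali = (contra - ipsi) / (contra + ipsi)

# Simulated behavior: stronger lateralization -> poorer motion
# discrimination in the unattended hemifield (the reported correlation).
accuracy = 0.9 - 0.5 * ali + 0.03 * rng.standard_normal(n_subjects)
r = np.corrcoef(ali, accuracy)[0, 1]
print(f"ALI vs. unattended-hemifield accuracy: r = {r:+.2f}")
```

    A negative across-subject correlation of this kind is what licenses the interpretation of alpha as functional inhibition rather than idling.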

  13. Neural network system for purposeful behavior based on foveal visual preprocessor

    NASA Astrophysics Data System (ADS)

    Golovan, Alexander V.; Shevtsova, Natalia A.; Klepatch, Arkadi A.

    1996-10-01

    A biologically plausible model of a system exhibiting adaptive behavior in an a priori unknown environment and resistant to impairment has been developed. The system consists of input, learning, and output subsystems. The first subsystem classifies input patterns, presented as n-dimensional vectors, in accordance with some associative rule. The second, a neural network, determines adaptive responses of the system to input patterns. Arranged neural groups coding possible input patterns and appropriate output responses are formed during learning by means of negative reinforcement. The output subsystem maps neural network activity into the system's behavior in the environment. The system developed has been studied by computer simulation imitating the collision-free motion of a mobile robot. After some learning period the system 'moves' along a road without collisions. It is shown that in spite of impairment of some neural network elements the system functions reliably after relearning. A foveal visual preprocessor model developed earlier has been tested to provide a form of visual input to the system.

  14. Closed head injury and perceptual processing in dual-task situations.

    PubMed

    Hein, G; Schubert, T; von Cramon, D Y

    2005-01-01

    Using a classical psychological refractory period (PRP) paradigm we investigated whether increased interference between dual-task input processes is one possible source of dual-task deficits in patients with closed-head injury (CHI). Patients and age-matched controls were asked to give speeded motor reactions to an auditory and a visual stimulus. The perceptual difficulty of the visual stimulus was manipulated by varying its intensity. The results of Experiment 1 showed that CHI patients suffer from increased interference between dual-task input processes, which is related to the salience of the visual stimulus. A second experiment indicated that this input interference may be specific to brain damage following CHI. It is not evident in other groups of neurological patients like Parkinson's disease patients. We conclude that the non-interfering processing of input stages in dual-tasks requires cognitive control. A decline in the control of input processes should be considered as one source of dual-task deficits in CHI patients.
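    The logic behind manipulating perceptual difficulty in a PRP design can be sketched with the standard central-bottleneck model: task 2's central stage must wait for task 1's, so at short SOAs extra perceptual difficulty on stimulus 2 is absorbed into the slack (the healthy-control pattern), while interference between input stages would make it propagate. All stage durations are invented:

```python
# Central-bottleneck sketch of the PRP paradigm (stage durations invented).
def rt2(soa, pre1=100, central1=150, pre2=100, central2=150, motor2=100):
    """RT to the second stimulus, measured from its onset (ms)."""
    # Task 2's central stage starts only when its perception is done AND
    # task 1 has released the central bottleneck.
    central2_start = max(soa + pre2, pre1 + central1)
    return central2_start + central2 + motor2 - soa

for soa in (50, 150, 300, 600):
    print(f"SOA {soa:>3} ms -> RT2 {rt2(soa):.0f} ms")

# At a short SOA, a harder S2 percept (pre2: 100 -> 160 ms) is absorbed
# into the bottleneck slack; at a long SOA it adds fully to RT2.
print("short SOA, harder S2:", rt2(50, pre2=160), "vs", rt2(50))
print("long SOA, harder S2:", rt2(600, pre2=160), "vs", rt2(600))
```

    The underadditive pattern at short SOAs is the benchmark against which CHI patients' input interference shows up as a deviation.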

  15. Altered transfer of visual motion information to parietal association cortex in untreated first-episode psychosis: Implications for pursuit eye tracking

    PubMed Central

    Lencer, Rebekka; Keedy, Sarah K.; Reilly, James L.; McDonough, Bruce E.; Harris, Margret S. H.; Sprenger, Andreas; Sweeney, John A.

    2011-01-01

    Visual motion processing and its use for pursuit eye movement control represent a valuable model for studying the use of sensory input for action planning. In psychotic disorders, alterations of visual motion perception have been suggested to cause pursuit eye tracking deficits. We evaluated this system in functional neuroimaging studies of untreated first-episode schizophrenia (N=24), psychotic bipolar disorder patients (N=13) and healthy controls (N=20). During a passive visual motion processing task, both patient groups showed reduced activation in the posterior parietal projection fields of motion-sensitive extrastriate area V5, but not in V5 itself. This suggests reduced bottom-up transfer of visual motion information from extrastriate cortex to perceptual systems in parietal association cortex. During active pursuit, activation was enhanced in anterior intraparietal sulcus and insula in both patient groups, and in dorsolateral prefrontal cortex and dorsomedial thalamus in schizophrenia patients. This may result from increased demands on sensorimotor systems for pursuit control due to the limited availability of perceptual motion information about target speed and tracking error. Visual motion information transfer deficits to higher-level association cortex may contribute to well-established pursuit tracking abnormalities, and perhaps to a wider array of alterations in perception and action planning in psychotic disorders. PMID:21873035

  16. Correspondence between visual and electrical input filters of ON and OFF mouse retinal ganglion cells

    NASA Astrophysics Data System (ADS)

    Sekhar, S.; Jalligampala, A.; Zrenner, E.; Rathbun, D. L.

    2017-08-01

    Objective. Over the past two decades retinal prostheses have made major strides in restoring functional vision to patients blinded by diseases such as retinitis pigmentosa. Presently, implants use single pulses to activate the retina. Though this stimulation paradigm has proved beneficial to patients, an unresolved problem is the inability to selectively stimulate the on and off visual pathways. To this end, our goal was to test, using white-noise, voltage-controlled, cathodic, monophasic pulse stimulation, whether different retinal ganglion cell (RGC) types in the wild-type retina have different electrical input filters. This is an important precursor to addressing pathway-selective stimulation. Approach. Using full-field visual flash and electrical and visual Gaussian noise stimulation, combined with the technique of spike-triggered averaging (STA), we calculate the electrical and visual input filters for different types of RGCs (classified as on, off, or on-off based on their response to the flash stimuli). Main results. Examining the STAs, we found that the spiking activity of on cells during electrical stimulation correlates with a decrease in the voltage magnitude preceding a spike, while the spiking activity of off cells correlates with an increase in the voltage preceding a spike. No electrical preference was found for on-off cells. Comparing STAs of wild-type and rd10 mice revealed narrower electrical STA deflections with shorter latencies in rd10. Significance. This study is the first comparison of visual cell types and their corresponding temporal electrical input filters in the retina. The altered input filters in degenerated rd10 retinas are consistent with photoreceptor stimulation underlying visual type-specific electrical STA shapes in the wild-type retina. It is therefore conceivable that existing implants could target partially degenerated photoreceptors that have lost only their outer segments, but not their somas, to selectively activate the on and off visual pathways.
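    The spike-triggered averaging technique named in this abstract is simple to illustrate. Below is a minimal, hypothetical sketch (a toy one-dimensional white-noise stimulus and an artificial "cell", not the study's stimulation protocol): the STA is just the mean of the stimulus segments preceding each spike, and it recovers the toy cell's latency.

    ```python
    import numpy as np

    def spike_triggered_average(stimulus, spike_times, window=30):
        """Average the stimulus segments that precede each spike."""
        segments = [stimulus[t - window:t]
                    for t in spike_times if t >= window]  # skip too-early spikes
        return np.mean(segments, axis=0)

    # Toy "cell": fires 5 frames after the stimulus exceeds 1.0
    rng = np.random.default_rng(0)
    stim = rng.standard_normal(10_000)
    spikes = np.where(stim > 1.0)[0] + 5
    spikes = spikes[spikes < len(stim)]

    sta = spike_triggered_average(stim, spikes, window=30)
    # The recovered filter peaks 5 frames before the spike, i.e. at index 25
    print(int(np.argmax(sta)))  # → 25
    ```

    In the study, the same averaging is applied to electrical pulse amplitudes rather than a one-dimensional visual signal, yielding the "electrical input filter" of each cell type.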

  17. The Comparison of Visual Working Memory Representations with Perceptual Inputs

    PubMed Central

    Hyun, Joo-seok; Woodman, Geoffrey F.; Vogel, Edward K.; Hollingworth, Andrew

    2008-01-01

    The human visual system can notice differences between memories of previous visual inputs and perceptions of new visual inputs, but the comparison process that detects these differences has not been well characterized. This study tests the hypothesis that differences between the memory of a stimulus array and the perception of a new array are detected in a manner that is analogous to the detection of simple features in visual search tasks. That is, just as the presence of a task-relevant feature in visual search can be detected in parallel, triggering a rapid shift of attention to the object containing the feature, the presence of a memory-percept difference along a task-relevant dimension can be detected in parallel, triggering a rapid shift of attention to the changed object. Supporting evidence was obtained in a series of experiments that examined manual reaction times, saccadic reaction times, and event-related potential latencies. However, these experiments also demonstrated that a slow, limited-capacity process must occur before the observer can make a manual change-detection response. PMID:19653755

  18. Influence of callosal transfer on visual cortical evoked response and the implication in the development of a visual prosthesis.

    PubMed

    Siu, Timothy L; Morley, John W

    2007-12-01

    The development of a visual prosthesis has been limited by an incomplete understanding of functional changes of the visual cortex accompanying deafferentation. In particular, the role of the corpus callosum in modulating these changes has not been fully evaluated. Recent experimental evidence suggests that through synaptic modulation, short-term (4-5 days) visual deafferentation can induce plastic changes in the visual cortex, leading to adaptive enhancement of residual visual input. We therefore investigated whether a compensatory rerouting of visual information can occur via the indirect transcallosal linkage after deafferentation and the influence of this interhemispheric communication on the visual evoked response of each hemisphere. In albino rabbits, misrouting of uncrossed optic fibres reduces ipsilateral input to a negligible degree. We thus took advantage of this congenital anomaly to model unilateral cortical and ocular deafferentation by eliminating visual input from one eye and recorded the visual evoked potential (VEP) from the intact eye. In keeping with the chiasmal anomaly, no VEP was elicited from the hemisphere ipsilateral to the intact eye. This remained unchanged following unilateral visual deafferentation. The amplitude and latency of the VEP in the fellow hemisphere, however, were significantly decreased in the deafferented animals. Our data suggest that callosal linkage does not contribute to visual evoked responses and this is not changed after short-term deafferentation. The decrease in amplitude and latency of evoked responses in the hemisphere ipsilateral to the treated eye, however, confirms the facilitatory role of callosal transfer. This observation highlights the importance of bicortical stimulation in the future design of a cortical visual prosthesis.

  19. On the Visual Input Driving Human Smooth-Pursuit Eye Movements

    NASA Technical Reports Server (NTRS)

    Stone, Leland S.; Beutter, Brent R.; Lorenceau, Jean

    1996-01-01

    Current computational models of smooth-pursuit eye movements assume that the primary visual input is local retinal-image motion (often referred to as retinal slip). However, we show that humans can pursue object motion with considerable accuracy, even in the presence of conflicting local image motion. This finding indicates that the visual cortical area(s) controlling pursuit must be able to perform a spatio-temporal integration of local image motion into a signal related to object motion. We also provide evidence that the object-motion signal that drives pursuit is related to the signal that supports perception. We conclude that current models of pursuit should be modified to include a visual input that encodes perceived object motion and not merely retinal image motion. Finally, our findings suggest that the measurement of eye movements can be used to monitor visual perception, with particular value in applied settings as this non-intrusive approach would not require interrupting ongoing work or training.

  20. How does visual manipulation affect obstacle avoidance strategies used by athletes?

    PubMed

    Bijman, M P; Fisher, J J; Vallis, L A

    2016-01-01

    Research examining our ability to avoid obstacles in our path has stressed the importance of visual input. The aim of this study was to determine whether athletes playing varsity-level field sports, who rely on visual input to guide motor behaviour, are better able to guide their foot over obstacles than recreational individuals. While wearing kinematic markers, eight varsity athletes and eight age-matched controls (aged 18-25) walked along a walkway and stepped over stationary obstacles (180° motion arc). Visual input was manipulated using PLATO visual goggles two or three steps before obstacle crossing, and these trials were compared to trials in which vision was available throughout. A main effect between groups for peak trail toe elevation was shown, with greater values generated by the controls for all crossing conditions during full vision trials only. This may be interpreted as the athletes not perceiving this obstacle as an increased threat to their postural stability. Collectively, the findings suggest that the athletic group is able to transfer its abilities to non-specific conditions during full vision trials; however, the varsity-level athletes were just as reliant on visual cues in these visually guided stepping tasks, as their performance was similar to that of the controls when vision was removed.

  1. Visual and Auditory Input in Second-Language Speech Processing

    ERIC Educational Resources Information Center

    Hardison, Debra M.

    2010-01-01

    The majority of studies in second-language (L2) speech processing have involved unimodal (i.e., auditory) input; however, in many instances, speech communication involves both visual and auditory sources of information. Some researchers have argued that multimodal speech is the primary mode of speech perception (e.g., Rosenblum 2005). Research on…

  2. Multisensory connections of monkey auditory cerebral cortex

    PubMed Central

    Smiley, John F.; Falchier, Arnaud

    2009-01-01

    Functional studies have demonstrated multisensory responses in auditory cortex, even in the primary and early auditory association areas. The features of somatosensory and visual responses in auditory cortex suggest that they are involved in multiple processes including spatial, temporal and object-related perception. Tract tracing studies in monkeys have demonstrated several potential sources of somatosensory and visual inputs to auditory cortex. These include potential somatosensory inputs from the retroinsular (RI) and granular insula (Ig) cortical areas, and from the thalamic posterior (PO) nucleus. Potential sources of visual responses include peripheral field representations of areas V2 and prostriata, as well as the superior temporal polysensory area (STP) in the superior temporal sulcus, and the magnocellular medial geniculate thalamic nucleus (MGm). Besides these sources, there are several other thalamic, limbic and cortical association structures that have multisensory responses and may contribute cross-modal inputs to auditory cortex. These connections demonstrated by tract tracing provide a list of potential inputs, but in most cases their significance has not been confirmed by functional experiments. It is possible that the somatosensory and visual modulation of auditory cortex are each mediated by multiple extrinsic sources. PMID:19619628

  3. The origins of metamodality in visual object area LO: Bodily topographical biases and increased functional connectivity to S1

    PubMed Central

    Tal, Zohar; Geva, Ran; Amedi, Amir

    2016-01-01

    Recent evidence from blind participants suggests that visual areas are task-oriented and independent of sensory input modality rather than specific to vision. Specifically, visual areas are thought to retain their functional selectivity when processing non-visual inputs (touch or sound) even without any visual experience. However, this theory is still controversial, since it is not clear whether this also characterizes the sighted brain, and whether the reported results in the sighted reflect fundamental amodal processes or are largely an epiphenomenon. In the current study, we addressed these questions using a series of fMRI experiments aimed at exploring visual cortex responses to passive touch on various body parts, and the coupling between the parietal and visual cortices as manifested by functional connectivity. We show that passive touch robustly activated the object-selective parts of the lateral-occipital (LO) cortex while deactivating almost all other occipital retinotopic areas. Furthermore, passive touch responses in the visual cortex were specific to hand and upper-trunk stimulations. Psychophysiological interaction (PPI) analysis suggests that LO is functionally connected to the hand area in the primary somatosensory homunculus (S1) during hand and shoulder stimulations, but not during stimulation of any of the other body parts. We suggest that LO is a fundamental hub that serves as a node between visual object-selective areas and the S1 hand representation, probably due to the critical evolutionary role of touch in object recognition and manipulation. These results might also point to a more general principle: recruitment or deactivation of the visual cortex by other sensory input depends on the ecological relevance of the information conveyed by this input to the task/computations carried out by each area or network. This is likely to rely on the unique and differential pattern of connectivity of each visual area with the rest of the brain. PMID:26673114

  4. Static and dynamic views of visual cortical organization.

    PubMed

    Casagrande, Vivien A; Xu, Xiangmin; Sáry, Gyula

    2002-01-01

    Without the aid of modern techniques Cajal speculated that cells in the visual cortex were connected in circuits. From Cajal's time until fairly recently, the flow of information within the cells and circuits of visual cortex has been described as progressing from input to output, from sensation to action. In this chapter we argue that a paradigm shift in our concept of the visual cortical neuron is under way. The most important change in our view concerns the neuron's functional role. Visual cortical neurons do not have static functional signatures but instead function dynamically depending on the ongoing activity of the networks to which they belong. These networks are not merely top-down or bottom-up unidirectional transmission lines, but rather represent machinery that uses recurrent information and is dynamic and highly adaptable. With the advancement of technology for analyzing the conversations of multiple neurons at many levels in the visual system and higher resolution imaging, we predict that the paradigm shift will progress to the point where neurons are no longer viewed as independent processing units but as members of subsets of networks where their role is mapped in space-time coordinates in relationship to the other neuronal members. This view moves us far from Cajal's original views of the neuron. Nevertheless, we believe that understanding the basic morphology and wiring of networks will continue to contribute to our overall understanding of the visual cortex.

  5. Adaptation to sensory input tunes visual cortex to criticality

    NASA Astrophysics Data System (ADS)

    Shew, Woodrow L.; Clawson, Wesley P.; Pobst, Jeff; Karimipanah, Yahya; Wright, Nathaniel C.; Wessel, Ralf

    2015-08-01

    A long-standing hypothesis at the interface of physics and neuroscience is that neural networks self-organize to the critical point of a phase transition, thereby optimizing aspects of sensory information processing. This idea is partially supported by strong evidence for critical dynamics observed in the cerebral cortex, but the impact of sensory input on these dynamics is largely unknown. Thus, the foundations of this hypothesis--the self-organization process and how it manifests during strong sensory input--remain unstudied experimentally. Here we show in visual cortex and in a computational model that strong sensory input initially elicits cortical network dynamics that are not critical, but adaptive changes in the network rapidly tune the system to criticality. This conclusion is based on observations of multifaceted scaling laws predicted to occur at criticality. Our findings establish sensory adaptation as a self-organizing mechanism that maintains criticality in visual cortex during sensory information processing.

  6. Impact of enhanced sensory input on treadmill step frequency: infants born with myelomeningocele.

    PubMed

    Pantall, Annette; Teulier, Caroline; Smith, Beth A; Moerchen, Victoria; Ulrich, Beverly D

    2011-01-01

    To determine the effect of enhanced sensory input on the step frequency of infants with myelomeningocele (MMC) when supported on a motorized treadmill. Twenty-seven infants aged 2 to 10 months with MMC lesions at, or caudal to, L1 participated. We supported infants upright on the treadmill for 2 sets of 6 trials, each 30 seconds long. Enhanced sensory inputs within each set were presented in random order and included baseline, visual flow, unloading, weights, Velcro, and friction. Overall, friction and visual flow significantly increased step rate, particularly for the older subjects. Friction and Velcro increased stance-phase duration. Enhanced sensory input had minimal effect on leg activity when infants were not stepping. Increased friction via Dycem and enhanced visual flow via a checkerboard pattern on the treadmill belt appear to be more effective than the traditional smooth black belt surface for eliciting stepping patterns in infants with MMC.

  7. Impact of Enhanced Sensory Input on Treadmill Step Frequency: Infants Born With Myelomeningocele

    PubMed Central

    Pantall, Annette; Teulier, Caroline; Smith, Beth A; Moerchen, Victoria; Ulrich, Beverly D.

    2012-01-01

    Purpose To determine the effect of enhanced sensory input on the step frequency of infants with myelomeningocele (MMC) when supported on a motorized treadmill. Methods Twenty-seven infants aged 2 to 10 months with MMC lesions at or caudal to L1 participated. We supported infants upright on the treadmill for 2 sets of 6 trials, each 30 seconds long. Enhanced sensory inputs within each set were presented in random order and included: baseline, visual flow, unloading, weights, Velcro, and friction. Results Overall, friction and visual flow significantly increased step rate, particularly for the older group. Friction and Velcro increased stance-phase duration. Enhanced sensory input had minimal effect on leg activity when infants were not stepping. Conclusions Increased friction via Dycem and enhanced visual flow via a checkerboard pattern on the treadmill belt appear to be more effective than the traditional smooth black belt surface for eliciting stepping patterns in infants with MMC. PMID:21266940

  8. Anatomy and physiology of the afferent visual system.

    PubMed

    Prasad, Sashank; Galetta, Steven L

    2011-01-01

    The efficient organization of the human afferent visual system meets enormous computational challenges. Once visual information is received by the eye, the signal is relayed by the retina, optic nerve, chiasm, tracts, lateral geniculate nucleus, and optic radiations to the striate cortex and extrastriate association cortices for final visual processing. At each stage, the functional organization of these circuits is derived from their anatomical and structural relationships. In the retina, photoreceptors convert photons of light to an electrochemical signal that is relayed to retinal ganglion cells. Ganglion cell axons course through the optic nerve, and their partial decussation in the chiasm brings together corresponding inputs from each eye. Some inputs follow pathways to mediate pupil light reflexes and circadian rhythms. However, the majority of inputs arrive at the lateral geniculate nucleus, which relays visual information via second-order neurons that course through the optic radiations to arrive in striate cortex. Feedback mechanisms from higher cortical areas shape the neuronal responses in early visual areas, supporting coherent visual perception. Detailed knowledge of the anatomy of the afferent visual system, in combination with skilled examination, allows precise localization of neuropathological processes and guides effective diagnosis and management of neuro-ophthalmic disorders. Copyright © 2011 Elsevier B.V. All rights reserved.

  9. The Effects of Audiovisual Inputs on Solving the Cocktail Party Problem in the Human Brain: An fMRI Study.

    PubMed

    Li, Yuanqing; Wang, Fangyi; Chen, Yongbin; Cichocki, Andrzej; Sejnowski, Terrence

    2017-09-25

    At cocktail parties, our brains often simultaneously receive visual and auditory information. Although the cocktail party problem has been widely investigated under auditory-only settings, the effects of audiovisual inputs have not. This study explored the effects of audiovisual inputs in a simulated cocktail party. In our fMRI experiment, each congruent audiovisual stimulus was a synthesis of 2 facial movie clips, each of which could be classified into 1 of 2 emotion categories (crying and laughing). Visual-only (faces) and auditory-only stimuli (voices) were created by extracting the visual and auditory contents from the synthesized audiovisual stimuli. Subjects were instructed to selectively attend to 1 of the 2 objects contained in each stimulus and to judge its emotion category in the visual-only, auditory-only, and audiovisual conditions. The neural representations of the emotion features were assessed by calculating decoding accuracy and brain pattern-related reproducibility index based on the fMRI data. We compared the audiovisual condition with the visual-only and auditory-only conditions and found that audiovisual inputs enhanced the neural representations of emotion features of the attended objects instead of the unattended objects. This enhancement might partially explain the benefits of audiovisual inputs for the brain to solve the cocktail party problem. © The Author 2017. Published by Oxford University Press. All rights reserved.

  10. On the cyclic nature of perception in vision versus audition

    PubMed Central

    VanRullen, Rufin; Zoefel, Benedikt; Ilhan, Barkin

    2014-01-01

    Does our perceptual awareness consist of a continuous stream, or a discrete sequence of perceptual cycles, possibly associated with the rhythmic structure of brain activity? This has been a long-standing question in neuroscience. We review recent psychophysical and electrophysiological studies indicating that part of our visual awareness proceeds in approximately 7–13 Hz cycles rather than continuously. On the other hand, experimental attempts at applying similar tools to demonstrate the discreteness of auditory awareness have been largely unsuccessful. We argue and demonstrate experimentally that visual and auditory perception are not equally affected by temporal subsampling of their respective input streams: video sequences remain intelligible at sampling rates of two to three frames per second, whereas audio inputs lose their fine temporal structure, and thus all significance, below 20–30 samples per second. This does not mean, however, that our auditory perception must proceed continuously. Instead, we propose that audition could still involve perceptual cycles, but the periodic sampling should happen only after the stage of auditory feature extraction. In addition, although visual perceptual cycles can follow one another at a spontaneous pace largely independent of the visual input, auditory cycles may need to sample the input stream more flexibly, by adapting to the temporal structure of the auditory inputs. PMID:24639585
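    The contrast this abstract draws between video (intelligible at 2-3 frames/s) and audio (unintelligible below 20-30 samples/s) is at bottom a claim about how fast the content varies relative to the sampling rate. A minimal sketch of that point, using hypothetical "slow" and "fast" sinusoids rather than the study's actual stimuli, with sample-and-hold subsampling:

    ```python
    import numpy as np

    def subsample_hold(signal, keep_every):
        """Keep every k-th sample and hold it (zero-order-hold subsampling)."""
        held = np.repeat(signal[::keep_every], keep_every)
        return held[:len(signal)]

    fs = 600                              # original rate: 600 samples/s
    t = np.arange(fs) / fs                # one second of signal
    slow = np.sin(2 * np.pi * 1 * t)      # 1 Hz: slowly varying, "video-like"
    fast = np.sin(2 * np.pi * 50 * t)     # 50 Hz: fine "audio-like" structure

    def rms_err(x, keep_every):
        return float(np.sqrt(np.mean((x - subsample_hold(x, keep_every)) ** 2)))

    # At 30 samples/s the slow signal survives nearly intact while the fast
    # one is destroyed; even at 3 samples/s the slow signal fares better.
    print(rms_err(slow, 20) < 0.15, rms_err(fast, 20) > 0.8)  # → True True
    print(rms_err(slow, 200) < rms_err(fast, 200))            # → True
    ```

    Slowly varying content tolerates coarse temporal sampling; fine temporal structure does not, which is consistent with the asymmetry the authors report between the two modalities.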

  11. Adaptive learning in a compartmental model of visual cortex—how feedback enables stable category learning and refinement

    PubMed Central

    Layher, Georg; Schrodt, Fabian; Butz, Martin V.; Neumann, Heiko

    2014-01-01

    The categorization of real world objects is often reflected in the similarity of their visual appearances. Such categories of objects do not necessarily form disjunct sets of objects, neither semantically nor visually. The relationship between categories can often be described in terms of a hierarchical structure. For instance, tigers and leopards build two separate mammalian categories, both of which are subcategories of the category Felidae. In the last decades, the unsupervised learning of categories of visual input stimuli has been addressed by numerous approaches in machine learning as well as in computational neuroscience. However, the question of what kind of mechanisms might be involved in the process of subcategory learning, or category refinement, remains a topic of active investigation. We propose a recurrent computational network architecture for the unsupervised learning of categorial and subcategorial visual input representations. During learning, the connection strengths of bottom-up weights from input to higher-level category representations are adapted according to the input activity distribution. In a similar manner, top-down weights learn to encode the characteristics of a specific stimulus category. Feedforward and feedback learning in combination realize an associative memory mechanism, enabling the selective top-down propagation of a category's feedback weight distribution. We suggest that the difference between the expected input encoded in the projective field of a category node and the current input pattern controls the amplification of feedforward-driven representations. Large enough differences trigger the recruitment of new representational resources and the establishment of additional (sub-) category representations. We demonstrate the temporal evolution of such learning and show how the proposed combination of an associative memory with a modulatory feedback integration successfully establishes category and subcategory representations. PMID:25538637
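    The recruitment mechanism sketched in this abstract, allocating a new category representation when the input deviates too much from any learned expectation, resembles adaptive-resonance-style competitive learning. Below is a toy sketch under that assumption (this is not the authors' recurrent architecture; the prototype vectors, vigilance threshold, and learning rate are illustrative):

    ```python
    import numpy as np

    def learn_categories(inputs, vigilance=0.9, lr=0.5):
        """Match each input against learned prototypes; a poor match
        (cosine similarity below the vigilance threshold) recruits a new
        prototype, i.e. an additional (sub)category representation."""
        prototypes = []
        for x in inputs:
            x = x / np.linalg.norm(x)
            if prototypes:
                sims = [float(p @ x) for p in prototypes]
                best = int(np.argmax(sims))
                if sims[best] >= vigilance:
                    p = prototypes[best] + lr * (x - prototypes[best])
                    prototypes[best] = p / np.linalg.norm(p)  # refine category
                    continue
            prototypes.append(x)  # expectation violated: recruit a new category
        return prototypes

    # Two well-separated input clusters should yield exactly two prototypes.
    rng = np.random.default_rng(1)
    a = rng.normal([1.0, 0.0, 0.0], 0.01, size=(20, 3))
    b = rng.normal([0.0, 1.0, 0.0], 0.01, size=(20, 3))
    protos = learn_categories(list(a) + list(b))
    print(len(protos))  # → 2
    ```

    Raising the vigilance threshold splits clusters into finer subcategories, which is the flavor of category refinement the paper investigates with learned feedback weights rather than a fixed threshold.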

  12. Visually driven chaining of elementary swim patterns into a goal-directed motor sequence: a virtual reality study of zebrafish prey capture.

    PubMed

    Trivedi, Chintan A; Bollmann, Johann H

    2013-01-01

    Prey capture behavior critically depends on rapid processing of sensory input in order to track, approach, and catch the target. When using vision, the nervous system faces the problem of extracting relevant information from a continuous stream of input in order to detect and categorize visible objects as potential prey and to select appropriate motor patterns for approach. For prey capture, many vertebrates exhibit intermittent locomotion, in which discrete motor patterns are chained into a sequence, interrupted by short periods of rest. Here, using high-speed recordings of full-length prey capture sequences performed by freely swimming zebrafish larvae in the presence of a single paramecium, we provide a detailed kinematic analysis of first and subsequent swim bouts during prey capture. Using Fourier analysis, we show that individual swim bouts represent an elementary motor pattern. Changes in orientation are directed toward the target on a graded scale and are implemented by an asymmetric tail bend component superimposed on this basic motor pattern. To further investigate the role of visual feedback on the efficiency and speed of this complex behavior, we developed a closed-loop virtual reality setup in which minimally restrained larvae recapitulated interconnected swim patterns closely resembling those observed during prey capture in freely moving fish. Systematic variation of stimulus properties showed that prey capture is initiated within a narrow range of stimulus size and velocity. Furthermore, variations in the delay and location of swim-triggered visual feedback showed that the reaction time of secondary and later swims is shorter for stimuli that appear within a narrow spatio-temporal window following a swim. This suggests that the larva may generate an expectation of stimulus position, which enables accelerated motor sequencing if the expectation is met by appropriate visual feedback.

  13. Perceptual adaptation in the use of night vision goggles

    NASA Technical Reports Server (NTRS)

    Durgin, Frank H.; Proffitt, Dennis R.

    1992-01-01

    The image intensification (I²) systems studied for this report were the biocular AN/PVS-7 (NVG) and the binocular AN/AVS-6 (ANVIS). Both are quite impressive for revealing the structure of the environment in a fairly straightforward way in extremely low-light conditions, but they represent an unusual viewing medium. The perceptual information available through I² systems differs in a variety of ways from the typical input of everyday vision, and extensive training and practice are required for optimal use. Using this sort of system involves a kind of perceptual skill learning, but it may also involve visual adaptations that are not simply an extension of normal vision. For example, the visual noise evident in the goggles in very low-light conditions produces unusual statistical properties in the visual input. Because we had recently discovered a strong and enduring aftereffect of perceived texture density that seemed sensitive to precisely the sorts of statistical distortions introduced by I² systems, it occurred to us that visual noise of this sort might be a potent adapting stimulus for texture density and might produce an aftereffect extending into normal vision once the goggles were removed. We have not found any experimental evidence that I² systems produce texture density aftereffects. The nature of the texture density aftereffect is briefly explained, followed by an account of our studies of I² systems and our most recent work on the texture density aftereffect. A test for spatial frequency adaptation after exposure to NVGs is also reported, as is a study of perceived depth from motion (motion parallax) while wearing the biocular goggles. We conclude with a summary of our findings.

  14. Virtual reality in the operating room of the future.

    PubMed

    Müller, W; Grosskopf, S; Hildebrand, A; Malkewitz, R; Ziegler, R

    1997-01-01

    In cooperation with the Max-Delbrück-Centrum/Robert-Rössle-Klinik (MDC/RRK) in Berlin, the Fraunhofer Institute for Computer Graphics is currently designing and developing a scenario for the operating room of the future. The goal of this project is to integrate new analysis, visualization, and interaction tools in order to optimize and refine tumor diagnostics and therapy in combination with laser technology and remote stereoscopic video transfer. To this end, a human 3-D reference model is reconstructed using CT, MR, and anatomical cryosection images from the National Library of Medicine's Visible Human Project. Applying segmentation algorithms and surface-polygonization methods, a 3-D representation is obtained. In addition, a "fly-through" of the virtual patient is realized using 3-D input devices (data glove, tracking system, 6-DOF mouse). In this way, the surgeon can experience entirely new perspectives on the human anatomy. Moreover, using a virtual cutting plane, any cut of the CT volume can be interactively placed and visualized in real time. In conclusion, this project offers a vision for the application of effective visualization and VR systems. Virtual prototyping has long been applied in the automotive industry; this project shows that VR techniques can likewise be used to prototype an operating room. After evaluating the design and functionality of the virtual operating room, MDC plans to build real ORs in the near future. The use of VR techniques provides a more natural interface for the surgeon in the OR (e.g., controlling interactions by voice input). Besides preoperative planning, future work will focus on supporting the surgeon in performing surgical interventions. An optimal synthesis of real and synthetic data, and the inclusion of visual, aural, and tactile senses in virtual environments, can meet these requirements. This augmented reality could represent the environment for the surgeons of tomorrow.

  15. On the effects of multimodal information integration in multitasking.

    PubMed

    Stock, Ann-Kathrin; Gohil, Krutika; Huster, René J; Beste, Christian

    2017-07-07

    There have recently been considerable advances in our understanding of the neuronal mechanisms underlying multitasking, but the role of multimodal integration for this faculty has remained rather unclear. We examined this issue by comparing different modality combinations in a multitasking (stop-change) paradigm. In-depth neurophysiological analyses of event-related potentials (ERPs) were conducted to complement the obtained behavioral data. Specifically, we applied signal decomposition using second order blind identification (SOBI) to the multi-subject ERP data and source localization. We found that both general multimodal information integration and modality-specific aspects (potentially related to task difficulty) modulate behavioral performance and associated neurophysiological correlates. Simultaneous multimodal input generally increased early attentional processing of visual stimuli (i.e. P1 and N1 amplitudes) as well as measures of cognitive effort and conflict (i.e. central P3 amplitudes). Yet, tactile-visual input caused larger impairments in multitasking than audio-visual input. General aspects of multimodal information integration modulated the activity in the premotor cortex (BA 6) as well as different visual association areas concerned with the integration of visual information with input from other modalities (BA 19, BA 21, BA 37). On top of this, differences in the specific combination of modalities also affected performance and measures of conflict/effort originating in prefrontal regions (BA 6).

  16. Young children's recall and reconstruction of audio and audiovisual narratives.

    PubMed

    Gibbons, J; Anderson, D R; Smith, R; Field, D E; Fischer, C

    1986-08-01

    It has been claimed that the visual component of audiovisual media dominates young children's cognitive processing. This experiment examines the effects of input modality while controlling the complexity of the visual and auditory content and while varying the comprehension task (recall vs. reconstruction). Four- and 7-year-olds were presented with brief stories through either audio or audiovisual media. The audio version consisted of narrated character actions and character utterances. The narrated actions were matched to the utterances on the basis of length and propositional complexity. The audiovisual version depicted the actions visually by means of stop animation instead of by auditory narrative statements. The character utterances were the same in both versions. Audiovisual input produced superior performance on explicit information in the 4-year-olds and produced more inferences at both ages. Because performance on utterances was superior in the audiovisual condition as compared to the audio condition, there was no evidence that visual input inhibits processing of auditory information. Actions were more likely to be produced by the younger children than utterances, regardless of input medium, indicating that prior findings of visual dominance may have been due to the salience of narrative action. Reconstruction, as compared to recall, produced superior depiction of actions at both ages as well as more constrained relevant inferences and narrative conventions.

  17. Single-image super-resolution based on Markov random field and contourlet transform

    NASA Astrophysics Data System (ADS)

    Wu, Wei; Liu, Zheng; Gueaieb, Wail; He, Xiaohai

    2011-04-01

    Learning-based methods are well adopted in image super-resolution. In this paper, we propose a new learning-based approach using contourlet transform and Markov random field. The proposed algorithm employs contourlet transform rather than the conventional wavelet to represent image features and takes into account the correlation between adjacent pixels or image patches through the Markov random field (MRF) model. The input low-resolution (LR) image is decomposed with the contourlet transform and fed to the MRF model together with the contourlet transform coefficients from the low- and high-resolution image pairs in the training set. The unknown high-frequency components/coefficients for the input low-resolution image are inferred by a belief propagation algorithm. Finally, the inverse contourlet transform converts the LR input and the inferred high-frequency coefficients into the super-resolved image. The effectiveness of the proposed method is demonstrated with experiments on facial, vehicle plate, and real scene images. Better visual quality is achieved in terms of peak signal-to-noise ratio and the structural similarity (SSIM) measure.
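
    The core example-based inference step can be illustrated with a drastically simplified sketch: a nearest-neighbour lookup from low-resolution patches to paired high-frequency patches. This is only a hypothetical stand-in for the paper's contourlet-domain MRF/belief-propagation inference, which is not reproduced here; all data below are toy values.

```python
import numpy as np

def infer_high_freq(lr_patches, train_lr, train_hf):
    """For each low-resolution input patch, look up the closest training LR patch
    (Euclidean distance) and return its paired high-frequency patch."""
    out = []
    for p in lr_patches:
        nearest = np.linalg.norm(train_lr - p, axis=1).argmin()
        out.append(train_hf[nearest])
    return np.array(out)

# Hypothetical 2-pixel "patches" paired with a 1-value high-frequency detail each:
train_lr = np.array([[0.0, 0.0], [1.0, 1.0]])
train_hf = np.array([[5.0], [9.0]])
hf = infer_high_freq(np.array([[0.1, 0.1], [0.9, 1.2]]), train_lr, train_hf)
```

    In the actual method, an MRF over neighbouring patches replaces this independent lookup, enforcing consistency between adjacent high-frequency candidates.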

  18. R-based Tool for a Pairwise Structure-activity Relationship Analysis.

    PubMed

    Klimenko, Kyrylo

    2018-04-01

    Structure-activity relationship (SAR) analysis is a complex process that can be enhanced by computational techniques. This article describes a simple tool for SAR analysis that has a graphical user interface and a flexible approach towards the input of molecular data. The application allows calculating molecular similarity, represented by the Tanimoto index and Euclidean distance, as well as determining activity cliffs by means of the Structure-Activity Landscape Index (SALI). The calculation is performed in a pairwise manner, either for the reference compound and other compounds or for all possible pairs in the data set. The results of the SAR analysis are visualized using two types of plot. The application's capability is demonstrated by the analysis of a set of COX2 inhibitors with respect to Isoxicam. This tool is available online; it includes a manual and input file examples. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
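
    The two measures named in this record have standard formulations, sketched below. The fingerprints (sets of on-bits) and activity values are hypothetical, and the tool's own implementation may differ.

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto index of two fingerprint bit sets: |A ∩ B| / |A ∪ B|."""
    if not fp_a and not fp_b:
        return 1.0
    return len(fp_a & fp_b) / len(fp_a | fp_b)

def sali(act_a, act_b, similarity):
    """Structure-Activity Landscape Index: |activity difference| / (1 - similarity).
    Large values flag activity cliffs: similar structures, very different activity."""
    if similarity >= 1.0:
        return float("inf")
    return abs(act_a - act_b) / (1.0 - similarity)

# Hypothetical fingerprints and pIC50-like activities:
sim = tanimoto({1, 2, 3, 4, 5}, {1, 2, 3, 4, 9})   # 4 shared bits of 6 total
cliff = sali(6.2, 8.9, sim)
```

    Running both measures over all reference/candidate pairs reproduces the tool's pairwise mode.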

  19. Integrate-and-fire vs Poisson models of LGN input to V1 cortex: noisier inputs reduce orientation selectivity

    PubMed Central

    Lin, I-Chun; Xing, Dajun; Shapley, Robert

    2014-01-01

    One of the reasons the visual cortex has attracted the interest of computational neuroscience is that it has well-defined inputs. The lateral geniculate nucleus (LGN) of the thalamus is the source of visual signals to the primary visual cortex (V1). Most large-scale cortical network models approximate the spike trains of LGN neurons as simple Poisson point processes. However, many studies have shown that neurons in the early visual pathway are capable of spiking with high temporal precision and their discharges are not Poisson-like. To gain an understanding of how response variability in the LGN influences the behavior of V1, we study response properties of model V1 neurons that receive purely feedforward inputs from LGN cells modeled either as noisy leaky integrate-and-fire (NLIF) neurons or as inhomogeneous Poisson processes. We first demonstrate that the NLIF model is capable of reproducing many experimentally observed statistical properties of LGN neurons. Then we show that a V1 model in which the LGN input to a V1 neuron is modeled as a group of NLIF neurons produces higher orientation selectivity than the one with Poisson LGN input. The second result implies that statistical characteristics of LGN spike trains are important for V1's function. We conclude that physiologically motivated models of V1 need to include more realistic LGN spike trains that are less noisy than inhomogeneous Poisson processes. PMID:22684587

  20. Integrate-and-fire vs Poisson models of LGN input to V1 cortex: noisier inputs reduce orientation selectivity.

    PubMed

    Lin, I-Chun; Xing, Dajun; Shapley, Robert

    2012-12-01

    One of the reasons the visual cortex has attracted the interest of computational neuroscience is that it has well-defined inputs. The lateral geniculate nucleus (LGN) of the thalamus is the source of visual signals to the primary visual cortex (V1). Most large-scale cortical network models approximate the spike trains of LGN neurons as simple Poisson point processes. However, many studies have shown that neurons in the early visual pathway are capable of spiking with high temporal precision and their discharges are not Poisson-like. To gain an understanding of how response variability in the LGN influences the behavior of V1, we study response properties of model V1 neurons that receive purely feedforward inputs from LGN cells modeled either as noisy leaky integrate-and-fire (NLIF) neurons or as inhomogeneous Poisson processes. We first demonstrate that the NLIF model is capable of reproducing many experimentally observed statistical properties of LGN neurons. Then we show that a V1 model in which the LGN input to a V1 neuron is modeled as a group of NLIF neurons produces higher orientation selectivity than the one with Poisson LGN input. The second result implies that statistical characteristics of LGN spike trains are important for V1's function. We conclude that physiologically motivated models of V1 need to include more realistic LGN spike trains that are less noisy than inhomogeneous Poisson processes.
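
    The central contrast in this record, that integrate-and-fire spike trains are less noisy than Poisson ones, can be illustrated with a minimal simulation (a generic sketch with assumed parameters, not the authors' V1 model): a suprathreshold noisy LIF neuron fires far more regularly, with an inter-spike-interval coefficient of variation (CV) well below 1, than a rate-matched Poisson process, whose CV is near 1.

```python
import numpy as np

def lif_spikes(i_ext, t_max=2.0, dt=1e-3, tau=0.02, v_th=1.0, sigma=0.1, seed=0):
    """Euler simulation of a noisy leaky integrate-and-fire neuron with constant
    drive i_ext; returns spike times (membrane resets to 0 at threshold)."""
    rng = np.random.default_rng(seed)
    v, times = 0.0, []
    for k in range(int(t_max / dt)):
        v += (i_ext - v) * dt / tau + sigma * np.sqrt(dt) * rng.standard_normal()
        if v >= v_th:
            times.append(k * dt)
            v = 0.0
    return np.array(times)

def poisson_spikes(rate_hz, t_max=2.0, seed=0):
    """Homogeneous Poisson spike train with the given mean rate."""
    rng = np.random.default_rng(seed)
    n = rng.poisson(rate_hz * t_max)
    return np.sort(rng.uniform(0.0, t_max, n))

def cv_isi(times):
    """Coefficient of variation of inter-spike intervals:
    ~1 for a Poisson process, well below 1 for regular LIF firing."""
    isi = np.diff(times)
    return isi.std() / isi.mean()
```

    With suprathreshold drive (e.g. `i_ext=1.5`, giving roughly 45 Hz here), `cv_isi(lif_spikes(1.5))` comes out far below `cv_isi(poisson_spikes(45.0))`, which is the sense in which realistic LGN spike trains are "less noisy" than inhomogeneous Poisson processes.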

  1. Density of Visual Input Enhancement and Grammar Learning: A Research Proposal

    ERIC Educational Resources Information Center

    Tran, Thu Hoang

    2009-01-01

    Research in the field of second language acquisition (SLA) has been done to ascertain the effectiveness of visual input enhancement (VIE) on grammar learning. However, one issue remains unexplored: the effects of VIE density on grammar learning. This paper presents a research proposal to investigate the effects of the density of VIE on English…

  2. Localized direction selective responses in the dendrites of visual interneurons of the fly

    PubMed Central

    2010-01-01

    Background The various tasks of visual systems, including course control, collision avoidance and the detection of small objects, require at the neuronal level the dendritic integration and subsequent processing of many spatially distributed visual motion inputs. While much is known about the pooled output in these systems, as in the medial superior temporal cortex of monkeys or in the lobula plate of the insect visual system, the motion tuning of the elements that provide the input has so far received little attention. In order to visualize the motion tuning of these inputs we examined the dendritic activation patterns of neurons that are selective for the characteristic patterns of wide-field motion, the lobula-plate tangential cells (LPTCs) of the blowfly. These neurons are known to sample direction-selective motion information from large parts of the visual field and combine these signals into axonal and dendro-dendritic outputs. Results Fluorescence imaging of intracellular calcium concentration allowed us to take a direct look at the local dendritic activity and the resulting local preferred directions in LPTC dendrites during activation by wide-field motion in different directions. These 'calcium response fields' resembled a retinotopic dendritic map of local preferred directions in the receptive field, the layout of which is a distinguishing feature of different LPTCs. Conclusions Our study reveals how neurons acquire selectivity for distinct visual motion patterns by dendritic integration of the local inputs with different preferred directions. With their spatial layout of directional responses, the dendrites of the LPTCs we investigated thus served as matched filters for wide-field motion patterns. PMID:20384983

  3. FliMax, a novel stimulus device for panoramic and highspeed presentation of behaviourally generated optic flow.

    PubMed

    Lindemann, J P; Kern, R; Michaelis, C; Meyer, P; van Hateren, J H; Egelhaaf, M

    2003-03-01

    A high-speed panoramic visual stimulation device is introduced that is suitable for analysing visual interneurons during stimulation with rapid image displacements as experienced by fast-moving animals. The responses of an identified motion-sensitive neuron in the visual system of the blowfly to behaviourally generated image sequences are very complex and hard to predict from the established input circuitry of the neuron. This finding suggests that the computational significance of visual interneurons can only be assessed if they are characterised not only by conventional stimuli as are often used for systems analysis, but also by behaviourally relevant input.

  4. Standing postural reaction to visual and proprioceptive stimulation in chronic acquired demyelinating polyneuropathy.

    PubMed

    Provost, Clement P; Tasseel-Ponche, Sophie; Lozeron, Pierre; Piccinini, Giulia; Quintaine, Victorine; Arnulf, Bertrand; Kubis, Nathalie; Yelnik, Alain P

    2018-02-28

    To investigate the weight of visual and proprioceptive inputs, measured indirectly in standing position control, in patients with chronic acquired demyelinating polyneuropathy (CADP). Prospective case study. Twenty-five patients with CADP and 25 healthy controls. Posture was recorded on a double force platform. Stimulations were optokinetic (60°/s) for visual input and vibration (50 Hz) for proprioceptive input. Visual stimulation involved 4 tests (upward, downward, rightward and leftward) and proprioceptive stimulation 2 tests (triceps surae and tibialis anterior). A composite score, previously published and slightly modified, was used for the recorded postural signals from the different stimulations. Despite their sensitivity deficits, patients with CADP were more sensitive to proprioceptive stimuli than were healthy controls (mean composite score 13.9 (standard deviation (SD) 4.8) vs 18.4 (SD 4.8), p = 0.002). As expected, they were also more sensitive to visual stimuli (mean composite score 10.5 (SD 8.7) vs 22.9 (SD 7.5), p < 0.0001). These results encourage balance rehabilitation of patients with CADP aimed at promoting the use of proprioceptive information, thereby curbing the premature development of visual compensation while proprioception is still available.

  5. Sparse coding can predict primary visual cortex receptive field changes induced by abnormal visual input.

    PubMed

    Hunt, Jonathan J; Dayan, Peter; Goodhill, Geoffrey J

    2013-01-01

    Receptive fields acquired through unsupervised learning of sparse representations of natural scenes have similar properties to primary visual cortex (V1) simple cell receptive fields. However, what drives in vivo development of receptive fields remains controversial. The strongest evidence for the importance of sensory experience in visual development comes from receptive field changes in animals reared with abnormal visual input. However, most sparse coding accounts have considered only normal visual input and the development of monocular receptive fields. Here, we applied three sparse coding models to binocular receptive field development across six abnormal rearing conditions. In every condition, the changes in receptive field properties previously observed experimentally were matched to a similar and highly faithful degree by all the models, suggesting that early sensory development can indeed be understood in terms of an impetus towards sparsity. As previously predicted in the literature, we found that asymmetries in inter-ocular correlation across orientations lead to orientation-specific binocular receptive fields. Finally we used our models to design a novel stimulus that, if present during rearing, is predicted by the sparsity principle to lead robustly to radically abnormal receptive fields.
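
    The sparsity principle invoked in this record can be sketched with a minimal sparse coding inference step: generic ISTA over a toy random dictionary, with assumed parameters, not any of the three models compared in the paper.

```python
import numpy as np

def ista_sparse_code(D, x, lam=0.05, n_iter=500):
    """Infer a sparse code s minimising 0.5*||x - D s||^2 + lam*||s||_1 via
    iterative shrinkage-thresholding (ISTA); columns of D are dictionary atoms."""
    L = np.linalg.norm(D, 2) ** 2                # Lipschitz constant of the gradient
    s = np.zeros(D.shape[1])
    for _ in range(n_iter):
        z = s - D.T @ (D @ s - x) / L            # gradient step on the quadratic term
        s = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return s

# Toy demo: recover a 2-sparse code over a random unit-norm dictionary.
rng = np.random.default_rng(1)
D = rng.standard_normal((20, 50))
D /= np.linalg.norm(D, axis=0)                   # unit-norm atoms
s_true = np.zeros(50)
s_true[3], s_true[17] = 1.5, -2.0
x = D @ s_true
s_hat = ista_sparse_code(D, x)
```

    Learning the dictionary itself (alternating such inference with gradient updates of `D` on image patches) is what yields the V1-like receptive fields discussed above.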

  6. Sparse Coding Can Predict Primary Visual Cortex Receptive Field Changes Induced by Abnormal Visual Input

    PubMed Central

    Hunt, Jonathan J.; Dayan, Peter; Goodhill, Geoffrey J.

    2013-01-01

    Receptive fields acquired through unsupervised learning of sparse representations of natural scenes have similar properties to primary visual cortex (V1) simple cell receptive fields. However, what drives in vivo development of receptive fields remains controversial. The strongest evidence for the importance of sensory experience in visual development comes from receptive field changes in animals reared with abnormal visual input. However, most sparse coding accounts have considered only normal visual input and the development of monocular receptive fields. Here, we applied three sparse coding models to binocular receptive field development across six abnormal rearing conditions. In every condition, the changes in receptive field properties previously observed experimentally were matched to a similar and highly faithful degree by all the models, suggesting that early sensory development can indeed be understood in terms of an impetus towards sparsity. As previously predicted in the literature, we found that asymmetries in inter-ocular correlation across orientations lead to orientation-specific binocular receptive fields. Finally we used our models to design a novel stimulus that, if present during rearing, is predicted by the sparsity principle to lead robustly to radically abnormal receptive fields. PMID:23675290

  7. Gradient-based multiresolution image fusion.

    PubMed

    Petrović, Vladimir S; Xydeas, Costas S

    2004-02-01

    A novel approach to multiresolution signal-level image fusion is presented for accurately transferring visual information from any number of input image signals, into a single fused image without loss of information or the introduction of distortion. The proposed system uses a "fuse-then-decompose" technique realized through a novel, fusion/decomposition system architecture. In particular, information fusion is performed on a multiresolution gradient map representation domain of image signal information. At each resolution, input images are represented as gradient maps and combined to produce new, fused gradient maps. Fused gradient map signals are processed, using gradient filters derived from high-pass quadrature mirror filters to yield a fused multiresolution pyramid representation. The fused output image is obtained by applying, on the fused pyramid, a reconstruction process that is analogous to that of conventional discrete wavelet transform. This new gradient fusion significantly reduces the amount of distortion artefacts and the loss of contrast information usually observed in fused images obtained from conventional multiresolution fusion schemes. This is because fusion in the gradient map domain significantly improves the reliability of the feature selection and information fusion processes. Fusion performance is evaluated through informal visual inspection and subjective psychometric preference tests, as well as objective fusion performance measurements. Results clearly demonstrate the superiority of this new approach when compared to conventional fusion systems.
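
    As a toy illustration of gradient-map-based feature selection, the idea at the heart of this record: at each pixel, keep the input with the strongest local gradient. This is a single-resolution baseline sketch, not the authors' fuse-then-decompose QMF pipeline, and the 8x8 test images are hypothetical.

```python
import numpy as np

def gradient_select_fuse(images):
    """Signal-level fusion baseline: at each pixel, take the value from the input
    image with the largest local gradient magnitude (a crude proxy for
    gradient-map feature selection; no multiresolution decomposition)."""
    stack = np.stack(images).astype(float)
    # Absolute first differences along rows and columns (shape-preserving).
    gy = np.abs(np.diff(stack, axis=1, prepend=stack[:, :1, :]))
    gx = np.abs(np.diff(stack, axis=2, prepend=stack[:, :, :1]))
    energy = gx + gy
    choice = energy.argmax(axis=0)               # index of "most informative" input
    rows, cols = np.indices(choice.shape)
    return stack[choice, rows, cols]

a = np.zeros((8, 8)); a[:, 4:] = 1.0             # image with a vertical edge
b = np.zeros((8, 8)); b[4:, :] = 1.0             # image with a horizontal edge
fused = gradient_select_fuse([a, b])
```

    In the fused result, pixels along each edge come from the image that contains that edge; the paper's multiresolution gradient pyramid generalises this selection across scales.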

  8. xiSPEC: web-based visualization, analysis and sharing of proteomics data.

    PubMed

    Kolbowski, Lars; Combe, Colin; Rappsilber, Juri

    2018-05-08

    We present xiSPEC, a standard compliant, next-generation web-based spectrum viewer for visualizing, analyzing and sharing mass spectrometry data. Peptide-spectrum matches from standard proteomics and cross-linking experiments are supported. xiSPEC is to date the only browser-based tool supporting the standardized file formats mzML and mzIdentML defined by the proteomics standards initiative. Users can either upload data directly or select files from the PRIDE data repository as input. xiSPEC allows users to save and share their datasets publicly or password protected for providing access to collaborators or readers and reviewers of manuscripts. The identification table features advanced interaction controls and spectra are presented in three interconnected views: (i) annotated mass spectrum, (ii) peptide sequence fragmentation key and (iii) quality control error plots of matched fragments. Highlighting or selecting data points in any view is represented in all other views. Views are interactive scalable vector graphic elements, which can be exported, e.g. for use in publication. xiSPEC allows for re-annotation of spectra for easy hypothesis testing by modifying input data. xiSPEC is freely accessible at http://spectrumviewer.org and the source code is openly available on https://github.com/Rappsilber-Laboratory/xiSPEC.

  9. Visualizing the qualitative: making sense of written comments from an evaluative satisfaction survey.

    PubMed

    Bletzer, Keith V

    2015-01-01

    Satisfaction surveys are common in the field of health education as a means of helping organizations improve the appropriateness of training materials and the effectiveness of facilitation and presentation. Such survey data can be qualitative, and their analysis often becomes specialized. This technical article aims to show how qualitative survey results can be visualized by presenting them as a Word Cloud. Qualitative materials in the form of written comments on an agency-specific satisfaction survey were coded and quantified. The resulting quantitative data were used to convert comments into "input terms" to generate Word Clouds, increasing comprehension and accessibility through visualization of the written responses. A three-tier display incorporated a Word Cloud at the top, followed by the corresponding frequency table and a textual summary of the qualitative data represented by the Word Cloud imagery. This mixed format recognizes that people vary in which format is most effective for assimilating new information. The combination of visual representation through Word Clouds, complemented by quantified qualitative materials, is one means of increasing comprehensibility for a range of stakeholders who might not be familiar with numerical tables or statistical analyses.
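
    The coding-and-quantification step can be approximated in a few lines (an illustrative automated stand-in with made-up comments; the article's own coding was manual):

```python
import re
from collections import Counter

STOPWORDS = frozenset({"the", "a", "and", "was", "were", "is", "to", "of", "be"})

def input_terms(comments):
    """Count candidate 'input terms' across free-text comments, skipping
    stopwords; the resulting frequencies can feed a word-cloud generator
    and the corresponding frequency table."""
    counts = Counter()
    for text in comments:
        counts.update(w for w in re.findall(r"[a-z']+", text.lower())
                      if w not in STOPWORDS)
    return counts

comments = [
    "The trainer was helpful and the materials were clear.",
    "Helpful session; clear examples.",
    "Materials could be more clear.",
]
terms = input_terms(comments)
```

    `terms.most_common()` then yields the ranked list that sizes the words in the cloud.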

  10. Development of a Bayesian Estimator for Audio-Visual Integration: A Neurocomputational Study

    PubMed Central

    Ursino, Mauro; Crisafulli, Andrea; di Pellegrino, Giuseppe; Magosso, Elisa; Cuppini, Cristiano

    2017-01-01

    The brain integrates information from different sensory modalities to generate a coherent and accurate percept of external events. Several experimental studies suggest that this integration follows the principle of Bayesian estimation. However, the neural mechanisms responsible for this behavior, and its development in a multisensory environment, are still insufficiently understood. We recently presented a neural network model of audio-visual integration (Neural Computation, 2017) to investigate how a Bayesian estimator can spontaneously develop from the statistics of external stimuli. The model assumes the presence of two topologically organized unimodal areas (auditory and visual). Neurons in each area receive an input from the external environment, computed as the inner product of the sensory-specific stimulus and the receptive field synapses, and a cross-modal input from neurons of the other modality. Based on sensory experience, synapses were trained via Hebbian potentiation and a decay term. The aim of this work is to improve the previous model by including a more realistic distribution of visual stimuli: visual stimuli have a higher spatial accuracy at the central azimuthal coordinate and a lower accuracy at the periphery. Moreover, their prior probability is higher at the center, and decreases toward the periphery. Simulations show that, after training, the receptive fields of visual and auditory neurons shrink to reproduce the accuracy of the input (both at the center and at the periphery in the visual case), thus realizing the likelihood estimate of unimodal spatial position. Moreover, the preferred positions of visual neurons contract toward the center, thus encoding the prior probability of the visual input. Finally, a prior probability of the co-occurrence of audio-visual stimuli is encoded in the cross-modal synapses. The model is able to simulate the main properties of a Bayesian estimator and to reproduce behavioral data in all conditions examined.
In particular, in unisensory conditions the visual estimates exhibit a bias toward the fovea, which increases with the level of noise. In cross-modal conditions, the SD of the estimates decreases when using congruent audio-visual stimuli, and a ventriloquism effect becomes evident in the case of spatially disparate stimuli. Moreover, the ventriloquism effect decreases with eccentricity. PMID:29046631
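
    The behavior the network reproduces is that of a maximum-likelihood (reliability-weighted) Bayesian estimator, which can be stated compactly (a textbook sketch with hypothetical numbers, not the network model itself):

```python
def fuse_estimates(x_v, sigma_v, x_a, sigma_a):
    """Reliability-weighted (maximum-likelihood) fusion of a visual and an
    auditory position estimate: weights are inverse variances, and the fused
    variance is smaller than either input's."""
    w_v, w_a = 1.0 / sigma_v**2, 1.0 / sigma_a**2
    x_hat = (w_v * x_v + w_a * x_a) / (w_v + w_a)
    sigma_hat = (1.0 / (w_v + w_a)) ** 0.5
    return x_hat, sigma_hat

# Precise visual cue at 0 deg, coarse auditory cue at 10 deg (hypothetical values):
x_hat, sigma_hat = fuse_estimates(0.0, 2.0, 10.0, 6.0)
# the fused estimate is captured by the visual cue (ventriloquism), with reduced SD
```

    Disparate cues thus pull the percept toward the more reliable (visual) modality, which is the ventriloquism effect described in the abstract.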

  11. The effect of transcutaneous electrical nerve stimulation on postural sway on fatigued dorsi-plantar flexor.

    PubMed

    Yu, JaeHo; Lee, SoYeon; Kim, HyongJo; Seo, DongKwon; Hong, JiHeon; Lee, DongYeop

    2014-01-01

    The application of transcutaneous electrical nerve stimulation (TENS) can improve the muscle weakness and impaired static balance caused by muscle fatigue, and TENS has been reported to decrease postural sway. However, the application of TENS to the dorsiflexors and plantar flexors separately, and the comparison of conditions with and without visual input, have not been studied. Thus, the aim of this study was to compare the effects of TENS on fatigued dorsiflexors and plantar flexors with and without visual input. Thirteen healthy adult males and 12 females (mean age 20.5 ± 1.4 years; total 25) were recruited and agreed to participate after a preliminary study. The experiment used a single-group repeated-measures design over three days. On the first day, after exercise-induced fatigue, the standing position was maintained for 30 minutes and postural sway was then measured with eyes open (EO) and eyes closed (EC). On the second day, TENS was applied to the dorsiflexors in standing position for 30 minutes after the same exercise-induced fatigue. On the last day, TENS was applied to the plantar flexors, and postural sway was measured with EO and EC after the same exercise-induced fatigue. Visual input did not produce a statistically significant difference between conditions. However, when the dorsiflexors and plantar flexors were compared after TENS application without visual input, postural sway was lower for the plantar flexors than for the dorsiflexors (p < 0.05). In conclusion, the application of TENS to the gastrocnemius (GCM) clinically decreases postural sway; together with visual input, it helps to stabilize postural control and prevent falls.

  12. Slow changing postural cues cancel visual field dependence on self-tilt detection.

    PubMed

    Scotto Di Cesare, C; Macaluso, T; Mestre, D R; Bringoux, L

    2015-01-01

    Interindividual differences influence the multisensory integration process involved in spatial perception. Here, we assessed the effect of visual field dependence on self-tilt detection relative to upright, as a function of static vs. slow-changing visual or postural cues. To that aim, we manipulated slow rotations (i.e., 0.05°/s) of the body and/or the visual scene in pitch. Participants had to indicate whether they felt being tilted forward at successive angles. Results show that thresholds for self-tilt detection substantially differed between visual field dependent/independent subjects, when only the visual scene was rotated. This difference was no longer present when the body was actually rotated, whatever the visual scene condition (i.e., absent, static or rotated relative to the observer). These results suggest that the cancellation of visual field dependence by dynamic postural cues may rely on a multisensory reweighting process, where slow-changing vestibular/somatosensory inputs may prevail over visual inputs. Copyright © 2014 Elsevier B.V. All rights reserved.

  13. The Representation of Color across the Human Visual Cortex: Distinguishing Chromatic Signals Contributing to Object Form Versus Surface Color.

    PubMed

    Seymour, K J; Williams, M A; Rich, A N

    2016-05-01

    Many theories of visual object perception assume the visual system initially extracts borders between objects and their background and then "fills in" color to the resulting object surfaces. We investigated the transformation of chromatic signals across the human ventral visual stream, with particular interest in distinguishing representations of object surface color from representations of chromatic signals reflecting the retinal input. We used fMRI to measure brain activity while participants viewed figure-ground stimuli that differed either in the position or in the color contrast polarity of the foreground object (the figure). Multivariate pattern analysis revealed that classifiers were able to decode information about which color was presented at a particular retinal location from early visual areas, whereas regions further along the ventral stream exhibited biases for representing color as part of an object's surface, irrespective of its position on the retina. Additional analyses showed that although activity in V2 contained strong chromatic contrast information to support the early parsing of objects within a visual scene, activity in this area also signaled information about object surface color. These findings are consistent with the view that mechanisms underlying scene segmentation and the binding of color to object surfaces converge in V2. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  14. Disruption of visual awareness during the attentional blink is reflected by selective disruption of late-stage neural processing

    PubMed Central

    Harris, Joseph A.; McMahon, Alex R.; Woldorff, Marty G.

    2015-01-01

    Any information represented in the brain holds the potential to influence behavior. It is therefore of broad interest to determine the extent and quality of neural processing of stimulus input that occurs with and without awareness. The attentional blink is a useful tool for dissociating neural and behavioral measures of perceptual visual processing across conditions of awareness. The extent of higher-order visual information beyond basic sensory signaling that is processed during the attentional blink remains controversial. To determine what neural processing at the level of visual-object identification occurs in the absence of awareness, electrophysiological responses to images of faces and houses were recorded both within and outside of the attentional blink period during a rapid serial visual presentation (RSVP) stream. Electrophysiological results were sorted according to behavioral performance (correctly identified targets versus missed targets) within these blink and non-blink periods. An early index of face-specific processing (the N170, 140–220 ms post-stimulus) was observed regardless of whether the subject demonstrated awareness of the stimulus, whereas a later face-specific effect with the same topographic distribution (500–700 ms post-stimulus) was only seen for accurate behavioral discrimination of the stimulus content. The present findings suggest a multi-stage process of object-category processing, with only the later phase being associated with explicit visual awareness. PMID:23859644

  15. The neural circuitry of visual artistic production and appreciation: A proposition.

    PubMed

    Chakravarty, Ambar

    2012-04-01

    The nondominant inferior parietal lobule is probably a major "store house" of artistic creativity. The ventromedial prefrontal lobe (VMPFL) is supposed to be involved in creative cognition and the dorsolateral prefrontal lobe (DLPFL) in creative output. The conceptual ventral and dorsal visual system pathways likely represent the inferior and superior longitudinal fasciculi. During artistic production, conceptualization is conceived in the VMPFL and the executive part is operated through the DLPFL. The latter transfers the concept to the visual brain through the superior longitudinal fasciculus (SLF), relaying on its path to the parietal cortex. The conceptualization at VMPFL is influenced by activity from the anterior temporal lobe through the uncinate fasciculus and limbic system pathways. The final visual image formed in the visual brain is subsequently transferred back to the DLPFL through the SLF and then handed over to the motor cortex for execution. During art appreciation, the image at the visual brain is transferred to the frontal lobe through the SLF and there it is matched with emotional and memory inputs from the anterior temporal lobe transmitted through the uncinate fasciculus. Beauty is perceived at the VMPFL and transferred through the uncinate fasciculus to the hippocampo-amygdaloid complex in the anterior temporal lobe. The limbic system (Papez circuit) is activated and emotion of appreciation is evoked. It is postulated that in practice the entire circuitry is activated simultaneously.

  16. The neural circuitry of visual artistic production and appreciation: A proposition

    PubMed Central

    Chakravarty, Ambar

    2012-01-01

    The nondominant inferior parietal lobule is probably a major “store house” of artistic creativity. The ventromedial prefrontal lobe (VMPFL) is supposed to be involved in creative cognition and the dorsolateral prefrontal lobe (DLPFL) in creative output. The conceptual ventral and dorsal visual system pathways likely represent the inferior and superior longitudinal fasciculi. During artistic production, conceptualization is conceived in the VMPFL and the executive part is operated through the DLPFL. The latter transfers the concept to the visual brain through the superior longitudinal fasciculus (SLF), relaying on its path to the parietal cortex. The conceptualization at VMPFL is influenced by activity from the anterior temporal lobe through the uncinate fasciculus and limbic system pathways. The final visual image formed in the visual brain is subsequently transferred back to the DLPFL through the SLF and then handed over to the motor cortex for execution. During art appreciation, the image at the visual brain is transferred to the frontal lobe through the SLF and there it is matched with emotional and memory inputs from the anterior temporal lobe transmitted through the uncinate fasciculus. Beauty is perceived at the VMPFL and transferred through the uncinate fasciculus to the hippocampo–amygdaloid complex in the anterior temporal lobe. The limbic system (Papez circuit) is activated and emotion of appreciation is evoked. It is postulated that in practice the entire circuitry is activated simultaneously. PMID:22566716

  17. Decoding visual object categories from temporal correlations of ECoG signals.

    PubMed

    Majima, Kei; Matsuo, Takeshi; Kawasaki, Keisuke; Kawai, Kensuke; Saito, Nobuhito; Hasegawa, Isao; Kamitani, Yukiyasu

    2014-04-15

    How visual object categories are represented in the brain is one of the key questions in neuroscience. Studies on low-level visual features have shown that relative timings or phases of neural activity between multiple brain locations encode information. However, whether such temporal patterns of neural activity are used in the representation of visual objects is unknown. Here, we examined whether and how visual object categories could be predicted (or decoded) from temporal patterns of electrocorticographic (ECoG) signals from the temporal cortex in five patients with epilepsy. We used temporal correlations between electrodes as input features, and compared the decoding performance with features defined by spectral power and phase from individual electrodes. While decoding accuracy using power or phase alone was significantly better than chance, correlations alone or combined with power outperformed the other features. Decoding performance with correlations was degraded by shuffling the order of trials of the same category in each electrode, indicating that the relative time series between electrodes in each trial is critical. Analysis using a sliding time window revealed that decoding performance with correlations began to rise earlier than that with power. This earlier increase in performance was replicated by a model using phase differences to encode categories. These results suggest that activity patterns arising from interactions between multiple neuronal units carry additional information on visual object categories. Copyright © 2013 Elsevier Inc. All rights reserved.
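    The correlation features described in this abstract are straightforward to compute. The sketch below is illustrative, not the authors' implementation (the function name and array layout are assumptions): for each trial, take the pairwise Pearson correlations between electrode time series and keep the upper triangle of the correlation matrix as the trial's feature vector.

    ```python
    import numpy as np

    def correlation_features(trials):
        """Temporal correlations between electrodes as decoding features.

        trials: array of shape (n_trials, n_electrodes, n_samples).
        Returns an (n_trials, n_electrodes*(n_electrodes-1)/2) feature matrix.
        """
        n_trials, n_elec, _ = trials.shape
        iu = np.triu_indices(n_elec, k=1)        # indices above the diagonal
        feats = np.empty((n_trials, iu[0].size))
        for i, trial in enumerate(trials):
            r = np.corrcoef(trial)               # electrode-by-electrode Pearson r
            feats[i] = r[iu]                     # keep each pair once
        return feats
    ```

    The resulting feature matrix can be fed to any standard classifier; shuffling trial order per electrode before computing r would destroy exactly the within-trial covariation the study found critical.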

  18. Image-Based Reverse Engineering and Visual Prototyping of Woven Cloth.

    PubMed

    Schroder, Kai; Zinke, Arno; Klein, Reinhard

    2015-02-01

    Realistic visualization of cloth has many applications in computer graphics. An ongoing research problem is how to best represent and capture cloth models, specifically when considering computer-aided design of cloth. Previous methods produce highly realistic images; however, they are either difficult to edit or require the measurement of large databases to capture all variations of a cloth sample. We propose a pipeline to reverse engineer cloth and estimate a parametrized cloth model from a single image. We introduce a geometric yarn model, integrating state-of-the-art textile research. We present an automatic analysis approach to estimate yarn paths, yarn widths, their variation and a weave pattern. Several examples demonstrate that we are able to model the appearance of the original cloth sample. Properties derived from the input image give a physically plausible basis that is fully editable using a few intuitive parameters.

  19. Sonification of optical coherence tomography data and images

    PubMed Central

    Ahmad, Adeel; Adie, Steven G.; Wang, Morgan; Boppart, Stephen A.

    2010-01-01

    Sonification is the process of representing data as non-speech audio signals. In this manuscript, we describe the auditory presentation of OCT data and images. OCT acquisition rates frequently exceed our ability to visually analyze image-based data, and multi-sensory input may therefore facilitate rapid interpretation. This conversion will be especially valuable in time-sensitive surgical or diagnostic procedures. In these scenarios, auditory feedback can complement visual data without requiring the surgeon to constantly monitor the screen, or provide additional feedback in non-imaging procedures such as guided needle biopsies which use only axial-scan data. In this paper we present techniques to translate OCT data and images into sound based on the spatial and spatial frequency properties of the OCT data. Results obtained from parameter-mapped sonification of human adipose and tumor tissues are presented, indicating that audio feedback of OCT data may be useful for the interpretation of OCT images. PMID:20588846
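    Parameter-mapped sonification of the kind described here can be sketched in a few lines; the pitch range, tone duration, and sample rate below are illustrative assumptions, not values from the paper. Each data value (e.g. an OCT axial-scan intensity) is normalized and mapped linearly to the frequency of a short sine tone.

    ```python
    import numpy as np

    def sonify(values, f_lo=200.0, f_hi=2000.0, dur=0.1, sr=8000):
        """Parameter-mapped sonification: one short sine tone per data value,
        with pitch scaling linearly between f_lo and f_hi (in Hz)."""
        v = np.asarray(values, dtype=float)
        span = v.max() - v.min()
        v = (v - v.min()) / span if span else np.zeros_like(v)  # normalize to [0, 1]
        freqs = f_lo + v * (f_hi - f_lo)          # linear value-to-pitch mapping
        t = np.arange(int(dur * sr)) / sr         # time axis for one tone
        return np.concatenate([np.sin(2 * np.pi * f * t) for f in freqs])
    ```

    The returned waveform can be written to a sound device or a WAV file; richer mappings (loudness, timbre, stereo position) follow the same pattern of assigning one audio parameter per data property.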

  20. A Brief Period of Postnatal Visual Deprivation Alters the Balance between Auditory and Visual Attention.

    PubMed

    de Heering, Adélaïde; Dormal, Giulia; Pelland, Maxime; Lewis, Terri; Maurer, Daphne; Collignon, Olivier

    2016-11-21

    Is a short and transient period of visual deprivation early in life sufficient to induce lifelong changes in how we attend to, and integrate, simple visual and auditory information [1, 2]? This question is of crucial importance given the recent demonstration in both animals and humans that a period of blindness early in life permanently affects the brain networks dedicated to visual, auditory, and multisensory processing [1-16]. To address this issue, we compared a group of adults who had been treated for congenital bilateral cataracts during early infancy with a group of normally sighted controls on a task requiring simple detection of lateralized visual and auditory targets, presented alone or in combination. Redundancy gains obtained from the audiovisual conditions were similar between groups and surpassed the reaction time distribution predicted by Miller's race model. However, in comparison to controls, cataract-reversal patients were faster at processing simple auditory targets and showed differences in how they shifted attention across modalities. Specifically, they were faster at switching attention from visual to auditory inputs than in the reverse situation, while an opposite pattern was observed for controls. Overall, these results reveal that the absence of visual input during the first months of life does not prevent the development of audiovisual integration but enhances the salience of simple auditory inputs, leading to a different crossmodal distribution of attentional resources between auditory and visual stimuli. Copyright © 2016 Elsevier Ltd. All rights reserved.

  1. Degraded attentional modulation of cortical neural populations in strabismic amblyopia

    PubMed Central

    Hou, Chuan; Kim, Yee-Joon; Lai, Xin Jie; Verghese, Preeti

    2016-01-01

    Behavioral studies have reported reduced spatial attention in amblyopia, a developmental disorder of spatial vision. However, the neural populations in the visual cortex linked with these behavioral spatial attention deficits have not been identified. Here, we use functional MRI–informed electroencephalography source imaging to measure the effect of attention on neural population activity in the visual cortex of human adult strabismic amblyopes who were stereoblind. We show that compared with controls, the modulatory effects of selective visual attention on the input from the amblyopic eye are substantially reduced in the primary visual cortex (V1) as well as in extrastriate visual areas hV4 and hMT+. Degraded attentional modulation is also found in the normal-acuity fellow eye in areas hV4 and hMT+ but not in V1. These results provide electrophysiological evidence that abnormal binocular input during a developmental critical period may impact cortical connections between the visual cortex and higher level cortices beyond the known amblyopic losses in V1 and V2, suggesting that a deficit of attentional modulation in the visual cortex is an important component of the functional impairment in amblyopia. Furthermore, we find that degraded attentional modulation in V1 is correlated with the magnitude of interocular suppression and the depth of amblyopia. These results support the view that the visual suppression often seen in strabismic amblyopia might be a form of attentional neglect of the visual input to the amblyopic eye. PMID:26885628

  2. Degraded attentional modulation of cortical neural populations in strabismic amblyopia.

    PubMed

    Hou, Chuan; Kim, Yee-Joon; Lai, Xin Jie; Verghese, Preeti

    2016-01-01

    Behavioral studies have reported reduced spatial attention in amblyopia, a developmental disorder of spatial vision. However, the neural populations in the visual cortex linked with these behavioral spatial attention deficits have not been identified. Here, we use functional MRI-informed electroencephalography source imaging to measure the effect of attention on neural population activity in the visual cortex of human adult strabismic amblyopes who were stereoblind. We show that compared with controls, the modulatory effects of selective visual attention on the input from the amblyopic eye are substantially reduced in the primary visual cortex (V1) as well as in extrastriate visual areas hV4 and hMT+. Degraded attentional modulation is also found in the normal-acuity fellow eye in areas hV4 and hMT+ but not in V1. These results provide electrophysiological evidence that abnormal binocular input during a developmental critical period may impact cortical connections between the visual cortex and higher level cortices beyond the known amblyopic losses in V1 and V2, suggesting that a deficit of attentional modulation in the visual cortex is an important component of the functional impairment in amblyopia. Furthermore, we find that degraded attentional modulation in V1 is correlated with the magnitude of interocular suppression and the depth of amblyopia. These results support the view that the visual suppression often seen in strabismic amblyopia might be a form of attentional neglect of the visual input to the amblyopic eye.

  3. Top-down influence on the visual cortex of the blind during sensory substitution

    PubMed Central

    Murphy, Matthew C.; Nau, Amy C.; Fisher, Christopher; Kim, Seong-Gi; Schuman, Joel S.; Chan, Kevin C.

    2017-01-01

    Visual sensory substitution devices provide a non-surgical and flexible approach to vision rehabilitation in the blind. These devices convert images taken by a camera into cross-modal sensory signals that are presented as a surrogate for direct visual input. While previous work has demonstrated that the visual cortex of blind subjects is recruited during sensory substitution, the cognitive basis of this activation remains incompletely understood. To test the hypothesis that top-down input provides a significant contribution to this activation, we performed functional MRI scanning in 11 blind (7 acquired and 4 congenital) and 11 sighted subjects under two conditions: passive listening of image-encoded soundscapes before sensory substitution training and active interpretation of the same auditory sensory substitution signals after a 10-minute training session. We found that the modulation of visual cortex activity due to active interpretation was significantly stronger in the blind over sighted subjects. In addition, congenitally blind subjects showed stronger task-induced modulation in the visual cortex than acquired blind subjects. In a parallel experiment, we scanned 18 blind (11 acquired and 7 congenital) and 18 sighted subjects at rest to investigate alterations in functional connectivity due to visual deprivation. The results demonstrated that visual cortex connectivity of the blind shifted away from sensory networks and toward known areas of top-down input. Taken together, our data support the model of the brain, including the visual system, as a highly flexible task-based and not sensory-based machine. PMID:26584776

  4. Gestural Communication With Accelerometer-Based Input Devices and Tactile Displays

    DTIC Science & Technology

    2008-12-01

    and natural terrain obstructions, or concealment often impede visual communication attempts. To overcome some of these issues, “daisy-chaining” or...the intended recipients. Moreover, visual communication demands a focus on the visual modality possibly distracting a receiving soldier’s visual

  5. Direction of Magnetoencephalography Sources Associated with Feedback and Feedforward Contributions in a Visual Object Recognition Task

    PubMed Central

    Ahlfors, Seppo P.; Jones, Stephanie R.; Ahveninen, Jyrki; Hämäläinen, Matti S.; Belliveau, John W.; Bar, Moshe

    2014-01-01

    Identifying inter-area communication in terms of the hierarchical organization of functional brain areas is of considerable interest in human neuroimaging. Previous studies have suggested that the direction of magneto- and electroencephalography (MEG, EEG) source currents depends on the layer-specific input patterns into a cortical area. We examined the direction in MEG source currents in a visual object recognition experiment in which there were specific expectations of activation in the fusiform region being driven by either feedforward or feedback inputs. The source for the early non-specific visual evoked response, presumably corresponding to feedforward driven activity, pointed outward, i.e., away from the white matter. In contrast, the source for the later, object-recognition related signals, expected to be driven by feedback inputs, pointed inward, toward the white matter. Associating specific features of the MEG/EEG source waveforms to feedforward and feedback inputs could provide unique information about the activation patterns within hierarchically organized cortical areas. PMID:25445356

  6. Synaptic Mechanisms Generating Orientation Selectivity in the ON Pathway of the Rabbit Retina

    PubMed Central

    Venkataramani, Sowmya

    2016-01-01

    Neurons that signal the orientation of edges within the visual field have been widely studied in primary visual cortex. Much less is known about the mechanisms of orientation selectivity that arise earlier in the visual stream. Here we examine the synaptic and morphological properties of a subtype of orientation-selective ganglion cell in the rabbit retina. The receptive field has an excitatory ON center, flanked by excitatory OFF regions, a structure similar to simple cell receptive fields in primary visual cortex. Examination of the light-evoked postsynaptic currents in these ON-type orientation-selective ganglion cells (ON-OSGCs) reveals that synaptic input is mediated almost exclusively through the ON pathway. Orientation selectivity is generated by larger excitation for preferred relative to orthogonal stimuli, and conversely larger inhibition for orthogonal relative to preferred stimuli. Excitatory orientation selectivity arises in part from the morphology of the dendritic arbors. Blocking GABAA receptors reduces orientation selectivity of the inhibitory synaptic inputs and the spiking responses. Negative contrast stimuli in the flanking regions produce orientation-selective excitation in part by disinhibition of a tonic NMDA receptor-mediated input arising from ON bipolar cells. Comparison with earlier studies of OFF-type OSGCs indicates that diverse synaptic circuits have evolved in the retina to detect the orientation of edges in the visual input. SIGNIFICANCE STATEMENT A core goal for visual neuroscientists is to understand how neural circuits at each stage of the visual system extract and encode features from the visual scene. This study documents a novel type of orientation-selective ganglion cell in the retina and shows that the receptive field structure is remarkably similar to that of simple cells in primary visual cortex. However, the data indicate that, unlike in the cortex, orientation selectivity in the retina depends on the activity of inhibitory interneurons. The results further reveal the physiological basis for feature detection in the visual system, elucidate the synaptic mechanisms that generate orientation selectivity at an early stage of visual processing, and illustrate a novel role for NMDA receptors in retinal processing. PMID:26985041

  7. Synaptic Mechanisms Generating Orientation Selectivity in the ON Pathway of the Rabbit Retina.

    PubMed

    Venkataramani, Sowmya; Taylor, W Rowland

    2016-03-16

    Neurons that signal the orientation of edges within the visual field have been widely studied in primary visual cortex. Much less is known about the mechanisms of orientation selectivity that arise earlier in the visual stream. Here we examine the synaptic and morphological properties of a subtype of orientation-selective ganglion cell in the rabbit retina. The receptive field has an excitatory ON center, flanked by excitatory OFF regions, a structure similar to simple cell receptive fields in primary visual cortex. Examination of the light-evoked postsynaptic currents in these ON-type orientation-selective ganglion cells (ON-OSGCs) reveals that synaptic input is mediated almost exclusively through the ON pathway. Orientation selectivity is generated by larger excitation for preferred relative to orthogonal stimuli, and conversely larger inhibition for orthogonal relative to preferred stimuli. Excitatory orientation selectivity arises in part from the morphology of the dendritic arbors. Blocking GABAA receptors reduces orientation selectivity of the inhibitory synaptic inputs and the spiking responses. Negative contrast stimuli in the flanking regions produce orientation-selective excitation in part by disinhibition of a tonic NMDA receptor-mediated input arising from ON bipolar cells. Comparison with earlier studies of OFF-type OSGCs indicates that diverse synaptic circuits have evolved in the retina to detect the orientation of edges in the visual input. A core goal for visual neuroscientists is to understand how neural circuits at each stage of the visual system extract and encode features from the visual scene. This study documents a novel type of orientation-selective ganglion cell in the retina and shows that the receptive field structure is remarkably similar to that of simple cells in primary visual cortex. However, the data indicate that, unlike in the cortex, orientation selectivity in the retina depends on the activity of inhibitory interneurons. The results further reveal the physiological basis for feature detection in the visual system, elucidate the synaptic mechanisms that generate orientation selectivity at an early stage of visual processing, and illustrate a novel role for NMDA receptors in retinal processing. Copyright © 2016 the authors.

  8. Visual and non-visual motion information processing during pursuit eye tracking in schizophrenia and bipolar disorder.

    PubMed

    Trillenberg, Peter; Sprenger, Andreas; Talamo, Silke; Herold, Kirsten; Helmchen, Christoph; Verleger, Rolf; Lencer, Rebekka

    2017-04-01

    Despite many reports on visual processing deficits in psychotic disorders, studies are needed on the integration of visual and non-visual components of eye movement control to improve the understanding of sensorimotor information processing in these disorders. Non-visual inputs to eye movement control include prediction of future target velocity from extrapolation of past visual target movement and anticipation of future target movements. It is unclear whether non-visual input is impaired in patients with schizophrenia. We recorded smooth pursuit eye movements in 21 patients with schizophrenia spectrum disorder, 22 patients with bipolar disorder, and 24 controls. In a foveo-fugal ramp task, the target was either continuously visible or was blanked during movement. We determined peak gain (measuring overall performance), initial eye acceleration (measuring visually driven pursuit), deceleration after target extinction (measuring prediction), eye velocity drifts before onset of target visibility (measuring anticipation), and residual gain during blanking intervals (measuring anticipation and prediction). In both patient groups, initial eye acceleration was decreased and the ability to adjust eye acceleration to increasing target acceleration was impaired. In contrast, neither deceleration nor eye drift velocity was reduced in patients, implying unimpaired non-visual contributions to pursuit drive. Disturbances of eye movement control in psychotic disorders appear to be a consequence of deficits in sensorimotor transformation rather than a pure failure in adding cognitive contributions to pursuit drive in higher-order cortical circuits. More generally, this deficit might reflect a fundamental imbalance between processing external input and acting according to internal preferences.

  9. AH-64 IHADSS aviator vision experiences in Operation Iraqi Freedom

    NASA Astrophysics Data System (ADS)

    Hiatt, Keith L.; Rash, Clarence E.; Harris, Eric S.; McGilberry, William H.

    2004-09-01

    Forty AH-64 Apache aviators representing a total of 8564 flight hours and 2260 combat hours during Operation Iraqi Freedom and its aftermath were surveyed for their visual experiences with the AH-64's monocular Integrated Helmet and Display Sighting System (IHADSS) helmet-mounted display in a combat environment. A major objective of this study was to determine if the frequencies of reports of visual complaints and illusions reported in the previous studies, addressing mostly benign training environments, differ in the more stressful combat environments. The most frequently reported visual complaints, both while and after flying, were visual discomfort and headache, which is consistent with previous studies. Frequencies of complaints after flying in the current study were numerically lower for all complaint types, but differences from previous studies are statistically significant only for visual discomfort and disorientation (vertigo). With the exception of "brownout/whiteout," reports of degraded visual cues in the current study were numerically lower for all types, but statistically significant only for impaired depth perception, decreased field of view, and inadvertent instrument meteorological conditions. This study also found statistically lower reports of all static and dynamic illusions (with one exception, disorientation). This important finding is attributed to the generally flat and featureless geography present in a large portion of the Iraqi theater and to the shift in the way that the aviators use the two disparate visual inputs presented by the IHADSS monocular design (i.e., greater use of both eyes as opposed to concentrating primarily on display imagery).

  10. Cortical and Subcortical Coordination of Visual Spatial Attention Revealed by Simultaneous EEG-fMRI Recording.

    PubMed

    Green, Jessica J; Boehler, Carsten N; Roberts, Kenneth C; Chen, Ling-Chia; Krebs, Ruth M; Song, Allen W; Woldorff, Marty G

    2017-08-16

    Visual spatial attention has been studied in humans with both electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) individually. However, due to the intrinsic limitations of each of these methods used alone, our understanding of the systems-level mechanisms underlying attentional control remains limited. Here, we examined trial-to-trial covariations of concurrently recorded EEG and fMRI in a cued visual spatial attention task in humans, which allowed delineation of both the generators and modulators of the cue-triggered event-related oscillatory brain activity underlying attentional control function. The fMRI activity in visual cortical regions contralateral to the cued direction of attention covaried positively with occipital gamma-band EEG, consistent with activation of cortical regions representing attended locations in space. In contrast, fMRI activity in ipsilateral visual cortical regions covaried inversely with occipital alpha-band oscillations, consistent with attention-related suppression of the irrelevant hemispace. Moreover, the pulvinar nucleus of the thalamus covaried with both of these spatially specific, attention-related, oscillatory EEG modulations. Because the pulvinar's neuroanatomical geometry makes it unlikely to be a direct generator of the scalp-recorded EEG, these covariational patterns appear to reflect the pulvinar's role as a regulatory control structure, sending spatially specific signals to modulate visual cortex excitability proactively. Together, these combined EEG/fMRI results illuminate the dynamically interacting cortical and subcortical processes underlying spatial attention, providing important insight not realizable using either method alone. SIGNIFICANCE STATEMENT Noninvasive recordings of changes in the brain's blood flow using functional magnetic resonance imaging and electrical activity using electroencephalography in humans have individually shown that shifting attention to a location in space produces spatially specific changes in visual cortex activity in anticipation of a stimulus. The mechanisms controlling these attention-related modulations of sensory cortex, however, are poorly understood. Here, we recorded these two complementary measures of brain activity simultaneously and examined their trial-to-trial covariations to gain insight into these attentional control mechanisms. This multi-methodological approach revealed the attention-related coordination of visual cortex modulation by the subcortical pulvinar nucleus of the thalamus while also disentangling the mechanisms underlying the attentional enhancement of relevant stimulus input and those underlying the concurrent suppression of irrelevant input. Copyright © 2017 the authors.

  11. Three Types of Cortical L5 Neurons that Differ in Brain-Wide Connectivity and Function

    PubMed Central

    Kim, Euiseok J.; Juavinett, Ashley L.; Kyubwa, Espoir M.; Jacobs, Matthew W.; Callaway, Edward M.

    2015-01-01

    SUMMARY Cortical layer 5 (L5) pyramidal neurons integrate inputs from many sources and distribute outputs to cortical and subcortical structures. Previous studies demonstrate two L5 pyramid types: cortico-cortical (CC) and cortico-subcortical (CS). We characterize connectivity and function of these cell types in mouse primary visual cortex and reveal a new subtype. Unlike previously described L5 CC and CS neurons, this new subtype does not project to striatum [cortico-cortical, non-striatal (CC-NS)] and has distinct morphology, physiology and visual responses. Monosynaptic rabies tracing reveals that CC neurons preferentially receive input from higher visual areas, while CS neurons receive more input from structures implicated in top-down modulation of brain states. CS neurons are also more direction-selective and prefer faster stimuli than CC neurons. These differences suggest distinct roles as specialized output channels, with CS neurons integrating information and generating responses more relevant to movement control and CC neurons being more important in visual perception. PMID:26671462

  12. Three Types of Cortical Layer 5 Neurons That Differ in Brain-wide Connectivity and Function.

    PubMed

    Kim, Euiseok J; Juavinett, Ashley L; Kyubwa, Espoir M; Jacobs, Matthew W; Callaway, Edward M

    2015-12-16

    Cortical layer 5 (L5) pyramidal neurons integrate inputs from many sources and distribute outputs to cortical and subcortical structures. Previous studies demonstrate two L5 pyramid types: cortico-cortical (CC) and cortico-subcortical (CS). We characterize connectivity and function of these cell types in mouse primary visual cortex and reveal a new subtype. Unlike previously described L5 CC and CS neurons, this new subtype does not project to striatum [cortico-cortical, non-striatal (CC-NS)] and has distinct morphology, physiology, and visual responses. Monosynaptic rabies tracing reveals that CC neurons preferentially receive input from higher visual areas, while CS neurons receive more input from structures implicated in top-down modulation of brain states. CS neurons are also more direction-selective and prefer faster stimuli than CC neurons. These differences suggest distinct roles as specialized output channels, with CS neurons integrating information and generating responses more relevant to movement control and CC neurons being more important in visual perception. Copyright © 2015 Elsevier Inc. All rights reserved.

  13. Curve Boxplot: Generalization of Boxplot for Ensembles of Curves.

    PubMed

    Mirzargar, Mahsa; Whitaker, Ross T; Kirby, Robert M

    2014-12-01

    In simulation science, computational scientists often study the behavior of their simulations by repeated solutions with variations in parameters and/or boundary values or initial conditions. Through such simulation ensembles, one can try to understand or quantify the variability or uncertainty in a solution as a function of the various inputs or model assumptions. In response to a growing interest in simulation ensembles, the visualization community has developed a suite of methods for allowing users to observe and understand the properties of these ensembles in an efficient and effective manner. An important aspect of visualizing simulations is the analysis of derived features, often represented as points, surfaces, or curves. In this paper, we present a novel, nonparametric method for summarizing ensembles of 2D and 3D curves. We propose an extension of a method from descriptive statistics, data depth, to curves. We also demonstrate a set of rendering and visualization strategies for showing rank statistics of an ensemble of curves, which is a generalization of traditional whisker plots or boxplots to multidimensional curves. Results are presented for applications in neuroimaging, hurricane forecasting and fluid dynamics.
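    The data-depth idea underlying the curve boxplot can be illustrated with a minimal band-depth computation (the common J = 2 variant). The function below is a sketch under that assumption, not the authors' implementation: a curve's depth is the fraction of curve pairs whose pointwise envelope fully contains it, and the deepest curve plays the role of the median in the generalized boxplot.

    ```python
    import numpy as np

    def band_depth(curves):
        """Band depth (J = 2) for an ensemble of curves.

        curves: array of shape (n, T), one sampled curve per row.
        Returns an (n,) array of depth scores in [0, 1].
        """
        n, _ = curves.shape
        depth = np.zeros(n)
        pairs = 0
        for j in range(n):
            for k in range(j + 1, n):
                lo = np.minimum(curves[j], curves[k])  # lower envelope of the pair
                hi = np.maximum(curves[j], curves[k])  # upper envelope of the pair
                # which curves lie entirely inside this band?
                depth += np.all((curves >= lo) & (curves <= hi), axis=1)
                pairs += 1
        return depth / pairs
    ```

    Ranking curves by this score gives the median (deepest curve), the central 50% band, and outliers, i.e. the ingredients of a boxplot generalized to curves.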

  14. Modeling the Effects of Perceptual Load: Saliency, Competitive Interactions, and Top-Down Biases

    PubMed Central

    Neokleous, Kleanthis; Shimi, Andria; Avraamides, Marios N.

    2016-01-01

    A computational model of visual selective attention has been implemented to account for experimental findings on the Perceptual Load Theory (PLT) of attention. The model was designed based on existing neurophysiological findings on attentional processes with the objective to offer an explicit and biologically plausible formulation of PLT. Simulation results verified that the proposed model is capable of capturing the basic pattern of results that support the PLT as well as findings that are considered contradictory to the theory. Importantly, the model is able to reproduce the behavioral results from a dilution experiment, providing thus a way to reconcile PLT with the competing Dilution account. Overall, the model presents a novel account for explaining PLT effects on the basis of the low-level competitive interactions among neurons that represent visual input and the top-down signals that modulate neural activity. The implications of the model concerning the debate on the locus of selective attention as well as the origins of distractor interference in visual displays of varying load are discussed. PMID:26858668

  15. Input-dependent frequency modulation of cortical gamma oscillations shapes spatial synchronization and enables phase coding.

    PubMed

    Lowet, Eric; Roberts, Mark; Hadjipapas, Avgis; Peter, Alina; van der Eerden, Jan; De Weerd, Peter

    2015-02-01

    Fine-scale temporal organization of cortical activity in the gamma range (∼25-80Hz) may play a significant role in information processing, for example by neural grouping ('binding') and phase coding. Recent experimental studies have shown that the precise frequency of gamma oscillations varies with input drive (e.g. visual contrast) and that it can differ among nearby cortical locations. This has challenged theories assuming widespread gamma synchronization at a fixed common frequency. In the present study, we investigated which principles govern gamma synchronization in the presence of input-dependent frequency modulations and whether they are detrimental for meaningful input-dependent gamma-mediated temporal organization. To this aim, we constructed a biophysically realistic excitatory-inhibitory network able to express different oscillation frequencies at nearby spatial locations. Similarly to cortical networks, the model was topographically organized with spatially local connectivity and spatially-varying input drive. We analyzed gamma synchronization with respect to phase-locking, phase-relations and frequency differences, and quantified the stimulus-related information represented by gamma phase and frequency. By stepwise simplification of our models, we found that the gamma-mediated temporal organization could be reduced to basic synchronization principles of weakly coupled oscillators, where input drive determines the intrinsic (natural) frequency of oscillators. The gamma phase-locking, the precise phase relation and the emergent (measurable) frequencies were determined by two principal factors: the detuning (intrinsic frequency difference, i.e. local input difference) and the coupling strength. In addition to frequency coding, gamma phase contained complementary stimulus information. Crucially, the phase code reflected input differences, but not the absolute input level. 
This property of relative input-to-phase conversion, contrasting with latency codes or slower oscillation phase codes, may resolve conflicting experimental observations on gamma phase coding. Our modeling results offer clear testable experimental predictions. We conclude that input-dependency of gamma frequencies could be essential rather than detrimental for meaningful gamma-mediated temporal organization of cortical activity.
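The synchronization principle to which the authors reduce their network can be sketched with two Kuramoto-style phase oscillators (an illustrative simplification, not their biophysically realistic excitatory-inhibitory model; all parameter values here are chosen for illustration): input drive sets each oscillator's intrinsic frequency, and phase-locking occurs only when coupling is strong enough relative to the detuning.

```python
import math

def phase_drift_rate(f1, f2, coupling, t_max=20.0, dt=1e-3):
    """Euler-integrate two weakly coupled phase oscillators and return the
    average drift rate (rad/s) of their phase difference over the last
    quarter of the run; near-zero drift indicates phase-locking."""
    w1, w2 = 2 * math.pi * f1, 2 * math.pi * f2  # intrinsic (natural) frequencies
    th1 = th2 = 0.0
    steps = int(t_max / dt)
    phase_diff = []
    for _ in range(steps):
        d = th2 - th1
        th1 += (w1 + coupling * math.sin(d)) * dt
        th2 += (w2 - coupling * math.sin(d)) * dt
        phase_diff.append(th1 - th2)
    tail = phase_diff[3 * steps // 4:]
    return abs(tail[-1] - tail[0]) / (t_max / 4)

# Detuning of 2 Hz (e.g. two gamma sites at 40 and 42 Hz): in this reduced
# model, locking requires 2*coupling > |2*pi*(f1 - f2)| ~= 12.6 rad/s.
locked_drift = phase_drift_rate(40.0, 42.0, coupling=10.0)   # locks
unlocked_drift = phase_drift_rate(40.0, 42.0, coupling=2.0)  # drifts
```

With strong coupling the phase difference settles to a fixed offset (the phase relation encodes the input difference, not the absolute input level); with weak coupling relative to detuning, the oscillators drift apart.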

  16. Input-Dependent Frequency Modulation of Cortical Gamma Oscillations Shapes Spatial Synchronization and Enables Phase Coding

    PubMed Central

    Lowet, Eric; Roberts, Mark; Hadjipapas, Avgis; Peter, Alina; van der Eerden, Jan; De Weerd, Peter

    2015-01-01

    Fine-scale temporal organization of cortical activity in the gamma range (∼25–80Hz) may play a significant role in information processing, for example by neural grouping (‘binding’) and phase coding. Recent experimental studies have shown that the precise frequency of gamma oscillations varies with input drive (e.g. visual contrast) and that it can differ among nearby cortical locations. This has challenged theories assuming widespread gamma synchronization at a fixed common frequency. In the present study, we investigated which principles govern gamma synchronization in the presence of input-dependent frequency modulations and whether they are detrimental for meaningful input-dependent gamma-mediated temporal organization. To this aim, we constructed a biophysically realistic excitatory-inhibitory network able to express different oscillation frequencies at nearby spatial locations. Similarly to cortical networks, the model was topographically organized with spatially local connectivity and spatially-varying input drive. We analyzed gamma synchronization with respect to phase-locking, phase-relations and frequency differences, and quantified the stimulus-related information represented by gamma phase and frequency. By stepwise simplification of our models, we found that the gamma-mediated temporal organization could be reduced to basic synchronization principles of weakly coupled oscillators, where input drive determines the intrinsic (natural) frequency of oscillators. The gamma phase-locking, the precise phase relation and the emergent (measurable) frequencies were determined by two principal factors: the detuning (intrinsic frequency difference, i.e. local input difference) and the coupling strength. In addition to frequency coding, gamma phase contained complementary stimulus information. Crucially, the phase code reflected input differences, but not the absolute input level. 
This property of relative input-to-phase conversion, contrasting with latency codes or slower oscillation phase codes, may resolve conflicting experimental observations on gamma phase coding. Our modeling results offer clear testable experimental predictions. We conclude that input-dependency of gamma frequencies could be essential rather than detrimental for meaningful gamma-mediated temporal organization of cortical activity. PMID:25679780

  17. Evaluation of historical museum interior lighting system using fully immersive virtual luminous environment

    NASA Astrophysics Data System (ADS)

    Navvab, Mojtaba; Bisegna, Fabio; Gugliermetti, Franco

    2013-05-01

Saint Rocco Museum, a historical building in Venice, Italy, is used as a case study to explore the performance of its lighting system and the impact of visible light on viewing large-size artworks. The transition from three-dimensional architectural rendering to three-dimensional virtual luminance mapping and visualization within a virtual environment is described as an integrated optical method for application toward preservation of the cultural heritage of the space. Lighting simulation programs represent color as RGB triplets in a device-dependent color space such as ITU-R BT.709. A prerequisite for this is a 3D model, which can be created within this computer-aided virtual environment. The on-site measured surface luminance, chromaticity and spectral data were used as input to an established real-time indirect-illumination algorithm and physically based algorithms to produce the best approximation of the RGB values used to generate images of the objects. Conversion of RGB to and from spectra has been a major undertaking, since an infinite number of spectra must be matched to create the same colors that were defined by RGB in the program. The ability to simulate light intensity, candlepower and spectral power distributions provides the opportunity to examine the impact of color inter-reflections on historical paintings. VR offers an effective technique to quantify the impact of visible light on human visual performance under a precisely controlled representation of the light spectrum that can be experienced in 3D format in a virtual environment, as well as in historical visual archives. The system can easily be expanded to include other measurements and stimuli.
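The device-dependent RGB representation mentioned above uses the ITU-R BT.709 primaries; a minimal, standards-based piece of that pipeline is the relative-luminance weighting of a linear-light RGB triplet (the full RGB-to-spectrum matching described by the authors is far more involved):

```python
def bt709_luminance(r, g, b):
    """Relative luminance Y of a linear-light RGB triplet using the
    ITU-R BT.709 luma coefficients (R, G, B each in [0, 1])."""
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

white = bt709_luminance(1.0, 1.0, 1.0)  # reference white -> 1.0
```

Note that these coefficients apply to linear (gamma-expanded) components; gamma-encoded display RGB must be linearized first.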

  18. Parallel Processing Strategies of the Primate Visual System

    PubMed Central

    Nassi, Jonathan J.; Callaway, Edward M.

    2009-01-01

Incoming sensory information is sent to the brain along modality-specific channels corresponding to the five senses. Each of these channels further parses the incoming signals into parallel streams to provide a compact, efficient input to the brain. Ultimately, these parallel input signals must be elaborated upon and integrated within the cortex to provide a unified and coherent percept. Recent studies in the primate visual cortex have greatly contributed to our understanding of how this goal is accomplished. Multiple strategies, including retinal tiling, hierarchical and parallel processing, and modularity, defined spatially and by cell type-specific connectivity, are all used by the visual system to recover the rich detail of our visual surroundings. PMID:19352403

  19. Adapting the iSNOBAL model for improved visualization in a GIS environment

    NASA Astrophysics Data System (ADS)

    Johansen, W. J.; Delparte, D.

    2014-12-01

Snowmelt is a primary source of crucial water resources in much of the western United States. Researchers are developing models that estimate snowmelt to aid in water resource management. One such model is the image snowcover energy and mass balance (iSNOBAL) model. It uses input climate grids to simulate the development and melting of snowpack in mountainous regions. This study applies the model to the Reynolds Creek Experimental Watershed in southwestern Idaho, utilizing novel approaches incorporating geographic information systems (GIS). To improve visualization of the iSNOBAL model, we have adapted it to run in a GIS environment, which is suited to both input-grid creation and the visualization of results. The data used for input-grid creation can be stored locally or on a web server. Kriging interpolation embedded within Python scripts is used to create air temperature, soil temperature, humidity, and precipitation grids, while built-in GIS and existing tools are used to create solar radiation and wind grids. Additional Python scripting is then used to perform the model calculations. The final product is a user-friendly and accessible version of the iSNOBAL model, including the ability to easily visualize and interact with model results, all within a web- or desktop-based GIS environment. This environment allows interactive manipulation of model parameters and visualization of the resulting input grids for the model calculations. Future work is moving toward adapting the model for use in a 3D gaming engine for improved visualization and interaction.
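The station-to-grid interpolation step can be sketched as follows; kriging proper additionally fits a spatial variogram, so inverse-distance weighting is used here as a simpler stand-in, and the function name is ours:

```python
def idw_estimate(stations, x, y, power=2.0):
    """Inverse-distance-weighted estimate at grid point (x, y).

    stations: iterable of (sx, sy, value) point observations, e.g. air
    temperature readings. Returns the observed value exactly at a station."""
    num = den = 0.0
    for sx, sy, value in stations:
        d2 = (x - sx) ** 2 + (y - sy) ** 2
        if d2 == 0.0:
            return value  # grid point coincides with a station
        weight = d2 ** (-power / 2.0)
        num += weight * value
        den += weight
    return num / den

obs = [(0.0, 0.0, 10.0), (10.0, 0.0, 20.0)]
midpoint = idw_estimate(obs, 5.0, 0.0)  # equidistant, equal weights -> 15.0
```

Evaluating this function over every cell of a raster yields an input grid of the kind the model consumes.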

  20. Isolating Visual and Proprioceptive Components of Motor Sequence Learning in ASD.

    PubMed

    Sharer, Elizabeth A; Mostofsky, Stewart H; Pascual-Leone, Alvaro; Oberman, Lindsay M

    2016-05-01

In addition to defining impairments in social communication skills, individuals with autism spectrum disorder (ASD) also show impairments in more basic sensory and motor skills. Development of new skills involves integrating information from multiple sensory modalities. This input is then used to form internal models of action that can be accessed when both performing skilled movements, as well as understanding those actions performed by others. Learning skilled gestures is particularly reliant on integration of visual and proprioceptive input. We used a modified serial reaction time task (SRTT) to decompose proprioceptive and visual components and examine whether patterns of implicit motor skill learning differ in ASD participants as compared with healthy controls. While both groups learned the implicit motor sequence during training, healthy controls showed robust generalization whereas ASD participants demonstrated little generalization when visual input was constant. In contrast, no group differences in generalization were observed when proprioceptive input was constant, with both groups showing limited degrees of generalization. The findings suggest, when learning a motor sequence, individuals with ASD tend to rely less on visual feedback than do healthy controls. Visuomotor representations are considered to underlie imitative learning and action understanding and are thereby crucial to social skill and cognitive development. Thus, anomalous patterns of implicit motor learning, with a tendency to discount visual feedback, may be an important contributor in core social communication deficits that characterize ASD. Autism Res 2016, 9: 563-569. © 2015 International Society for Autism Research, Wiley Periodicals, Inc.

  1. Assessing visual requirements for social context-dependent activation of the songbird song system

    PubMed Central

    Hara, Erina; Kubikova, Lubica; Hessler, Neal A.; Jarvis, Erich D.

    2008-01-01

    Social context has been shown to have a profound influence on brain activation in a wide range of vertebrate species. Best studied in songbirds, when males sing undirected song, the level of neural activity and expression of immediate early genes (IEGs) in several song nuclei is dramatically higher or lower than when they sing directed song to other birds, particularly females. This differential social context-dependent activation is independent of auditory input and is not simply dependent on the motor act of singing. These findings suggested that the critical sensory modality driving social context-dependent differences in the brain could be visual cues. Here, we tested this hypothesis by examining IEG activation in song nuclei in hemispheres to which visual input was normal or blocked. We found that covering one eye blocked visually induced IEG expression throughout both contralateral visual pathways of the brain, and reduced activation of the contralateral ventral tegmental area, a non-visual midbrain motivation-related area affected by social context. However, blocking visual input had no effect on the social context-dependent activation of the contralateral song nuclei during female-directed singing. Our findings suggest that individual sensory modalities are not direct driving forces for the social context differences in song nuclei during singing. Rather, these social context differences in brain activation appear to depend more on the general sense that another individual is present. PMID:18826930

  2. Role of feedforward geniculate inputs in the generation of orientation selectivity in the cat's primary visual cortex

    PubMed Central

    Viswanathan, Sivaram; Jayakumar, Jaikishan; Vidyasagar, Trichur R

    2011-01-01

Neurones of the mammalian primary visual cortex have the remarkable property of being selective for the orientation of visual contours. It has been controversial whether the selectivity arises from intracortical mechanisms, from the pattern of afferent connectivity from the lateral geniculate nucleus (LGN) to cortical cells, or from the sharpening of a bias that is already present in the responses of many geniculate cells. To investigate this, we employed a variation of an electrical stimulation protocol in the LGN that has been claimed to suppress intracortical inputs and isolate the raw geniculocortical input to a striate cortical cell. Such stimulation led to a sharpening of the orientation sensitivity of geniculate cells themselves and some broadening of cortical orientation selectivity. These findings are consistent with the idea that non-specific inhibition of the signals from LGN cells which exhibit an orientation bias can generate the sharp orientation selectivity of primary visual cortical cells. This obviates the need for an excitatory convergence from geniculate cells whose receptive fields are arranged along a row in visual space, as in the classical model, and provides a framework for orientation sensitivity originating in the retina and being sharpened through inhibition at higher levels of the visual pathway. PMID:21486788
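The proposed mechanism, in which non-specific inhibition sharpens a weak geniculate orientation bias, can be illustrated with a toy tuning curve (a schematic "iceberg effect"; all parameter values are arbitrary and chosen only for illustration):

```python
import math

def sharpened_response(theta_deg, baseline=1.0, bias=0.2, inhibition=0.9):
    """Weakly biased drive (baseline + bias*cos(2*theta)) minus uniform
    subtractive inhibition, then rectified: the untuned pedestal is cut
    away, leaving a much more selective orientation response."""
    drive = baseline + bias * math.cos(2 * math.radians(theta_deg))
    return max(drive - inhibition, 0.0)

preferred = sharpened_response(0.0)    # survives inhibition (~0.3)
orthogonal = sharpened_response(90.0)  # fully suppressed (0.0)
```

Before inhibition the preferred/orthogonal response ratio is only 1.2/0.8; after subtraction and rectification the orthogonal response vanishes entirely.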

  3. Predicting Cortical Dark/Bright Asymmetries from Natural Image Statistics and Early Visual Transforms

    PubMed Central

    Cooper, Emily A.; Norcia, Anthony M.

    2015-01-01

The nervous system has evolved in an environment with structure and predictability. One of the ubiquitous principles of sensory systems is the creation of circuits that capitalize on this predictability. Previous work has identified predictable non-uniformities in the distributions of basic visual features in natural images that are relevant to the encoding tasks of the visual system. Here, we report that the well-established statistical distributions of visual features, such as visual contrast, spatial scale, and depth, differ between bright and dark image components. Following this analysis, we go on to trace how these differences in natural images translate into different patterns of cortical input that arise from the separate bright (ON) and dark (OFF) pathways originating in the retina. We use models of these early visual pathways to transform natural images into statistical patterns of cortical input. The models include the receptive fields and non-linear response properties of the magnocellular (M) and parvocellular (P) pathways, with their ON and OFF pathway divisions. The results indicate that there are regularities in visual cortical input beyond those that have previously been appreciated from the direct analysis of natural images. In particular, several dark/bright asymmetries provide a potential account for recently discovered asymmetries in how the brain processes visual features, such as violations of classic energy-type models. On the basis of our analysis, we expect that the dark/bright dichotomy in natural images plays a key role in the generation of both cortical and perceptual asymmetries. PMID:26020624
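The ON/OFF pathway division used in the authors' image transforms can be caricatured as half-wave rectification of luminance deviations about the mean (a deliberate simplification; their models additionally include M/P receptive fields and response non-linearities):

```python
def on_off_split(luminances):
    """Split luminance values into ON (brighter than mean) and OFF (darker
    than mean) channels by half-wave rectifying deviations from the mean."""
    mean = sum(luminances) / len(luminances)
    on = [max(v - mean, 0.0) for v in luminances]
    off = [max(mean - v, 0.0) for v in luminances]
    return on, off

on, off = on_off_split([0.0, 1.0, 2.0, 3.0])  # mean = 1.5
```

Statistics computed separately on the two channels (e.g. contrast or spatial-scale distributions) can then differ, which is the kind of dark/bright asymmetry the abstract describes.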

  4. Feature-Specific Organization of Feedback Pathways in Mouse Visual Cortex.

    PubMed

    Huh, Carey Y L; Peach, John P; Bennett, Corbett; Vega, Roxana M; Hestrin, Shaul

    2018-01-08

    Higher and lower cortical areas in the visual hierarchy are reciprocally connected [1]. Although much is known about how feedforward pathways shape receptive field properties of visual neurons, relatively little is known about the role of feedback pathways in visual processing. Feedback pathways are thought to carry top-down signals, including information about context (e.g., figure-ground segmentation and surround suppression) [2-5], and feedback has been demonstrated to sharpen orientation tuning of neurons in the primary visual cortex (V1) [6, 7]. However, the response characteristics of feedback neurons themselves and how feedback shapes V1 neurons' tuning for other features, such as spatial frequency (SF), remain largely unknown. Here, using a retrograde virus, targeted electrophysiological recordings, and optogenetic manipulations, we show that putatively feedback neurons in layer 5 (hereafter "L5 feedback") in higher visual areas, AL (anterolateral area) and PM (posteromedial area), display distinct visual properties in awake head-fixed mice. AL L5 feedback neurons prefer significantly lower SF (mean: 0.04 cycles per degree [cpd]) compared to PM L5 feedback neurons (0.15 cpd). Importantly, silencing AL L5 feedback reduced visual responses of V1 neurons preferring low SF (mean change in firing rate: -8.0%), whereas silencing PM L5 feedback suppressed responses of high-SF-preferring V1 neurons (-20.4%). These findings suggest that feedback connections from higher visual areas convey distinctly tuned visual inputs to V1 that serve to boost V1 neurons' responses to SF. Such like-to-like functional organization may represent an important feature of feedback pathways in sensory systems and in the nervous system in general. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. Differential patterns of 2D location versus depth decoding along the visual hierarchy.

    PubMed

    Finlayson, Nonie J; Zhang, Xiaoli; Golomb, Julie D

    2017-02-15

    Visual information is initially represented as 2D images on the retina, but our brains are able to transform this input to perceive our rich 3D environment. While many studies have explored 2D spatial representations or depth perception in isolation, it remains unknown if or how these processes interact in human visual cortex. Here we used functional MRI and multi-voxel pattern analysis to investigate the relationship between 2D location and position-in-depth information. We stimulated different 3D locations in a blocked design: each location was defined by horizontal, vertical, and depth position. Participants remained fixated at the center of the screen while passively viewing the peripheral stimuli with red/green anaglyph glasses. Our results revealed a widespread, systematic transition throughout visual cortex. As expected, 2D location information (horizontal and vertical) could be strongly decoded in early visual areas, with reduced decoding higher along the visual hierarchy, consistent with known changes in receptive field sizes. Critically, we found that the decoding of position-in-depth information tracked inversely with the 2D location pattern, with the magnitude of depth decoding gradually increasing from intermediate to higher visual and category regions. Representations of 2D location information became increasingly location-tolerant in later areas, where depth information was also tolerant to changes in 2D location. We propose that spatial representations gradually transition from 2D-dominant to balanced 3D (2D and depth) along the visual hierarchy. Copyright © 2016 Elsevier Inc. All rights reserved.
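The decoding logic of multi-voxel pattern analysis can be sketched with a nearest-centroid classifier over voxel-pattern vectors (a common MVPA baseline offered purely as illustration; the study's actual classifier and cross-validation scheme are not specified in this abstract):

```python
def nearest_centroid_decode(training_patterns, test_pattern):
    """training_patterns: dict mapping a condition label (e.g. a stimulus
    location) to a list of voxel-pattern vectors. Returns the label whose
    mean pattern (centroid) is closest to test_pattern (Euclidean)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    centroids = {
        label: [sum(vals) / len(vals) for vals in zip(*patterns)]
        for label, patterns in training_patterns.items()
    }
    return min(centroids, key=lambda label: dist2(centroids[label], test_pattern))

train = {"near": [[1.0, 0.0], [0.9, 0.1]], "far": [[0.0, 1.0], [0.1, 0.9]]}
decoded = nearest_centroid_decode(train, [0.8, 0.2])  # -> "near"
```

Decoding accuracy above chance in a region implies that its voxel patterns carry information about the decoded dimension, which is how "2D location" and "depth" information are compared across areas.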

  6. Task-specific reorganization of the auditory cortex in deaf humans

    PubMed Central

    Bola, Łukasz; Zimmermann, Maria; Mostowski, Piotr; Jednoróg, Katarzyna; Marchewka, Artur; Rutkowski, Paweł; Szwed, Marcin

    2017-01-01

The principles that guide large-scale cortical reorganization remain unclear. In the blind, several visual regions preserve their task specificity; ventral visual areas, for example, become engaged in auditory and tactile object-recognition tasks. It remains open whether task-specific reorganization is unique to the visual cortex or, alternatively, whether this kind of plasticity is a general principle applying to other cortical areas. Auditory areas can become recruited for visual and tactile input in the deaf. Although nonhuman data suggest that this reorganization might be task specific, human evidence has been lacking. Here we enrolled 15 deaf and 15 hearing adults into a functional MRI experiment during which they discriminated between temporally complex sequences of stimuli (rhythms). Both deaf and hearing subjects performed the task visually, in the central visual field. In addition, hearing subjects performed the same task in the auditory modality. We found that the visual task robustly activated the auditory cortex in deaf subjects, peaking in the posterior–lateral part of high-level auditory areas. This activation pattern was strikingly similar to the pattern found in hearing subjects performing the auditory version of the task. Although performing the visual task in deaf subjects induced an increase in functional connectivity between the auditory cortex and the dorsal visual cortex, no such effect was found in hearing subjects. We conclude that in deaf humans the high-level auditory cortex switches its input modality from sound to vision but preserves its task-specific activation pattern independent of input modality. Task-specific reorganization thus might be a general principle that guides cortical plasticity in the brain. PMID:28069964

  7. Top-down influence on the visual cortex of the blind during sensory substitution.

    PubMed

    Murphy, Matthew C; Nau, Amy C; Fisher, Christopher; Kim, Seong-Gi; Schuman, Joel S; Chan, Kevin C

    2016-01-15

Visual sensory substitution devices provide a non-surgical and flexible approach to vision rehabilitation in the blind. These devices convert images taken by a camera into cross-modal sensory signals that are presented as a surrogate for direct visual input. While previous work has demonstrated that the visual cortex of blind subjects is recruited during sensory substitution, the cognitive basis of this activation remains incompletely understood. To test the hypothesis that top-down input provides a significant contribution to this activation, we performed functional MRI scanning in 11 blind (7 acquired and 4 congenital) and 11 sighted subjects under two conditions: passive listening to image-encoded soundscapes before sensory substitution training and active interpretation of the same auditory sensory substitution signals after a 10-minute training session. We found that the modulation of visual cortex activity due to active interpretation was significantly stronger in blind than in sighted subjects. In addition, congenitally blind subjects showed stronger task-induced modulation in the visual cortex than acquired blind subjects. In a parallel experiment, we scanned 18 blind (11 acquired and 7 congenital) and 18 sighted subjects at rest to investigate alterations in functional connectivity due to visual deprivation. The results demonstrated that visual cortex connectivity of the blind shifted away from sensory networks and toward known areas of top-down input. Taken together, our data support the model of the brain, including the visual system, as a highly flexible task-based and not sensory-based machine. Copyright © 2015 Elsevier Inc. All rights reserved.
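The image-to-soundscape conversion such devices perform can be sketched with a vOICe-style mapping (illustrative only; the particular device and encoding used in this study are not detailed in the abstract): columns scan left-to-right in time, row height maps to pitch, and pixel brightness maps to amplitude.

```python
def image_to_soundscape(image, f_min=200.0, f_max=5000.0, duration=1.0):
    """image: 2D list of brightness values in [0, 1], indexed image[row][col],
    with row 0 at the top. Returns (onset_time_s, frequency_hz, amplitude)
    triples: left-to-right scan in time, higher rows -> higher (log-spaced) pitch."""
    n_rows, n_cols = len(image), len(image[0])
    events = []
    for col in range(n_cols):
        onset = duration * col / n_cols
        for row in range(n_rows):
            height = 1.0 - row / max(n_rows - 1, 1)  # 1.0 = top row
            freq = f_min * (f_max / f_min) ** height
            amplitude = image[row][col]
            if amplitude > 0.0:
                events.append((onset, freq, amplitude))
    return events

# A bright dot top-left and a dimmer dot bottom-right:
events = image_to_soundscape([[1.0, 0.0], [0.0, 0.5]])
```

Synthesizing a sine tone for each triple would turn the event list into an audible soundscape; a trained listener can learn to interpret such signals as spatial layout.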

  8. Task-specific reorganization of the auditory cortex in deaf humans.

    PubMed

    Bola, Łukasz; Zimmermann, Maria; Mostowski, Piotr; Jednoróg, Katarzyna; Marchewka, Artur; Rutkowski, Paweł; Szwed, Marcin

    2017-01-24

    The principles that guide large-scale cortical reorganization remain unclear. In the blind, several visual regions preserve their task specificity; ventral visual areas, for example, become engaged in auditory and tactile object-recognition tasks. It remains open whether task-specific reorganization is unique to the visual cortex or, alternatively, whether this kind of plasticity is a general principle applying to other cortical areas. Auditory areas can become recruited for visual and tactile input in the deaf. Although nonhuman data suggest that this reorganization might be task specific, human evidence has been lacking. Here we enrolled 15 deaf and 15 hearing adults into an functional MRI experiment during which they discriminated between temporally complex sequences of stimuli (rhythms). Both deaf and hearing subjects performed the task visually, in the central visual field. In addition, hearing subjects performed the same task in the auditory modality. We found that the visual task robustly activated the auditory cortex in deaf subjects, peaking in the posterior-lateral part of high-level auditory areas. This activation pattern was strikingly similar to the pattern found in hearing subjects performing the auditory version of the task. Although performing the visual task in deaf subjects induced an increase in functional connectivity between the auditory cortex and the dorsal visual cortex, no such effect was found in hearing subjects. We conclude that in deaf humans the high-level auditory cortex switches its input modality from sound to vision but preserves its task-specific activation pattern independent of input modality. Task-specific reorganization thus might be a general principle that guides cortical plasticity in the brain.

  9. Combined contributions of feedforward and feedback inputs to bottom-up attention

    PubMed Central

    Khorsand, Peyman; Moore, Tirin; Soltani, Alireza

    2015-01-01

In order to deal with a large amount of information carried by visual inputs entering the brain at any given point in time, the brain swiftly uses the same inputs to enhance processing in one part of the visual field at the expense of the others. These processes, collectively called bottom-up attentional selection, are assumed to rely solely on feedforward processing of the external inputs, as implied by the nomenclature. Nevertheless, evidence from recent experimental and modeling studies points to the role of feedback in bottom-up attention. Here, we review behavioral and neural evidence that feedback inputs are important for the formation of signals that could guide attentional selection based on exogenous inputs. Moreover, we review results from a modeling study elucidating mechanisms underlying the emergence of these signals in successive layers of neural populations and how they depend on feedback from higher visual areas. We use these results to interpret and discuss more recent findings that can further unravel feedforward and feedback neural mechanisms underlying bottom-up attention. We argue that while it is descriptively useful to separate feedforward and feedback processes underlying bottom-up attention, these processes cannot be mechanistically separated into two successive stages as they occur at almost the same time and affect neural activity within the same brain areas using similar neural mechanisms. Therefore, understanding the interaction and integration of feedforward and feedback inputs is crucial for a better understanding of bottom-up attention. PMID:25784883

  10. Cardiology Mannequin

    NASA Technical Reports Server (NTRS)

    1979-01-01

    Education of medical students in cardiology requires access to patients having a variety of different forms of heart disease. But bringing together student, instructor and patient is a difficult and expensive process that does not benefit the patient. An alternate approach is substitution of a lifelike mannequin capable of simulating many conditions of heart disease. The mannequin pictured below, together with a related information display, is an advanced medical training system whose development benefited from NASA visual display technology and consultative input from NASA's Kennedy Space Center. The mannequin system represents more than 10 years of development effort by Dr. Michael S. Gordon, professor of cardiology at the University of Miami (Florida) School of Medicine.

  11. The Effect of Conventional and Transparent Surgical Masks on Speech Understanding in Individuals with and without Hearing Loss.

    PubMed

    Atcherson, Samuel R; Mendel, Lisa Lucks; Baltimore, Wesley J; Patro, Chhayakanta; Lee, Sungmin; Pousson, Monique; Spann, M Joshua

    2017-01-01

It is generally well known that speech perception is often improved with integrated audiovisual input, whether in quiet or in noise. In many health-care environments, however, conventional surgical masks block visual access to the mouth and obscure other potential facial cues. In addition, these environments can be noisy. Although these masks may not alter the acoustic properties, the presence of noise in addition to the lack of visual input can have a deleterious effect on speech understanding. A transparent ("see-through") surgical mask may help to overcome this issue. The purpose of this study was to compare the effect of noise and various visual input conditions on speech understanding for listeners with normal hearing (NH) and hearing impairment using different surgical masks. Participants were assigned to one of three groups based on hearing sensitivity in this quasi-experimental, cross-sectional study. A total of 31 adults participated: one talker, ten listeners with NH, ten listeners with moderate sensorineural hearing loss, and ten listeners with severe-to-profound hearing loss. Selected lists from the Connected Speech Test were digitally recorded with and without surgical masks and then presented to the listeners at 65 dB HL in five conditions against a background of four-talker babble (+10 dB SNR): without a mask (auditory only), without a mask (auditory and visual), with a transparent mask (auditory only), with a transparent mask (auditory and visual), and with a paper mask (auditory only). A significant difference was found in the spectral analyses of the speech stimuli with and without the masks; however, the difference was no more than ∼2 dB root mean square. Listeners with NH performed consistently well across all conditions. Both groups of listeners with hearing impairment benefitted from visual input from the transparent mask. The magnitude of improvement in speech perception in noise was greatest for the severe-to-profound group. 
Findings confirm improved speech perception performance in noise for listeners with hearing impairment when visual input is provided using a transparent surgical mask. Most importantly, the use of the transparent mask did not negatively affect speech perception performance in noise. American Academy of Audiology
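The decibel figures above (the ∼2 dB RMS spectral difference, the +10 dB SNR babble) rest on the standard definition of level as 20·log10 of an RMS amplitude ratio, sketched here:

```python
import math

def rms(samples):
    """Root-mean-square amplitude of a sequence of samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def level_difference_db(signal, reference):
    """Level of `signal` relative to `reference` in dB (amplitude ratio)."""
    return 20.0 * math.log10(rms(signal) / rms(reference))

ref = [1.0, -1.0, 1.0, -1.0]
half = [0.5 * s for s in ref]
diff = level_difference_db(half, ref)  # halving amplitude -> about -6.02 dB
```

By the same definition, a +10 dB SNR means the speech RMS is about 3.16 times the babble RMS.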

  12. Time-interval for integration of stabilizing haptic and visual information in subjects balancing under static and dynamic conditions

    PubMed Central

    Honeine, Jean-Louis; Schieppati, Marco

    2014-01-01

    Maintaining equilibrium is basically a sensorimotor integration task. The central nervous system (CNS) continually and selectively weights and rapidly integrates sensory inputs from multiple sources, and coordinates multiple outputs. The weighting process is based on the availability and accuracy of afferent signals at a given instant, on the time-period required to process each input, and possibly on the plasticity of the relevant pathways. The likelihood that sensory inflow changes while balancing under static or dynamic conditions is high, because subjects can pass from a dark to a well-lit environment or from a tactile-guided stabilization to loss of haptic inflow. This review article presents recent data on the temporal events accompanying sensory transition, on which basic information is fragmentary. The processing time from sensory shift to reaching a new steady state includes the time to (a) subtract or integrate sensory inputs; (b) move from allocentric to egocentric reference or vice versa; and (c) adjust the calibration of motor activity in time and amplitude to the new sensory set. We present examples of processes of integration of posture-stabilizing information, and of the respective sensorimotor time-intervals while allowing or occluding vision or adding or subtracting tactile information. These intervals are short, in the order of 1–2 s for different postural conditions, modalities and deliberate or passive shift. They are just longer for haptic than visual shift, just shorter on withdrawal than on addition of stabilizing input, and on deliberate than unexpected mode. The delays are the shortest (for haptic shift) in blind subjects. 
Since automatic balance stabilization may be vulnerable to sensory-integration delays and to interference from concurrent cognitive tasks in patients with sensorimotor problems, insight into the processing time for balance control represents a critical step in the design of new balance- and locomotion training devices. PMID:25339872

  13. Electrophysiological evidence of altered visual processing in adults who experienced visual deprivation during infancy.

    PubMed

    Segalowitz, Sidney J; Sternin, Avital; Lewis, Terri L; Dywan, Jane; Maurer, Daphne

    2017-04-01

    We examined the role of early visual input in visual system development by testing adults who had been born with dense bilateral cataracts that blocked all patterned visual input during infancy until the cataractous lenses were removed surgically and the eyes fitted with compensatory contact lenses. Patients viewed checkerboards and textures to explore early processing regions (V1, V2), Glass patterns to examine global form processing (V4), and moving stimuli to explore global motion processing (V5). Patients' ERPs differed from those of controls in that (1) the V1 component was much smaller for all but the simplest stimuli and (2) extrastriate components did not differentiate amongst texture stimuli, Glass patterns, or motion stimuli. The results indicate that early visual deprivation contributes to permanent abnormalities at early and mid levels of visual processing, consistent with enduring behavioral deficits in the ability to process complex textures, global form, and global motion. © 2017 Wiley Periodicals, Inc.

  14. Behavioural evidence for separate mechanisms of audiovisual temporal binding as a function of leading sensory modality.

    PubMed

    Cecere, Roberto; Gross, Joachim; Thut, Gregor

    2016-06-01

    The ability to integrate auditory and visual information is critical for effective perception and interaction with the environment, and is thought to be abnormal in some clinical populations. Several studies have investigated the time window over which audiovisual events are integrated, also called the temporal binding window, and revealed asymmetries depending on the order of audiovisual input (i.e. the leading sense). When judging audiovisual simultaneity, the binding window appears narrower and non-malleable for auditory-leading stimulus pairs and wider and trainable for visual-leading pairs. Here we specifically examined the level of independence of binding mechanisms when auditory-before-visual vs. visual-before-auditory input is bound. Three groups of healthy participants practiced audiovisual simultaneity detection with feedback, selectively training on auditory-leading stimulus pairs (group 1), visual-leading stimulus pairs (group 2) or both (group 3). Subsequently, we tested for learning transfer (crossover) from trained stimulus pairs to non-trained pairs with the opposite input order. Our data confirmed the known asymmetry in size and trainability for auditory-visual vs. visual-auditory binding windows. More importantly, practicing one type of audiovisual integration (e.g. auditory-visual) did not affect the other type (e.g. visual-auditory), even though the latter was trainable by within-condition practice. Together, these results provide crucial evidence that the mechanisms of audiovisual temporal binding for auditory-leading vs. visual-leading stimulus pairs are independent, possibly tapping into different circuits for audiovisual integration owing to the engagement of different multisensory sampling mechanisms depending on the leading sense. Our results have implications for the study of multisensory interactions in healthy participants and in clinical populations with dysfunctional multisensory integration. © 2016 The Authors.
European Journal of Neuroscience published by Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  15. Visual Functions of the Thalamus

    PubMed Central

    Usrey, W. Martin; Alitto, Henry J.

    2017-01-01

    The thalamus is the heavily interconnected partner of the neocortex. All areas of the neocortex receive afferent input from and send efferent projections to specific thalamic nuclei. Through these connections, the thalamus serves to provide the cortex with sensory input, and to facilitate interareal cortical communication and motor and cognitive functions. In the visual system, the lateral geniculate nucleus (LGN) of the dorsal thalamus is the gateway through which visual information reaches the cerebral cortex. Visual processing in the LGN includes spatial and temporal influences on visual signals that serve to adjust response gain, transform the temporal structure of retinal activity patterns, and increase the signal-to-noise ratio of the retinal signal while preserving its basic content. This review examines recent advances in our understanding of LGN function and circuit organization and places these findings in a historical context. PMID:28217740

  16. The Non-Lemniscal Auditory Cortex in Ferrets: Convergence of Corticotectal Inputs in the Superior Colliculus

    PubMed Central

    Bajo, Victoria M.; Nodal, Fernando R.; Bizley, Jennifer K.; King, Andrew J.

    2010-01-01

    Descending cortical inputs to the superior colliculus (SC) contribute to the unisensory response properties of the neurons found there and are critical for multisensory integration. However, little is known about the relative contribution of different auditory cortical areas to this projection or the distribution of their terminals in the SC. We characterized this projection in the ferret by injecting tracers in the SC and auditory cortex. Large pyramidal neurons were labeled in layer V of different parts of the ectosylvian gyrus after tracer injections in the SC. Those cells were most numerous in the anterior ectosylvian gyrus (AEG), and particularly in the anterior ventral field, which receives both auditory and visual inputs. Labeling was also found in the posterior ectosylvian gyrus (PEG), predominantly in the tonotopically organized posterior suprasylvian field. Profuse anterograde labeling was present in the SC following tracer injections at the site of acoustically responsive neurons in the AEG or PEG, with terminal fields being both more prominent and clustered for inputs originating from the AEG. Terminals from both cortical areas were located throughout the intermediate and deep layers, but were most concentrated in the posterior half of the SC, where peripheral stimulus locations are represented. No inputs were identified from primary auditory cortical areas, although some labeling was found in the surrounding sulci. Our findings suggest that higher level auditory cortical areas, including those involved in multisensory processing, may modulate SC function via their projections into its deeper layers. PMID:20640247

  17. WebStruct and VisualStruct: Web interfaces and visualization for Structure software implemented in a cluster environment.

    PubMed

    Jayashree, B; Rajgopal, S; Hoisington, D; Prasanth, V P; Chandra, S

    2008-09-24

    Structure is a widely used software tool for investigating population genetic structure with multi-locus genotyping data. The software uses an iterative algorithm to group individuals into "K" clusters, representing possibly K genetically distinct subpopulations. The serial implementation of this program is processor-intensive even with small datasets. We describe an implementation of the program within a parallel framework. Speedup was achieved by running different replicates and values of K on each node of the cluster. A web-based, user-oriented GUI has been implemented in PHP, through which the user can specify input parameters for the program. The number of processors to be used can be specified in the background command. A web-based visualization tool, "Visualstruct", written in PHP (with HTML and JavaScript embedded), allows for the graphical display of the population clusters output by Structure, where each individual may be visualized as a line segment with K colors defining its possible genomic composition with respect to the K genetic subpopulations. The advantage over available programs is the increased number of individuals that can be visualized. Analyses of real datasets indicate a speedup of up to fourfold when comparing execution on clusters of eight processors with execution on one desktop. The software package is freely available to interested users upon request.
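The parallelization strategy described here (different replicates and K values dispatched to different processors) can be sketched as a simple job scheduler. The sketch below is an assumption-laden illustration, not the authors' PHP/cluster code: `run_structure` is a placeholder that in practice would launch the Structure executable as an external process, and the command in its comment is only indicative.

```python
from concurrent.futures import ThreadPoolExecutor
from itertools import product

def run_structure(job):
    """Placeholder for one Structure run (hypothetical). In a real setup this
    would launch the external binary, e.g. via
    subprocess.run([...]) with the appropriate mainparams/extraparams files,
    so threads suffice for dispatching: each job is its own OS process."""
    k, rep = job
    return (k, rep, f"results_K{k}_rep{rep}")

def run_all(k_values, n_replicates, max_workers=8):
    """Run every (K, replicate) combination, max_workers jobs at a time."""
    jobs = list(product(k_values, range(n_replicates)))
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(run_structure, jobs))

# e.g. K in 1..3, four replicates each, on two workers:
results = run_all([1, 2, 3], 4, max_workers=2)
```

Because each (K, replicate) run is independent, the speedup is bounded only by the number of processors and the longest single run, consistent with the roughly fourfold speedup reported on eight-processor clusters.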

  18. Posterior Inferotemporal Cortex Cells Use Multiple Input Pathways for Shape Encoding.

    PubMed

    Ponce, Carlos R; Lomber, Stephen G; Livingstone, Margaret S

    2017-05-10

    In the macaque monkey brain, posterior inferior temporal (PIT) cortex cells contribute to visual object recognition. They receive concurrent inputs from visual areas V4, V3, and V2. We asked how these different anatomical pathways shape PIT response properties by deactivating them while monitoring PIT activity in two male macaques. We found that cooling of V4 or V2|3 did not lead to consistent changes in population excitatory drive; however, population pattern analyses showed that V4-based pathways were more important than V2|3-based pathways. We did not find any image features that predicted decoding accuracy differences between both interventions. Using the HMAX hierarchical model of visual recognition, we found that different groups of simulated "PIT" units with different input histories (lacking "V2|3" or "V4" input) allowed for comparable levels of object-decoding performance and that removing a large fraction of "PIT" activity resulted in similar drops in performance as in the cooling experiments. We conclude that distinct input pathways to PIT relay similar types of shape information, with V1-dependent V4 cells providing more quantitatively useful information for overall encoding than cells in V2 projecting directly to PIT. SIGNIFICANCE STATEMENT Convolutional neural networks are the best models of the visual system, but most emphasize input transformations across a serial hierarchy akin to the primary "ventral stream" (V1 → V2 → V4 → IT). However, the ventral stream also comprises parallel "bypass" pathways: V1 also connects to V4, and V2 to IT. To explore the advantages of mixing long and short pathways in the macaque brain, we used cortical cooling to silence inputs to posterior IT and compared the findings with an HMAX model with parallel pathways. Copyright © 2017 the authors 0270-6474/17/375019-16$15.00/0.

  19. Posterior Inferotemporal Cortex Cells Use Multiple Input Pathways for Shape Encoding

    PubMed Central

    2017-01-01

    In the macaque monkey brain, posterior inferior temporal (PIT) cortex cells contribute to visual object recognition. They receive concurrent inputs from visual areas V4, V3, and V2. We asked how these different anatomical pathways shape PIT response properties by deactivating them while monitoring PIT activity in two male macaques. We found that cooling of V4 or V2|3 did not lead to consistent changes in population excitatory drive; however, population pattern analyses showed that V4-based pathways were more important than V2|3-based pathways. We did not find any image features that predicted decoding accuracy differences between both interventions. Using the HMAX hierarchical model of visual recognition, we found that different groups of simulated “PIT” units with different input histories (lacking “V2|3” or “V4” input) allowed for comparable levels of object-decoding performance and that removing a large fraction of “PIT” activity resulted in similar drops in performance as in the cooling experiments. We conclude that distinct input pathways to PIT relay similar types of shape information, with V1-dependent V4 cells providing more quantitatively useful information for overall encoding than cells in V2 projecting directly to PIT. SIGNIFICANCE STATEMENT Convolutional neural networks are the best models of the visual system, but most emphasize input transformations across a serial hierarchy akin to the primary “ventral stream” (V1 → V2 → V4 → IT). However, the ventral stream also comprises parallel “bypass” pathways: V1 also connects to V4, and V2 to IT. To explore the advantages of mixing long and short pathways in the macaque brain, we used cortical cooling to silence inputs to posterior IT and compared the findings with an HMAX model with parallel pathways. PMID:28416597

  20. Functional and structural comparison of visual lateralization in birds – similar but still different

    PubMed Central

    Ströckens, Felix

    2014-01-01

    Vertebrate brains display physiological and anatomical left-right differences, which are related to hemispheric dominances for specific functions. Functional lateralizations likely rely on structural left-right differences in intra- and interhemispheric connectivity patterns that develop in tight gene-environment interactions. The visual systems of chickens and pigeons show that asymmetrical light stimulation during ontogeny induces a dominance of the left hemisphere for visuomotor control that is paralleled by projection asymmetries within the ascending visual pathways. But structural asymmetries vary substantially between the two species concerning the affected pathway (thalamo- vs. tectofugal system), the constancy of effects (transient vs. permanent), and the hemisphere receiving stronger bilateral input (right vs. left). These discrepancies suggest that at least two aspects of visual processing are influenced by asymmetric light stimulation: (1) visuomotor dominance develops within the ontogenetically more strongly stimulated hemisphere but not necessarily in the one receiving stronger bottom-up input. As a secondary consequence of asymmetrical light experience, lateralized top-down mechanisms play a critical role in the emergence of hemispheric dominance. (2) Ontogenetic light experience may affect the dominant use of left- and right-hemispheric strategies. Evidence from social and spatial cognition tasks indicates that chickens rely more on a right-hemispheric global strategy whereas pigeons display a dominance of the left hemisphere. Thus, behavioral asymmetries are linked to a stronger bilateral input to the right hemisphere in chickens but to the left one in pigeons. The degree of bilateral visual input may determine the dominant visual processing strategy when redundant encoding is possible. This analysis supports the view that environmental stimulation affects the balance between hemisphere-specific processing through lateralized interactions of bottom-up and top-down systems. PMID:24723898

  1. Objective automated quantification of fluorescence signal in histological sections of rat lens.

    PubMed

    Talebizadeh, Nooshin; Hagström, Nanna Zhou; Yu, Zhaohua; Kronschläger, Martin; Söderberg, Per; Wählby, Carolina

    2017-08-01

    Visual quantification and classification of fluorescent signals is the gold standard in microscopy. The purpose of this study was to develop an automated method to delineate cells and to quantify expression of fluorescent signal of biomarkers in the nucleus and cytoplasm of each lens epithelial cell in a histological section. A region of interest representing the lens epithelium was manually demarcated in each input image. Thereafter, individual cell nuclei within the region of interest were automatically delineated based on watershed segmentation and thresholding with an algorithm developed in Matlab™. Fluorescence signal was quantified within nuclei, cytoplasms and juxtaposed backgrounds. The classification of cells as labelled or not labelled was based on comparison of the fluorescence signal within cells with the local background. The classification rule was then optimized against visual classification of a limited dataset. The performance of the automated classification was evaluated by asking 11 independent blinded observers to classify all cells (n = 395) in one lens image. Time consumed by the automatic algorithm and by visual classification of cells was recorded. On average, 77% of the cells were correctly classified as compared with the majority vote of the visual observers. The average agreement among visual observers was 83%. However, variation among visual observers was high, and agreement between two visual observers was as low as 71% in the worst case. Automated classification was on average 10 times faster than visual scoring. The presented method enables objective and fast detection of lens epithelial cells and quantification of expression of fluorescent signal with an accuracy comparable to the variability among visual observers. © 2017 International Society for Advancement of Cytometry.
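A classification rule of the kind described (labelled vs. not labelled, decided by comparing a cell's signal with its juxtaposed background) can be sketched as a simple threshold test. This is a minimal illustration, not the authors' Matlab™ code: the function name, the threshold form, and the example intensities are all assumptions, and in the actual study the rule was optimized against visual classification.

```python
import numpy as np

def classify_cells(nucleus_means, background_means, background_stds, k=2.0):
    """Label a cell as expressing the biomarker when its mean nuclear
    fluorescence exceeds the local background mean by k background
    standard deviations (k is a tunable parameter, hypothetical here)."""
    nucleus = np.asarray(nucleus_means, dtype=float)
    bg = np.asarray(background_means, dtype=float)
    sd = np.asarray(background_stds, dtype=float)
    return nucleus > bg + k * sd

# Hypothetical intensities for three cells against a background of 50 +/- 10:
labelled = classify_cells([120.0, 55.0, 90.0],
                          [50.0, 50.0, 50.0],
                          [10.0, 10.0, 10.0])
```

Comparing each cell to its own local background, rather than to a global threshold, makes the rule robust to uneven illumination across the section.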

  2. Neuronal Control of Mammalian Vocalization, with Special Reference to the Squirrel Monkey

    NASA Astrophysics Data System (ADS)

    Jürgens, Uwe

    Squirrel monkey vocalization can be considered as a suitable model for the study in humans of the neurobiological basis of nonverbal emotional vocal utterances, such as laughing, crying, and groaning. Evaluation of electrical and chemical brain stimulation data, lesioning studies, single-neurone recordings, and neuroanatomical tracing work leads to the following conclusions: The periaqueductal gray and laterally bordering tegmentum of the midbrain represent a crucial area for the production of vocalization. This area collects the various vocalization-triggering stimuli, such as auditory, visual, and somatosensory input from diverse sensory-processing structures, motivation-controlling input from some limbic structures, and volitional impulses from the anterior cingulate cortex. Destruction of this area causes mutism. It is still under dispute whether the periaqueductal region harbors the vocal pattern generator or merely couples vocalization-triggering information to motor-coordinating structures further downward in the brainstem. The periaqueductal region is connected with the phonatory motoneuron pools indirectly via one or several interneurons. The nucleus retroambiguus represents a crucial relay station for the laryngeal and expiratory component of vocalization. The articulatory component reaches the orofacial motoneuron pools via the parvocellular reticular formation. Essential proprioceptive feedback from the larynx and lungs enter the vocal-controlling network via the solitary tract nucleus.

  3. An imaging-based stochastic model for simulation of tumour vasculature

    NASA Astrophysics Data System (ADS)

    Adhikarla, Vikram; Jeraj, Robert

    2012-10-01

    A mathematical model was developed that reconstructs the structure of existing vasculature using patient-specific anatomical, functional and molecular imaging as input. The vessel structure is modelled according to empirical vascular parameters, such as the mean vessel branching angle. The model is calibrated such that the resultant oxygen map modelled from the simulated microvasculature stochastically matches the input oxygen map to a high degree of accuracy (R2 ≈ 1). The calibrated model was successfully applied to preclinical imaging data. Starting from the anatomical vasculature image (obtained from contrast-enhanced computed tomography), a representative map of the complete vasculature was stochastically simulated as determined by the oxygen map (obtained from hypoxia [64Cu]Cu-ATSM positron emission tomography). The simulated microscopic vasculature and the calculated oxygenation map successfully represent the imaged hypoxia distribution (R2 = 0.94). The model elicits the parameters required to simulate vasculature consistent with imaging and provides a key mathematical relationship relating vessel volume to tissue oxygen tension. Apart from providing a framework for bridging the gap between microscopic and macroscopic imaging, the model has the potential to be extended as a tool to study the dynamics between the tumour and its vasculature in a patient-specific manner, with applications in the simulation of anti-angiogenic therapies.
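The goodness-of-fit figures quoted above (R2 ≈ 1 for calibration, R2 = 0.94 for the hypoxia distribution) are coefficients of determination between the imaged and the simulated oxygen maps. A minimal sketch of that score, assuming the maps are compared voxel-wise as flat arrays (this is illustrative, not the authors' code):

```python
import numpy as np

def r_squared(observed, simulated):
    """Coefficient of determination between an observed map (e.g. the
    imaged oxygen map) and a simulated one, compared element-wise:
    R^2 = 1 - SS_res / SS_tot."""
    obs = np.asarray(observed, dtype=float)
    sim = np.asarray(simulated, dtype=float)
    ss_res = np.sum((obs - sim) ** 2)   # residual sum of squares
    ss_tot = np.sum((obs - obs.mean()) ** 2)  # total sum of squares
    return 1.0 - ss_res / ss_tot
```

An R2 near 1 means the simulated microvasculature reproduces almost all of the voxel-to-voxel variance of the input oxygen map.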

  4. Retinal Origin of Direction Selectivity in the Superior Colliculus

    PubMed Central

    Shi, Xuefeng; Barchini, Jad; Ledesma, Hector Acaron; Koren, David; Jin, Yanjiao; Liu, Xiaorong; Wei, Wei; Cang, Jianhua

    2017-01-01

    Detecting visual features in the environment such as motion direction is crucial for survival. The circuit mechanisms that give rise to direction selectivity in a major visual center, the superior colliculus (SC), are entirely unknown. Here, we optogenetically isolate the retinal inputs that individual direction-selective SC neurons receive and find that they are already selective as a result of precisely converging inputs from similarly-tuned retinal ganglion cells. The direction selective retinal input is linearly amplified by the intracollicular circuits without changing its preferred direction or level of selectivity. Finally, using 2-photon calcium imaging, we show that SC direction selectivity is dramatically reduced in transgenic mice that have decreased retinal selectivity. Together, our studies demonstrate a retinal origin of direction selectivity in the SC, and reveal a central visual deficit as a consequence of altered feature selectivity in the retina. PMID:28192394

  5. Neo: an object model for handling electrophysiology data in multiple formats

    PubMed Central

    Garcia, Samuel; Guarino, Domenico; Jaillet, Florent; Jennings, Todd; Pröpper, Robert; Rautenberg, Philipp L.; Rodgers, Chris C.; Sobolev, Andrey; Wachtler, Thomas; Yger, Pierre; Davison, Andrew P.

    2014-01-01

    Neuroscientists use many different software tools to acquire, analyze and visualize electrophysiological signals. However, incompatible data models and file formats make it difficult to exchange data between these tools. This reduces scientific productivity, renders potentially useful analysis methods inaccessible and impedes collaboration between labs. A common representation of the core data would improve interoperability and facilitate data-sharing. To that end, we propose here a language-independent object model, named “Neo,” suitable for representing data acquired from electroencephalographic, intracellular, or extracellular recordings, or generated from simulations. As a concrete instantiation of this object model we have developed an open source implementation in the Python programming language. In addition to representing electrophysiology data in memory for the purposes of analysis and visualization, the Python implementation provides a set of input/output (IO) modules for reading/writing the data from/to a variety of commonly used file formats. Support is included for formats produced by most of the major manufacturers of electrophysiology recording equipment and also for more generic formats such as MATLAB. Data representation and data analysis are conceptually separate: it is easier to write robust analysis code if it is focused on analysis and relies on an underlying package to handle data representation. For that reason, and also to be as lightweight as possible, the Neo object model and the associated Python package are deliberately limited to representation of data, with no functions for data analysis or visualization. Software for neurophysiology data analysis and visualization built on top of Neo automatically gains the benefits of interoperability, easier data sharing and automatic format conversion; there is already a burgeoning ecosystem of such tools. We intend that Neo should become the standard basis for Python tools in neurophysiology. 
PMID:24600386

  6. Neo: an object model for handling electrophysiology data in multiple formats.

    PubMed

    Garcia, Samuel; Guarino, Domenico; Jaillet, Florent; Jennings, Todd; Pröpper, Robert; Rautenberg, Philipp L; Rodgers, Chris C; Sobolev, Andrey; Wachtler, Thomas; Yger, Pierre; Davison, Andrew P

    2014-01-01

    Neuroscientists use many different software tools to acquire, analyze and visualize electrophysiological signals. However, incompatible data models and file formats make it difficult to exchange data between these tools. This reduces scientific productivity, renders potentially useful analysis methods inaccessible and impedes collaboration between labs. A common representation of the core data would improve interoperability and facilitate data-sharing. To that end, we propose here a language-independent object model, named "Neo," suitable for representing data acquired from electroencephalographic, intracellular, or extracellular recordings, or generated from simulations. As a concrete instantiation of this object model we have developed an open source implementation in the Python programming language. In addition to representing electrophysiology data in memory for the purposes of analysis and visualization, the Python implementation provides a set of input/output (IO) modules for reading/writing the data from/to a variety of commonly used file formats. Support is included for formats produced by most of the major manufacturers of electrophysiology recording equipment and also for more generic formats such as MATLAB. Data representation and data analysis are conceptually separate: it is easier to write robust analysis code if it is focused on analysis and relies on an underlying package to handle data representation. For that reason, and also to be as lightweight as possible, the Neo object model and the associated Python package are deliberately limited to representation of data, with no functions for data analysis or visualization. Software for neurophysiology data analysis and visualization built on top of Neo automatically gains the benefits of interoperability, easier data sharing and automatic format conversion; there is already a burgeoning ecosystem of such tools. We intend that Neo should become the standard basis for Python tools in neurophysiology.
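The design principle described in this abstract, a container model that holds data and metadata only, with analysis and I/O kept in separate layers, can be illustrated with a toy object model. The sketch below is deliberately *not* the actual Neo API (whose classes carry units, annotations, and many more fields); the class and function names here are simplified stand-ins for the idea.

```python
from dataclasses import dataclass, field
from typing import List

# Toy illustration of the Neo design principle: containers represent data,
# while analysis functions live outside the data model.

@dataclass
class AnalogSignal:
    """A regularly sampled signal plus the metadata needed to interpret it."""
    samples: List[float]
    sampling_rate_hz: float
    units: str = "mV"

@dataclass
class Segment:
    """One recording epoch grouping simultaneously recorded signals."""
    signals: List[AnalogSignal] = field(default_factory=list)

@dataclass
class Block:
    """Top-level container for a recording session."""
    segments: List[Segment] = field(default_factory=list)

# Analysis is a separate layer that merely consumes the containers:
def mean_amplitude(sig: AnalogSignal) -> float:
    return sum(sig.samples) / len(sig.samples)
```

Keeping `mean_amplitude` outside the data classes mirrors the abstract's point: analysis code stays robust because it relies on a stable representation layer, and any I/O module that can produce these containers automatically works with every analysis tool built on them.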

  7. Predictive Coding: A Possible Explanation of Filling-In at the Blind Spot

    PubMed Central

    Raman, Rajani; Sarkar, Sandip

    2016-01-01

    Filling-in at the blind spot is a perceptual phenomenon in which the visual system fills the informational void, which arises due to the absence of retinal input corresponding to the optic disc, with surrounding visual attributes. It is known that during filling-in, nonlinear neural responses that correlate with the percept are observed in the early visual areas, but knowledge of the underlying neural mechanism for filling-in at the blind spot is far from complete. In this work, we present a fresh perspective on the computational mechanism of the filling-in process in the framework of hierarchical predictive coding, which provides a functional explanation for a range of neural responses in the cortex. We simulated a three-level hierarchical network and observed its response while stimulating the network with different bar stimuli across the blind spot. We find that the predictive-estimator neurons that represent the blind spot in primary visual cortex exhibit an elevated nonlinear response when the bar stimulates both sides of the blind spot. Using a generative model, we also show that these responses represent filling-in completion. All these results are consistent with the findings of psychophysical and physiological studies. We also demonstrate that the tolerance of filling-in qualitatively matches the experimental findings on non-aligned bars. We discuss this phenomenon in the predictive coding paradigm and show that all our results can be explained by taking into account the efficient coding of natural images along with feedback and feed-forward connections that allow priors and predictions to co-evolve to arrive at the best prediction. These results suggest that the filling-in process could be a manifestation of the general computational principle of hierarchical predictive coding of natural images. PMID:26959812
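The core loop of hierarchical predictive coding, top-down predictions cancelled against bottom-up prediction errors, can be sketched in a few lines. The two-layer toy below (Rao-Ballard style state estimation) is an illustration of the principle only, not the authors' three-level network; the weights, input, and learning rate are arbitrary assumptions.

```python
import numpy as np

# Two-layer predictive-coding sketch: a higher level iteratively refines its
# state r so that its top-down prediction U @ r cancels the bottom-up error.
rng = np.random.default_rng(0)
U = rng.normal(size=(8, 3))          # fixed generative (top-down) weights
x = U @ np.array([1.0, -0.5, 2.0])   # input generated from a known cause

r = np.zeros(3)                      # predictive-estimator state
lr = 0.05                            # step size for the state update
errors = []
for _ in range(300):
    pred = U @ r                     # top-down prediction of the input
    err = x - pred                   # bottom-up prediction error
    r += lr * (U.T @ err)            # gradient step reducing the error
    errors.append(float(err @ err))  # squared prediction-error energy
```

In this framework, filling-in corresponds to the estimator settling on a prediction for a region that supplies no bottom-up input at all (zero error signal from the blind spot), so the state there is driven entirely by context from the surround.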

  8. A recurrent neural model for proto-object based contour integration and figure-ground segregation.

    PubMed

    Hu, Brian; Niebur, Ernst

    2017-12-01

    Visual processing of objects makes use of both feedforward and feedback streams of information. However, the nature of feedback signals is largely unknown, as is the identity of the neuronal populations in lower visual areas that receive them. Here, we develop a recurrent neural model to address these questions in the context of contour integration and figure-ground segregation. A key feature of our model is the use of grouping neurons whose activity represents tentative objects ("proto-objects") based on the integration of local feature information. Grouping neurons receive input from an organized set of local feature neurons, and project modulatory feedback to those same neurons. Additionally, inhibition at both the local feature level and the object representation level biases the interpretation of the visual scene in agreement with principles from Gestalt psychology. Our model explains several sets of neurophysiological results (Zhou et al., Journal of Neuroscience, 20(17), 6594-6611, 2000; Qiu et al., Nature Neuroscience, 10(11), 1492-1499, 2007; Chen et al., Neuron, 82(3), 682-694, 2014), and makes testable predictions about the influence of neuronal feedback and attentional selection on neural responses across different visual areas. Our model also provides a framework for understanding how object-based attention is able to select both objects and the features associated with them.

  9. Lymphoma diagnosis in histopathology using a multi-stage visual learning approach

    NASA Astrophysics Data System (ADS)

    Codella, Noel; Moradi, Mehdi; Matasar, Matt; Syeda-Mahmood, Tanveer; Smith, John R.

    2016-03-01

    This work evaluates the performance of a multi-stage image enhancement, segmentation, and classification approach for lymphoma recognition in hematoxylin and eosin (H and E) stained histopathology slides of excised human lymph node tissue. In the first stage, the original histology slide undergoes various image enhancement and segmentation operations, creating an additional 5 images for every slide. These new images emphasize unique aspects of the original slide, including dominant staining, staining segmentations, non-cellular groupings, and cellular groupings. For the resulting 6 total images, a collection of visual features is extracted from 3 different spatial configurations. Visual features include the first fully connected layer (4096 dimensions) of the Caffe convolutional neural network trained on ImageNet data. In total, over 200 resultant visual descriptors are extracted for each slide. Non-linear SVMs are trained over each of the over 200 descriptors; their outputs are then input to a forward stepwise ensemble selection that optimizes a late-fusion sum of logistically normalized model outputs using local hill climbing. The approach is evaluated on a public NIH dataset containing 374 images representing 3 lymphoma conditions: chronic lymphocytic leukemia (CLL), follicular lymphoma (FL), and mantle cell lymphoma (MCL). Results demonstrate a 38.4% reduction in residual error over the current state of the art on this dataset.
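The late-fusion step described above, forward stepwise ensemble selection over logistically normalized model outputs with local hill climbing, can be sketched as a greedy loop. This is a hedged reconstruction of the generic technique, not the paper's implementation: the use of accuracy as the climbing objective, the stopping rule, and the toy scores below are assumptions.

```python
import numpy as np

def forward_ensemble(scores, labels, max_rounds=20):
    """Greedy forward ensemble selection with local hill climbing.

    scores: (n_models, n_samples) raw decision values, one row per model.
    labels: (n_samples,) ground truth in {0, 1}.
    Each round greedily adds the model (repeats allowed) whose inclusion in
    the averaged, logistically normalized fusion most improves accuracy;
    stops at a local optimum.
    """
    probs = 1.0 / (1.0 + np.exp(-np.asarray(scores, dtype=float)))
    labels = np.asarray(labels)
    chosen, fused, best_acc = [], np.zeros(probs.shape[1]), 0.0
    for _ in range(max_rounds):
        accs = []
        for m in range(probs.shape[0]):
            cand = (fused + probs[m]) / (len(chosen) + 1)
            accs.append(np.mean((cand > 0.5) == labels))
        m_best = int(np.argmax(accs))
        if chosen and accs[m_best] <= best_acc:
            break                       # local optimum reached
        best_acc = accs[m_best]
        chosen.append(m_best)
        fused = fused + probs[m_best]
    return chosen, float(best_acc)

# Toy example: model 0 is perfect, model 1 is weak, model 2 is inverted.
toy_scores = [[-5, 5, -5, 5], [1, -1, 1, -1], [5, -5, 5, -5]]
toy_labels = [0, 1, 0, 1]
chosen, acc = forward_ensemble(toy_scores, toy_labels)
```

Allowing repeated selection effectively lets hill climbing weight strong descriptors more heavily in the fused sum, which matters when over 200 base SVMs of very uneven quality are available.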

  10. Diversity and wiring variability of visual local neurons in the Drosophila medulla M6 stratum.

    PubMed

    Chin, An-Lun; Lin, Chih-Yung; Fu, Tsai-Feng; Dickson, Barry J; Chiang, Ann-Shyn

    2014-12-01

    Local neurons in the vertebrate retina are instrumental in transforming visual inputs to extract contrast, motion, and color information and in shaping bipolar-to-ganglion cell transmission to the brain. In Drosophila, UV vision is represented by R7 inner photoreceptor neurons that project to the medulla M6 stratum, with relatively little known of this downstream substrate. Here, using R7 terminals as references, we generated a 3D volume model of the M6 stratum, which revealed a retinotopic map for UV representations. Using this volume model as a common 3D framework, we compiled and analyzed the spatial distributions of more than 200 single M6-specific local neurons (M6-LNs). Based on the segregation of putative dendrites and axons, these local neurons were classified into two families, directional and nondirectional. Neurotransmitter immunostaining suggested a signal routing model in which some visual information is relayed by directional M6-LNs from the anterior to the posterior M6 and all visual information is inhibited by a diverse population of nondirectional M6-LNs covering the entire M6 stratum. Our findings suggest that the Drosophila medulla M6 stratum contains diverse LNs that form repeating functional modules similar to those found in the vertebrate inner plexiform layer. © 2014 Wiley Periodicals, Inc.

  11. Context-dependent spatially periodic activity in the human entorhinal cortex

    PubMed Central

    Nguyen, T. Peter; Török, Ágoston; Shen, Jason Y.; Briggs, Deborah E.; Modur, Pradeep N.; Buchanan, Robert J.

    2017-01-01

    The spatially periodic activity of grid cells in the entorhinal cortex (EC) of the rodent, primate, and human provides a coordinate system that, together with the hippocampus, informs an individual of its location relative to the environment and encodes the memory of that location. Among the most defining features of grid-cell activity are the 60° rotational symmetry of grids and preservation of grid scale across environments. Grid cells, however, do display a limited degree of adaptation to environments. It remains unclear if this level of environment invariance generalizes to human grid-cell analogs, where the relative contribution of visual input to the multimodal sensory input of the EC is significantly larger than in rodents. Patients with intractable epilepsy who were implanted with entorhinal cortical electrodes performed virtual navigation tasks to memorized locations, enabling us to investigate associations between grid-like patterns and the environment. Here, we report that the activity of human entorhinal cortical neurons exhibits adaptive scaling in grid period, grid orientation, and rotational symmetry in close association with changes in environment size, shape, and visual cues, suggesting scale invariance of the frequency, rather than the wavelength, of spatially periodic activity. Our results demonstrate that neurons in the human EC represent space with an enhanced flexibility relative to neurons in rodents because they are endowed with adaptive scalability and context dependency. PMID:28396399

  12. Impaired integration of object knowledge and visual input in a case of ventral simultanagnosia with bilateral damage to area V4.

    PubMed

    Leek, E Charles; d'Avossa, Giovanni; Tainturier, Marie-Josèphe; Roberts, Daniel J; Yuen, Sung Lai; Hu, Mo; Rafal, Robert

    2012-01-01

    This study examines how brain damage can affect the cognitive processes that support the integration of sensory input and prior knowledge during shape perception. It is based on the first detailed study of acquired ventral simultanagnosia, which was found in a patient (M.T.) with posterior occipitotemporal lesions encompassing V4 bilaterally. Despite showing normal object recognition for single items in both accuracy and response times (RTs), and intact low-level vision assessed across an extensive battery of tests, M.T. was impaired in object identification with overlapping figures displays. Task performance was modulated by familiarity: Unlike controls, M.T. was faster with overlapping displays of abstract shapes than with overlapping displays of common objects. His performance with overlapping common object displays was also influenced by both the semantic relatedness and visual similarity of the display items. These findings challenge claims that visual perception is driven solely by feedforward mechanisms and show how brain damage can selectively impair high-level perceptual processes supporting the integration of stored knowledge and visual sensory input.

  13. Preserving information in neural transmission.

    PubMed

    Sincich, Lawrence C; Horton, Jonathan C; Sharpee, Tatyana O

    2009-05-13

    Along most neural pathways, the spike trains transmitted from one neuron to the next are altered. In the process, neurons can either achieve a more efficient stimulus representation, or extract some biologically important stimulus parameter, or succeed at both. We recorded the inputs from single retinal ganglion cells and the outputs from connected lateral geniculate neurons in the macaque to examine how visual signals are relayed from retina to cortex. We found that geniculate neurons re-encoded multiple temporal stimulus features to yield output spikes that carried more information about stimuli than was available in each input spike. The coding transformation of some relay neurons occurred with no decrement in information rate, despite output spike rates that averaged half the input spike rates. This preservation of transmitted information was achieved by the short-term summation of inputs that geniculate neurons require to spike. A reduced model of the retinal and geniculate visual responses, based on two stimulus features and their associated nonlinearities, could account for >85% of the total information available in the spike trains and the preserved information transmission. These results apply to neurons operating on a single time-varying input, suggesting that synaptic temporal integration can alter the temporal receptive field properties to create a more efficient representation of visual signals in the thalamus than the retina.
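The reduced model mentioned above, based on two stimulus features and their associated nonlinearities, has the structure of a linear-nonlinear cascade. The sketch below is a generic two-feature LN model with invented filters and simple threshold nonlinearities, intended only to show that structure; the filters and nonlinearities in the study were fit to recorded retinal and geniculate spike trains.

```python
import numpy as np

# Two hypothetical temporal filters (stimulus features) over a 40-bin window
t = np.arange(40)
f1 = np.exp(-t / 8.0) * np.sin(t / 4.0)   # biphasic, retina-like filter
f2 = np.exp(-t / 16.0)                    # slower, integrating filter

def spike_rate(stimulus, theta1=0.5, theta2=0.3):
    """Project the stimulus onto both features and combine the projections
    through static nonlinearities (here: rectifying thresholds)."""
    # sliding projections of the stimulus onto each temporal feature
    g1 = np.convolve(stimulus, f1, mode="valid")
    g2 = np.convolve(stimulus, f2, mode="valid")
    # the product models the short-term summation requirement: both
    # features must be driven before the relay neuron fires
    return np.maximum(g1 - theta1, 0) * np.maximum(g2 - theta2, 0)

stim = np.random.default_rng(0).standard_normal(500)
rate = spike_rate(stim)  # nonnegative firing rate per time bin
```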

  14. Visual colorimetric detection of tin(II) and nitrite using a molybdenum oxide nanomaterial-based three-input logic gate.

    PubMed

    Du, Jiayan; Zhao, Mengxin; Huang, Wei; Deng, Yuequan; He, Yi

    2018-05-09

    We report a molybdenum oxide (MoO3) nanomaterial-based three-input logic gate that uses Sn2+, NO2-, and H+ ions as inputs. Under acidic conditions, Sn2+ is able to reduce MoO3 nanosheets, generating oxygen-vacancy-rich MoO3-x nanomaterials along with strong localized surface plasmon resonance (LSPR) and an intense blue solution as the output signal. When NO2- is introduced, the redox reaction between the MoO3 nanosheets and Sn2+ is strongly inhibited because the NO2- consumes both H+ and Sn2+. The three-input logic gate was employed for the visual colorimetric detection of Sn2+ and NO2- under different input states. The colorimetric assay's limit of detection for Sn2+ and the lowest concentration of NO2- detectable by the assay were found to be 27.5 nM and 0.1 μM, respectively. The assay permits the visual detection of Sn2+ and NO2- down to concentrations as low as 2 μM and 25 μM, respectively. The applicability of the logic-gate-based colorimetric assay was demonstrated by using it to detect Sn2+ and NO2- in several water sources.
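Read as Boolean logic, the chemistry above corresponds to a three-input gate whose blue (ON) output requires Sn2+ AND H+ AND NOT NO2-. The truth table can be enumerated directly; this is merely a logical restatement of the abstract, not part of the original work.

```python
def blue_output(sn2_present, no2_present, h_present):
    """Blue LSPR output appears only when acid and Sn2+ are both present
    and NO2- is absent (NO2- consumes both H+ and Sn2+)."""
    return bool(sn2_present and h_present and not no2_present)

# enumerate all eight input states of the (Sn2+, NO2-, H+) gate
truth_table = {(s, n, h): blue_output(s, n, h)
               for s in (0, 1) for n in (0, 1) for h in (0, 1)}
```

Only one of the eight input states turns the solution blue, which is what makes the gate usable for selective visual detection.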

  15. Congenital Anophthalmia and Binocular Neonatal Enucleation Differently Affect the Proteome of Primary and Secondary Visual Cortices in Mice.

    PubMed

    Laramée, Marie-Eve; Smolders, Katrien; Hu, Tjing-Tjing; Bronchti, Gilles; Boire, Denis; Arckens, Lutgarde

    2016-01-01

    In blind individuals, visually deprived occipital areas are activated by non-visual stimuli. The extent of this cross-modal activation depends on the age at onset of blindness. Cross-modal inputs have access to several anatomical pathways to reactivate deprived visual areas. Ectopic cross-modal subcortical connections have been shown in anophthalmic animals but not in animals deprived of sight at a later age. Direct and indirect cross-modal cortical connections toward visual areas could also be involved, yet the number of neurons implicated is similar between blind mice and sighted controls. Changes at the axon terminal, dendritic spine or synaptic level are therefore expected upon loss of visual inputs. Here, the proteome of V1, V2M and V2L from P0-enucleated, anophthalmic and sighted mice, sharing a common genetic background (C57BL/6J x ZRDCT/An), was investigated by 2-D DIGE and Western analyses to identify molecular adaptations to enucleation and/or anophthalmia. Few proteins were differentially expressed in enucleated or anophthalmic mice in comparison to sighted mice. The loss of sight affected three pathways: metabolism, synaptic transmission and morphogenesis. Most changes were detected in V1, followed by V2M. Overall, cross-modal adaptations could be promoted in both models of early blindness but not through the exact same molecular strategy. A lower metabolic activity observed in visual areas of blind mice suggests that even if cross-modal inputs reactivate visual areas, they could remain suboptimally processed.

  16. Congenital Anophthalmia and Binocular Neonatal Enucleation Differently Affect the Proteome of Primary and Secondary Visual Cortices in Mice

    PubMed Central

    Smolders, Katrien; Hu, Tjing-Tjing; Bronchti, Gilles; Boire, Denis; Arckens, Lutgarde

    2016-01-01

    In blind individuals, visually deprived occipital areas are activated by non-visual stimuli. The extent of this cross-modal activation depends on the age at onset of blindness. Cross-modal inputs have access to several anatomical pathways to reactivate deprived visual areas. Ectopic cross-modal subcortical connections have been shown in anophthalmic animals but not in animals deprived of sight at a later age. Direct and indirect cross-modal cortical connections toward visual areas could also be involved, yet the number of neurons implicated is similar between blind mice and sighted controls. Changes at the axon terminal, dendritic spine or synaptic level are therefore expected upon loss of visual inputs. Here, the proteome of V1, V2M and V2L from P0-enucleated, anophthalmic and sighted mice, sharing a common genetic background (C57BL/6J x ZRDCT/An), was investigated by 2-D DIGE and Western analyses to identify molecular adaptations to enucleation and/or anophthalmia. Few proteins were differentially expressed in enucleated or anophthalmic mice in comparison to sighted mice. The loss of sight affected three pathways: metabolism, synaptic transmission and morphogenesis. Most changes were detected in V1, followed by V2M. Overall, cross-modal adaptations could be promoted in both models of early blindness but not through the exact same molecular strategy. A lower metabolic activity observed in visual areas of blind mice suggests that even if cross-modal inputs reactivate visual areas, they could remain suboptimally processed. PMID:27410964

  17. Visual Processing: Hungry Like the Mouse.

    PubMed

    Piscopo, Denise M; Niell, Cristopher M

    2016-09-07

    In this issue of Neuron, Burgess et al. (2016) explore how motivational state interacts with visual processing, by examining hunger modulation of food-associated visual responses in postrhinal cortical neurons and their inputs from amygdala. Copyright © 2016 Elsevier Inc. All rights reserved.

  18. 77 FR 20005 - Solicitation of Input From Stakeholders Regarding the Proposed Crop Protection Competitive Grants...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-04-03

    ... DEPARTMENT OF AGRICULTURE National Institute of Food and Agriculture Solicitation of Input From... Food and Agriculture, USDA. ACTION: Notice of public meeting and request for stakeholder input. SUMMARY... held by conference call (audio) and internet (visual only). Connection details for those meetings will...

  19. Invariant Visual Object and Face Recognition: Neural and Computational Bases, and a Model, VisNet

    PubMed Central

    Rolls, Edmund T.

    2012-01-01

    Neurophysiological evidence for invariant representations of objects and faces in the primate inferior temporal visual cortex is described. Then a computational approach to how invariant representations are formed in the brain is described that builds on the neurophysiology. A feature hierarchy model in which invariant representations can be built by self-organizing learning based on the temporal and spatial statistics of the visual input produced by objects as they transform in the world is described. VisNet can use temporal continuity in an associative synaptic learning rule with a short-term memory trace, and/or it can use spatial continuity in continuous spatial transformation learning which does not require a temporal trace. The model of visual processing in the ventral cortical stream can build representations of objects that are invariant with respect to translation, view, size, and also lighting. The model has been extended to provide an account of invariant representations in the dorsal visual system of the global motion produced by objects such as looming, rotation, and object-based movement. The model has been extended to incorporate top-down feedback connections to model the control of attention by biased competition in, for example, spatial and object search tasks. The approach has also been extended to account for how the visual system can select single objects in complex visual scenes, and how multiple objects can be represented in a scene. The approach has also been extended to provide, with an additional layer, for the development of representations of spatial scenes of the type found in the hippocampus. PMID:22723777

  20. Invariant Visual Object and Face Recognition: Neural and Computational Bases, and a Model, VisNet.

    PubMed

    Rolls, Edmund T

    2012-01-01

    Neurophysiological evidence for invariant representations of objects and faces in the primate inferior temporal visual cortex is described. Then a computational approach to how invariant representations are formed in the brain is described that builds on the neurophysiology. A feature hierarchy model in which invariant representations can be built by self-organizing learning based on the temporal and spatial statistics of the visual input produced by objects as they transform in the world is described. VisNet can use temporal continuity in an associative synaptic learning rule with a short-term memory trace, and/or it can use spatial continuity in continuous spatial transformation learning which does not require a temporal trace. The model of visual processing in the ventral cortical stream can build representations of objects that are invariant with respect to translation, view, size, and also lighting. The model has been extended to provide an account of invariant representations in the dorsal visual system of the global motion produced by objects such as looming, rotation, and object-based movement. The model has been extended to incorporate top-down feedback connections to model the control of attention by biased competition in, for example, spatial and object search tasks. The approach has also been extended to account for how the visual system can select single objects in complex visual scenes, and how multiple objects can be represented in a scene. The approach has also been extended to provide, with an additional layer, for the development of representations of spatial scenes of the type found in the hippocampus.
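The associative learning rule with a short-term memory trace that VisNet uses to exploit temporal continuity can be written as Δw = α·ȳ(t)·x(t), where the postsynaptic trace is ȳ(t) = (1−η)·y(t) + η·ȳ(t−1). The sketch below implements just this rule for a single neuron, with assumed parameter values; the layered architecture, lateral competition, and the spatial-continuity learning variant are omitted.

```python
import numpy as np

def trace_learning(X, w, alpha=0.05, eta=0.8):
    """One pass of a VisNet-style trace rule over a temporal sequence.

    X: (T, n_inputs) presynaptic firing over T time steps, e.g. successive
       transforms (views) of the same object; w: (n_inputs,) weights.
    Because the trace y_bar mixes the current response with its own recent
    history, views that follow each other in time strengthen the same
    weights and become bound to one invariant representation.
    """
    y_bar = 0.0
    for x in X:
        y = float(w @ x)                    # postsynaptic activation
        y_bar = (1 - eta) * y + eta * y_bar  # short-term memory trace
        w = w + alpha * y_bar * x            # Hebbian update using the trace
    return w / np.linalg.norm(w)             # weight normalization
```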

  1. Visually driven chaining of elementary swim patterns into a goal-directed motor sequence: a virtual reality study of zebrafish prey capture

    PubMed Central

    Trivedi, Chintan A.; Bollmann, Johann H.

    2013-01-01

    Prey capture behavior critically depends on rapid processing of sensory input in order to track, approach, and catch the target. When using vision, the nervous system faces the problem of extracting relevant information from a continuous stream of input in order to detect and categorize visible objects as potential prey and to select appropriate motor patterns for approach. For prey capture, many vertebrates exhibit intermittent locomotion, in which discrete motor patterns are chained into a sequence, interrupted by short periods of rest. Here, using high-speed recordings of full-length prey capture sequences performed by freely swimming zebrafish larvae in the presence of a single paramecium, we provide a detailed kinematic analysis of first and subsequent swim bouts during prey capture. Using Fourier analysis, we show that individual swim bouts represent an elementary motor pattern. Changes in orientation are directed toward the target on a graded scale and are implemented by an asymmetric tail bend component superimposed on this basic motor pattern. To further investigate the role of visual feedback on the efficiency and speed of this complex behavior, we developed a closed-loop virtual reality setup in which minimally restrained larvae recapitulated interconnected swim patterns closely resembling those observed during prey capture in freely moving fish. Systematic variation of stimulus properties showed that prey capture is initiated within a narrow range of stimulus size and velocity. Furthermore, variations in the delay and location of swim-triggered visual feedback showed that the reaction time of secondary and later swims is shorter for stimuli that appear within a narrow spatio-temporal window following a swim. This suggests that the larva may generate an expectation of stimulus position, which enables accelerated motor sequencing if the expectation is met by appropriate visual feedback. PMID:23675322

  2. The horizontal brain slice preparation: a novel approach for visualizing and recording from all layers of the tadpole tectum.

    PubMed

    Hamodi, Ali S; Pratt, Kara G

    2015-01-01

    The Xenopus tadpole optic tectum is a multisensory processing center that receives direct visual input as well as nonvisual mechanosensory input. The tectal neurons that comprise the optic tectum are organized into layers. These neurons project their dendrites laterally into the neuropil where visual inputs target the distal region of the dendrite and nonvisual inputs target the proximal region of the same dendrite. The Xenopus tadpole tectum is a popular model to study the development of sensory circuits. However, whole cell patch-clamp electrophysiological studies of the tadpole tectum (using the whole brain or in vivo preparations) have focused solely on the deep-layer tectal neurons because only neurons of the deep layer are visible and accessible for whole cell electrophysiological recordings. As a result, whereas the development and plasticity of these deep-layer neurons has been well-studied, essentially nothing has been reported about the electrophysiology of neurons residing beyond this layer. Hence, there exists a large gap in our understanding about the functional development of the amphibian tectum as a whole. To remedy this, we developed a novel isolated brain preparation that allows visualizing and recording from all layers of the tectum. We refer to this preparation as the "horizontal brain slice preparation." Here, we describe the preparation method and illustrate how it can be used to characterize the electrophysiology of neurons across all of the layers of the tectum as well as the spatial pattern of synaptic input from the different sensory modalities. Copyright © 2015 the American Physiological Society.

  3. Biasing the brain's attentional set: I. cue driven deployments of intersensory selective attention.

    PubMed

    Foxe, John J; Simpson, Gregory V; Ahlfors, Seppo P; Saron, Clifford D

    2005-10-01

    Brain activity associated with directing attention to one of two possible sensory modalities was examined using high-density mapping of human event-related potentials. The deployment of selective attention was based on visually presented symbolic cue-words instructing subjects on a trial-by-trial basis which sensory modality to attend. We measured the spatio-temporal pattern of activation in the approximately 1 second period between the cue-instruction and a subsequent compound auditory-visual imperative stimulus. This allowed us to assess the flow of processing across brain regions involved in deploying and sustaining inter-sensory selective attention, prior to the actual selective processing of the compound audio-visual target stimulus. Activity over frontal and parietal areas showed sensory specific increases in activation during the early part of the anticipatory period (~230 ms), probably representing the activation of fronto-parietal attentional deployment systems for top-down control of attention. In the later period preceding the arrival of the "to-be-attended" stimulus, sustained differential activity was seen over fronto-central regions and parieto-occipital regions, suggesting the maintenance of sensory-specific biased attentional states that would allow for subsequent selective processing. Although there was clear sensory biasing in this late sustained period, it was also clear that both sensory systems were being prepared during the cue-target period. These late sensory-specific biasing effects were also accompanied by sustained activations over frontal cortices that showed both common and sensory specific activation patterns, suggesting that maintenance of the biased state includes top-down inputs from generators in frontal cortices, some of which are sensory-specific regions. These data support extensive interactions between sensory, parietal and frontal regions during processing of cue information, deployment of attention, and maintenance of the focus of attention in anticipation of impending attentionally relevant input.

  4. Emergence of Orientation Selectivity in the Mammalian Visual Pathway

    PubMed Central

    Scholl, Benjamin; Tan, Andrew Y. Y.; Corey, Joseph

    2013-01-01

    Orientation selectivity is a property of mammalian primary visual cortex (V1) neurons, yet its emergence along the visual pathway varies across species. In carnivores and primates, elongated receptive fields first appear in V1, whereas in lagomorphs such receptive fields emerge earlier, in the retina. Here we examine the mouse visual pathway and reveal the existence of orientation selectivity in lateral geniculate nucleus (LGN) relay cells. Cortical inactivation does not reduce this orientation selectivity, indicating that cortical feedback is not its source. Orientation selectivity is similar for LGN relay cells spiking and subthreshold input to V1 neurons, suggesting that cortical orientation selectivity is inherited from the LGN in mouse. In contrast, orientation selectivity of cat LGN relay cells is small relative to subthreshold inputs onto V1 simple cells. Together, these differences show that although orientation selectivity exists in visual neurons of both rodents and carnivores, its emergence along the visual pathway, and thus its underlying neuronal circuitry, is fundamentally different. PMID:23804085

  5. The rapid distraction of attentional resources toward the source of incongruent stimulus input during multisensory conflict.

    PubMed

    Donohue, Sarah E; Todisco, Alexandra E; Woldorff, Marty G

    2013-04-01

    Neuroimaging work on multisensory conflict suggests that the relevant modality receives enhanced processing in the face of incongruency. However, the degree of stimulus processing in the irrelevant modality and the temporal cascade of the attentional modulations in either the relevant or irrelevant modalities are unknown. Here, we employed an audiovisual conflict paradigm with a sensory probe in the task-irrelevant modality (vision) to gauge the attentional allocation to that modality. ERPs were recorded as participants attended to and discriminated spoken auditory letters while ignoring simultaneous bilateral visual letter stimuli that were either fully congruent, fully incongruent, or partially incongruent (one side incongruent, one congruent) with the auditory stimulation. Half of the audiovisual letter stimuli were followed 500-700 msec later by a bilateral visual probe stimulus. As expected, ERPs to the audiovisual stimuli showed an incongruency ERP effect (fully incongruent versus fully congruent) of an enhanced, centrally distributed, negative-polarity wave starting ∼250 msec. More critically here, the sensory ERP components to the visual probes were larger when they followed fully incongruent versus fully congruent multisensory stimuli, with these enhancements greatest on fully incongruent trials with the slowest RTs. In addition, on the slowest-response partially incongruent trials, the P2 sensory component to the visual probes was larger contralateral to the preceding incongruent visual stimulus. These data suggest that, in response to conflicting multisensory stimulus input, the initial cognitive effect is a capture of attention by the incongruent irrelevant-modality input, pulling neural processing resources toward that modality, resulting in rapid enhancement, rather than rapid suppression, of that input.

  6. Age-Related Differences in Cortical and Subcortical Activities during Observation and Motor Imagery of Dynamic Postural Tasks: An fMRI Study.

    PubMed

    Mouthon, A; Ruffieux, J; Mouthon, M; Hoogewoud, H-M; Annoni, J-M; Taube, W

    2018-01-01

    Age-related changes in brain activation other than in the primary motor cortex are not well known with respect to dynamic balance control. Therefore, the current study aimed to explore age-related differences in the control of static and dynamic postural tasks using fMRI during mental simulation of balance tasks. For this purpose, 16 elderly (72 ± 5 years) and 16 young adults (27 ± 5 years) were asked to mentally simulate a static and a dynamic balance task by motor imagery (MI), action observation (AO), or the combination of AO and MI (AO + MI). Age-related differences were detected in the form of larger brain activations in elderly compared to young participants, especially in the challenging dynamic task when applying AO + MI. Interestingly, when MI (no visual input) was contrasted to AO (visual input), elderly participants revealed deactivation of subcortical areas. The finding that the elderly demonstrated overactivation in mostly cortical areas in challenging postural conditions with visual input (AO + MI and AO) but deactivation in subcortical areas during MI (no vision) may indicate that elderly individuals allocate more cortical resources to the internal representation of dynamic postural tasks. Furthermore, it might be assumed that they depend more strongly on visual input to activate subcortical internal representations.

  7. Age-Related Differences in Cortical and Subcortical Activities during Observation and Motor Imagery of Dynamic Postural Tasks: An fMRI Study

    PubMed Central

    Ruffieux, J.; Mouthon, M.; Hoogewoud, H.-M.; Taube, W.

    2018-01-01

    Age-related changes in brain activation other than in the primary motor cortex are not well known with respect to dynamic balance control. Therefore, the current study aimed to explore age-related differences in the control of static and dynamic postural tasks using fMRI during mental simulation of balance tasks. For this purpose, 16 elderly (72 ± 5 years) and 16 young adults (27 ± 5 years) were asked to mentally simulate a static and a dynamic balance task by motor imagery (MI), action observation (AO), or the combination of AO and MI (AO + MI). Age-related differences were detected in the form of larger brain activations in elderly compared to young participants, especially in the challenging dynamic task when applying AO + MI. Interestingly, when MI (no visual input) was contrasted to AO (visual input), elderly participants revealed deactivation of subcortical areas. The finding that the elderly demonstrated overactivation in mostly cortical areas in challenging postural conditions with visual input (AO + MI and AO) but deactivation in subcortical areas during MI (no vision) may indicate that elderly individuals allocate more cortical resources to the internal representation of dynamic postural tasks. Furthermore, it might be assumed that they depend more strongly on visual input to activate subcortical internal representations. PMID:29675037

  8. Simulation of talking faces in the human brain improves auditory speech recognition

    PubMed Central

    von Kriegstein, Katharina; Dogan, Özgür; Grüter, Martina; Giraud, Anne-Lise; Kell, Christian A.; Grüter, Thomas; Kleinschmidt, Andreas; Kiebel, Stefan J.

    2008-01-01

    Human face-to-face communication is essentially audiovisual. Typically, people talk to us face-to-face, providing concurrent auditory and visual input. Understanding someone is easier when there is visual input, because visual cues like mouth and tongue movements provide complementary information about speech content. Here, we hypothesized that, even in the absence of visual input, the brain optimizes both auditory-only speech and speaker recognition by harvesting speaker-specific predictions and constraints from distinct visual face-processing areas. To test this hypothesis, we performed behavioral and neuroimaging experiments in two groups: subjects with a face recognition deficit (prosopagnosia) and matched controls. The results show that observing a specific person talking for 2 min improves subsequent auditory-only speech and speaker recognition for this person. In both prosopagnosics and controls, behavioral improvement in auditory-only speech recognition was based on an area typically involved in face-movement processing. Improvement in speaker recognition was only present in controls and was based on an area involved in face-identity processing. These findings challenge current unisensory models of speech processing, because they show that, in auditory-only speech, the brain exploits previously encoded audiovisual correlations to optimize communication. We suggest that this optimization is based on speaker-specific audiovisual internal models, which are used to simulate a talking face. PMID:18436648

  9. The Effect of Visual Variability on the Learning of Academic Concepts.

    PubMed

    Bourgoyne, Ashley; Alt, Mary

    2017-06-10

    The purpose of this study was to identify effects of variability of visual input on development of conceptual representations of academic concepts for college-age students with normal language (NL) and those with language-learning disabilities (LLD). Students with NL (n = 11) and LLD (n = 11) participated in a computer-based training for introductory biology course concepts. Participants were trained on half the concepts under a low-variability condition and half under a high-variability condition. Participants completed a posttest in which they were asked to identify and rate the accuracy of novel and trained visual representations of the concepts. We performed separate repeated measures analyses of variance to examine the accuracy of identification and ratings. Participants were equally accurate on trained and novel items in the high-variability condition, but were less accurate on novel items only in the low-variability condition. The LLD group showed the same pattern as the NL group; they were just less accurate. Results indicated that high-variability visual input may facilitate the acquisition of academic concepts in college students with NL and LLD. High-variability visual input may be especially beneficial for generalization to novel representations of concepts. Implicit learning methods may be harnessed by college courses to provide students with basic conceptual knowledge when they are entering courses or beginning new units.

  10. Mechanisms of inhibition in cat visual cortex.

    PubMed Central

    Berman, N J; Douglas, R J; Martin, K A; Whitteridge, D

    1991-01-01

    1. Neurones from layers 2-6 of the cat primary visual cortex were studied using extracellular and intracellular recordings made in vivo. The aim was to identify inhibitory events and determine whether they were associated with small or large (shunting) changes in the input conductance of the neurones. 2. Visual stimulation of subfields of simple receptive fields produced depolarizing or hyperpolarizing potentials that were associated with increased or decreased firing rates respectively. Hyperpolarizing potentials were small, 5 mV or less. In the same neurones, brief electrical stimulation of cortical afferents produced a characteristic sequence of a brief depolarization followed by a long-lasting (200-400 ms) hyperpolarization. 3. During the response to a stationary flashed bar, the synaptic activation increased the input conductance of the neurone by about 5-20%. Conductance changes of similar magnitude were obtained by electrically stimulating the neurone. Neurones stimulated with non-optimal orientations or directions of motion showed little change in input conductance. 4. These data indicate that while visually or electrically induced inhibition can be readily demonstrated in visual cortex, the inhibition is not associated with large sustained conductance changes. Thus a shunting or multiplicative inhibitory mechanism is not the principal mechanism of inhibition. PMID:1804983

  11. Creation of an Accurate Algorithm to Detect Snellen Best Documented Visual Acuity from Ophthalmology Electronic Health Record Notes.

    PubMed

    Mbagwu, Michael; French, Dustin D; Gill, Manjot; Mitchell, Christopher; Jackson, Kathryn; Kho, Abel; Bryar, Paul J

    2016-05-04

    Visual acuity is the primary measure used in ophthalmology to determine how well a patient can see. Visual acuity for a single eye may be recorded in multiple ways for a single patient visit (eg, Snellen vs. Jäger units vs. font print size), and be recorded for either distance or near vision. Capturing the best documented visual acuity (BDVA) of each eye in an individual patient visit is an important step for making electronic ophthalmology clinical notes useful in research. Currently, there is limited methodology for capturing BDVA in an efficient and accurate manner from electronic health record (EHR) notes. We developed an algorithm to detect BDVA for right and left eyes from defined fields within electronic ophthalmology clinical notes. We designed an algorithm to detect the BDVA from defined fields within 295,218 ophthalmology clinical notes with visual acuity data present. About 5668 unique responses were identified and an algorithm was developed to map all of the unique responses to a structured list of Snellen visual acuities. Visual acuity was captured from a total of 295,218 ophthalmology clinical notes during the study dates. The algorithm identified all visual acuities in the defined visual acuity section for each eye and returned a single BDVA for each eye. A clinician chart review of 100 random patient notes showed a 99% accuracy detecting BDVA from these records and 1% observed error. Our algorithm successfully captures best documented Snellen distance visual acuity from ophthalmology clinical notes and transforms a variety of inputs into a structured Snellen equivalent list. Our work, to the best of our knowledge, represents the first attempt at capturing visual acuity accurately from large numbers of electronic ophthalmology notes. Use of this algorithm can benefit research groups interested in assessing visual acuity for patient-centered outcomes. All codes used for this study are currently available, and will be made available online at https://phekb.org.
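The core mapping step this abstract describes (normalizing heterogeneous free-text acuity entries to Snellen equivalents, then returning a single best value per eye) can be sketched as follows. The lookup table, Jaeger conversions, and function names below are illustrative assumptions for the sketch, not the authors' published code:

```python
# Hedged sketch: normalize heterogeneous visual-acuity strings to a Snellen
# 20/x denominator and keep the best (lowest-denominator) value per eye.
# The mapping table and conversions are illustrative placeholders.
SNELLEN_EQUIVALENT = {
    "20/20": 20, "20/25": 25, "20/30": 30, "20/40": 40,
    "20/70": 70, "20/200": 200,
    "J1": 20, "J3": 30, "J5": 40,   # rough Jaeger-to-Snellen conversions
    "CF": 400,                      # "count fingers", coarse placeholder
}

def best_documented_va(entries):
    """Return the best (smallest-denominator) Snellen equivalent among entries,
    or None if no entry can be mapped."""
    known = [(SNELLEN_EQUIVALENT[e], e) for e in entries if e in SNELLEN_EQUIVALENT]
    if not known:
        return None
    denom, _raw = min(known)        # best acuity = smallest denominator
    return f"20/{denom}"

# One eye's entries from a single visit, recorded in mixed notations.
best = best_documented_va(["J3", "20/40", "CF"])
```

A production version would, as the paper describes, first enumerate the unique raw strings in the corpus and curate the mapping table from them.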

  12. Creation of an Accurate Algorithm to Detect Snellen Best Documented Visual Acuity from Ophthalmology Electronic Health Record Notes

    PubMed Central

    French, Dustin D; Gill, Manjot; Mitchell, Christopher; Jackson, Kathryn; Kho, Abel; Bryar, Paul J

    2016-01-01

    Background Visual acuity is the primary measure used in ophthalmology to determine how well a patient can see. Visual acuity for a single eye may be recorded in multiple ways for a single patient visit (eg, Snellen vs. Jäger units vs. font print size), and be recorded for either distance or near vision. Capturing the best documented visual acuity (BDVA) of each eye in an individual patient visit is an important step for making electronic ophthalmology clinical notes useful in research. Objective Currently, there is limited methodology for capturing BDVA in an efficient and accurate manner from electronic health record (EHR) notes. We developed an algorithm to detect BDVA for right and left eyes from defined fields within electronic ophthalmology clinical notes. Methods We designed an algorithm to detect the BDVA from defined fields within 295,218 ophthalmology clinical notes with visual acuity data present. About 5668 unique responses were identified and an algorithm was developed to map all of the unique responses to a structured list of Snellen visual acuities. Results Visual acuity was captured from a total of 295,218 ophthalmology clinical notes during the study dates. The algorithm identified all visual acuities in the defined visual acuity section for each eye and returned a single BDVA for each eye. A clinician chart review of 100 random patient notes showed a 99% accuracy detecting BDVA from these records and 1% observed error. Conclusions Our algorithm successfully captures best documented Snellen distance visual acuity from ophthalmology clinical notes and transforms a variety of inputs into a structured Snellen equivalent list. Our work, to the best of our knowledge, represents the first attempt at capturing visual acuity accurately from large numbers of electronic ophthalmology notes. Use of this algorithm can benefit research groups interested in assessing visual acuity for patient-centered outcomes. All codes used for this study are currently available, and will be made available online at https://phekb.org. PMID:27146002

  13. Computational model for perception of objects and motions.

    PubMed

    Yang, WenLu; Zhang, LiQing; Ma, LiBo

    2008-06-01

    Perception of objects and motions in the visual scene is one of the basic problems in the visual system. There exist 'What' and 'Where' pathways in the superior visual cortex, starting from the simple cells in the primary visual cortex. The former is able to perceive objects such as forms, color, and texture, and the latter perceives 'where', for example, velocity and direction of spatial movement of objects. This paper explores brain-like computational architectures of visual information processing. We propose a visual perceptual model and computational mechanism for training the perceptual model. The computational model is a three-layer network. The first layer is the input layer which is used to receive the stimuli from natural environments. The second layer is designed for representing the internal neural information. The connections between the first layer and the second layer, called the receptive fields of neurons, are self-adaptively learned based on the principle of sparse neural representation. To this end, we introduce the Kullback-Leibler divergence as the measure of independence between neural responses and derive the learning algorithm based on minimizing the cost function. The proposed algorithm is applied to train the basis functions, namely receptive fields, which are localized, oriented, and bandpassed. The resultant receptive fields of neurons in the second layer have characteristics resembling those of simple cells in the primary visual cortex. Based on these basis functions, we further construct the third layer for perception of what and where in the superior visual cortex. The proposed model is able to perceive objects and their motions with a high accuracy and strong robustness against additive noise. Computer simulation results in the final section show the feasibility of the proposed perceptual model and high efficiency of the learning algorithm.

  14. Influence of moving visual environment on sit-to-stand kinematics in children and adults.

    PubMed

    Slaboda, Jill C; Barton, Joseph E; Keshner, Emily A

    2009-08-01

    The effect of visual field motion on the sit-to-stand kinematics of adults and children was investigated. Children (8 to 12 years of age) and adults (21 to 49 years of age) were seated in a virtual environment that rotated in the pitch and roll directions. Participants stood up either (1) concurrent with onset of visual motion or (2) after an immersion period in the moving visual environment, and (3) without visual input. Angular velocities of the head with respect to the trunk, and trunk with respect to the environment, were calculated, as were head and trunk center of mass. Both adults and children reduced head and trunk angular velocity after immersion in the moving visual environment. Unlike adults, children demonstrated significant differences in displacement of the head center of mass during the immersion and concurrent trials when compared to trials without visual input. Results suggest a time-dependent effect of vision on sit-to-stand kinematics in adults, whereas children are influenced by the immediate presence or absence of vision.

  15. The intralaminar thalamus—an expressway linking visual stimuli to circuits determining agency and action selection

    PubMed Central

    Fisher, Simon D.; Reynolds, John N. J.

    2014-01-01

    Anatomical investigations have revealed connections between the intralaminar thalamic nuclei and areas such as the superior colliculus (SC) that receive short latency input from visual and auditory primary sensory areas. The intralaminar nuclei in turn project to the major input nucleus of the basal ganglia, the striatum, providing this nucleus with a source of subcortical excitatory input. Together with a converging input from the cerebral cortex, and a neuromodulatory dopaminergic input from the midbrain, the components previously found necessary for reinforcement learning in the basal ganglia are present. With this intralaminar sensory input, the basal ganglia are thought to play a primary role in determining what aspect of an organism’s own behavior has caused salient environmental changes. Additionally, subcortical loops through thalamic and basal ganglia nuclei are proposed to play a critical role in action selection. In this mini review we will consider the anatomical and physiological evidence underlying the existence of these circuits. We will propose how the circuits interact to modulate basal ganglia output and solve common behavioral learning problems of agency determination and action selection. PMID:24765070

  16. Video time encoding machines.

    PubMed

    Lazar, Aurel A; Pnevmatikakis, Eftychios A

    2011-03-01

    We investigate architectures for time encoding and time decoding of visual stimuli such as natural and synthetic video streams (movies, animation). The architecture for time encoding is akin to models of the early visual system. It consists of a bank of filters in cascade with single-input multi-output neural circuits. Neuron firing is based on either a threshold-and-fire or an integrate-and-fire spiking mechanism with feedback. We show that analog information is represented by the neural circuits as projections on a set of band-limited functions determined by the spike sequence. Under Nyquist-type and frame conditions, the encoded signal can be recovered from these projections with arbitrary precision. For the video time encoding machine architecture, we demonstrate that band-limited video streams of finite energy can be faithfully recovered from the spike trains and provide a stable algorithm for perfect recovery. The key condition for recovery calls for the number of neurons in the population to be above a threshold value.
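The integrate-and-fire time-encoding step described above can be sketched minimally: the neuron integrates a biased input until a threshold is crossed, emits a spike time, and resets. The bias, threshold, step size, and test stimulus below are illustrative assumptions; the paper's full architecture additionally includes a filter bank, feedback, and a decoding stage.

```python
import math

def iaf_encode(u, dt=1e-4, threshold=0.05, bias=1.0, duration=0.5):
    """Time-encode a signal u(t) with an integrate-and-fire neuron:
    integrate (bias + u) until the threshold is reached, record the
    spike time, subtract the threshold, and continue. Parameters are
    illustrative, not from the paper."""
    t, integral, spikes = 0.0, 0.0, []
    for _ in range(int(duration / dt)):
        integral += (bias + u(t)) * dt
        t += dt
        if integral >= threshold:
            spikes.append(t)
            integral -= threshold
    return spikes

# A band-limited test stimulus: a slow sinusoid within the neuron's range.
spikes = iaf_encode(lambda t: 0.5 * math.sin(2 * math.pi * 4 * t))
```

The information carried by the spike train lives entirely in the inter-spike intervals: each interval equals threshold divided by the mean of (bias + u) over that interval, which is what makes recovery of band-limited inputs possible under the density conditions the paper states.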

  17. Video Time Encoding Machines

    PubMed Central

    Lazar, Aurel A.; Pnevmatikakis, Eftychios A.

    2013-01-01

    We investigate architectures for time encoding and time decoding of visual stimuli such as natural and synthetic video streams (movies, animation). The architecture for time encoding is akin to models of the early visual system. It consists of a bank of filters in cascade with single-input multi-output neural circuits. Neuron firing is based on either a threshold-and-fire or an integrate-and-fire spiking mechanism with feedback. We show that analog information is represented by the neural circuits as projections on a set of band-limited functions determined by the spike sequence. Under Nyquist-type and frame conditions, the encoded signal can be recovered from these projections with arbitrary precision. For the video time encoding machine architecture, we demonstrate that band-limited video streams of finite energy can be faithfully recovered from the spike trains and provide a stable algorithm for perfect recovery. The key condition for recovery calls for the number of neurons in the population to be above a threshold value. PMID:21296708

  18. Linearly Additive Shape and Color Signals in Monkey Inferotemporal Cortex

    PubMed Central

    McMahon, David B. T.; Olson, Carl R.

    2009-01-01

    How does the brain represent a red circle? One possibility is that there is a specialized and possibly time-consuming process whereby the attributes of shape and color, carried by separate populations of neurons in low-order visual cortex, are bound together into a unitary neural representation. Another possibility is that neurons in high-order visual cortex are selective, by virtue of their bottom-up input from low-order visual areas, for particular conjunctions of shape and color. A third possibility is that they simply sum shape and color signals linearly. We tested these ideas by measuring the responses of inferotemporal cortex neurons to sets of stimuli in which two attributes—shape and color—varied independently. We find that a few neurons exhibit conjunction selectivity but that in most neurons the influences of shape and color sum linearly. Contrary to the idea of conjunction coding, few neurons respond selectively to a particular combination of shape and color. Contrary to the idea that binding requires time, conjunction signals, when present, occur as early as feature signals. We argue that neither conjunction selectivity nor a specialized feature binding process is necessary for the effective representation of shape–color combinations. PMID:19144745

  19. Linearly additive shape and color signals in monkey inferotemporal cortex.

    PubMed

    McMahon, David B T; Olson, Carl R

    2009-04-01

    How does the brain represent a red circle? One possibility is that there is a specialized and possibly time-consuming process whereby the attributes of shape and color, carried by separate populations of neurons in low-order visual cortex, are bound together into a unitary neural representation. Another possibility is that neurons in high-order visual cortex are selective, by virtue of their bottom-up input from low-order visual areas, for particular conjunctions of shape and color. A third possibility is that they simply sum shape and color signals linearly. We tested these ideas by measuring the responses of inferotemporal cortex neurons to sets of stimuli in which two attributes-shape and color-varied independently. We find that a few neurons exhibit conjunction selectivity but that in most neurons the influences of shape and color sum linearly. Contrary to the idea of conjunction coding, few neurons respond selectively to a particular combination of shape and color. Contrary to the idea that binding requires time, conjunction signals, when present, occur as early as feature signals. We argue that neither conjunction selectivity nor a specialized feature binding process is necessary for the effective representation of shape-color combinations.

  20. The Influence of Averageness on Adults' Perceptions of Attractiveness: The Effect of Early Visual Deprivation.

    PubMed

    Vingilis-Jaremko, Larissa; Maurer, Daphne; Rhodes, Gillian; Jeffery, Linda

    2016-08-03

    Adults who missed early visual input because of congenital cataracts later have deficits in many aspects of face processing. Here we investigated whether they make normal judgments of facial attractiveness. In particular, we studied whether their perceptions are affected normally by a face's proximity to the population mean, as is true of typically developing adults, who find average faces to be more attractive than most other faces. We compared the judgments of facial attractiveness of 12 cataract-reversal patients to norms established from 36 adults with normal vision. Participants viewed pairs of adult male and adult female faces that had been transformed 50% toward and 50% away from their respective group averages, and selected which face was more attractive. Averageness influenced patients' judgments of attractiveness, but to a lesser extent than controls. The results suggest that cataract-reversal patients are able to develop a system for representing faces with a privileged position for an average face, consistent with evidence from identity aftereffects. However, early visual experience is necessary to set up the neural architecture necessary for averageness to influence perceptions of attractiveness with its normal potency. © The Author(s) 2016.

  1. Gravity dependence of the effect of optokinetic stimulation on the subjective visual vertical.

    PubMed

    Ward, Bryan K; Bockisch, Christopher J; Caramia, Nicoletta; Bertolini, Giovanni; Tarnutzer, Alexander Andrea

    2017-05-01

    Accurate and precise estimates of direction of gravity are essential for spatial orientation. According to Bayesian theory, multisensory vestibular, visual, and proprioceptive input is centrally integrated in a weighted fashion based on the reliability of the component sensory signals. For otolithic input, a decreasing signal-to-noise ratio was demonstrated with increasing roll angle. We hypothesized that the weights of vestibular (otolithic) and extravestibular (visual/proprioceptive) sensors are roll-angle dependent and predicted an increased weight of extravestibular cues with increasing roll angle, potentially following the Bayesian hypothesis. To probe this concept, the subjective visual vertical (SVV) was assessed in different roll positions (≤ ±120°, steps = 30°, n = 10) with/without presenting an optokinetic stimulus (velocity = ±60°/s). The optokinetic stimulus biased the SVV toward the direction of stimulus rotation for roll angles ≥ ±30° (P < 0.005). Offsets grew from 3.9 ± 1.8° (upright) to 22.1 ± 11.8° (±120° roll tilt, P < 0.001). Trial-to-trial variability increased with roll angle, demonstrating a nonsignificant increase when providing optokinetic stimulation. Variability and optokinetic bias were correlated (R² = 0.71, slope = 0.71, 95% confidence interval = 0.57-0.86). An optimal-observer model combining an optokinetic bias with vestibular input reproduced measured errors closely. These findings support the hypothesis of a weighted multisensory integration when estimating direction of gravity with optokinetic stimulation. Visual input was weighted more when vestibular input became less reliable, i.e., at larger roll-tilt angles. However, according to Bayesian theory, the variability of combined cues is always lower than the variability of each source cue. If the observed increase in variability, although nonsignificant, is true, either it must depend on an additional source of variability, added after SVV computation, or it would conflict with the Bayesian hypothesis. NEW & NOTEWORTHY Applying a rotating optokinetic stimulus while recording the subjective visual vertical in different whole body roll angles, we noted the optokinetic-induced bias to correlate with the roll angle. These findings allow the hypothesis that the established optimal weighting of single-sensory cues depending on their reliability to estimate direction of gravity could be extended to a bias caused by visual self-motion stimuli. Copyright © 2017 the American Physiological Society.
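The reliability-weighted integration invoked here is the standard Gaussian cue-combination rule: each cue is weighted by its inverse variance, and the fused variance is below that of either source cue. The numbers below are hypothetical, chosen only to mimic otolith noise growing with roll angle; they are not the study's data.

```python
def fuse(x1, var1, x2, var2):
    """Bayes-optimal fusion of two Gaussian cues: inverse-variance weighting.
    Returns the combined estimate and its (always smaller) variance."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    x = (w1 * x1 + w2 * x2) / (w1 + w2)
    var = 1.0 / (w1 + w2)
    return x, var

# Illustrative numbers only. Otolith cue says "0 deg", a biased visual
# (optokinetic) cue says "5 deg", with visual noise fixed at sigma = 10 deg.
est_up, var_up = fuse(0.0, 2.0 ** 2, 5.0, 10.0 ** 2)     # upright: sigma_oto = 2
est_tilt, var_tilt = fuse(0.0, 9.0 ** 2, 5.0, 10.0 ** 2)  # 120 deg roll: sigma_oto = 9
```

As the otolith variance grows with roll, the same visual bias pulls the fused estimate harder, reproducing the roll-dependent optokinetic bias, while the fused variance stays below both source variances, which is exactly the Bayesian prediction the authors contrast with their variability data.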

  2. Inner-product array processor for retrieval of stored images represented by bipolar binary (+1,-1) pixels using partial input trinary pixels represented by (+1,0,-1)

    NASA Technical Reports Server (NTRS)

    Liu, Hua-Kuang (Inventor); Awwal, Abdul A. S. (Inventor); Karim, Mohammad A. (Inventor)

    1993-01-01

    An inner-product array processor is provided with thresholding of the inner product during each iteration to make more significant the inner product employed in estimating a vector to be used as the input vector for the next iteration. While stored vectors and estimated vectors are represented in bipolar binary (1,-1), only those elements of an initial partial input vector that are believed to be common with those of a stored vector are represented in bipolar binary; the remaining elements of a partial input vector are set to 0. This mode of representation, in which the known elements of a partial input vector are in bipolar binary form and the remaining elements are set equal to 0, is referred to as trinary representation. The initial inner products corresponding to the partial input vector will then be equal to the number of known elements. Inner-product thresholding is applied to accelerate convergence and to avoid convergence to a negative input product.
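The described iteration can be sketched as a toy associative recall: stored vectors are bipolar (+1/-1), the partial input is trinary (+1/0/-1, with 0 marking unknown elements), and each pass computes inner products, thresholds them, and re-estimates the vector. The winner-take-all form of inner-product thresholding used below is an illustrative assumption; the patent's exact threshold rule is not given in the abstract.

```python
def sign(x):
    """Bipolar sign: map any value to +1 or -1."""
    return 1 if x >= 0 else -1

def recall(stored, partial, iterations=5):
    """Iteratively estimate a stored bipolar vector from a trinary partial
    input. Each pass: inner products -> threshold -> weighted re-estimate."""
    v = list(partial)
    for _ in range(iterations):
        ips = [sum(s_i * v_i for s_i, v_i in zip(s, v)) for s in stored]
        peak = max(ips)
        # Illustrative inner-product thresholding: keep only the strongest
        # match(es), suppressing weak or negative inner products.
        ips = [ip if ip >= peak else 0 for ip in ips]
        v = [sign(sum(ip * s[i] for ip, s in zip(ips, stored)))
             for i in range(len(v))]
    return v

stored = [[1, -1, 1, -1, 1, 1], [-1, -1, 1, 1, -1, 1]]
partial = [1, -1, 1, 0, 0, 0]          # only the first three elements known
completed = recall(stored, partial)
```

Note that on the first pass the inner product with the matching stored vector equals the number of known elements (here 3), exactly as the abstract states for the trinary representation.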

  3. Single image super-resolution based on convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Zou, Lamei; Luo, Ming; Yang, Weidong; Li, Peng; Jin, Liujia

    2018-03-01

    We present a deep learning method for single image super-resolution (SISR). The proposed approach learns an end-to-end mapping between low-resolution (LR) images and high-resolution (HR) images. The mapping is represented as a deep convolutional neural network that takes the LR image as input and outputs the HR image. Our network uses 5 convolution layers, whose kernel sizes include 5×5, 3×3, and 1×1. In our proposed network, we use residual learning and combine different sizes of convolution kernels at the same layer. The experimental results show that our proposed method outperforms existing methods in reconstruction quality metrics and perceived visual quality on benchmark images.
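The two design ideas named above, combining several kernel sizes at one layer and residual learning, can be illustrated in one dimension. This is only a structural sketch with untrained placeholder kernels; the paper's actual network is a five-layer 2-D CNN with learned weights.

```python
def conv1d(x, k):
    """'Same' 1-D convolution with zero padding: an illustrative stand-in
    for the paper's 2-D convolution layers."""
    p = len(k) // 2
    xp = [0.0] * p + list(x) + [0.0] * p
    return [sum(k[j] * xp[i + j] for j in range(len(k))) for i in range(len(x))]

def multi_scale_residual_layer(x, kernels):
    """Combine several kernel sizes (e.g. 5, 3, and 1 taps) at the same
    layer by summing their outputs, then add the input back (the residual
    connection): output = x + sum_k conv(x, k)."""
    out = [0.0] * len(x)
    for k in kernels:
        out = [a + b for a, b in zip(out, conv1d(x, k))]
    return [a + b for a, b in zip(out, x)]
```

With all-zero kernels the layer reduces to the identity, which is the point of residual learning: the network only has to learn the (often small) correction to an already reasonable upscaled image.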

  4. Overview and Software Architecture of the Copernicus Trajectory Design and Optimization System

    NASA Technical Reports Server (NTRS)

    Williams, Jacob; Senent, Juan S.; Ocampo, Cesar; Mathur, Ravi; Davis, Elizabeth C.

    2010-01-01

    The Copernicus Trajectory Design and Optimization System represents an innovative and comprehensive approach to on-orbit mission design, trajectory analysis and optimization. Copernicus integrates state of the art algorithms in optimization, interactive visualization, spacecraft state propagation, and data input-output interfaces, allowing the analyst to design spacecraft missions to all possible Solar System destinations. All of these features are incorporated within a single architecture that can be used interactively via a comprehensive GUI interface, or passively via external interfaces that execute batch processes. This paper describes the Copernicus software architecture together with the challenges associated with its implementation. Additionally, future development and planned new capabilities are discussed. Key words: Copernicus, Spacecraft Trajectory Optimization Software.

  5. Inferring Nonlinear Neuronal Computation Based on Physiologically Plausible Inputs

    PubMed Central

    McFarland, James M.; Cui, Yuwei; Butts, Daniel A.

    2013-01-01

    The computation represented by a sensory neuron's response to stimuli is constructed from an array of physiological processes both belonging to that neuron and inherited from its inputs. Although many of these physiological processes are known to be nonlinear, linear approximations are commonly used to describe the stimulus selectivity of sensory neurons (i.e., linear receptive fields). Here we present an approach for modeling sensory processing, termed the Nonlinear Input Model (NIM), which is based on the hypothesis that the dominant nonlinearities imposed by physiological mechanisms arise from rectification of a neuron's inputs. Incorporating such ‘upstream nonlinearities’ within the standard linear-nonlinear (LN) cascade modeling structure implicitly allows for the identification of multiple stimulus features driving a neuron's response, which become directly interpretable as either excitatory or inhibitory. Because its form is analogous to an integrate-and-fire neuron receiving excitatory and inhibitory inputs, model fitting can be guided by prior knowledge about the inputs to a given neuron, and elements of the resulting model can often result in specific physiological predictions. Furthermore, by providing an explicit probabilistic model with a relatively simple nonlinear structure, its parameters can be efficiently optimized and appropriately regularized. Parameter estimation is robust and efficient even with large numbers of model components and in the context of high-dimensional stimuli with complex statistical structure (e.g. natural stimuli). We describe detailed methods for estimating the model parameters, and illustrate the advantages of the NIM using a range of example sensory neurons in the visual and auditory systems. We thus present a modeling framework that can capture a broad range of nonlinear response functions while providing physiologically interpretable descriptions of neural computation. PMID:23874185
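The model structure the abstract describes, rectified upstream inputs feeding a standard LN cascade, amounts to r = F(Σ_e rect(k_e·s) − Σ_i rect(k_i·s)). Below is a minimal illustrative implementation with made-up filters and a softplus spiking nonlinearity; the authors' NIM additionally fits these filters and nonlinearities to data.

```python
import math

def rectify(x):
    """Upstream rectification: inputs to the model neuron cannot go negative."""
    return max(x, 0.0)

def nim_response(stimulus, exc_filters, inh_filters):
    """Nonlinear Input Model sketch: sum rectified excitatory inputs, subtract
    rectified inhibitory inputs, pass through a spiking nonlinearity
    (softplus here, an illustrative choice)."""
    def dot(k, s):
        return sum(ki * si for ki, si in zip(k, s))
    g = (sum(rectify(dot(k, stimulus)) for k in exc_filters)
         - sum(rectify(dot(k, stimulus)) for k in inh_filters))
    return math.log1p(math.exp(g))   # softplus firing-rate nonlinearity

# Made-up one-excitatory / one-inhibitory filter pair over a 3-sample stimulus.
exc = [[1.0, -0.5, 0.0]]
inh = [[0.0, 0.5, 1.0]]
rate = nim_response([1.0, 0.0, 0.0], exc, inh)
```

Because each filter enters through a rectifier with a fixed sign, every fitted subunit is directly interpretable as excitatory or inhibitory, which is the interpretability advantage the abstract emphasizes over generic multi-filter LN models.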

  6. Intelligent Visual Input: A Graphical Method for Rapid Entry of Patient-Specific Data

    PubMed Central

    Bergeron, Bryan P.; Greenes, Robert A.

    1987-01-01

    Intelligent Visual Input (IVI) provides a rapid, graphical method of data entry for both expert system interaction and medical record keeping purposes. Key components of IVI include: a high-resolution graphic display; an interface supportive of rapid selection, i.e., one utilizing a mouse or light pen; algorithm simplification modules; and intelligent graphic algorithm expansion modules. A prototype IVI system, designed to facilitate entry of physical exam findings, is used to illustrate the potential advantages of this approach.

  7. Layer-specific input to distinct cell types in layer 6 of monkey primary visual cortex.

    PubMed

    Briggs, F; Callaway, E M

    2001-05-15

    Layer 6 of monkey V1 contains a physiologically and anatomically diverse population of excitatory pyramidal neurons. Distinctive arborization patterns of axons and dendrites within the functionally specialized cortical layers define eight types of layer 6 pyramidal neurons and suggest unique information processing roles for each cell type. To address how input sources contribute to cellular function, we examined the laminar sources of functional excitatory input onto individual layer 6 pyramidal neurons using scanning laser photostimulation. We find that excitatory input sources correlate with cell type. Class I neurons with axonal arbors selectively targeting magnocellular (M) recipient layer 4Calpha receive input from M-dominated layer 4B, whereas class I neurons whose axonal arbors target parvocellular (P) recipient layer 4Cbeta receive input from P-dominated layer 2/3. Surprisingly, these neuronal types do not differ significantly in the inputs they receive directly from layers 4Calpha or 4Cbeta. Class II cells, which lack dense axonal arbors within layer 4C, receive excitatory input from layers targeted by their local axons. Specifically, type IIA cells project axons to and receive input from the deep but not superficial layers. Type IIB neurons project to and receive input from the deepest and most superficial, but not middle layers. Type IIC neurons arborize throughout the cortical layers and tend to receive inputs from all cortical layers. These observations have implications for the functional roles of different layer 6 cell types in visual information processing.

  8. Sleep Disturbances among Persons Who Are Visually Impaired: Survey of Dog Guide Users.

    ERIC Educational Resources Information Center

    Fouladi, Massoud K.; Moseley, Merrick J.; Jones, Helen S.; Tobin, Michael J.

    1998-01-01

    A survey completed by 1237 adults with severe visual impairments found that 20% described the quality of their sleep as poor or very poor. Exercise was associated with better sleep and depression with poorer sleep. However, visual acuity did not predict sleep quality, casting doubt on the idea that restricted visual input (light) causes sleep…

  9. Multidimensional structured data visualization method and apparatus, text visualization method and apparatus, method and apparatus for visualizing and graphically navigating the world wide web, method and apparatus for visualizing hierarchies

    DOEpatents

    Risch, John S [Kennewick, WA; Dowson, Scott T [West Richland, WA; Hart, Michelle L [Richland, WA; Hatley, Wes L [Kennewick, WA

    2008-05-13

    A method of displaying correlations among information objects comprises receiving a query against a database; obtaining a query result set; and generating a visualization representing the components of the result set, the visualization including one of a plane and line to represent a data field, nodes representing data values, and links showing correlations among fields and values. Other visualization methods and apparatus are disclosed.

  10. Multidimensional structured data visualization method and apparatus, text visualization method and apparatus, method and apparatus for visualizing and graphically navigating the world wide web, method and apparatus for visualizing hierarchies

    DOEpatents

    Risch, John S [Kennewick, WA; Dowson, Scott T [West Richland, WA

    2012-03-06

    A method of displaying correlations among information objects includes receiving a query against a database; obtaining a query result set; and generating a visualization representing the components of the result set, the visualization including one of a plane and line to represent a data field, nodes representing data values, and links showing correlations among fields and values. Other visualization methods and apparatus are disclosed.

  11. Locomotor Sensory Organization Test: How Sensory Conflict Affects the Temporal Structure of Sway Variability During Gait.

    PubMed

    Chien, Jung Hung; Mukherjee, Mukul; Siu, Ka-Chun; Stergiou, Nicholas

    2016-05-01

    When maintaining postural stability temporally under increased sensory conflict, a more rigid response is used where the available degrees of freedom are essentially frozen. The current study investigated whether such a strategy is also utilized during more dynamic situations of postural control, as is the case with walking. This study attempted to answer this question by using the Locomotor Sensory Organization Test (LSOT). This apparatus incorporates SOT-inspired perturbations of the visual and the somatosensory system. Ten healthy young adults performed the six conditions of the traditional SOT and the corresponding six conditions on the LSOT. The temporal structure of sway variability was evaluated from all conditions. The results showed that in the anterior-posterior direction somatosensory input is crucial for postural control for both walking and standing; visual input also had an effect but was not as prominent as the somatosensory input. In the medial-lateral direction and with respect to walking, visual input has a much larger effect than somatosensory input. This is possibly due to the added contributions by peripheral vision during walking; in standing such contributions may not be as significant for postural control. In sum, as sensory conflict increases, more rigid and regular sway patterns are found during standing, confirming previous results in the literature; however, the opposite was the case with walking, where more exploratory and adaptive movement patterns are present.
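The abstract does not name its measure of the temporal structure of sway variability; sample entropy is one commonly used regularity index for such data (lower values indicate more regular, rigid sway). The following is an illustrative stdlib-only sketch, not the study's analysis pipeline.

```python
import math
import random

def sample_entropy(x, m=2, r_frac=0.2):
    """Sample entropy SampEn(m, r): -ln(A/B), where B counts template matches
    of length m and A of length m+1 within tolerance r = r_frac * SD.
    Lower SampEn = more regular (rigid) time series."""
    n = len(x)
    mu = sum(x) / n
    sd = (sum((v - mu) ** 2 for v in x) / n) ** 0.5
    r = r_frac * sd
    def count(mm):
        c = 0
        for i in range(n - mm):
            for j in range(i + 1, n - mm):
                if max(abs(x[i + k] - x[j + k]) for k in range(mm)) <= r:
                    c += 1
        return c
    b, a = count(m), count(m + 1)
    return -math.log(a / b) if a > 0 and b > 0 else float("inf")

# A rigidly periodic series vs. the same series with added noise.
random.seed(0)
periodic = [math.sin(0.4 * i) for i in range(200)]
noisy = [math.sin(0.4 * i) + random.gauss(0, 0.5) for i in range(200)]
```

On such signals the periodic series yields the lower entropy, matching the abstract's interpretation of "more rigid and regular sway patterns" under sensory conflict during standing.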

  12. Neural Network Machine Learning and Dimension Reduction for Data Visualization

    NASA Technical Reports Server (NTRS)

    Liles, Charles A.

    2014-01-01

    Neural network machine learning in computer science is a continuously developing field of study. Although neural network models have been developed which can accurately predict a numeric value or nominal classification, a general purpose method for constructing neural network architecture has yet to be developed. Computer scientists are often forced to rely on a trial-and-error process of developing and improving accurate neural network models. In many cases, models are constructed from a large number of input parameters. Understanding which input parameters have the greatest impact on the prediction of the model is often difficult to surmise, especially when the number of input variables is very high. This challenge is often labeled the "curse of dimensionality" in scientific fields. However, techniques exist for reducing the dimensionality of problems to just two dimensions. Once a problem's dimensions have been mapped to two dimensions, it can be easily plotted and understood by humans. The ability to visualize a multi-dimensional dataset can provide a means of identifying which input variables have the highest effect on determining a nominal or numeric output. Identifying these variables can provide a better means of training neural network models; models can be more easily and quickly trained using only input variables which appear to affect the outcome variable. The purpose of this project is to explore varying means of training neural networks and to utilize dimensional reduction for visualizing and understanding complex datasets.
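One standard way to map a high-dimensional dataset to the two dimensions described above is principal component analysis. The abstract does not specify which reduction technique the project uses, so the following is an illustrative stdlib-only sketch using power iteration with deflation.

```python
import random

def pca_2d(data, iters=200):
    """Project rows of `data` (a list of equal-length numeric lists) onto
    their top two principal components, found by power iteration on the
    covariance matrix with deflation. Illustrative, not optimized."""
    n, d = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(d)]
    x = [[row[j] - means[j] for j in range(d)] for row in data]
    cov = [[sum(xi[a] * xi[b] for xi in x) / n for b in range(d)]
           for a in range(d)]
    def top_eigvec(mat):
        rng = random.Random(1)
        v = [rng.random() for _ in range(d)]
        for _ in range(iters):
            w = [sum(mat[a][b] * v[b] for b in range(d)) for a in range(d)]
            norm = sum(c * c for c in w) ** 0.5 or 1.0
            v = [c / norm for c in w]
        return v
    v1 = top_eigvec(cov)
    lam1 = sum(v1[a] * sum(cov[a][b] * v1[b] for b in range(d))
               for a in range(d))
    deflated = [[cov[a][b] - lam1 * v1[a] * v1[b] for b in range(d)]
                for a in range(d)]
    v2 = top_eigvec(deflated)
    return [(sum(xi[j] * v1[j] for j in range(d)),
             sum(xi[j] * v2[j] for j in range(d))) for xi in x]

# Two well-separated clusters in 3-D; the 2-D projection should keep them apart.
data = [[0.0, 0.0, 0.1], [0.1, 0.0, -0.1], [0.0, 0.1, 0.0],
        [0.2, 0.1, 0.0], [0.1, 0.2, 0.1],
        [5.0, 5.0, 0.0], [5.1, 5.0, 0.1], [5.0, 5.1, -0.1],
        [5.2, 5.1, 0.0], [5.1, 5.2, 0.0]]
proj = pca_2d(data)
```

Plotting the two returned coordinates per row gives exactly the kind of 2-D view the abstract proposes for spotting which inputs drive an outcome.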

  13. Hypothalamic Projections to the Optic Tectum in Larval Zebrafish

    PubMed Central

    Heap, Lucy A.; Vanwalleghem, Gilles C.; Thompson, Andrew W.; Favre-Bulle, Itia; Rubinsztein-Dunlop, Halina; Scott, Ethan K.

    2018-01-01

    The optic tectum of larval zebrafish is an important model for understanding visual processing in vertebrates. The tectum has been traditionally viewed as dominantly visual, with a majority of studies focusing on the processes by which tectal circuits receive and process retinally-derived visual information. Recently, a handful of studies have shown a much more complex role for the optic tectum in larval zebrafish, and anatomical and functional data from these studies suggest that this role extends beyond the visual system, and beyond the processing of exclusively retinal inputs. Consistent with this evolving view of the tectum, we have used a Gal4 enhancer trap line to identify direct projections from rostral hypothalamus (RH) to the tectal neuropil of larval zebrafish. These projections ramify within the deepest laminae of the tectal neuropil, the stratum album centrale (SAC)/stratum griseum periventriculare (SPV), and also innervate strata distinct from those innervated by retinal projections. Using optogenetic stimulation of the hypothalamic projection neurons paired with calcium imaging in the tectum, we find rebound firing in tectal neurons consistent with hypothalamic inhibitory input. Our results suggest that tectal processing in larval zebrafish is modulated by hypothalamic inhibitory inputs to the deep tectal neuropil. PMID:29403362

  14. Hypothalamic Projections to the Optic Tectum in Larval Zebrafish.

    PubMed

    Heap, Lucy A; Vanwalleghem, Gilles C; Thompson, Andrew W; Favre-Bulle, Itia; Rubinsztein-Dunlop, Halina; Scott, Ethan K

    2017-01-01

    The optic tectum of larval zebrafish is an important model for understanding visual processing in vertebrates. The tectum has been traditionally viewed as dominantly visual, with a majority of studies focusing on the processes by which tectal circuits receive and process retinally-derived visual information. Recently, a handful of studies have shown a much more complex role for the optic tectum in larval zebrafish, and anatomical and functional data from these studies suggest that this role extends beyond the visual system, and beyond the processing of exclusively retinal inputs. Consistent with this evolving view of the tectum, we have used a Gal4 enhancer trap line to identify direct projections from rostral hypothalamus (RH) to the tectal neuropil of larval zebrafish. These projections ramify within the deepest laminae of the tectal neuropil, the stratum album centrale (SAC)/stratum griseum periventriculare (SPV), and also innervate strata distinct from those innervated by retinal projections. Using optogenetic stimulation of the hypothalamic projection neurons paired with calcium imaging in the tectum, we find rebound firing in tectal neurons consistent with hypothalamic inhibitory input. Our results suggest that tectal processing in larval zebrafish is modulated by hypothalamic inhibitory inputs to the deep tectal neuropil.

  15. The strength of attentional biases reduces as visual short-term memory load increases

    PubMed Central

    Shimi, A.

    2013-01-01

    Despite our visual system receiving irrelevant input that competes with task-relevant signals, we are able to pursue our perceptual goals. Attention enhances our visual processing by biasing the processing of the input that is relevant to the task at hand. The top-down signals enabling these biases are therefore important for regulating lower level sensory mechanisms. In three experiments, we examined whether we apply similar biases to successfully maintain information in visual short-term memory (VSTM). We presented participants with targets alongside distracters and we graded their perceptual similarity to vary the extent to which they competed. Experiments 1 and 2 showed that the more items held in VSTM before the onset of the distracters, the more perceptually distinct the distracters needed to be for participants to retain the target accurately. Experiment 3 extended these behavioral findings by demonstrating that the perceptual similarity between target and distracters exerted a significantly greater effect on occipital alpha amplitudes, depending on the number of items already held in VSTM. The trade-off between VSTM load and target-distracter competition suggests that VSTM and perceptual competition share a partially overlapping mechanism, namely top-down inputs into sensory areas. PMID:23576694

  16. Learning receptive fields using predictive feedback.

    PubMed

    Jehee, Janneke F M; Rothkopf, Constantin; Beck, Jeffrey M; Ballard, Dana H

    2006-01-01

    Previously, it was suggested that feedback connections from higher- to lower-level areas carry predictions of lower-level neural activities, whereas feedforward connections carry the residual error between the predictions and the actual lower-level activities [Rao, R.P.N., Ballard, D.H., 1999. Nature Neuroscience 2, 79-87.]. A computational model implementing the hypothesis learned simple cell receptive fields when exposed to natural images. Here, we use predictive feedback to explain tuning properties in medial superior temporal area (MST). We implement the hypothesis using a new, biologically plausible, algorithm based on matching pursuit, which retains all the features of the previous implementation, including its ability to efficiently encode input. When presented with natural images, the model developed receptive field properties as found in primary visual cortex. In addition, when exposed to visual motion input resulting from movements through space, the model learned receptive field properties resembling those in MST. These results corroborate the idea that predictive feedback is a general principle used by the visual system to efficiently encode natural input.
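
    The abstract names matching pursuit as the basis of the new algorithm. Below is a generic matching-pursuit sketch, not the paper's biologically plausible variant, assuming unit-norm dictionary atoms:

```python
def matching_pursuit(signal, atoms, n_iter=10):
    """Greedily decompose `signal` over unit-norm `atoms`; return coefficients and residual."""
    residual = list(signal)
    coeffs = [0.0] * len(atoms)
    for _ in range(n_iter):
        # Pick the atom most correlated with the current residual.
        scores = [sum(r * a for r, a in zip(residual, atom)) for atom in atoms]
        best = max(range(len(atoms)), key=lambda k: abs(scores[k]))
        coeffs[best] += scores[best]
        residual = [r - scores[best] * a for r, a in zip(residual, atoms[best])]
    return coeffs, residual

# With an orthonormal dictionary the decomposition is exact.
basis = [[1.0, 0, 0, 0], [0, 1.0, 0, 0], [0, 0, 1.0, 0], [0, 0, 0, 1.0]]
coeffs, residual = matching_pursuit([2.0, 0.0, -3.0, 0.0], basis, n_iter=4)
```

    The shrinking residual plays the role of the feedforward error signal in the predictive-feedback account, with the selected atoms acting as the higher-level predictions.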

  17. The utility of modeling word identification from visual input within models of eye movements in reading

    PubMed Central

    Bicknell, Klinton; Levy, Roger

    2012-01-01

    Decades of empirical work have shown that a range of eye movement phenomena in reading are sensitive to the details of the process of word identification. Despite this, major models of eye movement control in reading do not explicitly model word identification from visual input. This paper presents an argument for developing models of eye movements that do include detailed models of word identification. Specifically, we argue that insights into eye movement behavior can be gained by understanding which phenomena naturally arise from an account in which the eyes move for efficient word identification, and that one important use of such models is to test which eye movement phenomena can be understood this way. As an extended case study, we present evidence from an extension of a previous model of eye movement control in reading that does explicitly model word identification from visual input, Mr. Chips (Legge, Klitz, & Tjan, 1997), to test two proposals for the effect of using linguistic context on reading efficiency. PMID:23074362
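
    Mr. Chips is an ideal-observer model that plans fixations to identify words efficiently. The toy sketch below illustrates only the underlying idea, choosing the letter position whose observation most reduces uncertainty over a candidate set; the lexicon and function names are made up for illustration and are not the model's actual machinery.

```python
import math
from collections import Counter

def expected_entropy(candidates, position):
    """Expected entropy (bits) over equally likely candidates after seeing `position`."""
    groups = Counter(word[position] for word in candidates)
    n = len(candidates)
    total = 0.0
    for count in groups.values():
        # Seeing this letter leaves `count` candidates, still uniformly likely.
        total += (count / n) * math.log2(count)
    return total

def most_informative_position(candidates):
    """An efficient reader would fixate where the expected remaining entropy is lowest."""
    length = len(candidates[0])
    return min(range(length), key=lambda p: expected_entropy(candidates, p))

lexicon = ["cat", "car", "cot", "dog"]
```

    Here the final letter distinguishes the most candidates, so an uncertainty-minimizing strategy targets it first.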

  18. Surfing a spike wave down the ventral stream.

    PubMed

    VanRullen, Rufin; Thorpe, Simon J

    2002-10-01

    Numerous theories of neural processing, often motivated by experimental observations, have explored the computational properties of neural codes based on the absolute or relative timing of spikes in spike trains. Spiking neuron models and theories, however, as well as their experimental counterparts, have generally been limited to the simulation or observation of isolated neurons, isolated spike trains, or reduced neural populations. Such theories would therefore seem inappropriate to capture the properties of a neural code relying on temporal spike patterns distributed across large neuronal populations. Here we report a range of computer simulations and theoretical considerations that were designed to explore the possibilities of one such code and its relevance for visual processing. In a unified framework where the relation between stimulus saliency and relative spike timing plays the central role, we describe how the ventral stream of the visual system could process natural input scenes and extract meaningful information, both rapidly and reliably. The first wave of spikes generated in the retina in response to visual stimulation carries information explicitly in its spatio-temporal structure: the most salient information is represented by the first spikes over the population. This spike wave, propagating through a hierarchy of visual areas, is regenerated at each processing stage, where its temporal structure can be modified by (i) the selectivity of the cortical neurons, (ii) lateral interactions, and (iii) top-down attentional influences from higher order cortical areas. The resulting model could account for the remarkable efficiency and rapidity of processing observed in the primate visual system.
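
    The claim that the most salient information rides on the earliest spikes can be illustrated with a minimal rank-order code, a simplification of the authors' full model (names and the decay constant are illustrative):

```python
def spike_order(intensities):
    """Return pixel indices in firing order: stronger input fires earlier."""
    return sorted(range(len(intensities)), key=lambda i: -intensities[i])

def decode(order, n, decay=0.5):
    """Reconstruct relative intensities from spike order alone via a decaying weight."""
    recon = [0.0] * n
    w = 1.0
    for idx in order:
        recon[idx] = w
        w *= decay
    return recon

image = [0.9, 0.1, 0.5, 0.7]
order = spike_order(image)
```

    A downstream stage reading only the order of arrival, not precise times or rates, already recovers the intensity ranking, which is the intuition behind surfing the first spike wave.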

  19. SDBI 1904: Human Factors Assessment of Vibration Effects on Visual Performance during Launch

    NASA Technical Reports Server (NTRS)

    Thompson, Shelby G.; Holden, Kritina; Root, Phillip; Ebert, Douglas; Jones, Jeffery; Adelstein, Bernard

    2009-01-01

    The primary objective of the Human Factors Short Duration Bioastronautics Investigation (SDBI) 1904 is to determine visual performance limits during operational vibration and g-loads, specifically through the determination of minimal usable font sizes using Orion-type display formats. Currently there is little to no data available to quantify human visual performance under these extreme conditions. Existing data on shuttle vibration magnitude and frequency are incomplete, do not address seat and crew vibration in the current configuration, and do not address human visual performance. There have been anecdotal reports of performance decrements from shuttle crews, but no structured data have been collected. The SDBI is a companion effort to Detailed Test Objective (DTO) 695, which will measure shuttle seat accelerations (vibration) during ascent. Data from the SDBI will serve an important role in interpreting the DTO vibration data. These data will be collected during the ascent phase of three shuttle missions (STS-119, 127, and 128). Both SDBI 1904 and DTO 695 are low impact with respect to flight resources, and combined they represent an efficient and focused problem-solving approach. The SDBI and DTO data will be correlated to determine the nature of perceived visual performance under varying vibrations and g-loads. This project will provide: 1) immediate data for developing preliminary human performance vibration requirements; 2) flight-validated inputs for ongoing and future ground-based research; and 3) information on functional needs that will drive Orion display format design decisions.

  20. Space-by-time manifold representation of dynamic facial expressions for emotion categorization

    PubMed Central

    Delis, Ioannis; Chen, Chaona; Jack, Rachael E.; Garrod, Oliver G. B.; Panzeri, Stefano; Schyns, Philippe G.

    2016-01-01

    Visual categorization is the brain computation that reduces high-dimensional information in the visual environment into a smaller set of meaningful categories. An important problem in visual neuroscience is to identify the visual information that the brain must represent and then use to categorize visual inputs. Here we introduce a new mathematical formalism—termed space-by-time manifold decomposition—that describes this information as a low-dimensional manifold separable in space and time. We use this decomposition to characterize the representations used by observers to categorize the six classic facial expressions of emotion (happy, surprise, fear, disgust, anger, and sad). By means of a Generative Face Grammar, we presented random dynamic facial movements on each experimental trial and used subjective human perception to identify the facial movements that correlate with each emotion category. When the random movements projected onto the categorization manifold region corresponding to one of the emotion categories, observers categorized the stimulus accordingly; otherwise they selected “other.” Using this information, we determined both the Action Unit and temporal components whose linear combinations lead to reliable categorization of each emotion. In a validation experiment, we confirmed the psychological validity of the resulting space-by-time manifold representation. Finally, we demonstrated the importance of temporal sequencing for accurate emotion categorization and identified the temporal dynamics of Action Unit components that cause typical confusions between specific emotions (e.g., fear and surprise) as well as those resolving these confusions. PMID:27305521
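
    The space-by-time decomposition expresses the data as a sum of products of temporal and spatial components. One building block of any such factorization, a single separable (rank-1) component fitted by alternating least squares, can be sketched as follows; this is a generic sketch, not the authors' exact algorithm:

```python
def rank1_factor(x, iters=100):
    """Approximate matrix x (time x space) as an outer product temporal * spatial."""
    n_t, n_s = len(x), len(x[0])
    temporal = [1.0] * n_t
    spatial = [1.0] * n_s
    for _ in range(iters):
        # Fix the temporal factor, solve for the spatial one, then the reverse.
        tt = sum(t * t for t in temporal) or 1.0
        spatial = [sum(x[i][j] * temporal[i] for i in range(n_t)) / tt
                   for j in range(n_s)]
        ss = sum(s * s for s in spatial) or 1.0
        temporal = [sum(x[i][j] * spatial[j] for j in range(n_s)) / ss
                    for i in range(n_t)]
    return temporal, spatial

# A perfectly separable matrix is recovered exactly (up to a scaling trade-off).
t0, s0 = [1.0, 2.0, 3.0], [0.5, -1.0, 2.0, 0.0]
x = [[ti * sj for sj in s0] for ti in t0]
temporal, spatial = rank1_factor(x)
```

    Stacking several such components, here spatial factors over Action Units and temporal factors over the movement's time course, yields the low-dimensional, separable manifold the abstract describes.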

  1. Temporal and spatial localization of prediction-error signals in the visual brain.

    PubMed

    Johnston, Patrick; Robinson, Jonathan; Kokkinakis, Athanasios; Ridgeway, Samuel; Simpson, Michael; Johnson, Sam; Kaufman, Jordy; Young, Andrew W

    2017-04-01

    It has been suggested that the brain pre-empts changes in the environment through generating predictions, although real-time electrophysiological evidence of prediction violations in the domain of visual perception remains elusive. In a series of experiments we showed participants sequences of images that followed a predictable implied sequence or whose final image violated the implied sequence. Through careful design we were able to use the same final image transitions across predictable and unpredictable conditions, ensuring that any differences in neural responses were due only to preceding context and not to the images themselves. EEG and MEG recordings showed that early (N170) and mid-latency (N300) visual evoked potentials were robustly modulated by images that violated the implied sequence across a range of types of image change (expression deformations, rigid rotations, and visual field location). This modulation occurred irrespective of stimulus object category. Although the stimuli were static images, MEG source reconstruction of the early latency signal (N/M170) localized expectancy violation signals to brain areas associated with motion perception. Our findings suggest that the N/M170 can index mismatches between predicted and actual visual inputs in a system that predicts trajectories based on ongoing context. More generally we suggest that the N/M170 may reflect a "family" of brain signals generated across widespread regions of the visual brain indexing the resolution of top-down influences and incoming sensory data. This has important implications for understanding the N/M170 and investigating how the brain represents context to generate perceptual predictions. Copyright © 2017 Elsevier B.V. All rights reserved.

  2. Distinctive signatures of recursion.

    PubMed

    Martins, Maurício Dias

    2012-07-19

    Although recursion has been hypothesized to be a necessary capacity for the evolution of language, the multiplicity of definitions being used has undermined the broader interpretation of empirical results. I propose that only a definition focused on representational abilities allows the prediction of specific behavioural traits that enable us to distinguish recursion from non-recursive iteration and from hierarchical embedding: only subjects able to represent recursion, i.e. to represent different hierarchical dependencies (related by parenthood) with the same set of rules, are able to generalize and produce new levels of embedding beyond those specified a priori (in the algorithm or in the input). The ability to use such representations may be advantageous in several domains: action sequencing, problem-solving, spatial navigation, social navigation and for the emergence of conventionalized communication systems. The ability to represent contiguous hierarchical levels with the same rules may lead subjects to expect unknown levels and constituents to behave similarly, and this prior knowledge may bias learning positively. Finally, a new paradigm to test for recursion is presented. Preliminary results suggest that the ability to represent recursion in the spatial domain recruits both visual and verbal resources. Implications regarding language evolution are discussed.

  3. Using CLIPS to represent knowledge in a VR simulation

    NASA Technical Reports Server (NTRS)

    Engelberg, Mark L.

    1994-01-01

    Virtual reality (VR) is an exciting use of advanced hardware and software technologies to achieve an immersive simulation. Until recently, the majority of virtual environments were merely 'fly-throughs' in which a user could freely explore a 3-dimensional world or a visualized dataset. Now that the underlying technologies are reaching a level of maturity, programmers are seeking ways to increase the complexity and interactivity of immersive simulations. In most cases, interactivity in a virtual environment can be specified in the form 'whenever such-and-such happens to object X, it reacts in the following manner.' CLIPS and COOL provide a simple and elegant framework for representing this knowledge-base in an efficient manner that can be extended incrementally. The complexity of a detailed simulation becomes more manageable when the control flow is governed by CLIPS' rule-based inference engine as opposed to by traditional procedural mechanisms. Examples in this paper will illustrate an effective way to represent VR information in CLIPS, and to tie this knowledge base to the input and output C routines of a typical virtual environment.
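
    CLIPS expresses such knowledge in its own rule syntax; as a rough Python analogue of the "whenever such-and-such happens to object X, it reacts" pattern, one can sketch a tiny forward-chaining engine (the fact format, rule list, and engine below are hypothetical illustrations, not the CLIPS API):

```python
def run_rules(facts, rules):
    """Fire every rule whose condition matches the current facts; repeat until stable."""
    changed = True
    while changed:
        changed = False
        for condition, action in rules:
            if condition(facts):
                before = dict(facts)
                action(facts)
                if facts != before:
                    changed = True
    return facts

# "Whenever the user touches the door, it opens; whenever it is open, the light turns on."
rules = [
    (lambda f: f.get("door") == "touched", lambda f: f.update(door="open")),
    (lambda f: f.get("door") == "open", lambda f: f.update(light="on")),
]
state = run_rules({"door": "touched", "light": "off"}, rules)
```

    As in the paper's argument, the control flow emerges from rule matching rather than from a hand-written procedural event loop.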

  4. Forecasting hotspots using predictive visual analytics approach

    DOEpatents

    Maciejewski, Ross; Hafen, Ryan; Rudolph, Stephen; Cleveland, William; Ebert, David

    2014-12-30

    A method for forecasting hotspots is provided. The method may include the steps of receiving input data at an input of the computational device, generating a temporal prediction based on the input data, generating a geospatial prediction based on the input data, and generating output data based on the temporal and geospatial predictions. The output data may be configured to display at least one user interface at an output of the computational device.
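
    The patented steps (temporal prediction, geospatial prediction, combination into output data) can be paraphrased in a toy sketch. Here the temporal model is a moving average of daily counts and the geospatial model a Gaussian kernel density; both are placeholder choices, not the patent's models:

```python
import math

def forecast_surface(events, grid, window=3, bandwidth=1.0):
    """events: list of (day, x, y). Return {cell: expected count} for the next day."""
    days = sorted({d for d, _, _ in events})
    recent = days[-window:]
    # Temporal prediction: average events per day over the recent window.
    expected_total = sum(1 for d, _, _ in events if d in recent) / len(recent)
    # Geospatial prediction: kernel density over past event locations.
    density = {}
    for cx, cy in grid:
        density[(cx, cy)] = sum(
            math.exp(-((cx - x) ** 2 + (cy - y) ** 2) / (2 * bandwidth ** 2))
            for _, x, y in events)
    norm = sum(density.values()) or 1.0
    # Combined output: allocate the predicted total across space.
    return {cell: expected_total * v / norm for cell, v in density.items()}

events = [(1, 1.0, 1.0), (2, 1.2, 0.9), (2, 5.0, 5.0), (3, 0.8, 1.1)]
grid = [(1, 1), (5, 5)]
surface = forecast_surface(events, grid)
```

    The resulting surface is the kind of per-cell forecast a visual-analytics interface would render as a hotspot map.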

  5. Neuron analysis of visual perception

    NASA Technical Reports Server (NTRS)

    Chow, K. L.

    1980-01-01

    The receptive fields of single cells in the visual system of cat and squirrel monkey were studied investigating the vestibular input affecting the cells, and the cell's responses during visual discrimination learning process. The receptive field characteristics of the rabbit visual system, its normal development, its abnormal development following visual deprivation, and on the structural and functional re-organization of the visual system following neo-natal and prenatal surgery were also studied. The results of each individual part of each investigation are detailed.

  6. Violent Interaction Detection in Video Based on Deep Learning

    NASA Astrophysics Data System (ADS)

    Zhou, Peipei; Ding, Qinghai; Luo, Haibo; Hou, Xinglin

    2017-06-01

    Violent interaction detection is of vital importance in some video surveillance scenarios like railway stations, prisons or psychiatric centres. Existing vision-based methods are mainly based on hand-crafted features such as statistic features between motion regions, leading to a poor adaptability to other datasets. Enlightened by the development of convolutional networks for common activity recognition, we construct a FightNet to represent the complicated visual violence interaction. In this paper, a new input modality, the image acceleration field, is proposed to better extract the motion attributes. Firstly, each video is framed as RGB images. Secondly, the optical flow field is computed using consecutive frames, and the acceleration field is obtained from the optical flow field. Thirdly, FightNet is trained with three kinds of input modalities, i.e., RGB images for the spatial network, and optical flow and acceleration images for the temporal networks. By fusing results from the different inputs, we conclude whether a video contains a violent event or not. To provide researchers a common ground for comparison, we have collected a violent interaction dataset (VID) containing 2314 videos, with 1077 fight videos and 1237 no-fight videos. By comparison with other algorithms, experimental results demonstrate that the proposed model for violent interaction detection shows higher accuracy and better robustness.
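
    Given consecutive optical-flow fields (however they were computed), the acceleration field described in the abstract is their temporal difference. A minimal sketch with hand-built flow grids (the 2x2 example values are illustrative):

```python
def acceleration_field(flow_prev, flow_next):
    """Each flow is a 2D grid of (dx, dy) motion vectors; return per-pixel change in flow."""
    return [
        [(n[0] - p[0], n[1] - p[1]) for p, n in zip(row_p, row_n)]
        for row_p, row_n in zip(flow_prev, flow_next)
    ]

# A region that speeds up between frames yields a non-zero acceleration vector.
flow_t = [[(1.0, 0.0), (0.0, 0.0)],
          [(0.0, 0.0), (0.0, 0.0)]]
flow_t1 = [[(3.0, 0.5), (0.0, 0.0)],
           [(0.0, 0.0), (0.0, 0.0)]]
accel = acceleration_field(flow_t, flow_t1)
```

    Sudden changes in motion, characteristic of fights, show up as large acceleration magnitudes, which is why the paper feeds these images to the temporal network alongside raw optical flow.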

  7. Surface- and Contour-Preserving Origamic Architecture Paper Pop-Ups.

    PubMed

    Le, Sang N; Leow, Su-Jun; Le-Nguyen, Tuong-Vu; Ruiz, Conrado; Low, Kok-Lim

    2013-08-02

    Origamic architecture (OA) is a form of papercraft that involves cutting and folding a single sheet of paper to produce a 3D pop-up, and is commonly used to depict architectural structures. Because of the strict geometric and physical constraints, OA design requires considerable skill and effort. In this paper, we present a method to automatically generate an OA design that closely depicts an input 3D model. Our algorithm is guided by a novel set of geometric conditions to guarantee the foldability and stability of the generated pop-ups. The generality of the conditions allows our algorithm to generate valid pop-up structures that are previously not accounted for by other algorithms. Our method takes a novel image-domain approach to convert the input model to an OA design. It performs surface segmentation of the input model in the image domain, and carefully represents each surface with a set of parallel patches. Patches are then modified to make the entire structure foldable and stable. Visual and quantitative comparisons of results have shown our algorithm to be significantly better than the existing methods in the preservation of contours, surfaces and volume. The designs have also been shown to more closely resemble those created by real artists.

  8. Surface and contour-preserving origamic architecture paper pop-ups.

    PubMed

    Le, Sang N; Leow, Su-Jun; Le-Nguyen, Tuong-Vu; Ruiz, Conrado; Low, Kok-Lim

    2014-02-01

    Origamic architecture (OA) is a form of papercraft that involves cutting and folding a single sheet of paper to produce a 3D pop-up, and is commonly used to depict architectural structures. Because of the strict geometric and physical constraints, OA design requires considerable skill and effort. In this paper, we present a method to automatically generate an OA design that closely depicts an input 3D model. Our algorithm is guided by a novel set of geometric conditions to guarantee the foldability and stability of the generated pop-ups. The generality of the conditions allows our algorithm to generate valid pop-up structures that are previously not accounted for by other algorithms. Our method takes a novel image-domain approach to convert the input model to an OA design. It performs surface segmentation of the input model in the image domain, and carefully represents each surface with a set of parallel patches. Patches are then modified to make the entire structure foldable and stable. Visual and quantitative comparisons of results have shown our algorithm to be significantly better than the existing methods in the preservation of contours, surfaces, and volume. The designs have also been shown to more closely resemble those created by real artists.

  9. Neuronal ensemble for visual working memory via interplay of slow and fast oscillations.

    PubMed

    Mizuhara, Hiroaki; Yamaguchi, Yoko

    2011-05-01

    The current focus of studies on neural entities for memory maintenance is on the interplay between fast neuronal oscillations in the gamma band and slow oscillations in the theta or delta band. The hierarchical coupling of slow and fast oscillations is crucial for the rehearsal of sensory inputs for short-term storage, as well as for binding sensory inputs that are represented in spatially segregated cortical areas. However, no experimental evidence for the binding of spatially segregated information has yet been presented for memory maintenance in humans. In the present study, we actively manipulated memory maintenance performance with an attentional blink procedure during human scalp electroencephalography (EEG) recordings and identified that slow oscillations are enhanced when memory maintenance is successful. These slow oscillations accompanied fast oscillations in the gamma frequency range that appeared at spatially segregated scalp sites. The amplitude of the gamma oscillation at these scalp sites was simultaneously enhanced at an EEG phase of the slow oscillation. Successful memory maintenance appears to be achieved by a rehearsal of sensory inputs together with a coordination of distributed fast oscillations at a preferred timing of the slow oscillations. © 2011 The Authors. European Journal of Neuroscience © 2011 Federation of European Neuroscience Societies and Blackwell Publishing Ltd.
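
    The slow-fast interplay described here is often quantified as phase-amplitude coupling: gamma amplitude binned by the phase of the slow oscillation. A toy sketch with synthetic, pre-extracted phase and amplitude series (a real analysis would first band-pass filter and Hilbert-transform the EEG; the index used here is a simple depth-of-modulation measure, not the study's statistic):

```python
import math

def phase_amplitude_modulation(phases, amplitudes, n_bins=8):
    """Mean fast-oscillation amplitude per slow-phase bin; return a modulation index."""
    sums = [0.0] * n_bins
    counts = [0] * n_bins
    for ph, amp in zip(phases, amplitudes):
        b = int((ph % (2 * math.pi)) / (2 * math.pi) * n_bins) % n_bins
        sums[b] += amp
        counts[b] += 1
    means = [s / c for s, c in zip(sums, counts) if c > 0]
    return (max(means) - min(means)) / (max(means) + min(means))

# Coupled case: gamma amplitude peaks at a preferred slow-oscillation phase.
phases = [2 * math.pi * 0.005 * i for i in range(2000)]
coupled = [1.0 + math.cos(ph) for ph in phases]
flat = [1.0 for _ in phases]
```

    A high index indicates that fast-oscillation amplitude is locked to a preferred phase of the slow oscillation, the signature of the coordination the abstract reports.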

  10. Modality-Driven Classification and Visualization of Ensemble Variance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bensema, Kevin; Gosink, Luke; Obermaier, Harald

    Paper for the IEEE Visualization Conference. Advances in computational power now enable domain scientists to address conceptual and parametric uncertainty by running simulations multiple times in order to sufficiently sample the uncertain input space.

  11. Beyond visualization of big data: a multi-stage data exploration approach using visualization, sonification, and storification

    NASA Astrophysics Data System (ADS)

    Rimland, Jeffrey; Ballora, Mark; Shumaker, Wade

    2013-05-01

    As the sheer volume of data grows exponentially, it becomes increasingly difficult for existing visualization techniques to keep pace. The sonification field attempts to address this issue by enlisting our auditory senses to detect anomalies or complex events that are difficult to detect via visualization alone. Storification attempts to improve analyst understanding by converting data streams into organized narratives describing the data at a higher level of abstraction than the input streams they are derived from. While these techniques hold a great deal of promise, they also each have a unique set of challenges that must be overcome. Sonification techniques must represent a broad variety of distributed heterogeneous data and present it to the analyst/listener in a manner that doesn't require extended listening - visual "snapshots" are useful, but auditory sounds only exist over time. Storification still faces many human-computer interface (HCI) challenges as well as technical hurdles related to automatically generating a logical narrative from lower-level data streams. This paper proposes a novel approach that utilizes a service oriented architecture (SOA)-based hybrid visualization/sonification/storification framework to enable distributed human-in-the-loop processing of data in a manner that makes optimized usage of both visual and auditory processing pathways, while also leveraging the value of narrative explication of data streams. It addresses the benefits and shortcomings of each processing modality and discusses information infrastructure and data representation concerns required for their utilization in a distributed environment. We present a generalizable approach with a broad range of applications including cyber security, medical informatics, facilitation of energy savings in "smart" buildings, and detection of natural and man-made disasters.
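
    A core sonification building block is parameter mapping, e.g. data values onto pitch. A minimal sketch; the frequency range and linear scale are arbitrary illustrative choices, not the framework described in the paper:

```python
def map_to_pitch(values, lo_hz=220.0, hi_hz=880.0):
    """Linearly map each data value onto a frequency range for audio rendering."""
    vmin, vmax = min(values), max(values)
    span = (vmax - vmin) or 1.0
    return [lo_hz + (v - vmin) / span * (hi_hz - lo_hz) for v in values]

stream = [3.0, 7.0, 5.0, 11.0]
pitches = map_to_pitch(stream)
```

    A synthesizer driven by the returned frequencies lets an analyst monitor the stream by ear while the visual channel attends to something else.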

  12. Visual cortex activation in late-onset, Braille naive blind individuals: an fMRI study during semantic and phonological tasks with heard words.

    PubMed

    Burton, Harold; McLaren, Donald G

    2006-01-09

    Visual cortex activity in the blind has been shown in Braille literate people, which raises the question of whether Braille literacy influences cross-modal reorganization. We used fMRI to examine visual cortex activation during semantic and phonological tasks with auditory presentation of words in two late-onset blind individuals who lacked Braille literacy. Multiple visual cortical regions were activated in the Braille naive individuals. Positive BOLD responses were noted in lower tier visuotopic (e.g., V1, V2, VP, and V3) and several higher tier visual areas (e.g., V4v, V8, and BA 37). Activity was more extensive and cross-correlation magnitudes were greater during the semantic compared to the phonological task. These results with Braille naive individuals plausibly suggest that visual deprivation alone induces visual cortex reorganization. Cross-modal reorganization of lower tier visual areas may be recruited by developing skills in attending to selected non-visual inputs (e.g., Braille literacy, enhanced auditory skills). Such learning might strengthen remote connections with multisensory cortical areas. Of necessity, the Braille naive participants must attend to auditory stimulation for language. We hypothesize that learning to attend to non-visual inputs probably strengthens the remaining active synapses following visual deprivation, and thereby increases cross-modal activation of lower tier visual areas when performing highly demanding non-visual tasks, of which reading Braille is just one example.

  13. Visual cortex activation in late-onset, Braille naive blind individuals: An fMRI study during semantic and phonological tasks with heard words

    PubMed Central

    Burton, Harold; McLaren, Donald G.

    2013-01-01

    Visual cortex activity in the blind has been shown in Braille literate people, which raises the question of whether Braille literacy influences cross-modal reorganization. We used fMRI to examine visual cortex activation during semantic and phonological tasks with auditory presentation of words in two late-onset blind individuals who lacked Braille literacy. Multiple visual cortical regions were activated in the Braille naive individuals. Positive BOLD responses were noted in lower tier visuotopic (e.g., V1, V2, VP, and V3) and several higher tier visual areas (e.g., V4v, V8, and BA 37). Activity was more extensive and cross-correlation magnitudes were greater during the semantic compared to the phonological task. These results with Braille naive individuals plausibly suggest that visual deprivation alone induces visual cortex reorganization. Cross-modal reorganization of lower tier visual areas may be recruited by developing skills in attending to selected non-visual inputs (e.g., Braille literacy, enhanced auditory skills). Such learning might strengthen remote connections with multisensory cortical areas. Of necessity, the Braille naive participants must attend to auditory stimulation for language. We hypothesize that learning to attend to non-visual inputs probably strengthens the remaining active synapses following visual deprivation, and thereby increases cross-modal activation of lower tier visual areas when performing highly demanding non-visual tasks, of which reading Braille is just one example. PMID:16198053

  14. Natural image sequences constrain dynamic receptive fields and imply a sparse code.

    PubMed

    Häusler, Chris; Susemihl, Alex; Nawrot, Martin P

    2013-11-06

    In their natural environment, animals experience a complex and dynamic visual scenery. Under such natural stimulus conditions, neurons in the visual cortex employ a spatially and temporally sparse code. For the input scenario of natural still images, previous work demonstrated that unsupervised feature learning combined with the constraint of sparse coding can predict physiologically measured receptive fields of simple cells in the primary visual cortex. This convincingly indicated that the mammalian visual system is adapted to the natural spatial input statistics. Here, we extend this approach to the time domain in order to predict dynamic receptive fields that can account for both spatial and temporal sparse activation in biological neurons. We rely on temporal restricted Boltzmann machines and suggest a novel temporal autoencoding training procedure. When tested on a dynamic multi-variate benchmark dataset this method outperformed existing models of this class. Learning features on a large dataset of natural movies allowed us to model spatio-temporal receptive fields for single neurons. They resemble temporally smooth transformations of previously obtained static receptive fields and are thus consistent with existing theories. A neuronal spike response model demonstrates how the dynamic receptive field facilitates temporal and population sparseness. We discuss the potential mechanisms and benefits of a spatially and temporally sparse representation of natural visual input. Copyright © 2013 The Authors. Published by Elsevier B.V. All rights reserved.
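
    The paper's temporal restricted Boltzmann machines are beyond a short sketch, but the sparse-coding constraint itself can be illustrated with ISTA-style inference: an L1 penalty and soft-thresholding drive most coefficients to exactly zero, so only a few "neurons" respond to a given input. This is a generic sketch assuming unit-norm dictionary atoms, not the paper's training procedure:

```python
def soft(v, t):
    """Soft-thresholding operator: shrink toward zero, clamping small values to 0."""
    return max(v - t, 0.0) if v > 0 else min(v + t, 0.0)

def sparse_infer(signal, atoms, lam=0.1, steps=50, lr=1.0):
    """ISTA: minimize ||x - D a||^2 / 2 + lam * ||a||_1 over coefficients a."""
    a = [0.0] * len(atoms)
    for _ in range(steps):
        # Residual-driven gradient step followed by soft-thresholding.
        recon = [sum(a[k] * atoms[k][j] for k in range(len(atoms)))
                 for j in range(len(signal))]
        grad = [sum((recon[j] - signal[j]) * atoms[k][j] for j in range(len(signal)))
                for k in range(len(atoms))]
        a = [soft(a[k] - lr * grad[k], lr * lam) for k in range(len(atoms))]
    return a

# A signal built from one atom of an orthonormal dictionary -> one active coefficient.
atoms = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
codes = sparse_infer([3.0, 0.0, 0.0], atoms)
```

    The exact zeros in the returned code are the population-sparseness property the abstract invokes; extending sparseness along the time axis is the paper's contribution.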

  15. Functional connectivity of visual cortex in the blind follows retinotopic organization principles

    PubMed Central

    Ovadia-Caro, Smadar; Caramazza, Alfonso; Margulies, Daniel S.; Villringer, Arno

    2015-01-01

    Is visual input during critical periods of development crucial for the emergence of the fundamental topographical mapping of the visual cortex? And would this structure be retained throughout life-long blindness or would it fade as a result of plastic, use-based reorganization? We used functional connectivity magnetic resonance imaging based on intrinsic blood oxygen level-dependent fluctuations to investigate whether significant traces of topographical mapping of the visual scene in the form of retinotopic organization, could be found in congenitally blind adults. A group of 11 fully and congenitally blind subjects and 18 sighted controls were studied. The blind demonstrated an intact functional connectivity network structural organization of the three main retinotopic mapping axes: eccentricity (centre-periphery), laterality (left-right), and elevation (upper-lower) throughout the retinotopic cortex extending to high-level ventral and dorsal streams, including characteristic eccentricity biases in face- and house-selective areas. Functional connectivity-based topographic organization in the visual cortex was indistinguishable from the normally sighted retinotopic functional connectivity structure as indicated by clustering analysis, and was found even in participants who did not have a typical retinal development in utero (microphthalmics). While the internal structural organization of the visual cortex was strikingly similar, the blind exhibited profound differences in functional connectivity to other (non-visual) brain regions as compared to the sighted, which were specific to portions of V1. Central V1 was more connected to language areas but peripheral V1 to spatial attention and control networks. These findings suggest that current accounts of critical periods and experience-dependent development should be revisited even for primary sensory areas, in that the connectivity basis for visual cortex large-scale topographical organization can develop without any visual experience and be retained through life-long experience-dependent plasticity. Furthermore, retinotopic divisions of labour, such as that between the visual cortex regions normally representing the fovea and periphery, also form the basis for topographically-unique plastic changes in the blind. PMID:25869851

  16. Anticipation in Real-world Scenes: The Role of Visual Context and Visual Memory

    ERIC Educational Resources Information Center

    Coco, Moreno I.; Keller, Frank; Malcolm, George L.

    2016-01-01

    The human sentence processor is able to make rapid predictions about upcoming linguistic input. For example, upon hearing the verb eat, anticipatory eye-movements are launched toward edible objects in a visual scene (Altmann & Kamide, 1999). However, the cognitive mechanisms that underlie anticipation remain to be elucidated in ecologically…

  17. Infant Face Preferences after Binocular Visual Deprivation

    ERIC Educational Resources Information Center

    Mondloch, Catherine J.; Lewis, Terri L.; Levin, Alex V.; Maurer, Daphne

    2013-01-01

    Early visual deprivation impairs some, but not all, aspects of face perception. We investigated the possible developmental roots of later abnormalities by using a face detection task to test infants treated for bilateral congenital cataract within 1 hour of their first focused visual input. The seven patients were between 5 and 12 weeks old…

  18. Adding sound to theory of mind: Comparing children's development of mental-state understanding in the auditory and visual realms.

    PubMed

    Hasni, Anita A; Adamson, Lauren B; Williamson, Rebecca A; Robins, Diana L

    2017-12-01

    Theory of mind (ToM) gradually develops during the preschool years. Measures of ToM usually target visual experience, but auditory experiences also provide valuable social information. Given differences between the visual and auditory modalities (e.g., sights persist, sounds fade) and the important role environmental input plays in social-cognitive development, we asked whether modality might influence the progression of ToM development. The current study expands Wellman and Liu's ToM scale (2004) by testing 66 preschoolers using five standard visual ToM tasks and five newly crafted auditory ToM tasks. Age and gender effects were found, with 4- and 5-year-olds demonstrating greater ToM abilities than 3-year-olds and girls passing more tasks than boys; there was no significant effect of modality. Both visual and auditory tasks formed a scalable set. These results indicate that there is considerable consistency in when children are able to use visual and auditory inputs to reason about various aspects of others' mental states. Copyright © 2017 Elsevier Inc. All rights reserved.

  19. Visual BOLD Response in Late Blind Subjects with Argus II Retinal Prosthesis

    PubMed Central

    Castaldi, E.; Cicchini, G. M.; Cinelli, L.; Rizzo, S.; Morrone, M. C.

    2016-01-01

    Retinal prosthesis technologies require that the visual system downstream of the retinal circuitry be capable of transmitting and elaborating visual signals. We studied the capability of plastic remodeling in late blind subjects implanted with the Argus II Retinal Prosthesis with psychophysics and functional MRI (fMRI). After surgery, six out of seven retinitis pigmentosa (RP) blind subjects were able to detect high-contrast stimuli using the prosthetic implant. However, direction discrimination to contrast modulated stimuli remained at chance level in all of them. No subject showed any improvement of contrast sensitivity in either eye when not using the Argus II. Before the implant, the Blood Oxygenation Level Dependent (BOLD) activity in V1 and the lateral geniculate nucleus (LGN) was very weak or absent. Surprisingly, after prolonged use of Argus II, BOLD responses to visual input were enhanced. This is, to our knowledge, the first study tracking the neural changes of visual areas in patients after retinal implant, revealing a capacity to respond to restored visual input even after years of deprivation. PMID:27780207

  20. Enhanced learning of natural visual sequences in newborn chicks.

    PubMed

    Wood, Justin N; Prasad, Aditya; Goldman, Jason G; Wood, Samantha M W

    2016-07-01

    To what extent are newborn brains designed to operate over natural visual input? To address this question, we used a high-throughput controlled-rearing method to examine whether newborn chicks (Gallus gallus) show enhanced learning of natural visual sequences at the onset of vision. We took the same set of images and grouped them into either natural sequences (i.e., sequences showing different viewpoints of the same real-world object) or unnatural sequences (i.e., sequences showing different images of different real-world objects). When raised in virtual worlds containing natural sequences, newborn chicks developed the ability to recognize familiar images of objects. Conversely, when raised in virtual worlds containing unnatural sequences, newborn chicks' object recognition abilities were severely impaired. In fact, the majority of the chicks raised with the unnatural sequences failed to recognize familiar images of objects despite acquiring over 100 h of visual experience with those images. Thus, newborn chicks show enhanced learning of natural visual sequences at the onset of vision. These results indicate that newborn brains are designed to operate over natural visual input.

  1. Statistical-mechanical analysis of self-organization and pattern formation during the development of visual maps

    NASA Astrophysics Data System (ADS)

    Obermayer, K.; Blasdel, G. G.; Schulten, K.

    1992-05-01

    We report a detailed analytical and numerical model study of pattern formation during the development of visual maps, namely, the formation of topographic maps and orientation and ocular dominance columns in the striate cortex. Pattern formation is described by a stimulus-driven Markovian process, the self-organizing feature map. This algorithm generates topologically correct maps between a space of (visual) input signals and an array of formal "neurons," which in our model represents the cortex. We define order parameters that are a function of the set of visual stimuli an animal perceives, and we demonstrate that the formation of orientation and ocular dominance columns is the result of a global instability of the retinotopic projection above a critical value of these order parameters. We characterize the spatial structure of the emerging patterns by power spectra, correlation functions, and Gabor transforms, and we compare model predictions with experimental data obtained from the striate cortex of the macaque monkey with optical imaging. Above the critical value of the order parameters the model predicts a lateral segregation of the striate cortex into (i) binocular regions with linear changes in orientation preference, where iso-orientation slabs run perpendicular to the ocular dominance bands, and (ii) monocular regions with low orientation specificity, which contain the singularities of the orientation map. Some of these predictions have already been verified by experiments.
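    The self-organizing feature map driving this model can be sketched in a few lines. This is a generic Kohonen update in NumPy, not the authors' parameterization; the grid size and the learning-rate and neighborhood schedules are illustrative assumptions:

```python
import numpy as np

def train_som(stimuli, grid=(10, 10), n_iter=2000, seed=0):
    """Kohonen self-organizing feature map: each grid 'neuron' learns a
    weight vector; updating a grid neighborhood around the best-matching
    unit yields a topographically ordered map of the input space."""
    rng = np.random.default_rng(seed)
    gy, gx = np.mgrid[0:grid[0], 0:grid[1]]
    coords = np.column_stack([gy.ravel(), gx.ravel()]).astype(float)
    W = rng.random((grid[0] * grid[1], stimuli.shape[1]))
    for t in range(n_iter):
        eta = 0.5 * (1 - t / n_iter)            # decaying learning rate
        sigma = 3.0 * (1 - t / n_iter) + 0.5    # shrinking neighborhood radius
        x = stimuli[rng.integers(len(stimuli))]  # one random stimulus per step
        winner = np.argmin(((W - x) ** 2).sum(axis=1))  # best-matching unit
        h = np.exp(-((coords - coords[winner]) ** 2).sum(axis=1)
                   / (2 * sigma ** 2))
        W += eta * h[:, None] * (x - W)          # pull neighborhood toward input
    return W

stimuli = np.random.default_rng(1).random((500, 2))  # uniform 2-D "retina"
W = train_som(stimuli)
print(W.shape)
```

    After training, neighboring grid units end up with similar weight vectors, which is the topographic ordering the abstract exploits to model cortical map formation.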

  2. Diversity and wiring variability of visual local neurons in the Drosophila medulla M6 stratum

    PubMed Central

    Chin, An-Lun; Lin, Chih-Yung; Fu, Tsai-Feng; Dickson, Barry J; Chiang, Ann-Shyn

    2014-01-01

    Local neurons in the vertebrate retina are instrumental in transforming visual inputs to extract contrast, motion, and color information and in shaping bipolar-to-ganglion cell transmission to the brain. In Drosophila, UV vision is represented by R7 inner photoreceptor neurons that project to the medulla M6 stratum, with relatively little known of this downstream substrate. Here, using R7 terminals as references, we generated a 3D volume model of the M6 stratum, which revealed a retinotopic map for UV representations. Using this volume model as a common 3D framework, we compiled and analyzed the spatial distributions of more than 200 single M6-specific local neurons (M6-LNs). Based on the segregation of putative dendrites and axons, these local neurons were classified into two families, directional and nondirectional. Neurotransmitter immunostaining suggested a signal routing model in which some visual information is relayed by directional M6-LNs from the anterior to the posterior M6 and all visual information is inhibited by a diverse population of nondirectional M6-LNs covering the entire M6 stratum. Our findings suggest that the Drosophila medulla M6 stratum contains diverse LNs that form repeating functional modules similar to those found in the vertebrate inner plexiform layer. J. Comp. Neurol. 522:3795–3816, 2014. © 2014 Wiley Periodicals, Inc. PMID:24782245

  3. How cortical neurons help us see: visual recognition in the human brain

    PubMed Central

    Blumberg, Julie; Kreiman, Gabriel

    2010-01-01

    Through a series of complex transformations, the pixel-like input to the retina is converted into rich visual perceptions that constitute an integral part of visual recognition. Multiple visual problems arise due to damage or developmental abnormalities in the cortex of the brain. Here, we provide an overview of how visual information is processed along the ventral visual cortex in the human brain. We discuss how neurophysiological recordings in macaque monkeys and in humans can help us understand the computations performed by visual cortex. PMID:20811161

  4. Intrinsic-Signal Optical Imaging Reveals Cryptic Ocular Dominance Columns in Primary Visual Cortex of New World Owl Monkeys

    PubMed Central

    Kaskan, Peter M.; Lu, Haidong D.; Dillenburger, Barbara C.; Roe, Anna W.; Kaas, Jon H.

    2007-01-01

    A significant concept in neuroscience is that sensory areas of the neocortex have evolved the remarkable ability to represent a number of stimulus features within the confines of a global map of the sensory periphery. Modularity, the term often used to describe the inhomogeneous nature of the neocortex, is without a doubt an important organizational principle of early sensory areas, such as the primary visual cortex (V1). Ocular dominance columns, one type of module in V1, are found in many primate species as well as in carnivores. Yet, their variable presence in some New World monkey species and complete absence in other species has been enigmatic. Here, we demonstrate that optical imaging reveals the presence of ocular dominance columns in the superficial layers of V1 of owl monkeys (Aotus trivirgatus), even though the geniculate inputs related to each eye are highly overlapping in layer 4. The ocular dominance columns in owl monkeys revealed by optical imaging are circular in appearance. The distance between left eye centers and right eye centers is approximately 650 μm. We find no relationship between ocular dominance centers and other modular organizational features such as orientation pinwheels or the centers of the cytochrome oxidase blobs. These results are significant because they suggest that functional columns may exist in the absence of obvious differences in the distributions of activating inputs and ocular dominance columns may be more widely distributed across mammalian taxa than commonly suggested. PMID:18974855

  5. Scientific Visualization and Computational Science: Natural Partners

    NASA Technical Reports Server (NTRS)

    Uselton, Samuel P.; Lasinski, T. A. (Technical Monitor)

    1995-01-01

    Scientific visualization is developing rapidly, stimulated by computational science, which is gaining acceptance as a third alternative to theory and experiment. Computational science is based on numerical simulations of mathematical models derived from theory. But each individual simulation is like a hypothetical experiment; initial conditions are specified, and the result is a record of the observed conditions. Experiments can be simulated for situations that cannot really be created or controlled. Results impossible to measure can be computed. Even for observable values, computed samples are typically much denser. Numerical simulations also extend scientific exploration where the mathematics is analytically intractable. Numerical simulations are used to study phenomena from subatomic to intergalactic scales and from abstract mathematical structures to pragmatic engineering of everyday objects. But computational science methods would be almost useless without visualization. The obvious reason is that the huge amounts of data produced require the high bandwidth of the human visual system, and interactivity adds to the power. Visualization systems also provide a single context for all the activities involved from debugging the simulations, to exploring the data, to communicating the results. Most of the presentations today have their roots in image processing, where the fundamental task is: Given an image, extract information about the scene. Visualization has developed from computer graphics, and the inverse task: Given a scene description, make an image. Visualization extends the graphics paradigm by expanding the possible input. The goal is still to produce images; the difficulty is that the input is not a scene description displayable by standard graphics methods. Visualization techniques must either transform the data into a scene description or extend graphics techniques to display this odd input. Computational science is a fertile field for visualization research because the results vary so widely and include things that have no known appearance. The amount of data creates additional challenges for both hardware and software systems. Evaluations of visualization should ultimately reflect the insight gained into the scientific phenomena. So making good visualizations requires consideration of characteristics of the user and the purpose of the visualization. Knowledge about human perception and graphic design is also relevant. It is this breadth of knowledge that stimulates proposals for multidisciplinary visualization teams and intelligent visualization assistant software. Visualization is an immature field, but computational science is stimulating research on a broad front.

  6. Neuroplasticity and amblyopia: vision at the balance point.

    PubMed

    Tailor, Vijay K; Schwarzkopf, D Samuel; Dahlmann-Noor, Annegret H

    2017-02-01

    New insights into triggers and brakes of plasticity in the visual system are being translated into new treatment approaches which may improve outcomes not only in children, but also in adults. Visual experience-driven plasticity is greatest in early childhood, triggered by maturation of inhibitory interneurons which facilitate strengthening of synchronous synaptic connections, and inactivation of others. Normal binocular development leads to progressive refinement of monocular visual acuity, stereoacuity and fusion of images from both eyes. At the end of the 'critical period', structural and functional brakes such as dampening of acetylcholine receptor signalling and formation of perineuronal nets limit further synaptic remodelling. Imbalanced visual input from the two eyes can lead to imbalanced neural processing and permanent visual deficits, the commonest of which is amblyopia. The efficacy of new behavioural, physical and pharmacological interventions aiming to balance visual input and visual processing have been described in humans, and some are currently under evaluation in randomised controlled trials. Outcomes may change amblyopia treatment for children and adults, but the safety of new approaches will need careful monitoring, as permanent adverse events may occur when plasticity is re-induced after the end of the critical period. Video abstract: http://links.lww.com/CONR/A42.

  7. Reasoning over taxonomic change: exploring alignments for the Perelleschus use case.

    PubMed

    Franz, Nico M; Chen, Mingmin; Yu, Shizhuo; Kianmajd, Parisa; Bowers, Shawn; Ludäscher, Bertram

    2015-01-01

    Classifications and phylogenetic inferences of organismal groups change in light of new insights. Over time these changes can result in an imperfect tracking of taxonomic perspectives through the re-/use of Code-compliant or informal names. To mitigate these limitations, we introduce a novel approach for aligning taxonomies through the interaction of human experts and logic reasoners. We explore the performance of this approach with the Perelleschus use case of Franz & Cardona-Duque (2013). The use case includes six taxonomies published from 1936 to 2013, 54 taxonomic concepts (i.e., circumscriptions of names individuated according to their respective source publications), and 75 expert-asserted Region Connection Calculus articulations (e.g., congruence, proper inclusion, overlap, or exclusion). An Open Source reasoning toolkit is used to analyze 13 paired Perelleschus taxonomy alignments under heterogeneous constraints and interpretations. The reasoning workflow optimizes the logical consistency and expressiveness of the input and infers the set of maximally informative relations among the entailed taxonomic concepts. The latter are then used to produce merge visualizations that represent all congruent and non-congruent taxonomic elements among the aligned input trees. In this small use case with 6-53 input concepts per alignment, the information gained through the reasoning process is on average one order of magnitude greater than in the input. The approach offers scalable solutions for tracking provenance among succeeding taxonomic perspectives that may have differential biases in naming conventions, phylogenetic resolution, ingroup and outgroup sampling, or ostensive (member-referencing) versus intensional (property-referencing) concepts and articulations.

  8. Excitatory synaptic inputs to mouse on-off direction-selective retinal ganglion cells lack direction tuning.

    PubMed

    Park, Silvia J H; Kim, In-Jung; Looger, Loren L; Demb, Jonathan B; Borghuis, Bart G

    2014-03-12

    Direction selectivity represents a fundamental visual computation. In mammalian retina, On-Off direction-selective ganglion cells (DSGCs) respond strongly to motion in a preferred direction and weakly to motion in the opposite, null direction. Electrical recordings suggested three direction-selective (DS) synaptic mechanisms: DS GABA release during null-direction motion from starburst amacrine cells (SACs) and DS acetylcholine and glutamate release during preferred direction motion from SACs and bipolar cells. However, evidence for DS acetylcholine and glutamate release has been inconsistent and at least one bipolar cell type that contacts another DSGC (On-type) lacks DS release. Here, whole-cell recordings in mouse retina showed that cholinergic input to On-Off DSGCs lacked DS, whereas the remaining (glutamatergic) input showed apparent DS. Fluorescence measurements with the glutamate biosensor intensity-based glutamate-sensing fluorescent reporter (iGluSnFR) conditionally expressed in On-Off DSGCs showed that glutamate release in both On- and Off-layer dendrites lacked DS, whereas simultaneously recorded excitatory currents showed apparent DS. With GABA-A receptors blocked, both iGluSnFR signals and excitatory currents lacked DS. Our measurements rule out DS release from bipolar cells onto On-Off DSGCs and support a theoretical model suggesting that apparent DS excitation in voltage-clamp recordings results from inadequate voltage control of DSGC dendrites during null-direction inhibition. SAC GABA release is the apparent sole source of DS input onto On-Off DSGCs.

  9. Comparing Auditory-Only and Audiovisual Word Learning for Children with Hearing Loss.

    PubMed

    McDaniel, Jena; Camarata, Stephen; Yoder, Paul

    2018-05-15

    Although reducing visual input to emphasize auditory cues is a common practice in pediatric auditory (re)habilitation, the extant literature offers minimal empirical evidence for whether unisensory auditory-only (AO) or multisensory audiovisual (AV) input is more beneficial to children with hearing loss for developing spoken language skills. Using an adapted alternating treatments single case research design, we evaluated the effectiveness and efficiency of a receptive word learning intervention with and without access to visual speechreading cues. Four preschool children with prelingual hearing loss participated. Based on probes without visual cues, three participants demonstrated strong evidence for learning in the AO and AV conditions relative to a control (no-teaching) condition. No participants demonstrated a differential rate of learning between AO and AV conditions. Neither an inhibitory effect predicted by a unisensory theory nor a beneficial effect predicted by a multisensory theory for providing visual cues was identified. Clinical implications are discussed.

  10. Virtual Earth System Laboratory (VESL): Effective Visualization of Earth System Data and Process Simulations

    NASA Astrophysics Data System (ADS)

    Quinn, J. D.; Larour, E. Y.; Cheng, D. L. C.; Halkides, D. J.

    2016-12-01

    The Virtual Earth System Laboratory (VESL) is a Web-based tool, under development at the Jet Propulsion Laboratory and UC Irvine, for the visualization of Earth System data and process simulations. It contains features geared toward a range of applications, spanning research and outreach. It offers an intuitive user interface, in which model inputs are changed using sliders and other interactive components. Current capabilities include simulation of polar ice sheet responses to climate forcing, based on NASA's Ice Sheet System Model (ISSM). We believe that the visualization of data is most effective when tailored to the target audience, and that many of the best practices for modern Web design/development can be applied directly to the visualization of data: use of negative space, color schemes, typography, accessibility standards, tooltips, et cetera. We present our prototype website, and invite input from potential users, including researchers, educators, and students.

  11. A neural mechanism of dynamic gating of task-relevant information by top-down influence in primary visual cortex.

    PubMed

    Kamiyama, Akikazu; Fujita, Kazuhisa; Kashimori, Yoshiki

    2016-12-01

    Visual recognition involves bidirectional information flow, which consists of bottom-up information coding from retina and top-down information coding from higher visual areas. Recent studies have demonstrated the involvement of early visual areas such as primary visual area (V1) in recognition and memory formation. V1 neurons are not passive transformers of sensory inputs but work as adaptive processors, changing their function according to behavioral context. Top-down signals affect tuning property of V1 neurons and contribute to the gating of sensory information relevant to behavior. However, little is known about the neuronal mechanism underlying the gating of task-relevant information in V1. To address this issue, we focus on task-dependent tuning modulations of V1 neurons in two tasks of perceptual learning. We develop a model of the V1, which receives feedforward input from lateral geniculate nucleus and top-down input from a higher visual area. We show here that the change in a balance between excitation and inhibition in V1 connectivity is necessary for gating task-relevant information in V1. The balance change well accounts for the modulations of tuning characteristic and temporal properties of V1 neuronal responses. We also show that the balance change of V1 connectivity is shaped by top-down signals with temporal correlations reflecting the perceptual strategies of the two tasks. We propose a learning mechanism by which synaptic balance is modulated. To conclude, top-down signals change the synaptic balance between excitation and inhibition in V1 connectivity, enabling early visual areas such as V1 to gate context-dependent information under multiple task performances. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
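    The qualitative effect of an excitation/inhibition balance change on tuning can be illustrated with a toy ring model. This is a hedged sketch, not the authors' network model; the Gaussian feedforward tuning, untuned inhibition, and rectification are all illustrative assumptions:

```python
import numpy as np

def v1_response(theta_pref, stim_theta, g_exc=1.0, g_inh=0.0, top_down=0.0):
    """Toy V1 ring model: Gaussian feedforward orientation tuning plus a
    top-down drive, minus untuned inhibition; rectification gives the rate."""
    d = np.angle(np.exp(1j * 2 * (theta_pref - stim_theta))) / 2  # circular diff
    feedforward = np.exp(-d ** 2 / (2 * 0.4 ** 2))
    return np.maximum(g_exc * feedforward + top_down - g_inh, 0.0)

prefs = np.linspace(-np.pi / 2, np.pi / 2, 180, endpoint=False)
broad = v1_response(prefs, 0.0, g_inh=0.0)
sharp = v1_response(prefs, 0.0, g_inh=0.5)    # stronger untuned inhibition
width = lambda r: np.mean(r > r.max() / 2)    # fraction of ring above half-max
print(width(broad), width(sharp))
```

    Raising the inhibitory term narrows the population of neurons that respond above half-maximum, a simple version of how a shifted excitation/inhibition balance can sharpen tuning and thereby gate which information passes through V1.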

  12. The answer is blowing in the wind: free-flying honeybees can integrate visual and mechano-sensory inputs for making complex foraging decisions.

    PubMed

    Ravi, Sridhar; Garcia, Jair E; Wang, Chun; Dyer, Adrian G

    2016-11-01

    Bees navigate in complex environments using visual, olfactory and mechano-sensorial cues. In the lowest region of the atmosphere, the wind environment can be highly unsteady and bees employ fine motor-skills to enhance flight control. Recent work reveals sophisticated multi-modal processing of visual and olfactory channels by the bee brain to enhance foraging efficiency, but it currently remains unclear whether wind-induced mechano-sensory inputs are also integrated with visual information to facilitate decision making. Individual honeybees were trained in a linear flight arena with appetitive-aversive differential conditioning to use a context-setting cue of 3 m s⁻¹ cross-wind direction to enable decisions about either a 'blue' or 'yellow' star stimulus being the correct alternative. Colour stimuli properties were mapped in bee-specific opponent-colour spaces to validate saliency, and to thus enable rapid reverse learning. Bees were able to integrate mechano-sensory and visual information to facilitate decisions that were significantly different to chance expectation after 35 learning trials. An independent group of bees were trained to find a single rewarding colour that was unrelated to the wind direction. In these trials, wind was not used as a context-setting cue and served only as a potential distracter in identifying the relevant rewarding visual stimuli. Comparison between respective groups shows that bees can learn to integrate visual and mechano-sensory information in a non-elemental fashion, revealing an unsuspected level of sensory processing in honeybees, and adding to the growing body of knowledge on the capacity of insect brains to use multi-modal sensory inputs in mediating foraging behaviour. © 2016. Published by The Company of Biologists Ltd.

  13. A comparison of ordinary fuzzy and intuitionistic fuzzy approaches in visualizing the image of flat electroencephalography

    NASA Astrophysics Data System (ADS)

    Zenian, Suzelawati; Ahmad, Tahir; Idris, Amidora

    2017-09-01

    Medical imaging is a subfield in image processing that deals with medical images. It is very crucial in visualizing the body parts in a non-invasive way by using appropriate image processing techniques. Generally, image processing is used to enhance the visual appearance of images for further interpretation. However, the pixel values of an image may not be precise as uncertainty arises within the gray values of an image due to several factors. In this paper, the input and output images of Flat Electroencephalography (fEEG) of an epileptic patient at varied times are presented. Furthermore, ordinary fuzzy and intuitionistic fuzzy approaches are implemented to the input images and the results are compared between these two approaches.
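    The two approaches being compared can be sketched on a toy gray-level array. This is illustrative only: the intensification step is the standard fuzzy INT operator, and the intuitionistic construction uses a Sugeno-type nonmembership with a hesitation degree, which may differ from the exact formulation in the paper:

```python
import numpy as np

def fuzzify(img):
    """Map gray levels to membership values in [0, 1]."""
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo + 1e-12)

def intensify(mu):
    """Ordinary fuzzy INT operator: push memberships away from 0.5,
    increasing contrast between dark and bright regions."""
    return np.where(mu <= 0.5, 2 * mu ** 2, 1 - 2 * (1 - mu) ** 2)

def intuitionistic(mu, lam=0.8):
    """Sugeno-type intuitionistic view: a nonmembership nu and a hesitation
    degree pi = 1 - mu - nu model the extra uncertainty in gray values;
    here half the hesitation is folded back into the membership."""
    nu = (1 - mu) / (1 + lam * mu)
    pi = 1 - mu - nu
    return mu + pi / 2, pi

img = np.array([[10, 40], [200, 250]], dtype=float)  # toy 2x2 "image"
mu = fuzzify(img)
print(intensify(mu))          # ordinary fuzzy enhancement
mu_mod, pi = intuitionistic(mu)
print(pi)                     # nonzero hesitation for intermediate grays
```

    The practical difference is the hesitation degree: ordinary fuzzy sets commit each pixel to a single membership value, while the intuitionistic version keeps an explicit uncertainty term that the enhancement can redistribute.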

  14. High visual resolution matters in audiovisual speech perception, but only for some.

    PubMed

    Alsius, Agnès; Wayne, Rachel V; Paré, Martin; Munhall, Kevin G

    2016-07-01

    The basis for individual differences in the degree to which visual speech input enhances comprehension of acoustically degraded speech is largely unknown. Previous research indicates that fine facial detail is not critical for visual enhancement when auditory information is available; however, these studies did not examine individual differences in ability to make use of fine facial detail in relation to audiovisual speech perception ability. Here, we compare participants based on their ability to benefit from visual speech information in the presence of an auditory signal degraded with noise, modulating the resolution of the visual signal through low-pass spatial frequency filtering and monitoring gaze behavior. Participants who benefited most from the addition of visual information (high visual gain) were more adversely affected by the removal of high spatial frequency information, compared to participants with low visual gain, for materials with both poor and rich contextual cues (i.e., words and sentences, respectively). Differences as a function of gaze behavior between participants with the highest and lowest visual gains were observed only for words, with participants with the highest visual gain fixating longer on the mouth region. Our results indicate that the individual variance in audiovisual speech in noise performance can be accounted for, in part, by better use of fine facial detail information extracted from the visual signal and increased fixation on mouth regions for short stimuli. Thus, for some, audiovisual speech perception may suffer when the visual input (in addition to the auditory signal) is less than perfect.
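    The low-pass spatial frequency manipulation described in this abstract can be sketched as a simple mask on the 2-D Fourier spectrum. This is an illustrative stand-in for the study's filtering of talker video frames; the cutoff and pixels-per-degree values are hypothetical:

```python
import numpy as np

def lowpass_filter(img, cutoff_cpd, pixels_per_degree):
    """Remove spatial frequencies above cutoff_cpd (cycles/degree) by
    zeroing the corresponding 2-D Fourier coefficients."""
    fy = np.fft.fftfreq(img.shape[0]) * pixels_per_degree  # cycles/degree
    fx = np.fft.fftfreq(img.shape[1]) * pixels_per_degree
    radius = np.hypot(*np.meshgrid(fy, fx, indexing="ij"))
    mask = radius <= cutoff_cpd                 # keep only low frequencies
    return np.real(np.fft.ifft2(np.fft.fft2(img) * mask))

rng = np.random.default_rng(0)
frame = rng.random((64, 64))                    # stand-in for a video frame
blurred = lowpass_filter(frame, cutoff_cpd=2.0, pixels_per_degree=32.0)
print(blurred.std(), frame.std())               # high frequencies carry variance
```

    Lowering the cutoff removes fine facial detail while preserving the coarse luminance structure (the DC term and low frequencies), which is exactly the degradation axis manipulated in the experiment.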

  15. Neocortical Rebound Depolarization Enhances Visual Perception

    PubMed Central

    Funayama, Kenta; Ban, Hiroshi; Chan, Allen W.; Matsuki, Norio; Murphy, Timothy H.; Ikegaya, Yuji

    2015-01-01

    Animals are constantly exposed to the time-varying visual world. Because visual perception is modulated by immediately prior visual experience, visual cortical neurons may register recent visual history into a specific form of offline activity and link it to later visual input. To examine how preceding visual inputs interact with upcoming information at the single neuron level, we designed a simple stimulation protocol in which a brief, orientated flashing stimulus was subsequently coupled to visual stimuli with identical or different features. Using in vivo whole-cell patch-clamp recording and functional two-photon calcium imaging from the primary visual cortex (V1) of awake mice, we discovered that a flash of sinusoidal grating per se induces an early, transient activation as well as a long-delayed reactivation in V1 neurons. This late response, which started hundreds of milliseconds after the flash and persisted for approximately 2 s, was also observed in human V1 electroencephalogram. When another drifting grating stimulus arrived during the late response, the V1 neurons exhibited a sublinear, but apparently increased response, especially to the same grating orientation. In behavioral tests of mice and humans, the flashing stimulation enhanced the detection power of the identically orientated visual stimulation only when the second stimulation was presented during the time window of the late response. Therefore, V1 late responses likely provide a neural basis for admixing temporally separated stimuli and extracting identical features in time-varying visual environments. PMID:26274866

  16. Auditory and visual interactions between the superior and inferior colliculi in the ferret.

    PubMed

    Stitt, Iain; Galindo-Leon, Edgar; Pieper, Florian; Hollensteiner, Karl J; Engler, Gerhard; Engel, Andreas K

    2015-05-01

    The integration of visual and auditory spatial information is important for building an accurate perception of the external world, but the fundamental mechanisms governing such audiovisual interaction have only partially been resolved. The earliest interface between auditory and visual processing pathways is in the midbrain, where the superior (SC) and inferior colliculi (IC) are reciprocally connected in an audiovisual loop. Here, we investigate the mechanisms of audiovisual interaction in the midbrain by recording neural signals from the SC and IC simultaneously in anesthetized ferrets. Visual stimuli reliably produced band-limited phase locking of IC local field potentials (LFPs) in two distinct frequency bands: 6-10 and 15-30 Hz. These visual LFP responses co-localized with robust auditory responses that were characteristic of the IC. Imaginary coherence analysis confirmed that visual responses in the IC were not volume-conducted signals from the neighboring SC. Visual responses in the IC occurred later than retinally driven superficial SC layers and earlier than deep SC layers that receive indirect visual inputs, suggesting that retinal inputs do not drive visually evoked responses in the IC. In addition, SC and IC recording sites with overlapping visual spatial receptive fields displayed stronger functional connectivity than sites with separate receptive fields, indicating that visual spatial maps are aligned across both midbrain structures. Reciprocal coupling between the IC and SC therefore probably serves the dynamic integration of visual and auditory representations of space. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
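    The imaginary coherence analysis used to rule out volume conduction rests on a simple fact: volume-conducted copies of a signal arrive with zero phase lag, so they contribute only to the real part of the coherency, whereas genuinely lagged coupling produces a nonzero imaginary part. A minimal numpy sketch, with synthetic "SC" and "IC" trials standing in for real recordings:

```python
import numpy as np

def imaginary_coherence(x, y):
    """Imaginary part of coherency between two sets of trials
    (trials x samples). Zero-lag (volume-conducted) coupling
    contributes nothing to the imaginary part."""
    X = np.fft.rfft(x, axis=1)
    Y = np.fft.rfft(y, axis=1)
    Sxy = (X * np.conj(Y)).mean(axis=0)      # cross-spectrum
    Sxx = (np.abs(X) ** 2).mean(axis=0)
    Syy = (np.abs(Y) ** 2).mean(axis=0)
    return np.imag(Sxy / np.sqrt(Sxx * Syy))

rng = np.random.default_rng(1)
n_trials, n_samp, fs = 200, 256, 256
t = np.arange(n_samp) / fs
phase = rng.uniform(0, 2 * np.pi, n_trials)[:, None]

# "SC" and "IC" both see an 8 Hz rhythm; IC with a quarter-cycle lag
sc = np.sin(2 * np.pi * 8 * t + phase) \
     + 0.5 * rng.standard_normal((n_trials, n_samp))
ic = np.sin(2 * np.pi * 8 * t + phase - np.pi / 2) \
     + 0.5 * rng.standard_normal((n_trials, n_samp))

freqs = np.fft.rfftfreq(n_samp, 1 / fs)
icoh = imaginary_coherence(sc, ic)
# lagged coupling shows up at 8 Hz...
assert abs(icoh[freqs == 8][0]) > 0.5
# ...but a purely volume-conducted copy (signal vs itself) does not
assert abs(imaginary_coherence(sc, sc)[freqs == 8][0]) < 1e-9
```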

  17. Inhibition to excitation ratio regulates visual system responses and behavior in vivo.

    PubMed

    Shen, Wanhua; McKeown, Caroline R; Demas, James A; Cline, Hollis T

    2011-11-01

    The balance of inhibitory to excitatory (I/E) synaptic inputs is thought to control information processing and behavioral output of the central nervous system. We sought to test the effects of the decreased or increased I/E ratio on visual circuit function and visually guided behavior in Xenopus tadpoles. We selectively decreased inhibitory synaptic transmission in optic tectal neurons by knocking down the γ2 subunit of the GABA(A) receptors (GABA(A)R) using antisense morpholino oligonucleotides or by expressing a peptide corresponding to an intracellular loop of the γ2 subunit, called ICL, which interferes with anchoring GABA(A)R at synapses. Recordings of miniature inhibitory postsynaptic currents (mIPSCs) and miniature excitatory PSCs (mEPSCs) showed that these treatments decreased the frequency of mIPSCs compared with control tectal neurons without affecting mEPSC frequency, resulting in an ∼50% decrease in the ratio of I/E synaptic input. ICL expression and γ2-subunit knockdown also decreased the ratio of optic nerve-evoked synaptic I/E responses. We recorded visually evoked responses from optic tectal neurons, in which the synaptic I/E ratio was decreased. Decreasing the synaptic I/E ratio in tectal neurons increased the variance of first spike latency in response to full-field visual stimulation, increased recurrent activity in the tectal circuit, enlarged spatial receptive fields, and lengthened the temporal integration window. We used the benzodiazepine, diazepam (DZ), to increase inhibitory synaptic activity. DZ increased optic nerve-evoked inhibitory transmission but did not affect evoked excitatory currents, resulting in an increase in the I/E ratio of ∼30%. Increasing the I/E ratio with DZ decreased the variance of first spike latency, decreased spatial receptive field size, and lengthened temporal receptive fields. 
Sequential recordings of spikes and excitatory and inhibitory synaptic inputs to the same visual stimuli demonstrated that decreasing or increasing the I/E ratio disrupted input/output relations. We assessed the effect of an altered I/E ratio on a visually guided behavior that requires the optic tectum. Increasing and decreasing I/E in tectal neurons blocked the tectally mediated visual avoidance behavior. Because ICL expression, γ2-subunit knockdown, and DZ did not directly affect excitatory synaptic transmission, we interpret the results of our study as evidence that partially decreasing or increasing the ratio of I/E disrupts several measures of visual system information processing and visually guided behavior in an intact vertebrate.

  18. A common evolutionary origin for the ON- and OFF-edge motion detection pathways of the Drosophila visual system

    PubMed Central

    Shinomiya, Kazunori; Takemura, Shin-ya; Rivlin, Patricia K.; Plaza, Stephen M.; Scheffer, Louis K.; Meinertzhagen, Ian A.

    2015-01-01

    Synaptic circuits for identified behaviors in the Drosophila brain have typically been considered from either a developmental or functional perspective without reference to how the circuits might have been inherited from ancestral forms. For example, two candidate pathways for ON- and OFF-edge motion detection in the visual system act via circuits that use respectively either T4 or T5, two cell types of the fourth neuropil, or lobula plate (LOP), that exhibit narrow-field direction-selective responses and provide input to wide-field tangential neurons. T4 and T5 both have four subtypes that terminate one each in the four strata of the LOP. Representatives are reported in a wide range of Diptera, and both cell types exhibit various similarities in: (1) the morphology of their dendritic arbors; (2) their four morphological and functional subtypes; (3) their cholinergic profile in Drosophila; (4) their input from the pathways of L3 cells in the first neuropil, or lamina (LA), and by one of a pair of LA cells, L1 (to the T4 pathway) and L2 (to the T5 pathway); and (5) their innervation by a single, wide-field contralateral tangential neuron from the central brain. Progenitors of both also express the gene atonal early in their proliferation from the inner anlage of the developing optic lobe, being alone among many other cell type progeny to do so. Yet T4 receives input in the second neuropil, or medulla (ME), and T5 in the third neuropil or lobula (LO). Here we suggest that these two cell types were originally one, that their ancestral cell population duplicated and split to innervate separate ME and LO neuropils, and that a fiber crossing—the internal chiasma—arose between the two neuropils. The split most plausibly occurred, we suggest, with the formation of the LO as a new neuropil that formed when it separated from its ancestral neuropil to leave the ME, suggesting additionally that ME input neurons to T4 and T5 may also have had a common origin. PMID:26217193

  19. Reorganization of Visual Callosal Connections Following Alterations of Retinal Input and Brain Damage

    PubMed Central

    Restani, Laura; Caleo, Matteo

    2016-01-01

    Vision is a very important sensory modality in humans. Visual disorders are numerous and arise from diverse and complex causes. Deficits in visual function are highly disabling from a social point of view and in addition cause a considerable economic burden. For all these reasons there is an intense effort by the scientific community to gather knowledge on visual deficit mechanisms and to find possible new strategies for recovery and treatment. In this review, we focus on an important and sometimes neglected player in visual function, the corpus callosum (CC). The CC is the major white matter structure in the brain and is involved in information processing between the two hemispheres. In particular, visual callosal connections interconnect homologous areas of visual cortices, binding together the two halves of the visual field. This interhemispheric communication plays a significant role in visual cortical output. Here, we will first review the essential literature on the physiology of the callosal connections in normal vision. The available data support the view that the callosum contributes both excitation and inhibition to the target hemisphere, with a dynamic adaptation to the strength of the incoming visual input. Next, we will focus on data showing how callosal connections may sense visual alterations and respond to the classical paradigm for the study of visual plasticity, i.e., monocular deprivation (MD). This is a prototypical example of a model for the study of callosal plasticity in pathological conditions (e.g., strabismus and amblyopia) characterized by unbalanced input from the two eyes. We will also discuss the findings of callosal alterations in blind subjects. Notably, we will discuss data showing that inter-hemispheric transfer mediates recovery of visual responsiveness following cortical damage. 
    Finally, we will provide an overview of how dysfunction of callosal projections could contribute to pathologies such as neglect and occipital epilepsy. A particular focus will be on reviewing noninvasive brain stimulation techniques and optogenetic approaches that allow selective manipulation of callosal function and probing of its involvement in cortical processing and plasticity. Overall, the data indicate that experience can potently impact transcallosal connectivity, and that the callosum itself is crucial for plasticity and recovery in various disorders of the visual pathway. PMID:27895559

  20. Vision drives accurate approach behavior during prey capture in laboratory mice

    PubMed Central

    Hoy, Jennifer L.; Yavorska, Iryna; Wehr, Michael; Niell, Cristopher M.

    2016-01-01

    Summary The ability to genetically identify and manipulate neural circuits in the mouse is rapidly advancing our understanding of visual processing in the mammalian brain [1,2]. However, studies investigating the circuitry that underlies complex ethologically-relevant visual behaviors in the mouse have been primarily restricted to fear responses [3–5]. Here, we show that a laboratory strain of mouse (Mus musculus, C57BL/6J) robustly pursues, captures and consumes live insect prey, and that vision is necessary for mice to perform the accurate orienting and approach behaviors leading to capture. Specifically, we differentially perturbed visual or auditory input in mice and determined that visual input is required for accurate approach, allowing maintenance of bearing to within 11 degrees of the target on average during pursuit. While mice were able to capture prey without vision, the accuracy of their approaches and capture rate dramatically declined. To better explore the contribution of vision to this behavior, we developed a simple assay that isolated visual cues and simplified analysis of the visually guided approach. Together, our results demonstrate that laboratory mice are capable of exhibiting dynamic and accurate visually-guided approach behaviors, and provide a means to estimate the visual features that drive behavior within an ethological context. PMID:27773567

  1. Early Binocular Input Is Critical for Development of Audiovisual but Not Visuotactile Simultaneity Perception.

    PubMed

    Chen, Yi-Chuan; Lewis, Terri L; Shore, David I; Maurer, Daphne

    2017-02-20

    Temporal simultaneity provides an essential cue for integrating multisensory signals into a unified perception. Early visual deprivation, in both animals and humans, leads to abnormal neural responses to audiovisual signals in subcortical and cortical areas [1-5]. Behavioral deficits in integrating complex audiovisual stimuli in humans are also observed [6, 7]. It remains unclear whether early visual deprivation affects visuotactile perception similarly to audiovisual perception and whether the consequences for either pairing differ after monocular versus binocular deprivation [8-11]. Here, we evaluated the impact of early visual deprivation on the perception of simultaneity for audiovisual and visuotactile stimuli in humans. We tested patients born with dense cataracts in one or both eyes that blocked all patterned visual input until the cataractous lenses were removed and the affected eyes fitted with compensatory contact lenses (mean duration of deprivation = 4.4 months; range = 0.3-28.8 months). Both monocularly and binocularly deprived patients demonstrated lower precision in judging audiovisual simultaneity. However, qualitatively different outcomes were observed for the two patient groups: the performance of monocularly deprived patients matched that of young children at immature stages, whereas that of binocularly deprived patients did not match any stage in typical development. Surprisingly, patients performed normally in judging visuotactile simultaneity after either monocular or binocular deprivation. Therefore, early binocular input is necessary to develop normal neural substrates for simultaneity perception of visual and auditory events but not visual and tactile events. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. Receptive Field Vectors of Genetically-Identified Retinal Ganglion Cells Reveal Cell-Type-Dependent Visual Functions

    PubMed Central

    Katz, Matthew L.; Viney, Tim J.; Nikolic, Konstantin

    2016-01-01

    Sensory stimuli are encoded by diverse kinds of neurons, but the identities of the neurons being recorded are often unknown. We explored in detail the firing patterns of eight previously defined genetically-identified retinal ganglion cell (RGC) types from a single transgenic mouse line. We first introduce a new technique of deriving receptive field vectors (RFVs) which utilises a modified form of mutual information (“Quadratic Mutual Information”). We analysed the firing patterns of RGCs during presentation of short duration (~10 second) complex visual scenes (natural movies). We probed the high dimensional space formed by the visual input for a much smaller dimensional subspace of RFVs that give the most information about the response of each cell. The technique is fast and efficient, and the derivation of novel types of RFVs formed by the natural scene visual input was possible even with limited numbers of spikes per cell. This approach enabled us to estimate the 'visual memory' of each cell type and the corresponding receptive field area by calculating Mutual Information as a function of the number of frames and radius. Finally, we made predictions of biologically relevant functions based on the RFVs of each cell type. RGC class analysis was complemented with results for the cells’ response to simple visual input in the form of black and white spot stimulation, and their classification on several key physiological metrics. Thus RFVs lead to predictions of biological roles based on limited data and facilitate analysis of sensory-evoked spiking data from defined cell types. PMID:26845435

  3. Audiovisual Perception of Noise Vocoded Speech in Dyslexic and Non-Dyslexic Adults: The Role of Low-Frequency Visual Modulations

    ERIC Educational Resources Information Center

    Megnin-Viggars, Odette; Goswami, Usha

    2013-01-01

    Visual speech inputs can enhance auditory speech information, particularly in noisy or degraded conditions. The natural statistics of audiovisual speech highlight the temporal correspondence between visual and auditory prosody, with lip, jaw, cheek and head movements conveying information about the speech envelope. Low-frequency spatial and…

  4. Last but not least.

    PubMed

    Shapiro, Arthur G; Hamburger, Kai

    2007-01-01

    A central tenet of Gestalt psychology is that the visual scene can be separated into figure and ground. The two illusions we present demonstrate that Gestalt processes can group spatial contrast information that cuts across the figure/ground separation. This finding suggests that visual processes that organise the visual scene do not necessarily require structural segmentation as their primary input.

  5. Hybrid automated reliability predictor integrated work station (HiREL)

    NASA Technical Reports Server (NTRS)

    Bavuso, Salvatore J.

    1991-01-01

    The Hybrid Automated Reliability Predictor (HARP) integrated reliability (HiREL) workstation tool system marks another step toward the goal of producing a totally integrated computer aided design (CAD) workstation design capability. Since a reliability engineer must generally graphically represent a reliability model before he can solve it, the use of a graphical input description language increases productivity and decreases the incidence of error. The captured image displayed on a cathode ray tube (CRT) screen serves as a documented copy of the model and provides the data for automatic input to the HARP reliability model solver. The introduction of dependency gates to a fault tree notation allows the modeling of very large fault tolerant system models using a concise and visually recognizable and familiar graphical language. In addition to aiding in the validation of the reliability model, the concise graphical representation presents company management, regulatory agencies, and company customers a means of expressing a complex model that is readily understandable. The graphical postprocessor computer program HARPO (HARP Output) makes it possible for reliability engineers to quickly analyze huge amounts of reliability/availability data to observe trends due to exploratory design changes.

  6. Adaptive Changes in Early and Late Blind: A fMRI Study of Verb Generation to Heard Nouns

    PubMed Central

    BURTON, H.; SNYDER, A. Z.; DIAMOND, J. B.; RAICHLE, M. E.

    2013-01-01

    Literacy for blind people requires learning Braille. Along with others, we have shown that reading Braille activates visual cortex. This includes striate cortex (V1), i.e., banks of calcarine sulcus, and several higher visual areas in lingual, fusiform, cuneus, lateral occipital, inferior temporal, and middle temporal gyri. The spatial extent and magnitude of magnetic resonance (MR) signals in visual cortex are greatest for those who became blind early in life. Individuals who lost sight as adults, and subsequently learned Braille, still exhibited activity in some of the same visual cortex regions, especially V1. These findings suggest that these visual cortex regions become adapted to processing tactile information and that this cross-modal neural change might support Braille literacy. Here we tested the alternative hypothesis that these regions directly respond to linguistic aspects of a task. Accordingly, language task performance by blind persons should activate the same visual cortex regions regardless of input modality. Specifically, visual cortex activity in blind people ought to arise during a language task involving heard words. Eight early blind, six late blind, and eight sighted subjects were studied using functional magnetic resonance imaging (fMRI) during covert generation of verbs to heard nouns. The control task was passive listening to indecipherable sounds (reverse words) matched to the nouns in sound intensity, duration, and spectral content. Functional responses were analyzed at the level of individual subjects using methods based on the general linear model and at the group level, using voxel based ANOVA and t-test analyses. Blind and sighted subjects showed comparable activation of language areas in left inferior frontal, dorsolateral prefrontal, and left posterior superior temporal gyri. The main distinction was bilateral, left dominant activation of the same visual cortex regions previously noted with Braille reading in all blind subjects. 
    The spatial extent and magnitude of responses were greatest on the left in early blind individuals. Responses in the late blind group were mostly confined to V1 and nearby portions of the lingual and fusiform gyri. These results confirm the presence of adaptations in visual cortex of blind people but argue against the notion that this activity during Braille reading represents somatosensory (haptic) processing. Rather, we suggest that these responses can be most parsimoniously explained in terms of linguistic operations. It remains possible that these responses represent adaptations which initially are for processing either sound or touch, but which are later generalized to the other modality during acquisition of Braille reading skills. PMID:12466452

  7. Right-Brained Kids in Left-Brained Schools

    ERIC Educational Resources Information Center

    Hunter, Madeline

    1976-01-01

    Students who learn well through left hemisphere brain input (oral and written) have minimal practice in using the right hemisphere, while those who are more proficient in right hemisphere (visual) input processing are handicapped by having to use primarily their left brains. (MB)

  8. Input-dependent modulation of MEG gamma oscillations reflects gain control in the visual cortex.

    PubMed

    Orekhova, Elena V; Sysoeva, Olga V; Schneiderman, Justin F; Lundström, Sebastian; Galuta, Ilia A; Goiaeva, Dzerasa E; Prokofyev, Andrey O; Riaz, Bushra; Keeler, Courtney; Hadjikhani, Nouchine; Gillberg, Christopher; Stroganova, Tatiana A

    2018-05-31

    Gamma-band oscillations arise from the interplay between neural excitation (E) and inhibition (I) and may provide a non-invasive window into the state of cortical circuitry. A bell-shaped modulation of gamma response power with increasing intensity of sensory input has been observed in animals and is thought to reflect neural gain control. Here we sought to find a similar input-output relationship in humans with MEG by modulating the intensity of visual stimulation through the velocity/temporal-frequency of visual motion. In the first experiment, adult participants observed static and moving gratings. The frequency of the MEG gamma response monotonically increased with motion velocity whereas power followed a bell shape. In the second experiment, in a large group of children and adults, we found that despite drastic developmental changes in frequency and power of gamma oscillations, the relative suppression at high motion velocities was scaled to the same range of values across the life-span. In light of animal and modeling studies, the modulation of gamma power and frequency at high stimulation intensities characterizes the capacity of inhibitory neurons to counterbalance increasing excitation in visual networks. Gamma suppression may thus provide a non-invasive measure of inhibitory-based gain control in the healthy and diseased brain.

  9. From attentional gating in macaque primary visual cortex to dyslexia in humans.

    PubMed

    Vidyasagar, T R

    2001-01-01

    Selective attention is an important aspect of brain function that we need in coping with the immense and constant barrage of sensory information. One model of attention (Feature Integration Theory) that suggests an early selection of spatial locations of objects via an attentional spotlight would also solve the 'binding problem' (that is, how do the different attributes of each object get correctly bound together?). Our experiments have demonstrated modulation of specific locations of interest at the level of the primary visual cortex both in visual discrimination and memory tasks, where the actual locations of the targets were also important for performing the task. It is suggested that the feedback mediating the modulation arises from the posterior parietal cortex, which would also be consistent with its known role in attentional control. In primates, the magnocellular (M) and parvocellular (P) pathways are the two major streams of inputs from the retina, carrying distinctly different types of information, and they remain fairly segregated in their projections to the primary visual cortex and further into the extra-striate regions. The P inputs go mainly into the ventral (temporal) stream, while the dorsal (parietal) stream is dominated by M inputs. A theory of attentional gating is proposed here where the M dominated dorsal stream gates the P inputs into the ventral stream. This framework is used to provide a neural explanation of the processes involved in reading and in learning to read. This scheme also explains how a magnocellular deficit could cause the common reading impairment, dyslexia.

  10. The role of visuohaptic experience in visually perceived depth.

    PubMed

    Ho, Yun-Xian; Serwe, Sascha; Trommershäuser, Julia; Maloney, Laurence T; Landy, Michael S

    2009-06-01

    Berkeley suggested that "touch educates vision," that is, haptic input may be used to calibrate visual cues to improve visual estimation of properties of the world. Here, we test whether haptic input may be used to "miseducate" vision, causing observers to rely more heavily on misleading visual cues. Human subjects compared the depth of two cylindrical bumps illuminated by light sources located at different positions relative to the surface. As in previous work using judgments of surface roughness, we find that observers judge bumps to have greater depth when the light source is located eccentric to the surface normal (i.e., when shadows are more salient). Following several sessions of visual judgments of depth, subjects then underwent visuohaptic training in which haptic feedback was artificially correlated with the "pseudocue" of shadow size and artificially decorrelated with disparity and texture. Although there were large individual differences, almost all observers demonstrated integration of haptic cues during visuohaptic training. For some observers, subsequent visual judgments of bump depth were unaffected by the training. However, for 5 of 12 observers, training significantly increased the weight given to pseudocues, causing subsequent visual estimates of shape to be less veridical. We conclude that haptic information can be used to reweight visual cues, putting more weight on misleading pseudocues, even when more trustworthy visual cues are available in the scene.
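    The cue-reweighting result above is usually framed in terms of reliability-weighted cue combination, in which each cue's weight is proportional to its inverse variance, the statistically optimal linear scheme. A minimal sketch; the depth estimates and variances below are invented for illustration, not data from the study:

```python
import numpy as np

def combine(estimates, variances):
    """Reliability-weighted cue combination: each cue is weighted
    by its inverse variance, then weights are normalized to sum to 1."""
    w = 1.0 / np.asarray(variances, float)
    w /= w.sum()
    return float(np.dot(w, estimates)), w

# hypothetical depth estimates (mm) from disparity, texture, shadow
depth, weights = combine([10.0, 12.0, 18.0],
                         [1.0, 2.0, 8.0])
# the unreliable shadow "pseudocue" gets the smallest weight,
# so the combined estimate stays close to the trustworthy cues
assert weights[2] == min(weights)
assert 10.0 < depth < 12.5
```

    On this account, the visuohaptic training effectively lowered the variance observers assigned to the shadow pseudocue, raising its weight in subsequent purely visual judgments.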

  11. Effect of rehabilitation worker input on visual function outcomes in individuals with low vision: study protocol for a randomised controlled trial.

    PubMed

    Acton, Jennifer H; Molik, Bablin; Binns, Alison; Court, Helen; Margrain, Tom H

    2016-02-24

    Visual Rehabilitation Officers help people with a visual impairment maintain their independence. This intervention adopts a flexible, goal-centred approach, which may include training in mobility, use of optical and non-optical aids, and performance of activities of daily living. Although Visual Rehabilitation Officers are an integral part of the low vision service in the United Kingdom, evidence that they are effective is lacking. The purpose of this exploratory trial is to estimate the impact of a Visual Rehabilitation Officer on self-reported visual function, psychosocial and quality-of-life outcomes in individuals with low vision. In this exploratory, assessor-masked, parallel group, randomised controlled trial, participants will be allocated either to receive home visits from a Visual Rehabilitation Officer (n = 30) or to a waiting list control group (n = 30) in a 1:1 ratio. Adult volunteers with a visual impairment, who have been identified as needing rehabilitation officer input by a social worker, will take part. Those with an urgent need for a Visual Rehabilitation Officer or who have a cognitive impairment will be excluded. The primary outcome measure will be self-reported visual function (48-item Veterans Affairs Low Vision Visual Functioning Questionnaire). Secondary outcome measures will include psychological and quality-of-life metrics: the Patient Health Questionnaire (PHQ-9), the Warwick-Edinburgh Mental Well-being Scale (WEMWBS), the Adjustment to Age-related Visual Loss Scale (AVL-12), the Standardised Health-related Quality of Life Questionnaire (EQ-5D) and the UCLA Loneliness Scale. The interviewer collecting the outcomes will be masked to the group allocations. The analysis will be undertaken on a complete case and intention-to-treat basis. Analysis of covariance (ANCOVA) will be applied to follow-up questionnaire scores, with the baseline score as a covariate. 
    This trial is expected to provide robust effect size estimates of the intervention effect. The data will be used to design a large-scale randomised controlled trial to evaluate fully the Visual Rehabilitation Officer intervention. A rigorous evaluation of Rehabilitation Officer input is vital to direct a future low vision rehabilitation strategy and to help direct government resources. The trial was registered (ISRCTN44807874) on 9 March 2015.
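    The planned ANCOVA, with follow-up questionnaire score as outcome and baseline score as covariate, is equivalent to fitting a linear model of follow-up score on baseline and group. A minimal sketch on simulated data; the group sizes, score distributions, and intervention effect below are hypothetical, chosen only to illustrate the analysis:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 60
group = np.repeat([0, 1], n // 2)      # 0 = waiting list, 1 = rehabilitation
baseline = rng.normal(50, 10, n)       # baseline questionnaire score
true_effect = 5.0                      # hypothetical intervention effect
followup = 0.8 * baseline + true_effect * group + rng.normal(0, 3, n)

# ANCOVA as a linear model: followup ~ intercept + baseline + group
X = np.column_stack([np.ones(n), baseline, group])
beta, *_ = np.linalg.lstsq(X, followup, rcond=None)
adjusted_effect = beta[2]              # group effect adjusted for baseline

# the baseline-adjusted estimate recovers the simulated effect
assert 2.5 < adjusted_effect < 7.5
```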

  12. A Biophysical Neural Model To Describe Spatial Visual Attention

    NASA Astrophysics Data System (ADS)

    Hugues, Etienne; José, Jorge V.

    2008-02-01

    Visual scenes carry enormous spatial and temporal information that is transduced into neural spike trains. Psychophysical experiments indicate that only a small portion of a spatial image is consciously accessible. Electrophysiological experiments in behaving monkeys have revealed a number of modulations of the neural activity in the visual area known as V4 when the animal is paying attention directly towards a particular stimulus location. The nature of the attentional input to V4, however, remains unknown, as do the mechanisms responsible for these modulations. We use a biophysical neural network model of V4 to address these issues. We first constrain our model to reproduce the experimental results obtained for different external stimulus configurations in the absence of attention. To reproduce the known neuronal response variability, we found that the neurons should receive approximately equal, or balanced, levels of excitatory and inhibitory inputs, at levels as high as those found in vivo. Next we consider attentional inputs that can induce and reproduce the observed spiking modulations. We also elucidate the role played by the neural network in generating these modulations.

  13. A Biophysical Neural Model To Describe Spatial Visual Attention

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hugues, Etienne; Jose, Jorge V.

    2008-02-14

    Visual scenes carry enormous spatial and temporal information that is transduced into neural spike trains. Psychophysical experiments indicate that only a small portion of a spatial image is consciously accessible. Electrophysiological experiments in behaving monkeys have revealed a number of modulations of the neural activity in the visual area known as V4 when the animal is paying attention directly towards a particular stimulus location. The nature of the attentional input to V4, however, remains unknown, as do the mechanisms responsible for these modulations. We use a biophysical neural network model of V4 to address these issues. We first constrain our model to reproduce the experimental results obtained for different external stimulus configurations in the absence of attention. To reproduce the known neuronal response variability, we found that the neurons should receive approximately equal, or balanced, levels of excitatory and inhibitory inputs, at levels as high as those found in vivo. Next we consider attentional inputs that can induce and reproduce the observed spiking modulations. We also elucidate the role played by the neural network in generating these modulations.

  14. Exploring the potential of analysing visual search behaviour data using FROC (free-response receiver operating characteristic) method: an initial study

    NASA Astrophysics Data System (ADS)

    Dong, Leng; Chen, Yan; Dias, Sarah; Stone, William; Dias, Joseph; Rout, John; Gale, Alastair G.

    2017-03-01

    Visual search techniques and FROC analysis have been widely used in radiology to understand medical image perceptual behaviour and diagnostic performance. The potential of exploiting the advantages of both methodologies is of great interest to medical researchers. In this study, eye tracking data from eight dental practitioners were investigated. The visual search measures and their analyses are considered here. Each participant interpreted 20 dental radiographs chosen by an expert dental radiologist. Various eye movement measurements were obtained based on image area of interest (AOI) information. FROC analysis was then carried out using these eye movement measurements as a direct input source. The performance of FROC methods using different input parameters was tested. The results showed significant differences in FROC measures, based on eye movement data, between groups with different experience levels: the area under the curve (AUC) score was higher for the experienced group on the fixation and dwell time measurements. Positive correlations were also found between AUC scores for the eye-movement-based FROC and the rating-based FROC. FROC analysis using eye movement measurements as input variables can therefore act as a performance indicator for assessment in medical image interpretation and for evaluating training procedures. These analyses point to new ways of combining eye movement data and FROC methods, providing an alternative dimension for assessing performance and visual search behaviour in medical imaging perceptual tasks.
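
    For intuition, the area under an ROC curve, the simpler cousin of the FROC measures used above, reduces to a Mann-Whitney statistic: the probability that a randomly chosen positive-case score exceeds a randomly chosen negative-case score, counting ties as half. This sketch covers ordinary ROC AUC only; FROC additionally requires lesion localization and per-image false-positive counts, and the score values below are hypothetical.

    ```python
    def roc_auc(pos_scores, neg_scores):
        """ROC area under the curve via the Mann-Whitney statistic: the
        fraction of (positive, negative) score pairs in which the positive
        case scores higher, with ties counted as half a win."""
        wins = 0.0
        for p in pos_scores:
            for n in neg_scores:
                if p > n:
                    wins += 1.0
                elif p == n:
                    wins += 0.5
        return wins / (len(pos_scores) * len(neg_scores))

    # Hypothetical dwell times (s) for experienced vs. novice readers on AOIs.
    auc_value = roc_auc([4.1, 5.0, 6.2], [2.0, 3.5, 4.1])
    ```

    An AUC of 0.5 means the two groups' measurements are indistinguishable; values approaching 1.0 correspond to the experience-level separation reported above.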

  15. Slow Feature Analysis on Retinal Waves Leads to V1 Complex Cells

    PubMed Central

    Dähne, Sven; Wilbert, Niko; Wiskott, Laurenz

    2014-01-01

    The developing visual system of many mammalian species is partially structured and organized even before the onset of vision. Spontaneous neural activity, which spreads in waves across the retina, has been suggested to play a major role in these prenatal structuring processes. Recently, it has been shown that when employing an efficient coding strategy, such as sparse coding, these retinal activity patterns lead to basis functions that resemble optimal stimuli of simple cells in primary visual cortex (V1). Here we present the results of applying a coding strategy that optimizes for temporal slowness, namely Slow Feature Analysis (SFA), to a biologically plausible model of retinal waves. Previously, SFA has been successfully applied to model parts of the visual system, most notably in reproducing a rich set of complex-cell features by training SFA with quasi-natural image sequences. In the present work, we obtain SFA units that share a number of properties with cortical complex-cells by training on simulated retinal waves. The emergence of two distinct properties of the SFA units (phase invariance and orientation tuning) is thoroughly investigated via control experiments and mathematical analysis of the input-output functions found by SFA. The results support the idea that retinal waves share relevant temporal and spatial properties with natural visual input. Hence, retinal waves seem suitable training stimuli to learn invariances and thereby shape the developing early visual system such that it is best prepared for coding input from the natural world. PMID:24810948
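
    The core of SFA can be sketched in a few lines of NumPy: center and whiten the input, then keep the whitened directions whose temporal derivative has the smallest variance. The paper applies SFA with a nonlinear expansion to simulated retinal waves; the toy below applies the linear version to a mixture of a slow and a fast sinusoid and recovers the slow source.

    ```python
    import numpy as np

    def linear_sfa(x, n_components=1):
        """Linear Slow Feature Analysis: find unit-variance projections of
        the input signal whose temporal derivative has minimal variance,
        i.e. the most slowly varying directions."""
        x = x - x.mean(axis=0)                       # center
        cov = np.cov(x, rowvar=False)
        evals, evecs = np.linalg.eigh(cov)
        keep = evals > 1e-10                         # drop degenerate directions
        whitener = evecs[:, keep] / np.sqrt(evals[keep])
        z = x @ whitener                             # whitened signal
        dz = np.diff(z, axis=0)                      # temporal derivative
        devals, devecs = np.linalg.eigh(np.cov(dz, rowvar=False))
        w = devecs[:, :n_components]                 # smallest derivative variance
        return z @ w, whitener @ w

    # Demo: two mixtures of a slow and a fast sinusoid; SFA recovers the slow one.
    t = np.linspace(0, 4 * np.pi, 2000)
    slow, fast = np.sin(t), np.sin(20 * t)
    mixed = np.column_stack([slow + fast, slow - fast])
    y, _ = linear_sfa(mixed)
    r = abs(np.corrcoef(y[:, 0], slow)[0, 1])        # correlation with the slow source
    ```

    Training such a model on retinal-wave movies instead of sinusoids, with a quadratic expansion of the input, is what yields the complex-cell-like units reported above.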

  16. Proceedings of the Lake Wilderness Attention Conference Held at Seattle Washington, 22-24 September 1980.

    DTIC Science & Technology

    1981-07-10

    Pohlmann, L. D. Some models of observer behavior in two-channel auditory signal detection. Perception and Psychophysics, 1973, 14, 101-109. Spelke...spatial), and processing modalities (auditory versus visual input, vocal versus manual response). If validated, this configuration has both theoretical...conclusion that auditory and visual processes will compete, as will spatial and verbal (albeit to a lesser extent than auditory-auditory, visual-visual

  17. The effect of linguistic and visual salience in visual world studies.

    PubMed

    Cavicchio, Federica; Melcher, David; Poesio, Massimo

    2014-01-01

    Research using the visual world paradigm has demonstrated that visual input has a rapid effect on language interpretation tasks such as reference resolution and, conversely, that linguistic material (including verbs, prepositions and adjectives) can influence fixations to potential referents. More recent research has begun to explore how this effect of linguistic input on fixations is mediated by properties of the visual stimulus, in particular by visual salience. In the present study we further explored the role of salience in the visual world paradigm by manipulating language-driven salience and visual salience. Specifically, we tested how linguistic salience (i.e., the greater accessibility of linguistically introduced entities) and visual salience (bottom-up, attention-grabbing visual properties) interact. We recorded participants' eye movements during a MapTask, asking them to look from landmark to landmark on a displayed map while hearing direction-giving instructions. The landmarks were of comparable size and color, except in the Visual Salience condition, in which one landmark had been made more visually salient. In the Linguistic Salience conditions, the instructions included references to an object not on the map. Response times and fixations were recorded. Visual Salience influenced the time course of fixations at both the beginning and the end of the trial but had no significant effect on response times. Linguistic Salience reduced response times and increased fixations to landmarks when they were associated with a linguistically salient entity not itself present on the map. When the target landmark was both visually and linguistically salient, it was fixated longer, but fixations were quicker when the target item was only linguistically salient. Our results suggest that the two types of salience work in parallel and that linguistic salience affects fixations even when the entity is not visually present.

  18. Students' Visualization of Metallic Bonding and the Malleability of Metals

    NASA Astrophysics Data System (ADS)

    Cheng, Maurice M. W.; Gilbert, John K.

    2014-05-01

    This study investigated the mental representations of metallic bonding and the malleability of metals held by three male students aged 14-15 (Year 10) who were attending a Hong Kong school. One student was selected by their chemistry teacher as representing each of the highest, the medium, and the lowest level of attainment in chemistry in a school that admitted students of average general attainment. The students were interviewed and their understandings probed through their provision of drawings and their interpretation of the diagrams that had been previously used by their teacher. Dual coding theory was used to interpret the relative significance of visual and verbal input and the interaction between the two for their understanding. There was evidence that students relied on verbal recall in providing their initial understandings and showed an appreciation of the nature of the structural components of the electron-sea model of metallic bonding. However, they varied in terms of their appreciation of the electrostatic force which was responsible for the malleability of metals. The study suggests that a clearer understanding of the electrostatic force involved can be attained when students experience visual and verbal representations simultaneously, a conclusion supported by dual coding theory. Principles for good practice in using diagrams in teaching are discussed.

  19. Dynamic information processing states revealed through neurocognitive models of object semantics

    PubMed Central

    Clarke, Alex

    2015-01-01

    Recognising objects relies on highly dynamic, interactive brain networks to process multiple aspects of object information. To fully understand how different forms of information about objects are represented and processed in the brain requires a neurocognitive account of visual object recognition that combines a detailed cognitive model of semantic knowledge with a neurobiological model of visual object processing. Here we ask how specific cognitive factors are instantiated in our mental processes and how they dynamically evolve over time. We suggest that coarse semantic information, based on generic shared semantic knowledge, is rapidly extracted from visual inputs and is sufficient to drive rapid category decisions. Subsequent recurrent neural activity between the anterior temporal lobe and posterior fusiform supports the formation of object-specific semantic representations – a conjunctive process primarily driven by the perirhinal cortex. These object-specific representations require the integration of shared and distinguishing object properties and support the unique recognition of objects. We conclude that a valuable way of understanding the cognitive activity of the brain is through testing the relationship between specific cognitive measures and dynamic neural activity. This kind of approach allows us to move towards uncovering the information processing states of the brain and how they evolve over time. PMID:25745632

  20. Recurrent V1-V2 interaction in early visual boundary processing.

    PubMed

    Neumann, H; Sepp, W

    1999-11-01

    A majority of cortical areas are connected via feedforward and feedback fiber projections. In feedforward pathways we mainly observe stages of feature detection and integration. The computational role of the descending pathways at different stages of processing remains mainly unknown. Based on empirical findings we suggest that the top-down feedback pathways subserve a context-dependent gain control mechanism. We propose a new computational model for recurrent contour processing in which normalized activities of orientation selective contrast cells are fed forward to the next processing stage. There, the arrangement of input activation is matched against local patterns of contour shape. The resulting activities are subsequently fed back to the previous stage to locally enhance those initial measurements that are consistent with the top-down generated responses. In all, we suggest a computational theory for recurrent processing in the visual cortex in which the significance of local measurements is evaluated on the basis of a broader visual context that is represented in terms of contour code patterns. The model serves as a framework to link physiological with perceptual data gathered in psychophysical experiments. It handles a variety of perceptual phenomena, such as the local grouping of fragmented shape outline, texture surround and density effects, and the interpolation of illusory contours.
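
    The feedforward-feedback loop described above can be caricatured in a few lines: bottom-up activities are matched against a top-down contour template, activities consistent with the template are multiplicatively enhanced (the gain-control step), and the population is divisively normalized. The parameterization below is illustrative, not the paper's model.

    ```python
    def recurrent_gain_control(feedforward, template, n_iter=10, gain=0.5):
        """Caricature of recurrent contour processing: feedback from a
        template-matching stage multiplicatively enhances the initial
        measurements consistent with the top-down responses, followed by
        divisive normalization. All parameters are illustrative."""
        act = list(feedforward)
        for _ in range(n_iter):
            feedback = [t * a for t, a in zip(template, act)]        # matching stage
            act = [a * (1.0 + gain * f) for a, f in zip(act, feedback)]
            total = sum(act)                                         # normalization
            act = [a / total for a in act]
        return act

    # Two equally strong local edge responses; only the first is consistent
    # with the top-down contour template, so recurrence should favor it.
    out = recurrent_gain_control([0.5, 0.5], [1.0, 0.0])
    ```

    The template-consistent response grows at the expense of the inconsistent one, which is the mechanism the model uses to group fragmented shape outlines.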

  1. Symbolic PathFinder: Symbolic Execution of Java Bytecode

    NASA Technical Reports Server (NTRS)

    Pasareanu, Corina S.; Rungta, Neha

    2010-01-01

    Symbolic Pathfinder (SPF) combines symbolic execution with model checking and constraint solving for automated test case generation and error detection in Java programs with unspecified inputs. In this tool, programs are executed on symbolic inputs representing multiple concrete inputs. Values of variables are represented as constraints generated from the analysis of Java bytecode. The constraints are solved using off-the-shelf solvers to generate test inputs guaranteed to achieve complex coverage criteria. SPF has been used successfully at NASA, in academia, and in industry.
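
    The idea behind SPF — execute on symbolic inputs, collect one constraint set per program path, and solve each set to obtain a covering suite of concrete tests — can be sketched for a toy function. SPF itself works on Java bytecode and calls real constraint solvers; the hand-enumerated path conditions and brute-force "solver" below are stand-ins, and the program under test is invented for illustration.

    ```python
    from itertools import product

    # Toy program under test (hypothetical): two integer inputs, three paths.
    def program(x, y):
        if x > y:
            if x + y > 10:
                return "big"
            return "x-wins"
        return "y-wins-or-tie"

    # Hand-enumerated path conditions. A real engine such as SPF derives
    # these automatically from the bytecode while executing symbolically.
    paths = [
        [lambda x, y: x > y, lambda x, y: x + y > 10],        # -> "big"
        [lambda x, y: x > y, lambda x, y: not (x + y > 10)],  # -> "x-wins"
        [lambda x, y: not (x > y)],                           # -> "y-wins-or-tie"
    ]

    def solve(constraints, domain=range(-10, 11)):
        """Brute-force stand-in for an off-the-shelf constraint solver:
        return one concrete input satisfying every path constraint."""
        for x, y in product(domain, domain):
            if all(c(x, y) for c in constraints):
                return (x, y)
        return None

    # One concrete test input per feasible path: full path coverage.
    test_inputs = [solve(p) for p in paths]
    ```

    Each solved constraint set yields an input that is guaranteed to drive execution down its path, which is how symbolic execution achieves coverage criteria that random testing may miss.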

  2. Representation and disconnection in imaginal neglect.

    PubMed

    Rode, G; Cotton, F; Revol, P; Jacquin-Courtois, S; Rossetti, Y; Bartolomeo, P

    2010-08-01

    Patients with neglect fail to detect, orient, or respond to stimuli from a spatially confined region, usually on their left side. Often, the presence of perceptual input increases left omissions, while sensory deprivation decreases them, possibly by removing attention-catching right-sided stimuli (Bartolomeo, 2007). However, such an influence of visual deprivation on representational neglect was not observed in patients while they were imagining a map of France (Rode et al., 2007). Therefore, these patients with imaginal neglect either failed to generate the left side of mental images (Bisiach & Luzzatti, 1978), or suffered from a co-occurrence of deficits in automatic (bottom-up) and voluntary (top-down) orienting of attention. However, in Rode et al.'s experiment visual input was not directly relevant to the task; moreover, distraction from visual input might primarily manifest itself when representation guides somatomotor actions, beyond those involved in the generation and mental exploration of an internal map (Thomas, 1999). To explore these possibilities, we asked a patient with right hemisphere damage, R.D., to explore visual and imagined versions of a map of France in three conditions: (1) 'imagine the map in your mind' (imaginal); (2) 'describe a real map' (visual); and (3) 'list the names of French towns' (propositional). For the imaginal and visual conditions, verbal and manual pointing responses were collected; the task was also given before and after mental rotation of the map by 180 degrees. R.D. mentioned more towns on the right side of the map in the imaginal and visual conditions, but showed no representational deficit in the propositional condition. 
The rightward inner exploration bias in the imaginal and visual conditions was similar in magnitude and was not influenced by mental rotation or response type (verbal responses or manual pointing to locations on a map), thus suggesting that the representational deficit was robust and independent of perceptual input in R.D. Structural and diffusion MRI demonstrated damage to several white matter tracts in the right hemisphere and to the splenium of corpus callosum. A second right-brain damaged patient (P.P.), who showed signs of visual but not imaginal neglect, had damage to the same intra-hemispheric tracts, but the callosal connections were spared. Imaginal neglect in R.D. may result from fronto-parietal dysfunction impairing orientation towards left-sided items and posterior callosal disconnection preventing the symmetrical processing of spatial information from long-term memory. Copyright (c) 2010 Elsevier Ltd. All rights reserved.

  3. Concepts to Support HRP Integration Using Publications and Modeling

    NASA Technical Reports Server (NTRS)

    Mindock, J.; Lumpkins, S.; Shelhamer, M.

    2014-01-01

    Initial efforts are underway to enhance the Human Research Program (HRP)'s identification and support of potential cross-disciplinary scientific collaborations. To increase the emphasis on integration in HRP's science portfolio management, concepts are being explored through the development of a set of tools. These tools are intended to enable modeling, analysis, and visualization of the state of the human system in the spaceflight environment; HRP's current understanding of that state with an indication of uncertainties; and how that state changes due to HRP programmatic progress and design reference mission definitions. In this talk, we will discuss proof-of-concept work performed using a subset of publications captured in the HRP publications database. The publications were tagged in the database with words representing factors influencing health and performance in spaceflight, as well as with words representing the risks HRP research is reducing. Analysis was performed on the publication tag data to identify relationships between factors and between risks. Network representations were then created as one type of visualization of these relationships. This enables future analyses of the structure of the networks based on results from network theory. Such analyses can provide insights into HRP's current human system knowledge state as informed by the publication data. The network structure analyses can also elucidate potential improvements by identifying network connections to establish or strengthen for maximized information flow. The relationships identified in the publication data were subsequently used as inputs to a model captured in the Systems Modeling Language (SysML), which functions as a repository for relationship information to be gleaned from multiple sources. Example network visualization outputs from a simple SysML model were then also created to compare to the visualizations based on the publication data only. 
We will also discuss ideas for building upon this proof-of-concept work to further support an integrated approach to human spaceflight risk reduction.
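
    The network construction sketched in the abstract — nodes for tags, edges weighted by how often two tags co-occur on the same publication — reduces to counting tag pairs. The tag names below are hypothetical, for illustration only; the actual HRP database tags differ.

    ```python
    from collections import Counter
    from itertools import combinations

    # Hypothetical tag assignments: each publication is tagged with the
    # factors it addresses (names invented for illustration).
    publications = [
        {"radiation", "sleep"},
        {"sleep", "performance"},
        {"radiation", "sleep", "performance"},
    ]

    # Edge weight = number of publications in which the two tags co-occur.
    edges = Counter()
    for tags in publications:
        for a, b in combinations(sorted(tags), 2):
            edges[(a, b)] += 1
    ```

    The resulting weighted edge list is exactly the input a network-theoretic analysis (or a SysML relationship repository) needs: high-weight edges mark factor pairs the literature already connects, and absent edges mark candidate collaborations.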

  4. Quantification of net annual C input in terrestrial ecosystems of the Italian Peninsula under different land-uses

    USDA-ARS?s Scientific Manuscript database

    Soil organic matter (SOM) is a very important compartment of the biosphere: it represents the largest dynamic carbon (C) pool where the C is stored for the longest time period. Root inputs, as exudates and root slush, represent a major, if not the largest, annual contribution to soil C input. Roo...

  5. Effects of Length of Retention Interval on Proactive Interference in Short-Term Visual Memory

    ERIC Educational Resources Information Center

    Meudell, Peter R.

    1977-01-01

    These experiments show two things: (a) In visual memory, long-term interference on a current item from items previously stored only seems to occur when the current item's retention interval is relatively long, and (b) the visual code appears to decay rapidly, reaching asymptote within 3 seconds of input in the presence of an interpolated task.…

  6. Training-Induced Recovery of Low-Level Vision Followed by Mid-Level Perceptual Improvements in Developmental Object and Face Agnosia

    ERIC Educational Resources Information Center

    Lev, Maria; Gilaie-Dotan, Sharon; Gotthilf-Nezri, Dana; Yehezkel, Oren; Brooks, Joseph L.; Perry, Anat; Bentin, Shlomo; Bonneh, Yoram; Polat, Uri

    2015-01-01

    Long-term deprivation of normal visual inputs can cause perceptual impairments at various levels of visual function, from basic visual acuity deficits, through mid-level deficits such as contour integration and motion coherence, to high-level face and object agnosia. Yet it is unclear whether training during adulthood, at a post-developmental…

  7. Visual-perceptual-kinesthetic inputs on influencing writing performances in children with handwriting difficulties.

    PubMed

    Tse, Linda F L; Thanapalan, Kannan C; Chan, Chetwyn C H

    2014-02-01

    This study investigated the role of visual-perceptual input in writing Chinese characters among senior school-aged children who had handwriting difficulties (CHD). The participants were 27 CHD (9-11 years old) and 61 normally developing controls. There were three writing conditions: copying, and dictations with or without visual feedback. The motor-free subtests of the Developmental Test of Visual Perception (DTVP-2) were administered. The CHD group showed significantly slower mean speeds of character production and less legibility of produced characters than the control group in all writing conditions (ps<0.001). There were significant deteriorations in legibility from copying to dictation without visual feedback. Nevertheless, the Group by Condition interaction effect was not statistically significant. Only position in space of DTVP-2 was significantly correlated with legibility among CHD (r=-0.62, p=0.001). Poor legibility seems to be related to the less-intact spatial representation of the characters in working memory, which can be rectified by viewing the characters during writing. Visual feedback regarding one's own actions in writing can also improve legibility of characters among these children. Copyright © 2013 Elsevier Ltd. All rights reserved.

  8. Design by Dragging: An Interface for Creative Forward and Inverse Design with Simulation Ensembles

    PubMed Central

    Coffey, Dane; Lin, Chi-Lun; Erdman, Arthur G.; Keefe, Daniel F.

    2014-01-01

    We present an interface for exploring large design spaces as encountered in simulation-based engineering, design of visual effects, and other tasks that require tuning parameters of computationally-intensive simulations and visually evaluating results. The goal is to enable a style of design with simulations that feels as-direct-as-possible so users can concentrate on creative design tasks. The approach integrates forward design via direct manipulation of simulation inputs (e.g., geometric properties, applied forces) in the same visual space with inverse design via “tugging” and reshaping simulation outputs (e.g., scalar fields from finite element analysis (FEA) or computational fluid dynamics (CFD)). The interface includes algorithms for interpreting the intent of users’ drag operations relative to parameterized models, morphing arbitrary scalar fields output from FEA and CFD simulations, and in-place interactive ensemble visualization. The inverse design strategy can be extended to use multi-touch input in combination with an as-rigid-as-possible shape manipulation to support rich visual queries. The potential of this new design approach is confirmed via two applications: medical device engineering of a vacuum-assisted biopsy device and visual effects design using a physically based flame simulation. PMID:24051845

  9. Flexibility and Stability in Sensory Processing Revealed Using Visual-to-Auditory Sensory Substitution

    PubMed Central

    Hertz, Uri; Amedi, Amir

    2015-01-01

    The classical view of sensory processing involves independent processing in sensory cortices and multisensory integration in associative areas. This hierarchical structure has been challenged by evidence of multisensory responses in sensory areas, and dynamic weighting of sensory inputs in associative areas, thus far reported independently. Here, we used a visual-to-auditory sensory substitution algorithm (SSA) to manipulate the information conveyed by sensory inputs while keeping the stimuli intact. During scan sessions before and after SSA learning, subjects were presented with visual images and auditory soundscapes. The findings reveal 2 dynamic processes. First, crossmodal attenuation of sensory cortices changed direction after SSA learning from visual attenuations of the auditory cortex to auditory attenuations of the visual cortex. Secondly, associative areas changed their sensory response profile from strongest response for visual to that for auditory. The interaction between these phenomena may play an important role in multisensory processing. Consistent features were also found in the sensory dominance in sensory areas and audiovisual convergence in associative area Middle Temporal Gyrus. These 2 factors allow for both stability and a fast, dynamic tuning of the system when required. PMID:24518756

  10. Flexibility and Stability in Sensory Processing Revealed Using Visual-to-Auditory Sensory Substitution.

    PubMed

    Hertz, Uri; Amedi, Amir

    2015-08-01

    The classical view of sensory processing involves independent processing in sensory cortices and multisensory integration in associative areas. This hierarchical structure has been challenged by evidence of multisensory responses in sensory areas, and dynamic weighting of sensory inputs in associative areas, thus far reported independently. Here, we used a visual-to-auditory sensory substitution algorithm (SSA) to manipulate the information conveyed by sensory inputs while keeping the stimuli intact. During scan sessions before and after SSA learning, subjects were presented with visual images and auditory soundscapes. The findings reveal 2 dynamic processes. First, crossmodal attenuation of sensory cortices changed direction after SSA learning from visual attenuations of the auditory cortex to auditory attenuations of the visual cortex. Secondly, associative areas changed their sensory response profile from strongest response for visual to that for auditory. The interaction between these phenomena may play an important role in multisensory processing. Consistent features were also found in the sensory dominance in sensory areas and audiovisual convergence in associative area Middle Temporal Gyrus. These 2 factors allow for both stability and a fast, dynamic tuning of the system when required. © The Author 2014. Published by Oxford University Press.

  11. Shared sensory estimates for human motion perception and pursuit eye movements.

    PubMed

    Mukherjee, Trishna; Battifarano, Matthew; Simoncini, Claudio; Osborne, Leslie C

    2015-06-03

    Are sensory estimates formed centrally in the brain and then shared between perceptual and motor pathways or is centrally represented sensory activity decoded independently to drive awareness and action? Questions about the brain's information flow pose a challenge because systems-level estimates of environmental signals are only accessible indirectly as behavior. Assessing whether sensory estimates are shared between perceptual and motor circuits requires comparing perceptual reports with motor behavior arising from the same sensory activity. Extrastriate visual cortex both mediates the perception of visual motion and provides the visual inputs for behaviors such as smooth pursuit eye movements. Pursuit has been a valuable testing ground for theories of sensory information processing because the neural circuits and physiological response properties of motion-responsive cortical areas are well studied, sensory estimates of visual motion signals are formed quickly, and the initiation of pursuit is closely coupled to sensory estimates of target motion. Here, we analyzed variability in visually driven smooth pursuit and perceptual reports of target direction and speed in human subjects while we manipulated the signal-to-noise level of motion estimates. Comparable levels of variability throughout viewing time and across conditions provide evidence for shared noise sources in the perception and action pathways arising from a common sensory estimate. We found that conditions that create poor, low-gain pursuit create a discrepancy between the precision of perception and that of pursuit. Differences in pursuit gain arising from differences in optic flow strength in the stimulus reconcile much of the controversy on this topic. Copyright © 2015 the authors 0270-6474/15/358515-16$15.00/0.

  12. Shared Sensory Estimates for Human Motion Perception and Pursuit Eye Movements

    PubMed Central

    Mukherjee, Trishna; Battifarano, Matthew; Simoncini, Claudio

    2015-01-01

    Are sensory estimates formed centrally in the brain and then shared between perceptual and motor pathways or is centrally represented sensory activity decoded independently to drive awareness and action? Questions about the brain's information flow pose a challenge because systems-level estimates of environmental signals are only accessible indirectly as behavior. Assessing whether sensory estimates are shared between perceptual and motor circuits requires comparing perceptual reports with motor behavior arising from the same sensory activity. Extrastriate visual cortex both mediates the perception of visual motion and provides the visual inputs for behaviors such as smooth pursuit eye movements. Pursuit has been a valuable testing ground for theories of sensory information processing because the neural circuits and physiological response properties of motion-responsive cortical areas are well studied, sensory estimates of visual motion signals are formed quickly, and the initiation of pursuit is closely coupled to sensory estimates of target motion. Here, we analyzed variability in visually driven smooth pursuit and perceptual reports of target direction and speed in human subjects while we manipulated the signal-to-noise level of motion estimates. Comparable levels of variability throughout viewing time and across conditions provide evidence for shared noise sources in the perception and action pathways arising from a common sensory estimate. We found that conditions that create poor, low-gain pursuit create a discrepancy between the precision of perception and that of pursuit. Differences in pursuit gain arising from differences in optic flow strength in the stimulus reconcile much of the controversy on this topic. PMID:26041919

  13. An integrated network visualization framework towards metabolic engineering applications.

    PubMed

    Noronha, Alberto; Vilaça, Paulo; Rocha, Miguel

    2014-12-30

    Over the last years, several methods for the phenotype simulation of microorganisms under specified genetic and environmental conditions have been proposed in the context of Metabolic Engineering (ME). These methods provided insight on the functioning of microbial metabolism and played a key role in the design of genetic modifications that can lead to strains of industrial interest. On the other hand, in the context of Systems Biology research, biological network visualization has reinforced its role as a core tool in understanding biological processes. However, it has been scarcely used to foster ME related methods, in spite of the acknowledged potential. In this work, open-source software that aims to fill the gap between ME and metabolic network visualization is proposed, in the form of a plugin to the OptFlux ME platform. The framework is based on an abstract layer, where the network is represented as a bipartite graph containing minimal information about the underlying entities and their desired relative placement. The framework provides input/output support for networks specified in standard formats, such as XGMML, SBGN or SBML, providing a connection to genome-scale metabolic models. A user interface makes it possible to edit, manipulate and query nodes in the network, providing tools to visualize diverse effects, including visual filters and appearance changes (e.g. colors, shapes and sizes). These tools are particularly interesting for ME, since they allow overlaying phenotype simulation results or elementary flux modes over the networks. The framework and its source code are freely available, together with documentation and other resources, being illustrated with well documented case studies.
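
    The abstract layer described above — a bipartite graph with metabolite nodes on one side and reaction nodes on the other — can be sketched minimally as follows. Class, method, and reaction names are illustrative, not the plugin's actual API.

    ```python
    class BipartiteMetabolicGraph:
        """Minimal sketch of a bipartite metabolic network: two node kinds
        (metabolites, reactions) and directed edges for substrate/product
        roles. Real tools additionally attach layout hints and phenotype
        overlays (e.g. flux values) to such a structure."""

        def __init__(self):
            self.metabolites, self.reactions = set(), set()
            # (metabolite, reaction, role) with role in {"substrate", "product"}
            self.edges = []

        def add_reaction(self, rid, substrates, products):
            self.reactions.add(rid)
            for m in substrates:
                self.metabolites.add(m)
                self.edges.append((m, rid, "substrate"))
            for m in products:
                self.metabolites.add(m)
                self.edges.append((m, rid, "product"))

        def neighbors(self, metabolite):
            """Reactions that consume or produce the given metabolite."""
            return sorted({r for m, r, _ in self.edges if m == metabolite})

    # Toy glycolysis fragment for illustration.
    g = BipartiteMetabolicGraph()
    g.add_reaction("hexokinase", ["glucose", "ATP"], ["G6P", "ADP"])
    g.add_reaction("pgi", ["G6P"], ["F6P"])
    ```

    Because edges only ever connect a metabolite to a reaction, the bipartite invariant holds by construction, which is what lets a visualization layer place the two node kinds with different shapes and overlay per-reaction simulation results.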

  14. Dissociable neural correlates of contour completion and contour representation in illusory contour perception.

    PubMed

    Wu, Xiang; He, Sheng; Bushara, Khalaf; Zeng, Feiyan; Liu, Ying; Zhang, Daren

    2012-10-01

    Object recognition occurs even when environmental information is incomplete. Illusory contours (ICs), in which a contour is perceived though the contour edges are incomplete, have been extensively studied as an example of such a visual completion phenomenon. Despite the neural activity in response to ICs in visual cortical areas from low (V1 and V2) to high (LOC: the lateral occipital cortex) levels, the details of the neural processing underlying IC perception remain largely unclear. For example, how do the visual areas function in IC perception and how do they interact to achieve coherent contour perception? IC perception involves the process of completing the local discrete contour edges (contour completion) and the process of representing the global completed contour information (contour representation). Here, functional magnetic resonance imaging was used to dissociate contour completion and contour representation by varying each in opposite directions. The results show that the neural activity was stronger to stimuli with more contour completion than to stimuli with more contour representation in V1 and V2, which was the reverse of that in the LOC. When inspecting the neural activity change across the visual pathway, the activation remained high for the stimuli with more contour completion and increased for the stimuli with more contour representation. These results suggest distinct neural correlates of contour completion and contour representation, and the possible collaboration between the two processes during IC perception, indicating a neural connection between the discrete retinal input and the coherent visual percept. Copyright © 2011 Wiley Periodicals, Inc.

  15. A hybrid machine learning model to predict and visualize nitrate concentration throughout the Central Valley aquifer, California, USA

    USGS Publications Warehouse

    Ransom, Katherine M.; Nolan, Bernard T.; Traum, Jonathan A.; Faunt, Claudia; Bell, Andrew M.; Gronberg, Jo Ann M.; Wheeler, David C.; Zamora, Celia; Jurgens, Bryant; Schwarz, Gregory E.; Belitz, Kenneth; Eberts, Sandra; Kourakos, George; Harter, Thomas

    2017-01-01

    Intense demand for water in the Central Valley of California and related increases in groundwater nitrate concentration threaten the sustainability of the groundwater resource. To assess contamination risk in the region, we developed a hybrid, non-linear, machine learning model within a statistical learning framework to predict nitrate contamination of groundwater to depths of approximately 500 m below ground surface. A database of 145 predictor variables representing well characteristics, historical and current field and landscape-scale nitrogen mass balances, historical and current land use, oxidation/reduction conditions, groundwater flow, climate, soil characteristics, depth to groundwater, and groundwater age was assigned to over 6000 private supply and public supply wells previously measured for nitrate and located throughout the study area. The boosted regression tree (BRT) method was used to screen and rank variables to predict nitrate concentration at the depths of domestic and public well supplies. The novel approach included, as predictor variables, outputs from existing physically based models of the Central Valley. The top five most important predictor variables included two oxidation/reduction variables (probability of manganese concentration exceeding 50 ppb and probability of dissolved oxygen concentration below 0.5 ppm), field-scale adjusted unsaturated zone nitrogen input for the 1975 time period, average difference between precipitation and evapotranspiration during the years 1971-2000, and 1992 total landscape nitrogen input. Twenty-five variables were selected for the final model for log-transformed nitrate. In general, an increasing probability of anoxic conditions and increasing precipitation relative to potential evapotranspiration corresponded to decreased nitrate concentration predictions. Conversely, increasing 1975 unsaturated zone nitrogen leaching flux and 1992 total landscape nitrogen input had an increasing relative impact on nitrate predictions. Three-dimensional visualization indicates that nitrate predictions depend on the probability of anoxic conditions and other factors, and that nitrate predictions generally decreased with increasing groundwater age.
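    The boosted regression tree (BRT) approach described above, in which each stage fits a small tree to the residuals of the current ensemble, can be sketched in pure Python with depth-1 trees (stumps); the tiny one-dimensional dataset, learning rate and stage count below are illustrative and have nothing to do with the Central Valley data.

```python
# Gradient boosting for regression with decision stumps: each stage fits a
# one-split tree to the current residuals and adds a shrunken copy.
def fit_stump(x, r):
    """Best single-threshold split minimizing squared error on residuals r."""
    best = None
    for t in sorted(set(x)):
        left = [ri for xi, ri in zip(x, r) if xi <= t]
        right = [ri for xi, ri in zip(x, r) if xi > t]
        if not left or not right:
            continue
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        sse = (sum((ri - lm) ** 2 for ri in left) +
               sum((ri - rm) ** 2 for ri in right))
        if best is None or sse < best[0]:
            best = (sse, t, lm, rm)
    _, t, lm, rm = best
    return lambda xi: lm if xi <= t else rm

def boost(x, y, n_stages=50, lr=0.1):
    pred = [0.0] * len(x)
    stumps = []
    for _ in range(n_stages):
        resid = [yi - pi for yi, pi in zip(y, pred)]
        s = fit_stump(x, resid)
        stumps.append(s)
        pred = [pi + lr * s(xi) for pi, xi in zip(pred, x)]
    return lambda xi: sum(lr * s(xi) for s in stumps)

# Illustrative data: a noiseless step function is recovered almost exactly.
x = [i / 10 for i in range(20)]
y = [0.0 if xi < 1.0 else 2.0 for xi in x]
model = boost(x, y)
```

    Production BRT implementations add deeper trees, subsampling and stochastic elements, but the residual-fitting loop is the core mechanism.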

  16. Flexible Environmental Modeling with Python and Open-GIS

    NASA Astrophysics Data System (ADS)

    Pryet, Alexandre; Atteia, Olivier; Delottier, Hugo; Cousquer, Yohann

    2015-04-01

    Numerical modeling now represents a prominent task in environmental studies. During the last decades, numerous commercial programs have been made available to environmental modelers. These applications offer user-friendly graphical user interfaces that allow efficient management of many case studies. However, they suffer from a lack of flexibility, and closed-source policies impede source code review and enhancement for original studies. Advanced modeling studies require flexible tools capable of managing thousands of model runs for parameter optimization, uncertainty and sensitivity analysis. In addition, there is a growing need for coupling various numerical models, associating, for instance, groundwater flow modeling with multi-species geochemical reactions. Researchers have produced hundreds of powerful open-source command-line programs. However, there is a need for a flexible graphical user interface allowing efficient processing of the geospatial data that accompanies any environmental study. Here, we present the advantages of using the free and open-source QGIS platform and the Python scripting language for conducting environmental modeling studies. The interactive graphical user interface is first used for the visualization and pre-processing of input geospatial datasets. The Python scripting language is then employed for further input data processing, calls to one or several models, and post-processing of model outputs. Model results are finally sent back to the GIS program, processed and visualized. This approach combines the advantages of interactive graphical interfaces with the flexibility of the Python scripting language for data processing and model calls. The numerous Python modules available facilitate geospatial data processing and numerical analysis of model outputs. Once input data have been prepared with the graphical user interface, models may be run thousands of times from the command line with sequential or parallel calls. We illustrate this approach with several case studies in groundwater hydrology and geochemistry and provide links to several Python libraries that facilitate pre- and post-processing operations.
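    The batch-run pattern described here, preparing inputs once in the GIS and then calling a model many times sequentially or in parallel, might look like the following in Python; `run_model` is a hypothetical stand-in for whatever command-line simulator a given study wraps (e.g. via `subprocess`).

```python
# Parameter-sweep driver: evaluate a model for many parameter sets in
# parallel and collect the outputs for post-processing in the GIS.
from concurrent.futures import ThreadPoolExecutor

def run_model(params):
    """Hypothetical stand-in for one simulator run (e.g. a subprocess call)."""
    recharge, conductivity = params
    return recharge * conductivity  # placeholder "model output"

# Inputs prepared beforehand (e.g. exported from the GIS layer attributes).
param_sets = [(r, k) for r in (0.1, 0.2, 0.3) for k in (1.0, 2.0)]

with ThreadPoolExecutor(max_workers=4) as pool:
    outputs = list(pool.map(run_model, param_sets))

print(len(outputs))  # 6 runs, one output per parameter set
```

    For CPU-bound external simulators, the same driver works with a process pool or with sequential calls; only the executor changes.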

  17. From data to analysis: linking NWChem and Avogadro with the syntax and semantics of Chemical Markup Language.

    PubMed

    de Jong, Wibe A; Walker, Andrew M; Hanwell, Marcus D

    2013-05-24

    Multidisciplinary integrated research requires the ability to couple the diverse sets of data obtained from a range of complex experiments and computer simulations. Integrating data requires semantically rich information. In this paper an end-to-end use of semantically rich data in computational chemistry is demonstrated utilizing the Chemical Markup Language (CML) framework. Semantically rich data is generated by the NWChem computational chemistry software with the FoX library and utilized by the Avogadro molecular editor for analysis and visualization. The NWChem computational chemistry software has been modified and coupled to the FoX library to write CML compliant XML data files. The FoX library was expanded to represent the lexical input files and molecular orbitals used by the computational chemistry software. Draft dictionary entries and a format for molecular orbitals within CML CompChem were developed. The Avogadro application was extended to read in CML data, and display molecular geometry and electronic structure in the GUI allowing for an end-to-end solution where Avogadro can create input structures, generate input files, NWChem can run the calculation and Avogadro can then read in and analyse the CML output produced. The developments outlined in this paper will be made available in future releases of NWChem, FoX, and Avogadro. The production of CML compliant XML files for computational chemistry software such as NWChem can be accomplished relatively easily using the FoX library. The CML data can be read in by a newly developed reader in Avogadro and analysed or visualized in various ways. A community-based effort is needed to further develop the CML CompChem convention and dictionary. This will enable the long-term goal of allowing a researcher to run simple "Google-style" searches of chemistry and physics and have the results of computational calculations returned in a comprehensible form alongside articles from the published literature.

  18. From data to analysis: linking NWChem and Avogadro with the syntax and semantics of Chemical Markup Language

    PubMed Central

    2013-01-01

    Background Multidisciplinary integrated research requires the ability to couple the diverse sets of data obtained from a range of complex experiments and computer simulations. Integrating data requires semantically rich information. In this paper an end-to-end use of semantically rich data in computational chemistry is demonstrated utilizing the Chemical Markup Language (CML) framework. Semantically rich data is generated by the NWChem computational chemistry software with the FoX library and utilized by the Avogadro molecular editor for analysis and visualization. Results The NWChem computational chemistry software has been modified and coupled to the FoX library to write CML compliant XML data files. The FoX library was expanded to represent the lexical input files and molecular orbitals used by the computational chemistry software. Draft dictionary entries and a format for molecular orbitals within CML CompChem were developed. The Avogadro application was extended to read in CML data, and display molecular geometry and electronic structure in the GUI allowing for an end-to-end solution where Avogadro can create input structures, generate input files, NWChem can run the calculation and Avogadro can then read in and analyse the CML output produced. The developments outlined in this paper will be made available in future releases of NWChem, FoX, and Avogadro. Conclusions The production of CML compliant XML files for computational chemistry software such as NWChem can be accomplished relatively easily using the FoX library. The CML data can be read in by a newly developed reader in Avogadro and analysed or visualized in various ways. A community-based effort is needed to further develop the CML CompChem convention and dictionary. 
This will enable the long-term goal of allowing a researcher to run simple “Google-style” searches of chemistry and physics and have the results of computational calculations returned in a comprehensible form alongside articles from the published literature. PMID:23705910
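    A round trip of the kind described, writing a small chemistry document as XML and reading it back for analysis, can be sketched with Python's standard library; the element and attribute names below are simplified stand-ins for illustration, not the actual CML CompChem vocabulary produced by FoX or consumed by Avogadro.

```python
# Write a small CML-style XML document and parse it back, mimicking a
# producer/consumer round trip between two tools.
import xml.etree.ElementTree as ET

mol = ET.Element("molecule", id="water")
atoms = ET.SubElement(mol, "atomArray")
for aid, elem in [("a1", "O"), ("a2", "H"), ("a3", "H")]:
    ET.SubElement(atoms, "atom", id=aid, elementType=elem)

xml_text = ET.tostring(mol, encoding="unicode")

# The "reader" side: parse the serialized document and extract atom symbols.
parsed = ET.fromstring(xml_text)
symbols = [a.get("elementType") for a in parsed.iter("atom")]
print(symbols)  # ['O', 'H', 'H']
```

    The value of a shared, semantically rich convention is precisely that the reader needs no knowledge of the writer beyond the agreed element and dictionary names.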

  19. Feasibility of visual aids for risk evaluation by hospitalized patients with coronary artery disease: results from face-to-face interviews.

    PubMed

    Magliano, Carlos Alberto da Silva; Monteiro, Andrea Liborio; Tura, Bernardo Rangel; Oliveira, Claudia Silvia Rocha; Rebelo, Amanda Rebeca de Oliveira; Pereira, Claudia Cristina de Aguiar

    2018-01-01

    Communicating information about risk and probability to patients is considered a difficult task. In this study, we aim to evaluate the use of visual aids representing perioperative mortality and long-term survival in the communication process for patients diagnosed with coronary artery disease at the National Institute of Cardiology, a Brazilian public hospital specializing in cardiology. One-on-one interviews were conducted between August 1 and November 20, 2017. Patients were asked to imagine that their doctor was seeking their input in the decision regarding which treatment represented the best option for them. Patients were required to choose between alternatives by considering only the different benefits and risks shown in each scenario, described as the proportion of patients who had died during the perioperative period and within 5 years. Each participant evaluated the same eight scenarios. We evaluated their answers in a qualitative and quantitative analysis. The main findings were that all patients verbally expressed concern about perioperative mortality and that 25% did not express concern about long-term mortality. Twelve percent considered the probabilities irrelevant on the grounds that their prognosis would depend on "God's will." Ten percent of the patients disregarded the reported likelihood of perioperative mortality, deciding to focus solely on the "chance of being cured." In the quantitative analysis, the vast majority of respondents chose the "correct" alternatives, meaning that they made consistent and rational choices. The use of visual aids to present risk attributes appeared feasible in our sample. The impact of heuristics and religious beliefs on shared health decision making needs to be explored better in future studies.

  20. Examining the Effect of Age on Visual-Vestibular Self-Motion Perception Using a Driving Paradigm.

    PubMed

    Ramkhalawansingh, Robert; Keshavarz, Behrang; Haycock, Bruce; Shahab, Saba; Campos, Jennifer L

    2017-05-01

    Previous psychophysical research has examined how younger adults and non-human primates integrate visual and vestibular cues to perceive self-motion. However, there is much to be learned about how multisensory self-motion perception changes with age, and how these changes affect performance on everyday tasks involving self-motion. Evidence suggests that older adults display heightened multisensory integration compared with younger adults; however, few previous studies have examined this for visual-vestibular integration. To explore age differences in the way that visual and vestibular cues contribute to self-motion perception, we had younger and older participants complete a basic driving task containing visual and vestibular cues. We compared their performance against a previously established control group that experienced visual cues alone. Performance measures included speed, speed variability, and lateral position. Vestibular inputs resulted in more precise speed control among older adults, but not younger adults, when traversing curves. Older adults demonstrated more variability in lateral position when vestibular inputs were available versus when they were absent. These observations align with previous evidence of age-related differences in multisensory integration and demonstrate that they may extend to visual-vestibular integration. These findings may have implications for vehicle and simulator design when considering older users.

  1. Visuotactile motion congruence enhances gamma-band activity in visual and somatosensory cortices.

    PubMed

    Krebber, Martin; Harwood, James; Spitzer, Bernhard; Keil, Julian; Senkowski, Daniel

    2015-08-15

    When touching and viewing a moving surface our visual and somatosensory systems receive congruent spatiotemporal input. Behavioral studies have shown that motion congruence facilitates interplay between visual and tactile stimuli, but the neural mechanisms underlying this interplay are not well understood. Neural oscillations play a role in motion processing and multisensory integration. They may also be crucial for visuotactile motion processing. In this electroencephalography study, we applied linear beamforming to examine the impact of visuotactile motion congruence on beta and gamma band activity (GBA) in visual and somatosensory cortices. Visual and tactile inputs comprised gratings that moved either in the same or different directions. Participants performed a target detection task that was unrelated to motion congruence. While there were no effects in the beta band (13-21Hz), the power of GBA (50-80Hz) in visual and somatosensory cortices was larger for congruent compared with incongruent motion stimuli. This suggests enhanced bottom-up multisensory processing when visual and tactile gratings moved in the same direction. Supporting its behavioral relevance, GBA was correlated with shorter reaction times in the target detection task. We conclude that motion congruence plays an important role for the integrative processing of visuotactile stimuli in sensory cortices, as reflected by oscillatory responses in the gamma band. Copyright © 2015 Elsevier Inc. All rights reserved.

  2. The Euler’s Graphical User Interface Spreadsheet Calculator for Solving Ordinary Differential Equations by Visual Basic for Application Programming

    NASA Astrophysics Data System (ADS)

    Gaik Tay, Kim; Cheong, Tau Han; Foong Lee, Ming; Kek, Sie Long; Abdul-Kahar, Rosmila

    2017-08-01

    In previous work on the Euler's spreadsheet calculator for solving an ordinary differential equation, Visual Basic for Applications (VBA) programming was used; however, no graphical user interface was developed to capture users' input. This weakness may confuse users about the input and output, since both are displayed in the same worksheet. Besides, the existing Euler's spreadsheet calculator is not interactive, as there is no prompt message if there is a mistake in inputting the parameters. On top of that, there are no user instructions to guide users in inputting the derivative function. Hence, in this paper, we address these limitations by developing a user-friendly and interactive graphical user interface. This improvement aims to capture users' input with user instructions and interactive error-prompt messages implemented in VBA. This Euler's graphical user interface spreadsheet calculator does not act as a black box, as users can click on any cell in the worksheet to see the formula used to implement the numerical scheme. In this way, it can enhance self-learning and life-long learning in implementing the numerical scheme in a spreadsheet and later in any programming language.
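    The numerical scheme behind the calculator is the explicit Euler step y_{n+1} = y_n + h*f(x_n, y_n). A plain-Python version (rather than the paper's VBA) makes the loop explicit:

```python
# Explicit Euler method: advance y by h * f(x, y) at each step.
def euler(f, x0, y0, h, n_steps):
    x, y = x0, y0
    for _ in range(n_steps):
        y += h * f(x, y)
        x += h
    return y

# Example: y' = y, y(0) = 1, so y(1) = e ~ 2.71828; Euler underestimates it.
approx = euler(lambda x, y: y, 0.0, 1.0, 0.001, 1000)
print(approx)
```

    In a spreadsheet, each loop iteration corresponds to one row, which is what lets users inspect the per-step formula directly in the cells.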

  3. ERGONOMICS ABSTRACTS 48983-49619.

    ERIC Educational Resources Information Center

    Ministry of Technology, London (England). Warren Spring Lab.

    THE LITERATURE OF ERGONOMICS, OR BIOTECHNOLOGY, IS CLASSIFIED INTO 15 AREAS--METHODS, SYSTEMS OF MEN AND MACHINES, VISUAL AND AUDITORY AND OTHER INPUTS AND PROCESSES, INPUT CHANNELS, BODY MEASUREMENTS, DESIGN OF CONTROLS AND INTEGRATION WITH DISPLAYS, LAYOUT OF PANELS AND CONSOLES, DESIGN OF WORK SPACE, CLOTHING AND PERSONAL EQUIPMENT, SPECIAL…

  4. User-interactive electronic skin for instantaneous pressure visualization

    NASA Astrophysics Data System (ADS)

    Wang, Chuan; Hwang, David; Yu, Zhibin; Takei, Kuniharu; Park, Junwoo; Chen, Teresa; Ma, Biwu; Javey, Ali

    2013-10-01

    Electronic skin (e-skin) presents a network of mechanically flexible sensors that can conformally wrap irregular surfaces and spatially map and quantify various stimuli. Previous works on e-skin have focused on the optimization of pressure sensors interfaced with an electronic readout, whereas user interfaces based on a human-readable output were not explored. Here, we report the first user-interactive e-skin that not only spatially maps the applied pressure but also provides an instantaneous visual response through a built-in active-matrix organic light-emitting diode display with red, green and blue pixels. In this system, organic light-emitting diodes (OLEDs) are turned on locally where the surface is touched, and the intensity of the emitted light quantifies the magnitude of the applied pressure. This work represents a system-on-plastic demonstration where three distinct electronic components—thin-film transistor, pressure sensor and OLED arrays—are monolithically integrated over large areas on a single plastic substrate. The reported e-skin may find a wide range of applications in interactive input/control devices, smart wallpapers, robotics and medical/health monitoring devices.

  5. User-interactive electronic skin for instantaneous pressure visualization.

    PubMed

    Wang, Chuan; Hwang, David; Yu, Zhibin; Takei, Kuniharu; Park, Junwoo; Chen, Teresa; Ma, Biwu; Javey, Ali

    2013-10-01

    Electronic skin (e-skin) presents a network of mechanically flexible sensors that can conformally wrap irregular surfaces and spatially map and quantify various stimuli. Previous works on e-skin have focused on the optimization of pressure sensors interfaced with an electronic readout, whereas user interfaces based on a human-readable output were not explored. Here, we report the first user-interactive e-skin that not only spatially maps the applied pressure but also provides an instantaneous visual response through a built-in active-matrix organic light-emitting diode display with red, green and blue pixels. In this system, organic light-emitting diodes (OLEDs) are turned on locally where the surface is touched, and the intensity of the emitted light quantifies the magnitude of the applied pressure. This work represents a system-on-plastic demonstration where three distinct electronic components--thin-film transistor, pressure sensor and OLED arrays--are monolithically integrated over large areas on a single plastic substrate. The reported e-skin may find a wide range of applications in interactive input/control devices, smart wallpapers, robotics and medical/health monitoring devices.

  6. Bio-inspired approach to multistage image processing

    NASA Astrophysics Data System (ADS)

    Timchenko, Leonid I.; Pavlov, Sergii V.; Kokryatskaya, Natalia I.; Poplavska, Anna A.; Kobylyanska, Iryna M.; Burdenyuk, Iryna I.; Wójcik, Waldemar; Uvaysova, Svetlana; Orazbekov, Zhassulan; Kashaganova, Gulzhan

    2017-08-01

    Multistage integration of visual information in the brain allows people to respond quickly to the most significant stimuli while preserving the ability to recognize small details in the image. Implementation of this principle in technical systems can lead to more efficient processing procedures. The multistage approach to image processing described in this paper comprises the main types of cortical multistage convergence. One of these types occurs within each visual pathway and the other between the pathways. This approach maps input images into a flexible hierarchy which reflects the complexity of the image data. The procedures of temporal image decomposition and hierarchy formation are described in mathematical terms. The multistage system highlights spatial regularities, which are passed through a number of transformational levels to generate a coded representation of the image which encapsulates, in a compact manner, structure at different hierarchical levels of the image. At each processing stage a single output result is computed, allowing a very quick response from the system. The result is represented as an activity pattern, which can be compared with previously computed patterns on the basis of the closest match.
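    The coarse-to-fine decomposition sketched in the abstract, where each stage yields a quick global summary while finer detail survives at lower levels, is reminiscent of an image pyramid; the 2x2 block-averaging toy below illustrates that general idea only and is not the authors' actual procedure.

```python
# Multistage decomposition sketch: repeatedly halve a grayscale image by
# 2x2 block averaging, so each level gives a faster, coarser summary.
def downsample(img):
    h, w = len(img), len(img[0])
    return [[(img[2*r][2*c] + img[2*r][2*c+1] +
              img[2*r+1][2*c] + img[2*r+1][2*c+1]) / 4.0
             for c in range(w // 2)]
            for r in range(h // 2)]

def pyramid(img):
    levels = [img]
    while len(levels[-1]) > 1 and len(levels[-1][0]) > 1:
        levels.append(downsample(levels[-1]))
    return levels

img = [[0, 0, 8, 8],
       [0, 0, 8, 8],
       [8, 8, 0, 0],
       [8, 8, 0, 0]]
levels = pyramid(img)
print(levels[1])  # [[0.0, 8.0], [8.0, 0.0]]
print(levels[2])  # [[4.0]]
```

    Matching an input against stored patterns at the top (smallest) level first gives the fast response; descending the levels restores detail when needed.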

  7. Visual Input to the Drosophila Central Complex by Developmentally and Functionally Distinct Neuronal Populations.

    PubMed

    Omoto, Jaison Jiro; Keleş, Mehmet Fatih; Nguyen, Bao-Chau Minh; Bolanos, Cheyenne; Lovick, Jennifer Kelly; Frye, Mark Arthur; Hartenstein, Volker

    2017-04-24

    The Drosophila central brain consists of stereotyped neural lineages, developmental-structural units of macrocircuitry formed by the sibling neurons of single progenitors called neuroblasts. We demonstrate that the lineage principle guides the connectivity and function of neurons, providing input to the central complex, a collection of neuropil compartments important for visually guided behaviors. One of these compartments is the ellipsoid body (EB), a structure formed largely by the axons of ring (R) neurons, all of which are generated by a single lineage, DALv2. Two further lineages, DALcl1 and DALcl2, produce neurons that connect the anterior optic tubercle, a central brain visual center, with R neurons. Finally, DALcl1/2 receive input from visual projection neurons of the optic lobe medulla, completing a three-legged circuit that we call the anterior visual pathway (AVP). The AVP bears a fundamental resemblance to the sky-compass pathway, a visual navigation circuit described in other insects. Neuroanatomical analysis and two-photon calcium imaging demonstrate that DALcl1 and DALcl2 form two parallel channels, establishing connections with R neurons located in the peripheral and central domains of the EB, respectively. Although neurons of both lineages preferentially respond to bright objects, DALcl1 neurons have small ipsilateral, retinotopically ordered receptive fields, whereas DALcl2 neurons share a large excitatory receptive field in the contralateral hemifield. DALcl2 neurons become inhibited when the object enters the ipsilateral hemifield and display an additional excitation after the object leaves the field of view. Thus, the spatial position of a bright feature, such as a celestial body, may be encoded within this pathway. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. Enhancing links between visual short term memory, visual attention and cognitive control processes through practice: An electrophysiological insight.

    PubMed

    Fuggetta, Giorgio; Duke, Philip A

    2017-05-01

    The operation of attention on visible objects involves a sequence of cognitive processes. The current study firstly aimed to elucidate the effects of practice on neural mechanisms underlying attentional processes as measured with both behavioural and electrophysiological measures. Secondly, it aimed to identify any pattern in the relationship between Event-Related Potential (ERP) components which play a role in the operation of attention in vision. Twenty-seven participants took part in two recording sessions one week apart, performing an experimental paradigm which combined a match-to-sample task with a memory-guided efficient visual-search task within one trial sequence. Overall, practice decreased behavioural response times, increased accuracy, and modulated several ERP components that represent cognitive and neural processing stages. This neuromodulation through practice was also associated with an enhanced link between behavioural measures and ERP components and with an enhanced cortico-cortical interaction of functionally interconnected ERP components. Principal component analysis (PCA) of the ERP amplitude data revealed three components, having different rostro-caudal topographic representations. The first component included both the centro-parietal and parieto-occipital mismatch triggered negativity - involved in integration of visual representations of the target with current task-relevant representations stored in visual working memory - loaded with second negative posterior-bilateral (N2pb) component, involved in categorising specific pop-out target features. The second component comprised the amplitude of bilateral anterior P2 - related to detection of a specific pop-out feature - loaded with bilateral anterior N2, related to detection of conflicting features, and fronto-central mismatch triggered negativity. 
The third component included the parieto-occipital N1 - related to early neural responses to the stimulus array - which loaded with the second negative posterior-contralateral (N2pc) component, mediating the process of orienting and focusing covert attention on peripheral target features. We discussed these three components as representing different neurocognitive systems modulated with practice within which the input selection process operates. Crown Copyright © 2017. Published by Elsevier B.V. All rights reserved.

  9. What can neuromorphic event-driven precise timing add to spike-based pattern recognition?

    PubMed

    Akolkar, Himanshu; Meyer, Cedric; Clady, Xavier; Marre, Olivier; Bartolozzi, Chiara; Panzeri, Stefano; Benosman, Ryad

    2015-03-01

    This letter introduces a study to precisely measure what an increase in spike timing precision can add to spike-driven pattern recognition algorithms. The concept of generating spikes from images by converting gray levels into spike timings currently underlies almost every spike-based model of biological visual systems. The use of images naturally leads to generating artificial, incorrect and redundant spike timings and, more important, also contradicts biological findings indicating that visual processing is massively parallel and asynchronous, with high temporal resolution. A new concept for acquiring visual information through pixel-individual asynchronous level-crossing sampling has been proposed in a recent generation of asynchronous neuromorphic visual sensors. Unlike conventional cameras, these sensors acquire data not at fixed points in time for the entire array but at fixed amplitude changes of their input, producing output that is optimally sparse in space and time: pixel-individual, precisely timed events emitted only when new (previously unknown) information is available (event-based). This letter uses the high temporal resolution spiking output of neuromorphic event-based visual sensors to show that lowering time precision degrades performance on several recognition tasks, specifically when reaching the conventional range of machine vision acquisition frequencies (30-60 Hz). The use of information theory to characterize separability between classes for each temporal resolution shows that high temporal acquisition provides up to 70% more information than conventional spikes generated from frame-based acquisition as used in standard artificial vision, thus drastically increasing the separability between classes of objects. Experiments on real data show that the amount of information loss is correlated with temporal precision. Our information-theoretic study highlights the potential of neuromorphic asynchronous visual sensors for both practical applications and theoretical investigations. Moreover, it suggests that representing visual information as a precise sequence of spike times, as reported in the retina, offers considerable advantages for neuro-inspired visual computations.
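    The pixel-individual level-crossing sampling that these sensors implement can be illustrated for a single pixel's intensity trace: an event (with ON/OFF polarity) is emitted only when the input has moved by a fixed amplitude `delta` since the last event. The threshold and signal values below are arbitrary illustrative choices.

```python
# Level-crossing (event-based) sampling: emit a timestamped event only when
# the signal has changed by at least `delta` since the last event.
def level_crossing_events(samples, delta):
    events = []
    ref = samples[0]
    for t, v in enumerate(samples):
        if abs(v - ref) >= delta:
            events.append((t, 1 if v > ref else -1))  # ON / OFF polarity
            ref = v
    return events

signal = [0.0, 0.1, 0.2, 0.9, 1.0, 1.0, 1.0, 0.2, 0.1]
events = level_crossing_events(signal, delta=0.5)
print(events)  # [(3, 1), (7, -1)]: only the two fast transitions fire
```

    Compared with sampling every tick, the event stream is sparse where the signal is static and precisely timed where it changes, which is the property the letter's information analysis exploits.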

  10. Models of Speed Discrimination

    NASA Technical Reports Server (NTRS)

    1997-01-01

    The prime purpose of this project was to investigate various theoretical issues concerning the integration of information across visual space. To date, most of the research efforts in the study of the visual system seem to have been focused in two almost non-overlapping directions. One research focus has been low level perception as studied by psychophysics. The other has been the study of high level vision exemplified by the study of object perception. Most of the effort in psychophysics has been devoted to the search for the fundamental "features" of perception. The general idea is that the most peripheral processes of the visual system decompose the input into features that are then used for classification and recognition. The experimental and theoretical focus has been on finding and describing the analyzers that decompose images into useful components. Various models are then compared to physiological measurements performed on neurons in the sensory systems. In the study of higher level perception, the work has been focused on the representation of objects and on the connections between various physical effects and object perception. In this category we find the perception of 3D from a variety of physical measurements including motion, shading and other physical phenomena. With few exceptions, there has been very limited development of theories describing how the visual system might combine the output of the analyzers to form the representation of visual objects. Therefore, the processes underlying the integration of information over space represent critical aspects of the vision system. Understanding these processes will have implications for our expectations of the underlying physiological mechanisms, as well as for our models of the internal representation of visual percepts. In this project, we explored several mechanisms related to spatial summation, attention, and eye movements. The project comprised three components: 1. Modeling visual search for the detection of speed deviation. 2. Perception of moving objects. 3. Exploring the role of eye movements in various visual tasks.

  11. Interactive Visual Analytics Approach for Exploration of Geochemical Model Simulations with Different Parameter Sets

    NASA Astrophysics Data System (ADS)

    Jatnieks, Janis; De Lucia, Marco; Sips, Mike; Dransch, Doris

    2015-04-01

    Many geoscience applications can benefit from testing many combinations of input parameters for geochemical simulation models. It is, however, a challenge to screen the input and output data from the model to identify the significant relationships between input parameters and output variables. To address this problem we propose a Visual Analytics approach that has been developed in an ongoing collaboration between computer science and geoscience researchers. Our Visual Analytics approach uses visualization methods of hierarchical horizontal axis, multi-factor stacked bar charts and interactive semi-automated filtering for input and output data, together with automatic sensitivity analysis. This guides the users towards significant relationships. We implement our approach as an interactive data exploration tool. It is designed with flexibility in mind, so that a diverse set of tasks such as inverse modeling, sensitivity analysis and model parameter refinement can be supported. Here we demonstrate the capabilities of our approach with two examples from gas storage applications. In the first example, our Visual Analytics approach enabled the analyst to observe how the element concentrations change around previously established baselines in response to thousands of different combinations of mineral phases. This supported combinatorial inverse modeling for interpreting observations about the chemical composition of the formation fluids at the Ketzin pilot site for CO2 storage. The results indicate that, within the experimental error range, the formation fluid cannot be considered to be at local thermodynamic equilibrium with the mineral assemblage of the reservoir rock. This is a valuable insight from the predictive geochemical modeling for the Ketzin site. In the second example, our approach supports sensitivity analysis for a reaction involving the reductive dissolution of pyrite with formation of pyrrhotite in the presence of gaseous hydrogen. We determine that this reaction is thermodynamically favorable under a broad range of conditions, including low temperatures and the absence of microbial catalysts. Our approach has potential for use in other applications that involve exploration of relationships in geochemical simulation model data.
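
    The core screening step described above, ranking input parameters by the strength of their relationship to an output variable, can be sketched as a toy sensitivity screen. The model, parameter names and value ranges below are hypothetical stand-ins, not the Ketzin geochemistry:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical screening: 1000 sampled combinations of three input
# parameters for a toy model; the goal is to rank inputs by how strongly
# they relate to the output variable.
n = 1000
params = {
    "temperature": rng.uniform(20, 80, n),
    "pressure": rng.uniform(1, 100, n),
    "ph": rng.uniform(4, 9, n),
}
# Toy output: depends strongly on temperature, weakly on pH, not on pressure.
output = 2.0 * params["temperature"] + 0.1 * params["ph"] + rng.normal(0, 1, n)

def screen(params, output):
    """Rank input parameters by absolute correlation with the output."""
    scores = {name: abs(np.corrcoef(vals, output)[0, 1])
              for name, vals in params.items()}
    return sorted(scores, key=scores.get, reverse=True)

ranking = screen(params, output)
print(ranking)  # temperature should rank first
```

    A real screen would use a proper sensitivity method (e.g. variance-based indices) rather than plain correlation, but the filtering idea is the same.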

  12. [Visual input affects the expression of the early genes c-Fos and ZENK in auditory telencephalic centers of pied flycatcher nestlings during the acoustically-guided freezing].

    PubMed

    Korneeva, E V; Tiunova, A A; Aleksandrov, L I; Golubeva, T B; Anokhin, K V

    2014-01-01

    The present study analyzed expression of the transcription factors c-Fos and ZENK in the telencephalic auditory centers (field L, caudomedial nidopallium and caudomedial mesopallium) of 9-day-old pied flycatcher (Ficedula hypoleuca) nestlings involved in acoustically-guided defense behavior. A species-typical alarm call was presented to the young in three groups: 1--an intact group (sighted control), 2--nestlings visually deprived for a short time just before the experiment (unsighted control), and 3--nestlings visually deprived right after hatching (experimental deprivation). Induction of c-Fos as well as ZENK in nestlings from the experimental deprivation group was decreased in both hemispheres as compared with the intact group. In the unsighted control group, only a decrease of c-Fos induction was observed, exclusively in the right hemisphere. These findings suggest that limitation of visual input changes the population of neurons involved in the acoustically-guided behavior, the effect being dependent on the duration of deprivation.

  13. Dissociation and Convergence of the Dorsal and Ventral Visual Streams in the Human Prefrontal Cortex

    PubMed Central

    Takahashi, Emi; Ohki, Kenichi; Kim, Dae-Shik

    2012-01-01

    Visual information is largely processed through two pathways in the primate brain: an object pathway from the primary visual cortex to the temporal cortex (ventral stream) and a spatial pathway to the parietal cortex (dorsal stream). Whether and to what extent this dissociation exists in the human prefrontal cortex (PFC) has long been debated. We examined anatomical connections from functionally defined areas in the temporal and parietal cortices to the PFC, using noninvasive functional and diffusion-weighted magnetic resonance imaging. The right inferior frontal gyrus (IFG) received converging input from both streams, while the right superior frontal gyrus received input only from the dorsal stream. Interstream functional connectivity to the IFG was dynamically recruited only when both object and spatial information were processed. These results suggest that the human PFC receives dissociated and converging visual pathways, and that the right IFG region serves as an integrator of the two types of information. PMID:23063444

  14. Direct visuomotor mapping for fast visually-evoked arm movements.

    PubMed

    Reynolds, Raymond F; Day, Brian L

    2012-12-01

    In contrast to conventional reaction time (RT) tasks, saccadic RTs to visual targets are very fast and unaffected by the number of possible targets. This can be explained by the sub-cortical circuitry underlying eye movements, which involves direct mapping between retinal input and motor output in the superior colliculus. Here we asked if the choice-invariance established for the eyes also applies to a special class of fast visuomotor responses of the upper limb. Using a target-pointing paradigm we observed very fast reaction times (<150 ms) which were completely unaffected as the number of possible target choices was increased from 1 to 4. When we introduced a condition of altered stimulus-response mapping, RT went up and a cost of choice was observed. These results can be explained by direct mapping between visual input and motor output, compatible with a sub-cortical pathway for visual control of the upper limb. Copyright © 2012 Elsevier Ltd. All rights reserved.

  15. Looking for ideas: Eye behavior during goal-directed internally focused cognition☆

    PubMed Central

    Walcher, Sonja; Körner, Christof; Benedek, Mathias

    2017-01-01

    Humans have a highly developed visual system, yet we spend a high proportion of our time awake ignoring the visual world and attending to our own thoughts. The present study examined eye movement characteristics of goal-directed internally focused cognition. Deliberate internally focused cognition was induced by an idea generation task. A letter-by-letter reading task served as external task. Idea generation (vs. reading) was associated with more and longer blinks and fewer microsaccades indicating an attenuation of visual input. Idea generation was further associated with more and shorter fixations, more saccades and saccades with higher amplitudes as well as heightened stimulus-independent variation of eye vergence. The latter results suggest a coupling of eye behavior to internally generated information and associated cognitive processes, i.e. searching for ideas. Our results support eye behavior patterns as indicators of goal-directed internally focused cognition through mechanisms of attenuation of visual input and coupling of eye behavior to internally generated information. PMID:28689088

  16. Simulation Exploration through Immersive Parallel Planes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brunhart-Lupo, Nicholas J; Bush, Brian W; Gruchalla, Kenny M

    We present a visualization-driven simulation system that tightly couples systems dynamics simulations with an immersive virtual environment to allow analysts to rapidly develop and test hypotheses in a high-dimensional parameter space. To accomplish this, we generalize the two-dimensional parallel-coordinates statistical graphic as an immersive 'parallel-planes' visualization for multivariate time series emitted by simulations running in parallel with the visualization. In contrast to traditional parallel coordinates, which map the multivariate dimensions onto coordinate axes represented by a series of parallel lines, we map pairs of the multivariate dimensions onto a series of parallel rectangles. As in the case of parallel coordinates, each individual observation in the dataset is mapped to a polyline whose vertices coincide with its coordinate values. Regions of the rectangles can be 'brushed' to highlight and select observations of interest; a 'slider' control allows the user to filter the observations by their time coordinate. In an immersive virtual environment, users interact with the parallel planes using a joystick that can select regions on the planes, manipulate selections, and filter time. The brushing and selection actions are used both to explore existing data and to launch additional simulations corresponding to the visually selected portions of the input parameter space. As soon as the new simulations complete, their resulting observations are displayed in the virtual environment. This tight feedback loop between simulation and immersive analytics accelerates users' realization of insights about the simulation and its output.
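
    The parallel-coordinates mechanics described above (each observation as a polyline, interval "brushing" on an axis) can be sketched in a few lines. This is a minimal illustration of the underlying statistical graphic, not the authors' immersive system; the data and axis choices are invented:

```python
import numpy as np

# 50 multivariate observations in 4 dimensions; in parallel coordinates
# each observation becomes a polyline whose vertex on axis i is its value
# in dimension i.
rng = np.random.default_rng(1)
data = rng.uniform(0, 1, size=(50, 4))

def to_polyline(obs):
    """Vertices (axis index, value) of the polyline for one observation."""
    return [(i, v) for i, v in enumerate(obs)]

def brush(data, axis, lo, hi):
    """Indices of observations whose value on `axis` lies in [lo, hi]."""
    mask = (data[:, axis] >= lo) & (data[:, axis] <= hi)
    return np.flatnonzero(mask)

selected = brush(data, axis=2, lo=0.0, hi=0.25)
print(len(selected), "observations brushed on axis 2")
```

    The paper's 'parallel planes' generalization replaces each axis with a rectangle carrying a pair of dimensions, but brushing and selection work on the same principle.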

  17. Neural dynamics of image representation in the primary visual cortex

    PubMed Central

    Yan, Xiaogang; Khambhati, Ankit; Liu, Lei; Lee, Tai Sing

    2013-01-01

    Horizontal connections in the primary visual cortex have been hypothesized to play a number of computational roles: association field for contour completion, surface interpolation, surround suppression, and saliency computation. Here, we argue that horizontal connections might also serve a critical role of computing the appropriate codes for image representation. That the early visual cortex or V1 explicitly represents the image we perceive has been a common assumption in computational theories of efficient coding (Olshausen and Field 1996), yet such a framework for understanding the circuitry in V1 has not been seriously entertained in the neurophysiological community. In fact, a number of recent fMRI and neurophysiological studies cast doubt on the neural validity of such an isomorphic representation (Cornelissen et al. 2006, von der Heydt et al. 2003). In this study, we investigated, neurophysiologically, how V1 neurons respond to uniform color surfaces and show that spiking activities of neurons can be decomposed into three components: a bottom-up feedforward input, an articulation of color tuning and a contextual modulation signal that is inversely proportional to the distance away from the bounding contrast border. We demonstrate through computational simulations that the behaviors of a model for image representation are consistent with many aspects of our neural observations. We conclude that the hypothesis of isomorphic representation of images in V1 remains viable and this hypothesis suggests an additional new interpretation of the functional roles of horizontal connections in the primary visual cortex. PMID:22944076

  18. Simulation Exploration through Immersive Parallel Planes: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brunhart-Lupo, Nicholas; Bush, Brian W.; Gruchalla, Kenny

    We present a visualization-driven simulation system that tightly couples systems dynamics simulations with an immersive virtual environment to allow analysts to rapidly develop and test hypotheses in a high-dimensional parameter space. To accomplish this, we generalize the two-dimensional parallel-coordinates statistical graphic as an immersive 'parallel-planes' visualization for multivariate time series emitted by simulations running in parallel with the visualization. In contrast to traditional parallel coordinates, which map the multivariate dimensions onto coordinate axes represented by a series of parallel lines, we map pairs of the multivariate dimensions onto a series of parallel rectangles. As in the case of parallel coordinates, each individual observation in the dataset is mapped to a polyline whose vertices coincide with its coordinate values. Regions of the rectangles can be 'brushed' to highlight and select observations of interest; a 'slider' control allows the user to filter the observations by their time coordinate. In an immersive virtual environment, users interact with the parallel planes using a joystick that can select regions on the planes, manipulate selections, and filter time. The brushing and selection actions are used both to explore existing data and to launch additional simulations corresponding to the visually selected portions of the input parameter space. As soon as the new simulations complete, their resulting observations are displayed in the virtual environment. This tight feedback loop between simulation and immersive analytics accelerates users' realization of insights about the simulation and its output.

  19. Response Modality Variations Affect Determinations of Children's Learning Styles.

    ERIC Educational Resources Information Center

    Janowitz, Jeffrey M.

    The Swassing-Barbe Modality Index (SBMI) uses visual, auditory, and tactile inputs, but only reconstructed output, to measure children's modality strengths. In this experiment, the SBMI's three input modalities were crossed with two output modalities (spoken and drawn) in addition to the reconstructed standard to result in nine treatment…

  20. Reasoning over Taxonomic Change: Exploring Alignments for the Perelleschus Use Case

    PubMed Central

    Franz, Nico M.; Chen, Mingmin; Yu, Shizhuo; Kianmajd, Parisa; Bowers, Shawn; Ludäscher, Bertram

    2015-01-01

    Classifications and phylogenetic inferences of organismal groups change in light of new insights. Over time these changes can result in an imperfect tracking of taxonomic perspectives through the re-/use of Code-compliant or informal names. To mitigate these limitations, we introduce a novel approach for aligning taxonomies through the interaction of human experts and logic reasoners. We explore the performance of this approach with the Perelleschus use case of Franz & Cardona-Duque (2013). The use case includes six taxonomies published from 1936 to 2013, 54 taxonomic concepts (i.e., circumscriptions of names individuated according to their respective source publications), and 75 expert-asserted Region Connection Calculus articulations (e.g., congruence, proper inclusion, overlap, or exclusion). An Open Source reasoning toolkit is used to analyze 13 paired Perelleschus taxonomy alignments under heterogeneous constraints and interpretations. The reasoning workflow optimizes the logical consistency and expressiveness of the input and infers the set of maximally informative relations among the entailed taxonomic concepts. The latter are then used to produce merge visualizations that represent all congruent and non-congruent taxonomic elements among the aligned input trees. In this small use case with 6-53 input concepts per alignment, the information gained through the reasoning process is on average one order of magnitude greater than in the input. The approach offers scalable solutions for tracking provenance among succeeding taxonomic perspectives that may have differential biases in naming conventions, phylogenetic resolution, ingroup and outgroup sampling, or ostensive (member-referencing) versus intensional (property-referencing) concepts and articulations. PMID:25700173
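
    A minimal sketch of the articulation idea above: taxonomic concepts are names individuated by their source publication, and expert-asserted Region Connection Calculus relations between them can entail further relations, here via transitivity of proper inclusion. The concept labels and the tiny fixed-point reasoner below are hypothetical illustrations, not the Open Source toolkit used in the study:

```python
# RCC-5 articulations between taxonomic concepts, written here as:
# "==" congruence, "<" proper inclusion of left in right, ">" the inverse,
# "><" overlap, "!" exclusion. Concept labels are invented examples.
articulations = {
    ("Perelleschus_sec1936", "Perelleschus_sec2013"): "<",
    ("Perelleschus_sec2013", "Curculionidae_sec2013"): "<",
}

def infer_transitive(arts):
    """Add relations entailed by transitivity of proper inclusion."""
    inferred = dict(arts)
    changed = True
    while changed:
        changed = False
        for (a, b), r1 in list(inferred.items()):
            for (c, d), r2 in list(inferred.items()):
                if b == c and r1 == "<" and r2 == "<" and (a, d) not in inferred:
                    inferred[(a, d)] = "<"
                    changed = True
    return inferred

result = infer_transitive(articulations)
print(result[("Perelleschus_sec1936", "Curculionidae_sec2013")])
```

    A full RCC-5 reasoner composes all five relations (and handles disjunctive outcomes), which is where the order-of-magnitude information gain reported above comes from; transitive inclusion is just the simplest such entailment.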

  1. The Influence of Visual Feedback and Register Changes on Sign Language Production: A Kinematic Study with Deaf Signers

    ERIC Educational Resources Information Center

    Emmorey, Karen; Gertsberg, Nelly; Korpics, Franco; Wright, Charles E.

    2009-01-01

    Speakers monitor their speech output by listening to their own voice. However, signers do not look directly at their hands and cannot see their own face. We investigated the importance of a visual perceptual loop for sign language monitoring by examining whether changes in visual input alter sign production. Deaf signers produced American Sign…

  2. The role of pulvinar in the transmission of information in the visual hierarchy.

    PubMed

    Cortes, Nelson; van Vreeswijk, Carl

    2012-01-01

    Visual receptive field (RF) attributes in the visual cortex of primates have been explained mainly from cortical connections: visual RFs progress from simple to complex through cortico-cortical pathways from lower to higher levels in the visual hierarchy. This feedforward flow of information is paired with top-down processes through the feedback pathway. Although the hierarchical organization explains the spatial properties of RFs, it is unclear how a non-linear transmission of activity through the visual hierarchy can yield smooth contrast response functions at all levels of the hierarchy. Depending on the gain, non-linear transfer functions create either a bimodal response to contrast, or no contrast dependence of the response, in the highest level of the hierarchy. One possible mechanism to regulate this transmission of visual contrast information from low to high levels involves an external component that shortcuts the flow of information through the hierarchy. A candidate for this shortcut is the pulvinar nucleus of the thalamus. To investigate the representation of stimulus contrast, a hierarchical model network of ten cortical areas is examined. In each level of the network, the activity from the previous layer is integrated and then non-linearly transmitted to the next level. The arrangement of interactions creates a gradient from simple to complex RFs of increasing size as one moves from lower to higher cortical levels. The visual input is modeled as a Gaussian random input whose width codes for the contrast. This input is applied to the first area. The ratio of output activity among different contrast values is analyzed at the last level to assess sensitivity to contrast and contrast-invariant tuning. For a purely cortical system, the output of the last area can be approximately contrast invariant, but the sensitivity to contrast is poor. To account for an alternative visual processing pathway, a parallel pulvinar-like structure of nine areas is coupled to the system through non-reciprocal connections. Compared to the pure feedforward model, the cortico-pulvino-cortical output presents much more sensitivity to contrast and has a similar level of contrast invariance of tuning.
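
    The core difficulty this model addresses, that repeated non-linear transmission either binarizes or flattens a contrast code, can be illustrated with a toy ten-level chain of sigmoid stages. The transfer function and gain values are illustrative assumptions, not the authors' network:

```python
import numpy as np

# Each of 10 levels passes its input through a sigmoid. After many stages
# the response either saturates (high gain: near-binary, "bimodal" output)
# or flattens toward the fixed point (low gain: little contrast dependence).
def sigmoid(x, gain):
    return 1.0 / (1.0 + np.exp(-gain * (x - 0.5)))

def hierarchy_response(contrast, gain, levels=10):
    r = contrast
    for _ in range(levels):
        r = sigmoid(r, gain)
    return r

contrasts = np.linspace(0.0, 1.0, 11)
high = hierarchy_response(contrasts, gain=8.0)  # near-binary response
low = hierarchy_response(contrasts, gain=1.0)   # nearly contrast-independent
print(np.round(high, 3))
print(np.round(low, 3))
```

    A pulvinar-like shortcut, in this picture, injects activity that bypasses part of the chain so that the final stage retains a graded dependence on contrast.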

  3. The Role of Pulvinar in the Transmission of Information in the Visual Hierarchy

    PubMed Central

    Cortes, Nelson; van Vreeswijk, Carl

    2012-01-01

    Visual receptive field (RF) attributes in the visual cortex of primates have been explained mainly from cortical connections: visual RFs progress from simple to complex through cortico-cortical pathways from lower to higher levels in the visual hierarchy. This feedforward flow of information is paired with top-down processes through the feedback pathway. Although the hierarchical organization explains the spatial properties of RFs, it is unclear how a non-linear transmission of activity through the visual hierarchy can yield smooth contrast response functions at all levels of the hierarchy. Depending on the gain, non-linear transfer functions create either a bimodal response to contrast, or no contrast dependence of the response, in the highest level of the hierarchy. One possible mechanism to regulate this transmission of visual contrast information from low to high levels involves an external component that shortcuts the flow of information through the hierarchy. A candidate for this shortcut is the pulvinar nucleus of the thalamus. To investigate the representation of stimulus contrast, a hierarchical model network of ten cortical areas is examined. In each level of the network, the activity from the previous layer is integrated and then non-linearly transmitted to the next level. The arrangement of interactions creates a gradient from simple to complex RFs of increasing size as one moves from lower to higher cortical levels. The visual input is modeled as a Gaussian random input whose width codes for the contrast. This input is applied to the first area. The ratio of output activity among different contrast values is analyzed at the last level to assess sensitivity to contrast and contrast-invariant tuning. For a purely cortical system, the output of the last area can be approximately contrast invariant, but the sensitivity to contrast is poor. To account for an alternative visual processing pathway, a parallel pulvinar-like structure of nine areas is coupled to the system through non-reciprocal connections. Compared to the pure feedforward model, the cortico-pulvino-cortical output presents much more sensitivity to contrast and has a similar level of contrast invariance of tuning. PMID:22654750

  4. Visualization of local Ca2+ dynamics with genetically encoded bioluminescent reporters.

    PubMed

    Rogers, Kelly L; Stinnakre, Jacques; Agulhon, Cendra; Jublot, Delphine; Shorte, Spencer L; Kremer, Eric J; Brûlet, Philippe

    2005-02-01

    Measurements of local Ca2+ signalling at different developmental stages and/or in specific cell types are important for understanding aspects of brain functioning. The use of light excitation in fluorescence imaging can cause phototoxicity, photobleaching and auto-fluorescence. In contrast, bioluminescence does not require the input of radiative energy and can therefore be measured over long periods, with very high temporal resolution. Aequorin is a genetically encoded Ca(2+)-sensitive bioluminescent protein; however, its low quantum yield prevents dynamic measurements of Ca2+ responses in single cells. To overcome this limitation, we recently reported the bi-functional Ca2+ reporter gene, GFP-aequorin (GA), which was developed specifically to improve the light output and stability of aequorin chimeras [V. Baubet et al. (2000) PNAS 97, 7260-7265]. In the current study, we have genetically targeted GA to different microdomains important in synaptic transmission, including the mitochondrial matrix, endoplasmic reticulum, synaptic vesicles and the postsynaptic density. We demonstrate that these reporters enable 'real-time' measurements of subcellular Ca2+ changes in single mammalian neurons using bioluminescence. The high signal-to-noise ratio of these reporters is also important in that it affords the visualization of Ca2+ dynamics in cell-cell communication in neuronal cultures and tissue slices. Further, we demonstrate the utility of this approach in ex-vivo preparations of mammalian retina, a paradigm in which external light input should be controlled. This represents a novel molecular imaging approach for non-invasive monitoring of local Ca2+ dynamics and cellular communication in tissue or whole animal studies.

  5. A system for endobronchial video analysis

    NASA Astrophysics Data System (ADS)

    Byrnes, Patrick D.; Higgins, William E.

    2017-03-01

    Image-guided bronchoscopy is a critical component in the treatment of lung cancer and other pulmonary disorders. During bronchoscopy, a high-resolution endobronchial video stream facilitates guidance through the lungs and allows for visual inspection of a patient's airway mucosal surfaces. Despite the detailed information it contains, little effort has been made to incorporate recorded video into the clinical workflow. Follow-up procedures often required in cancer assessment or asthma treatment could significantly benefit from effectively parsed and summarized video. Tracking diagnostic regions of interest (ROIs) could potentially better equip physicians to detect early airway-wall cancer or improve asthma treatments, such as bronchial thermoplasty. To address this need, we have developed a system for the postoperative analysis of recorded endobronchial video. The system first parses an input video stream into endoscopic shots, derives motion information, and selects salient representative key frames. Next, a semi-automatic method for CT-video registration creates data linkages between a CT-derived airway-tree model and the input video. These data linkages then enable the construction of a CT-video chest model comprised of a bronchoscopy path history (BPH) - defining all airway locations visited during a procedure - and texture-mapping information for rendering registered video frames onto the airway-tree model. A suite of analysis tools is included to visualize and manipulate the extracted data. Video browsing and retrieval is facilitated through a video table of contents (TOC) and a search query interface. The system provides a variety of operational modes and additional functionality, including the ability to define regions of interest. We demonstrate the potential of our system using two human case study examples.

  6. Locomotor sensory organization test: a novel paradigm for the assessment of sensory contributions in gait.

    PubMed

    Chien, Jung Hung; Eikema, Diderik-Jan Anthony; Mukherjee, Mukul; Stergiou, Nicholas

    2014-12-01

    Feedback-based balance control requires the integration of visual, proprioceptive and vestibular input to detect the body's movement within the environment. When the accuracy of sensory signals is compromised, the system reorganizes the relative contributions through a process of sensory recalibration, for upright postural stability to be maintained. Whereas this process has been studied extensively in standing using the Sensory Organization Test (SOT), less is known about these processes in more dynamic tasks such as locomotion. In the present study, ten healthy young adults performed the six conditions of the traditional SOT to quantify standing postural control when exposed to sensory conflict. The same subjects performed these six conditions using a novel experimental paradigm, the Locomotor SOT (LSOT), to study dynamic postural control during walking under similar types of sensory conflict. To quantify postural control during walking, the net Center of Pressure sway variability was used. This corresponds to the Performance Index of the center of pressure trajectory, which is used to quantify postural control during standing. Our results indicate that dynamic balance control during locomotion in healthy individuals is affected by the systematic manipulation of multisensory inputs. The sway variability patterns observed during locomotion reflect similar balance performance with standing posture, indicating that similar feedback processes may be involved. However, the contribution of visual input is significantly increased during locomotion, compared to standing in similar sensory conflict conditions. The increased visual gain in the LSOT conditions reflects the importance of visual input for the control of locomotion. Since balance perturbations tend to occur in dynamic tasks and in response to environmental constraints not present during the SOT, the LSOT may provide additional information for clinical evaluation of healthy and deficient sensory processing.
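
    The outcome measure, sway variability of the net center-of-pressure (CoP) trajectory, can be sketched as a per-axis standard deviation of a simulated CoP signal. The signal parameters below are invented for illustration and are not the study's definition of the Performance Index:

```python
import numpy as np

# Simulated 10 s CoP trajectory sampled at 100 Hz: a slow medio-lateral
# oscillation plus measurement noise in both directions (units: cm).
rng = np.random.default_rng(2)
t = np.linspace(0, 10, 1000)
cop = np.column_stack([
    0.5 * np.sin(2 * np.pi * 0.3 * t) + rng.normal(0, 0.1, t.size),  # x (ML)
    rng.normal(0, 0.2, t.size),                                      # y (AP)
])

def sway_variability(cop):
    """Per-axis standard deviation of the CoP trajectory (cm)."""
    return cop.std(axis=0)

ml, ap = sway_variability(cop)
print(f"medio-lateral {ml:.2f} cm, antero-posterior {ap:.2f} cm")
```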

  7. The dark side of the alpha rhythm: fMRI evidence for induced alpha modulation during complete darkness.

    PubMed

    Ben-Simon, Eti; Podlipsky, Ilana; Okon-Singer, Hadas; Gruberger, Michal; Cvetkovic, Dean; Intrator, Nathan; Hendler, Talma

    2013-03-01

    The unique role of the EEG alpha rhythm in different states of cortical activity is still debated. The main theories regarding alpha function posit either sensory processing or attention allocation as the main processes governing its modulation. Closing and opening the eyes, a well-known manipulation of the alpha rhythm, could be regarded as a shift of attention between inward and outward focus, though in lighted conditions it is also accompanied by a change in visual input. To disentangle the effects of attention allocation and sensory visual input on alpha modulation, 14 healthy subjects were asked to open and close their eyes during conditions of light and of complete darkness while simultaneous recordings of EEG and fMRI were acquired. Thus, during complete darkness the eyes-open condition is not related to visual input but only to attention allocation, allowing direct examination of its role in alpha modulation. A data-driven ridge regression classifier was applied to the EEG data in order to ascertain the contribution of the alpha rhythm to eyes-open/eyes-closed inference in both lighting conditions. Classifier results revealed significant alpha contribution during both light and dark conditions, suggesting that alpha rhythm modulation is closely linked to the change in the direction of attention regardless of the presence of visual sensory input. Furthermore, fMRI activation maps derived from an alpha modulation time-course during the complete darkness condition exhibited a right frontal cortical network associated with attention allocation. These findings support the importance of top-down processes such as attention allocation to alpha rhythm modulation, possibly as a prerequisite to its known bottom-up processing of sensory input. © 2012 Federation of European Neuroscience Societies and Blackwell Publishing Ltd.
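
    A ridge-regression classifier of the kind mentioned above can be sketched in closed form: fit w = (X^T X + lambda*I)^(-1) X^T y on a feature such as alpha power, then threshold the prediction to infer eyes-open versus eyes-closed. The simulated feature and all parameter values are hypothetical, not the study's EEG pipeline:

```python
import numpy as np

# Simulated "alpha power" feature: higher on average with eyes closed.
rng = np.random.default_rng(3)
n = 200
labels = rng.integers(0, 2, n)                  # 0 = eyes open, 1 = closed
alpha_power = 1.0 + 2.0 * labels + rng.normal(0, 0.5, n)
X = np.column_stack([alpha_power, np.ones(n)])  # feature + bias column

def ridge_fit(X, y, lam=1.0):
    """Closed-form ridge regression weights."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

w = ridge_fit(X, labels.astype(float))
pred = (X @ w > 0.5).astype(int)                # threshold the regression
accuracy = (pred == labels).mean()
print(f"training accuracy: {accuracy:.2f}")
```

    In practice one would cross-validate and use many channels and frequency bands as features; the closed-form fit and thresholding step are the same.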

  8. Shape perception simultaneously up- and downregulates neural activity in the primary visual cortex.

    PubMed

    Kok, Peter; de Lange, Floris P

    2014-07-07

    An essential part of visual perception is the grouping of local elements (such as edges and lines) into coherent shapes. Previous studies have shown that this grouping process modulates neural activity in the primary visual cortex (V1) that is signaling the local elements [1-4]. However, the nature of this modulation is controversial. Some studies find that shape perception reduces neural activity in V1 [2, 5, 6], while others report increased V1 activity during shape perception [1, 3, 4, 7-10]. Neurocomputational theories that cast perception as a generative process [11-13] propose that feedback connections carry predictions (i.e., the generative model), while feedforward connections signal the mismatch between top-down predictions and bottom-up inputs. Within this framework, the effect of feedback on early visual cortex may be either enhancing or suppressive, depending on whether the feedback signal is met by congruent bottom-up input. Here, we tested this hypothesis by quantifying the spatial profile of neural activity in V1 during the perception of illusory shapes using population receptive field mapping. We find that shape perception concurrently increases neural activity in regions of V1 that have a receptive field on the shape but do not receive bottom-up input and suppresses activity in regions of V1 that receive bottom-up input that is predicted by the shape. These effects were not modulated by task requirements. Together, these findings suggest that shape perception changes lower-order sensory representations in a highly specific and automatic manner, in line with theories that cast perception in terms of hierarchical generative models. Copyright © 2014 Elsevier Ltd. All rights reserved.

  9. Visual just noticeable differences

    NASA Astrophysics Data System (ADS)

    Nankivil, Derek; Chen, Minghan; Wooley, C. Benjamin

    2018-02-01

    A visual just noticeable difference (VJND) is the amount of change in either an image (e.g. a photographic print) or in vision (e.g. due to a change in refractive power of a vision correction device or visually coupled optical system) that is just noticeable when compared with the prior state. Numerous theoretical and clinical studies have been performed to determine the amount of change in various visual inputs (power, spherical aberration, astigmatism, etc.) that result in a just noticeable visual change. Each of these approaches, in defining a VJND, relies on the comparison of two visual stimuli. The first stimulus is the nominal or baseline state and the second is the perturbed state that results in a VJND. Using this commonality, we converted each result to the change in the area of the modulation transfer function (AMTF) to provide a more fundamental understanding of what results in a VJND. We performed an analysis of the wavefront criteria from basic optics, the image quality metrics, and clinical studies testing various visual inputs, showing that fractional changes in AMTF resulting in one VJND range from 0.025 to 0.075. In addition, cycloplegia appears to desensitize the human visual system so that a much larger change in the retinal image is required to give a VJND. This finding may be of great import for clinical vision tests. Finally, we present applications of the VJND model for the determination of threshold ocular aberrations and manufacturing tolerances of visually coupled optical systems.
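
    The comparison metric can be sketched directly: compute the area under a modulation transfer function (AMTF) for a nominal and a perturbed state, then take their fractional difference, which the abstract reports must reach roughly 0.025-0.075 for one VJND. The triangular MTF and the cutoff values below are illustrative assumptions, not the paper's optical data:

```python
import numpy as np

freqs = np.linspace(0, 60, 601)          # spatial frequency, cycles/deg

def mtf(freqs, cutoff):
    """Toy diffraction-like MTF: linear falloff to a cutoff frequency."""
    return np.clip(1.0 - freqs / cutoff, 0.0, None)

def amtf(freqs, cutoff):
    """Area under the MTF (trapezoidal rule)."""
    y = mtf(freqs, cutoff)
    return np.sum((y[1:] + y[:-1]) * np.diff(freqs)) / 2.0

baseline = amtf(freqs, cutoff=60.0)
perturbed = amtf(freqs, cutoff=57.0)     # hypothetical small blur-like loss
fractional_change = (baseline - perturbed) / baseline
print(f"fractional AMTF change: {fractional_change:.3f}")
```

    For this toy triangular MTF the areas are cutoff/2, so the fractional change is (60 - 57)/60 = 0.05, inside the reported VJND range.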

  10. Holistic Face Categorization in Higher Order Visual Areas of the Normal and Prosopagnosic Brain: Toward a Non-Hierarchical View of Face Perception

    PubMed Central

    Rossion, Bruno; Dricot, Laurence; Goebel, Rainer; Busigny, Thomas

    2011-01-01

    How a visual stimulus is initially categorized as a face in a network of human brain areas remains largely unclear. Hierarchical neuro-computational models of face perception assume that the visual stimulus is first decomposed in local parts in lower order visual areas. These parts would then be combined into a global representation in higher order face-sensitive areas of the occipito-temporal cortex. Here we tested this view in fMRI with visual stimuli that are categorized as faces based on their global configuration rather than their local parts (two-tone Mooney figures and Arcimboldo's face-like paintings). Compared to the same inverted visual stimuli that are not categorized as faces, these stimuli activated the right middle fusiform gyrus (“Fusiform face area”) and superior temporal sulcus (pSTS), with no significant activation in the posteriorly located inferior occipital gyrus (i.e., no “occipital face area”). This observation is strengthened by behavioral and neural evidence for normal face categorization of these stimuli in a brain-damaged prosopagnosic patient whose intact right middle fusiform gyrus and superior temporal sulcus are devoid of any potential face-sensitive inputs from the lesioned right inferior occipital cortex. Together, these observations indicate that face-preferential activation may emerge in higher order visual areas of the right hemisphere without any face-preferential inputs from lower order visual areas, supporting a non-hierarchical view of face perception in the visual cortex. PMID:21267432

  11. Visual Detection Under Uncertainty Operates Via an Early Static, Not Late Dynamic, Non-Linearity

    PubMed Central

    Neri, Peter

    2010-01-01

    Signals in the environment are rarely specified exactly: our visual system may know what to look for (e.g., a specific face), but not its exact configuration (e.g., where in the room, or in what orientation). Uncertainty, and the ability to deal with it, is a fundamental aspect of visual processing. The MAX model is the current gold standard for describing how human vision handles uncertainty: of all possible configurations for the signal, the observer chooses the one corresponding to the template associated with the largest response. We propose an alternative model in which the MAX operation, which is a dynamic non-linearity (depends on multiple inputs from several stimulus locations) and happens after the input stimulus has been matched to the possible templates, is replaced by an early static non-linearity (depends only on one input corresponding to one stimulus location) which is applied before template matching. By exploiting an integrated set of analytical and experimental tools, we show that this model is able to account for a number of empirical observations otherwise unaccounted for by the MAX model, and is more robust with respect to the realistic limitations imposed by the available neural hardware. We then discuss how these results, currently restricted to a simple visual detection task, may extend to a wider range of problems in sensory processing. PMID:21212835
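
    The two observer models can be contrasted in a toy simulation of detection under spatial uncertainty. The one-hot templates, noise level, and squaring non-linearity below are illustrative assumptions; the paper's analytical framework is far more detailed.

```python
import numpy as np

rng = np.random.default_rng(0)

def max_model(stimulus, templates):
    # Late dynamic non-linearity: match the stimulus to every candidate
    # template, then report the MAX across template responses.
    return max(float(t @ stimulus) for t in templates)

def early_static_model(stimulus, templates, exponent=2.0):
    # Early static non-linearity: transduce each stimulus location
    # independently (sign-preserving power law), then pool template
    # matches linearly.
    transduced = np.sign(stimulus) * np.abs(stimulus) ** exponent
    return sum(float(t @ transduced) for t in templates)

# Toy task: a signal of known shape but unknown location (4 candidates),
# embedded in unit-variance Gaussian noise.
n_loc, n_trials, amplitude = 4, 2000, 2.0
templates = [np.eye(n_loc)[i] for i in range(n_loc)]

def percent_correct(model):
    correct = 0
    for _ in range(n_trials):
        noise_only = rng.normal(0.0, 1.0, n_loc)
        signal_plus = rng.normal(0.0, 1.0, n_loc)
        signal_plus[rng.integers(n_loc)] += amplitude
        # 2-interval choice: pick the interval with the larger response.
        correct += model(signal_plus, templates) > model(noise_only, templates)
    return correct / n_trials

pc_max = percent_correct(max_model)              # well above chance (0.5)
pc_static = percent_correct(early_static_model)  # also well above chance
```

    Both decision rules perform above chance on this toy task; distinguishing them empirically requires the finer-grained analyses (e.g. classification images) described in the paper.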

  12. Cortical feedback signals generalise across different spatial frequencies of feedforward inputs.

    PubMed

    Revina, Yulia; Petro, Lucy S; Muckli, Lars

    2017-09-22

    Visual processing in cortex relies on feedback projections contextualising feedforward information flow. Primary visual cortex (V1) has small receptive fields and processes feedforward information at a fine-grained spatial scale, whereas higher visual areas have larger, spatially invariant receptive fields. Therefore, feedback could provide coarse information about the global scene structure or alternatively recover fine-grained structure by targeting small receptive fields in V1. We tested if feedback signals generalise across different spatial frequencies of feedforward inputs, or if they are tuned to the spatial scale of the visual scene. Using a partial occlusion paradigm, functional magnetic resonance imaging (fMRI) and multivoxel pattern analysis (MVPA) we investigated whether feedback to V1 contains coarse or fine-grained information by manipulating the spatial frequency of the scene surround outside an occluded image portion. We show that feedback transmits both coarse and fine-grained information as it carries information about both low (LSF) and high spatial frequencies (HSF). Further, feedback signals containing LSF information are similar to feedback signals containing HSF information, even without a large overlap in spatial frequency bands of the HSF and LSF scenes. Lastly, we found that feedback carries similar information about the spatial frequency band across different scenes. We conclude that cortical feedback signals contain information which generalises across different spatial frequencies of feedforward inputs. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
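
    The spatial-frequency manipulation at the heart of this paradigm can be sketched with an ideal Fourier-domain filter. The image, cutoff, and field size below are placeholders, not the authors' stimuli or pipeline.

```python
import numpy as np

def spatial_frequency_filter(image, cutoff_cpd, image_size_deg, keep="low"):
    """Ideal low- or high-pass filter, cutoff in cycles/degree (square image)."""
    n = image.shape[0]
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    freqs = np.fft.fftshift(np.fft.fftfreq(n, d=image_size_deg / n))
    fx, fy = np.meshgrid(freqs, freqs)
    radius = np.hypot(fx, fy)                     # radial frequency (c/deg)
    mask = radius <= cutoff_cpd if keep == "low" else radius > cutoff_cpd
    filtered = np.fft.ifft2(np.fft.ifftshift(spectrum * mask))
    return np.real(filtered)

rng = np.random.default_rng(0)
scene = rng.random((64, 64))                      # stand-in for a scene image
lsf = spatial_frequency_filter(scene, 1.0, image_size_deg=10.0, keep="low")
hsf = spatial_frequency_filter(scene, 1.0, image_size_deg=10.0, keep="high")
# The two complementary bands sum back to the original scene exactly.
```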

  13. The effect of visual context on manual localization of remembered targets

    NASA Technical Reports Server (NTRS)

    Barry, S. R.; Bloomberg, J. J.; Huebner, W. P.

    1997-01-01

    This paper examines the contribution of egocentric cues and visual context to manual localization of remembered targets. Subjects pointed in the dark to the remembered position of a target previously viewed without or within a structured visual scene. Without a remembered visual context, subjects pointed to within 2 degrees of the target. The presence of a visual context with cues of straight ahead enhanced pointing performance to the remembered location of central but not off-center targets. Thus, visual context provides strong visual cues of target position and the relationship of body position to target location. Without a visual context, egocentric cues provide sufficient input for accurate pointing to remembered targets.

  14. From the optic tectum to the primary visual cortex: migration through evolution of the saliency map for exogenous attentional guidance.

    PubMed

    Zhaoping, Li

    2016-10-01

    Recent data have supported the hypothesis that, in primates, the primary visual cortex (V1) creates a saliency map from visual input. The exogenous guidance of attention is then realized by means of monosynaptic projections to the superior colliculus, which can select the most salient location as the target of a gaze shift. V1 is less prominent, or even absent, in lower vertebrates such as fish, whereas the superior colliculus, called the optic tectum in lower vertebrates, also receives retinal input. I review the literature and propose that the saliency map has migrated from the tectum to V1 over evolution. In addition, attentional benefits manifested as cueing effects in humans should also be present in lower vertebrates. Copyright © 2016 Elsevier Ltd. All rights reserved.

  15. In Search of a Visual-cortical Describing Function: a Summary of Work in Progress

    NASA Technical Reports Server (NTRS)

    Junker, A. M.; Peio, K. J.

    1984-01-01

    The thrust of the present work is to explore the utility of using a sum of sinusoids (seven or more) to obtain an evoked response and, furthermore, to see if the response is sensitive to changes in cognitive processing. Within the field of automatic control system technology, a mathematical input/output relationship for a sinusoidally stimulated nonlinear system is defined as a describing function. Applying this technology, sum-of-sines inputs were designed to yield describing functions for the visual-cortical response. What follows is a description of the method used to obtain visual-cortical describing functions. A number of measurement system redesigns were necessary to achieve the desired frequency resolution. Results that guided and came out of the redesigns are presented. Preliminary results of stimulus parameter effects (average intensity and depth of modulation) are also shown.
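
    The input/output measurement outlined above can be sketched as follows: probe a nonlinear system with a sum of seven sinusoids whose frequencies all fall on FFT bins, then read the complex gain at each probe frequency. The stand-in system and the specific frequencies are invented for illustration.

```python
import numpy as np

fs, T = 256.0, 8.0                       # sample rate (Hz), record length (s)
t = np.arange(0.0, T, 1.0 / fs)
# Seven probe frequencies, each an odd multiple of the base frequency 1/T,
# so every component lands exactly on an FFT bin (an assumption of this sketch).
probe_freqs = np.array([0.625, 1.125, 1.875, 2.875, 4.375, 6.625, 9.875])

stimulus = sum(np.sin(2 * np.pi * f * t) for f in probe_freqs)

def system(u):
    # Stand-in nonlinear "visual-cortical" system: saturation plus a delay.
    return np.tanh(0.5 * np.roll(u, 10))

response = system(stimulus)

def describing_function(u, y, freqs, fs):
    """Complex gain Y(f)/U(f) at each probe frequency (FFT-bin lookup)."""
    U, Y = np.fft.rfft(u), np.fft.rfft(y)
    bins = np.round(freqs * len(u) / fs).astype(int)
    return Y[bins] / U[bins]

H = describing_function(stimulus, response, probe_freqs, fs)
gain_db = 20.0 * np.log10(np.abs(H))     # magnitude of the describing function
phase_deg = np.degrees(np.angle(H))      # phase lag introduced by the delay
```

    Placing the probes on distinct FFT bins is what lets the gain and phase at each frequency be measured simultaneously from a single record, which is the appeal of the sum-of-sines approach.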

  16. Visualizing the Verbal and Verbalizing the Visual.

    ERIC Educational Resources Information Center

    Braden, Roberts A.

    This paper explores relationships of visual images to verbal elements, beginning with a discussion of visible language as represented by words printed on the page. The visual flexibility inherent in typography is discussed in terms of the appearance of the letters and the denotative and connotative meanings represented by type, typographical…

  17. Molecular biology of myopia.

    PubMed

    Schaeffel, Frank; Simon, Perikles; Feldkaemper, Marita; Ohngemach, Sibylle; Williams, Robert W

    2003-09-01

    Experiments in animal models of myopia have emphasised the importance of visual input in emmetropisation but it is also evident that the development of human myopia is influenced to some degree by genetic factors. Molecular genetic approaches can help to identify both the genes involved in the control of ocular development and the potential targets for pharmacological intervention. This review covers a variety of techniques that are being used to study the molecular biology of myopia. In the first part, we describe techniques used to analyse visually induced changes in gene expression: Northern Blot, polymerase chain reaction (PCR) and real-time PCR to obtain semi-quantitative and quantitative measures of changes in transcription level of a known gene, differential display reverse transcription PCR (DD-RT-PCR) to search for new genes that are controlled by visual input, rapid amplification of 5' cDNA (5'-RACE) to extend the 5' end of sequences that are regulated by visual input, in situ hybridisation to localise the expression of a given gene in a tissue and oligonucleotide microarray assays to simultaneously test visually induced changes in thousands of transcripts in single experiments. In the second part, we describe techniques that are used to localise regions in the genome that contain genes that are involved in the control of eye growth and refractive errors in mice and humans. These include quantitative trait loci (QTL) mapping, exploiting experimental test crosses of mice and transmission disequilibrium tests (TDT) in humans to find chromosomal intervals that harbour genes involved in myopia development. We review several successful applications of this battery of techniques in myopia research.

  18. The Inversion of Sensory Processing by Feedback Pathways: A Model of Visual Cognitive Functions.

    ERIC Educational Resources Information Center

    Harth, E.; And Others

    1987-01-01

    Explains the hierarchic structure of the mammalian visual system. Proposes a model in which feedback pathways serve to modify sensory stimuli in ways that enhance and complete sensory input patterns. Investigates the functioning of the system through computer simulations. (ML)

  19. The Future of Access Technology for Blind and Visually Impaired People.

    ERIC Educational Resources Information Center

    Schreier, E. M.

    1990-01-01

    This article describes potential use of new technological products and services by blind/visually impaired people. Items discussed include computer input devices, public telephones, automatic teller machines, airline and rail arrival/departure displays, ticketing machines, information retrieval systems, order-entry terminals, optical character…

  20. Perceptual Training Strongly Improves Visual Motion Perception in Schizophrenia

    ERIC Educational Resources Information Center

    Norton, Daniel J.; McBain, Ryan K.; Ongur, Dost; Chen, Yue

    2011-01-01

    Schizophrenia patients exhibit perceptual and cognitive deficits, including in visual motion processing. Given that cognitive systems depend upon perceptual inputs, improving patients' perceptual abilities may be an effective means of cognitive intervention. In healthy people, motion perception can be enhanced through perceptual learning, but it…

  1. Identifying Training Needs to Improve Indigenous Community Representatives Input into Environmental Resource Management Consultative Processes: A Case Study of the Bundjalung Nation

    ERIC Educational Resources Information Center

    Lloyd, David; Norrie, Fiona

    2004-01-01

    Despite increased engagement of Indigenous representatives as participants on consultative panels charged with processes of natural resource management, concerns have been raised by both Indigenous representatives and management agencies regarding the ability of Indigenous people to have quality input into the decisions these processes produce. In…

  2. Applications of self-organizing neural networks in virtual screening and diversity selection.

    PubMed

    Selzer, Paul; Ertl, Peter

    2006-01-01

    Artificial neural networks provide a powerful technique for the analysis and modeling of nonlinear relationships between molecular structures and pharmacological activity. Many network types, including Kohonen and counterpropagation, also provide an intuitive method for the visual assessment of correspondence between the input and output data. This work shows how a combination of neural networks and radial distribution function molecular descriptors can be applied in various areas of industrial pharmaceutical research. These applications include the prediction of biological activity, the selection of screening candidates (cherry picking), and the extraction of representative subsets from large compound collections such as combinatorial libraries. The methods described have also been implemented as an easy-to-use Web tool, allowing chemists to perform interactive neural network experiments on the Novartis intranet.
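
    A minimal Kohonen self-organizing map illustrates the visual-assessment idea: descriptor vectors are projected onto a 2-D neuron grid so that similar compounds land on nearby neurons. The descriptors here are random stand-ins, not radial distribution function descriptors, and the network size is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(42)

# Minimal Kohonen SOM: a 6x6 grid of neurons, each with a weight vector
# in the descriptor space.
grid_w, grid_h, n_features = 6, 6, 8
weights = rng.random((grid_w * grid_h, n_features))
coords = np.array([(i, j) for i in range(grid_w) for j in range(grid_h)], float)

def best_matching_unit(x):
    """Index of the neuron whose weight vector is closest to x."""
    return int(np.argmin(np.linalg.norm(weights - x, axis=1)))

def train(data, epochs=30, lr0=0.5, sigma0=3.0):
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)               # decaying learning rate
        sigma = sigma0 * (1 - epoch / epochs) + 0.5   # shrinking neighbourhood
        for x in data:
            bmu = best_matching_unit(x)
            d2 = np.sum((coords - coords[bmu]) ** 2, axis=1)
            h = np.exp(-d2 / (2 * sigma ** 2))        # neighbourhood kernel
            weights[:] += lr * h[:, None] * (x - weights)

# Two artificial descriptor clusters ("actives" vs "inactives").
actives = rng.normal(0.8, 0.05, (40, n_features))
inactives = rng.normal(0.2, 0.05, (40, n_features))
train(np.vstack([actives, inactives]))

# After training, the two classes map to separate regions of the grid.
bmu_a = {best_matching_unit(x) for x in actives}
bmu_i = {best_matching_unit(x) for x in inactives}
```

    The trained map gives exactly the kind of intuitive visual readout the abstract mentions: colouring each neuron by the class of the compounds it attracts reveals activity "islands" at a glance.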

  3. Aircraft geometry verification with enhanced computer generated displays

    NASA Technical Reports Server (NTRS)

    Cozzolongo, J. V.

    1982-01-01

    A method for visual verification of aerodynamic geometries using computer generated, color shaded images is described. The mathematical models representing aircraft geometries are created for use in theoretical aerodynamic analyses and in computer aided manufacturing. The aerodynamic shapes are defined using parametric bi-cubic splined patches. This mathematical representation is then used as input to an algorithm that generates a color shaded image of the geometry. A discussion of the techniques used in the mathematical representation of the geometry and in the rendering of the color shaded display is presented. The results include examples of color shaded displays, which are contrasted with wire frame type displays. The examples also show the use of mapped surface pressures in terms of color shaded images of V/STOL fighter/attack aircraft and advanced turboprop aircraft.
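
    The surface primitive named above can be sketched with a bicubic Bezier patch evaluator, one common parametric bi-cubic form; the original system's exact spline basis is not specified here.

```python
import numpy as np
from math import comb

def bernstein3(i, t):
    """Cubic Bernstein basis polynomial B_{i,3}(t)."""
    return comb(3, i) * t**i * (1.0 - t) ** (3 - i)

def bicubic_patch(control, u, v):
    """Evaluate a bicubic Bezier patch at (u, v); control is a 4x4x3 array."""
    point = np.zeros(3)
    for i in range(4):
        for j in range(4):
            point += bernstein3(i, u) * bernstein3(j, v) * control[i, j]
    return point

# A flat 4x4 control grid in the z = 0 plane: every evaluated surface point
# must stay in that plane, which makes the evaluator easy to check.
grid = np.array([[[float(i), float(j), 0.0] for j in range(4)]
                 for i in range(4)])
centre = bicubic_patch(grid, 0.5, 0.5)    # centre of the patch, z stays 0
```

    A geometry defined as a quilt of such patches can be sampled at any (u, v) resolution, which is what feeds the shading algorithm described in the abstract.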

  4. Automated objective characterization of visual field defects in 3D

    NASA Technical Reports Server (NTRS)

    Fink, Wolfgang (Inventor)

    2006-01-01

    A method and apparatus for electronically performing a visual field test for a patient. A visual field test pattern is displayed to the patient on an electronic display device and the patient's responses to the visual field test pattern are recorded. A visual field representation is generated from the patient's responses. The visual field representation is then used as an input into a variety of automated diagnostic processes. In one process, the visual field representation is used to generate a statistical description of the rapidity of change of a patient's visual field at the boundary of a visual field defect. In another process, the area of a visual field defect is calculated using the visual field representation. In another process, the visual field representation is used to generate a statistical description of the volume of a patient's visual field defect.
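
    Two of the automated measures mentioned above (defect area and defect volume) can be sketched on an assumed representation: a grid of sensitivity values in dB sampled at a fixed angular spacing. The grid spacing, normal level, and thresholds below are illustrative, not the patent's parameters.

```python
import numpy as np

# Assumed visual field representation: 15x15 sensitivity grid (dB),
# sampled every 2 degrees, with a square region of depressed sensitivity.
spacing_deg = 2.0
normal_db = 30.0
field = np.full((15, 15), normal_db)
field[4:9, 4:9] = 10.0                  # 5x5-point defect, 20 dB deep

def defect_mask(field, threshold_db=25.0):
    """Points whose sensitivity falls below the defect threshold."""
    return field < threshold_db

def defect_area(field, spacing_deg, threshold_db=25.0):
    """Area of the defect in square degrees of visual angle."""
    return float(defect_mask(field, threshold_db).sum() * spacing_deg ** 2)

def defect_volume(field, spacing_deg, normal_db, threshold_db=25.0):
    """Lost sensitivity integrated over the defect (dB * deg^2)."""
    mask = defect_mask(field, threshold_db)
    return float(np.sum((normal_db - field)[mask]) * spacing_deg ** 2)

area = defect_area(field, spacing_deg)                  # 25 points * 4 deg^2
volume = defect_volume(field, spacing_deg, normal_db)   # 20 dB * that area
```

    The boundary statistic mentioned in the abstract (rapidity of change at the defect edge) would follow the same pattern, e.g. from the gradient of `field` along the mask boundary.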

  5. Functional connectivity of visual cortex in the blind follows retinotopic organization principles.

    PubMed

    Striem-Amit, Ella; Ovadia-Caro, Smadar; Caramazza, Alfonso; Margulies, Daniel S; Villringer, Arno; Amedi, Amir

    2015-06-01

    Is visual input during critical periods of development crucial for the emergence of the fundamental topographical mapping of the visual cortex? And would this structure be retained throughout life-long blindness or would it fade as a result of plastic, use-based reorganization? We used functional connectivity magnetic resonance imaging based on intrinsic blood oxygen level-dependent fluctuations to investigate whether significant traces of topographical mapping of the visual scene, in the form of retinotopic organization, could be found in congenitally blind adults. A group of 11 fully and congenitally blind subjects and 18 sighted controls were studied. The blind demonstrated an intact functional connectivity network structural organization of the three main retinotopic mapping axes: eccentricity (centre-periphery), laterality (left-right), and elevation (upper-lower) throughout the retinotopic cortex extending to high-level ventral and dorsal streams, including characteristic eccentricity biases in face- and house-selective areas. Functional connectivity-based topographic organization in the visual cortex was indistinguishable from the normally sighted retinotopic functional connectivity structure as indicated by clustering analysis, and was found even in participants who did not have a typical retinal development in utero (microphthalmics). While the internal structural organization of the visual cortex was strikingly similar, the blind exhibited profound differences in functional connectivity to other (non-visual) brain regions as compared to the sighted, which were specific to portions of V1. Central V1 was more connected to language areas but peripheral V1 to spatial attention and control networks. These findings suggest that current accounts of critical periods and experience-dependent development should be revisited even for primary sensory areas, in that the connectivity basis for visual cortex large-scale topographical organization can develop without any visual experience and be retained through life-long experience-dependent plasticity. Furthermore, retinotopic divisions of labour, such as that between the visual cortex regions normally representing the fovea and periphery, also form the basis for topographically-unique plastic changes in the blind. © The Author (2015). Published by Oxford University Press on behalf of the Guarantors of Brain.

  6. Sensory gain control (amplification) as a mechanism of selective attention: electrophysiological and neuroimaging evidence.

    PubMed Central

    Hillyard, S A; Vogel, E K; Luck, S J

    1998-01-01

    Both physiological and behavioral studies have suggested that stimulus-driven neural activity in the sensory pathways can be modulated in amplitude during selective attention. Recordings of event-related brain potentials indicate that such sensory gain control or amplification processes play an important role in visual-spatial attention. Combined event-related brain potential and neuroimaging experiments provide strong evidence that attentional gain control operates at an early stage of visual processing in extrastriate cortical areas. These data support early selection theories of attention and provide a basis for distinguishing between separate mechanisms of attentional suppression (of unattended inputs) and attentional facilitation (of attended inputs). PMID:9770220

  7. Transformation priming helps to disambiguate sudden changes of sensory inputs.

    PubMed

    Pastukhov, Alexander; Vivian-Griffiths, Solveiga; Braun, Jochen

    2015-11-01

    Retinal input is riddled with abrupt transients due to self-motion, changes in illumination, object-motion, etc. Our visual system must correctly interpret each of these changes to keep visual perception consistent and sensitive. This poses an enormous challenge, as many transients are highly ambiguous in that they are consistent with many alternative physical transformations. Here we investigated inter-trial effects in three situations with sudden and ambiguous transients, each presenting two alternative appearances (rotation-reversing structure-from-motion, polarity-reversing shape-from-shading, and streaming-bouncing object collisions). In every situation, we observed priming of transformations as the outcome perceived in earlier trials tended to repeat in subsequent trials and this repetition was contingent on perceptual experience. The observed priming was specific to transformations and did not originate in priming of perceptual states preceding a transient. Moreover, transformation priming was independent of attention and specific to low level stimulus attributes. In summary, we show how "transformation priors" and experience-driven updating of such priors helps to disambiguate sudden changes of sensory inputs. We discuss how dynamic transformation priors can be instantiated as "transition energies" in an "energy landscape" model of the visual perception. Copyright © 2015 Elsevier Ltd. All rights reserved.

  8. Airflow and optic flow mediate antennal positioning in flying honeybees

    PubMed Central

    Roy Khurana, Taruni; Sane, Sanjay P

    2016-01-01

    To maintain their speeds during navigation, insects rely on feedback from their visual and mechanosensory modalities. Although optic flow plays an essential role in speed determination, it is less reliable under conditions of low light or sparse landmarks. Under such conditions, insects rely on feedback from antennal mechanosensors but it is not clear how these inputs combine to elicit flight-related antennal behaviours. We here show that antennal movements of the honeybee, Apis mellifera, are governed by combined visual and antennal mechanosensory inputs. Frontal airflow, as experienced during forward flight, causes antennae to actively move forward as a sigmoidal function of absolute airspeed values. However, corresponding front-to-back optic flow causes antennae to move backward, as a linear function of relative optic flow, opposite the airspeed response. When combined, these inputs maintain antennal position in a state of dynamic equilibrium. DOI: http://dx.doi.org/10.7554/eLife.14449.001 PMID:27097104

  9. Attention Enhances Synaptic Efficacy and Signal-to-Noise in Neural Circuits

    PubMed Central

    Briggs, Farran; Mangun, George R.; Usrey, W. Martin

    2013-01-01

    Summary Attention is a critical component of perception. However, the mechanisms by which attention modulates neuronal communication to guide behavior are poorly understood. To elucidate the synaptic mechanisms of attention, we developed a sensitive assay of attentional modulation of neuronal communication. In alert monkeys performing a visual spatial attention task, we probed thalamocortical communication by electrically stimulating neurons in the lateral geniculate nucleus of the thalamus while simultaneously recording shock-evoked responses from monosynaptically connected neurons in primary visual cortex. We found that attention enhances neuronal communication by (1) increasing the efficacy of presynaptic input in driving postsynaptic responses, (2) increasing synchronous responses among ensembles of postsynaptic neurons receiving independent input, and (3) decreasing redundant signals between postsynaptic neurons receiving common input. These results demonstrate that attention finely tunes neuronal communication at the synaptic level by selectively altering synaptic weights, enabling enhanced detection of salient events in the noisy sensory milieu. PMID:23803766

  10. Synaptology of physiologically identified ganglion cells in the cat retina: a comparison of retinal X- and Y-cells.

    PubMed

    Weber, A J; Stanford, L R

    1994-05-15

    It has long been known that a number of functionally different types of ganglion cells exist in the cat retina, and that each responds differently to visual stimulation. To determine whether the characteristic response properties of different retinal ganglion cell types might reflect differences in the number and distribution of their bipolar and amacrine cell inputs, we compared the percentages and distributions of the synaptic inputs from bipolar and amacrine cells to the entire dendritic arbors of physiologically characterized retinal X- and Y-cells. Sixty-two percent of the synaptic input to the Y-cell was from amacrine cell terminals, while the X-cells received approximately equal amounts of input from amacrine and bipolar cells. We found no significant difference in the distributions of bipolar or amacrine cell inputs to X- and Y-cells, or ON-center and OFF-center cells, either as a function of dendritic branch order or distance from the origin of the dendritic arbor. While, on the basis of these data, we cannot exclude the possibility that the difference in the proportion of bipolar and amacrine cell input contributes to the functional differences between X- and Y-cells, the magnitude of this difference, and the similarity in the distributions of the input from the two afferent cell types, suggest that mechanisms other than a simple predominance of input from amacrine or bipolar cells underlie the differences in their response properties. More likely, perhaps, is that the specific response features of X- and Y-cells originate in differences in the visual responses of the bipolar and amacrine cells that provide their input, or in the complex synaptic arrangements found among amacrine and bipolar cell terminals and the dendrites of specific types of retinal ganglion cells.

  11. The Role of Left Occipitotemporal Cortex in Reading: Reconciling Stimulus, Task, and Lexicality Effects

    PubMed Central

    Humphries, Colin; Desai, Rutvik H.; Seidenberg, Mark S.; Osmon, David C.; Stengel, Ben C.; Binder, Jeffrey R.

    2013-01-01

    Although the left posterior occipitotemporal sulcus (pOTS) has been called a visual word form area, debate persists over the selectivity of this region for reading relative to general nonorthographic visual object processing. We used high-resolution functional magnetic resonance imaging to study left pOTS responses to combinatorial orthographic and object shape information. Participants performed naming and visual discrimination tasks designed to encourage or suppress phonological encoding. During the naming task, all participants showed subregions within left pOTS that were more sensitive to combinatorial orthographic information than to object information. This difference disappeared, however, when phonological processing demands were removed. Responses were stronger to pseudowords than to words, but this effect also disappeared when phonological processing demands were removed. Subregions within the left pOTS are preferentially activated when visual input must be mapped to a phonological representation (i.e., a name) and particularly when component parts of the visual input must be mapped to corresponding phonological elements (consonant or vowel phonemes). Results indicate a specialized role for subregions within the left pOTS in the isomorphic mapping of familiar combinatorial visual patterns to phonological forms. This process distinguishes reading from picture naming and accounts for a wide range of previously reported stimulus and task effects in left pOTS. PMID:22505661

  12. Multisensory integration and the concert experience: An overview of how visual stimuli can affect what we hear

    NASA Astrophysics Data System (ADS)

    Hyde, Jerald R.

    2004-05-01

    It is clear to those who "listen" to concert halls and evaluate their degree of acoustical success that it is quite difficult to separate the acoustical response at a given seat from the multi-modal perception of the whole event. Objective concert hall data have been collected for the purpose of finding a link with their related subjective evaluation and ultimately with the architectural correlates which produce the sound field. This exercise, while important, tends to miss the point that a concert or opera event utilizes all the senses, of which the sound field and visual stimuli are both major contributors to the experience. Objective acoustical factors point to visual input as being significant in the perception of "acoustical intimacy" and with the perception of loudness versus distance in large halls. This paper will review the evidence of visual input as a factor in what we "hear" and introduce concepts of perceptual constancy, distance perception, static and dynamic visual stimuli, and the general process of the psychology of the integrated experience. A survey of acousticians on their opinions about the auditory-visual aspects of the concert hall experience will be presented. [Work supported in part from the Veneklasen Research Foundation and Veneklasen Associates.]

  13. Characterization of Visual Scanning Patterns in Air Traffic Control

    PubMed Central

    McClung, Sarah N.; Kang, Ziho

    2016-01-01

    Characterization of air traffic controllers' (ATCs') visual scanning strategies is a challenging issue due to the dynamic movement of multiple aircraft and increasing complexity of scanpaths (order of eye fixations and saccades) over time. Additionally, terminologies and methods are lacking to accurately characterize the eye tracking data into simplified visual scanning strategies linguistically expressed by ATCs. As an intermediate step to automate the characterization classification process, we (1) defined and developed new concepts to systematically filter complex visual scanpaths into simpler and more manageable forms and (2) developed procedures to map visual scanpaths with linguistic inputs to reduce the human judgement bias during interrater agreement. The developed concepts and procedures were applied to investigating the visual scanpaths of expert ATCs using scenarios with different aircraft congestion levels. Furthermore, oculomotor trends were analyzed to identify the influence of aircraft congestion on scan time and number of comparisons among aircraft. The findings show that (1) the scanpaths filtered at the highest intensity led to more consistent mapping with the ATCs' linguistic inputs, (2) the pattern classification occurrences differed between scenarios, and (3) increasing aircraft congestion caused increased scan times and aircraft pairwise comparisons. The results provide a foundation for better characterizing complex scanpaths in a dynamic task and automating the analysis process. PMID:27239190
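
    One simple scanpath "filtering" step of the kind described above can be sketched as collapsing consecutive fixations on the same aircraft (area of interest), so a raw fixation sequence becomes a shorter transition sequence. The actual filtering concepts in the paper are more elaborate; this is a hypothetical minimal example.

```python
def collapse_scanpath(fixations):
    """Remove immediate repeats so e.g. A A B B B A C becomes A B A C."""
    out = []
    for aoi in fixations:
        if not out or out[-1] != aoi:
            out.append(aoi)
    return out

# Raw fixation sequence over aircraft (AOI labels are invented).
raw = ["AC1", "AC1", "AC2", "AC2", "AC2", "AC1", "AC3"]
filtered = collapse_scanpath(raw)   # ['AC1', 'AC2', 'AC1', 'AC3']
```

    Reducing scanpaths this way makes them far easier to match against the simplified strategies that controllers describe linguistically ("I check AC1, then compare it with AC2, then come back...").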

  14. Integrative cortical dysfunction and pervasive motion perception deficit in fragile X syndrome.

    PubMed

    Kogan, C S; Bertone, A; Cornish, K; Boutet, I; Der Kaloustian, V M; Andermann, E; Faubert, J; Chaudhuri, A

    2004-11-09

    Fragile X syndrome (FXS) is associated with neurologic deficits recently attributed to the magnocellular pathway of the lateral geniculate nucleus. To test the hypotheses that FXS individuals 1) have a pervasive visual motion perception impairment affecting neocortical circuits in the parietal lobe and 2) have deficits in integrative neocortical mechanisms necessary for perception of complex stimuli. Psychophysical tests of visual motion and form perception defined by either first-order (luminance) or second-order (texture) attributes were used to probe early and later occipito-temporal and occipito-parietal functioning. When compared to developmental- and age-matched controls, FXS individuals displayed severe impairments in first- and second-order motion perception. This deficit was accompanied by near normal perception for first-order form stimuli but not second-order form stimuli. Impaired visual motion processing for first- and second-order stimuli suggests that both early- and later-level neurologic function of the parietal lobe are affected in Fragile X syndrome (FXS). Furthermore, this deficit likely stems from abnormal input from the magnocellular compartment of the lateral geniculate nucleus. Impaired visual form and motion processing for complex visual stimuli with normal processing for simple (i.e., first-order) form stimuli suggests that FXS individuals have normal early form processing accompanied by a generalized impairment in neurologic mechanisms necessary for integrating all early visual input.

  15. There's Waldo! A Normalization Model of Visual Search Predicts Single-Trial Human Fixations in an Object Search Task

    PubMed Central

    Miconi, Thomas; Groomes, Laura; Kreiman, Gabriel

    2016-01-01

    When searching for an object in a scene, how does the brain decide where to look next? Visual search theories suggest the existence of a global “priority map” that integrates bottom-up visual information with top-down, target-specific signals. We propose a mechanistic model of visual search that is consistent with recent neurophysiological evidence, can localize targets in cluttered images, and predicts single-trial behavior in a search task. This model posits that a high-level retinotopic area selective for shape features receives global, target-specific modulation and implements local normalization through divisive inhibition. The normalization step is critical to prevent highly salient bottom-up features from monopolizing attention. The resulting activity pattern constitutes a priority map that tracks the correlation between local input and target features. The maximum of this priority map is selected as the locus of attention. The visual input is then spatially enhanced around the selected location, allowing object-selective visual areas to determine whether the target is present at this location. This model can localize objects both in array images and when objects are pasted in natural scenes. The model can also predict single-trial human fixations, including those in error and target-absent trials, in a search task involving complex objects. PMID:26092221
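
    The role of the normalization step can be illustrated with a toy priority map. The feature channels, gains, clutter level, and semisaturation constant below are all invented for illustration: without divisive inhibition a highly salient distractor monopolizes attention, with it the target-matching location wins.

```python
import numpy as np

rng = np.random.default_rng(1)

def priority_map(features, target_gain, sigma=1.0):
    """Target-modulated drive, divisively normalized by pooled local activity."""
    drive = features @ target_gain        # match to the target template
    pooled = features.sum(axis=1)         # total local bottom-up activity
    return drive / (sigma + pooled)       # divisive inhibition

n_loc, n_feat = 10, 4
features = 0.5 * rng.random((n_loc, n_feat))   # weak background clutter
features[3] = [1.0, 0.0, 0.0, 0.0]             # target-like item
features[7] = [0.0, 5.0, 0.0, 0.0]             # very salient non-target
target_gain = np.array([3.0, 1.0, 1.0, 1.0])   # top-down boost of target feature

# Without normalization, the salient distractor captures the maximum...
locus_raw = int(np.argmax(features @ target_gain))            # location 7
# ...with divisive normalization, the target-like item wins instead.
locus = int(np.argmax(priority_map(features, target_gain)))   # location 3
```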

  16. Characterization of Visual Scanning Patterns in Air Traffic Control.

    PubMed

    McClung, Sarah N; Kang, Ziho

    2016-01-01

    Characterization of air traffic controllers' (ATCs') visual scanning strategies is a challenging issue due to the dynamic movement of multiple aircraft and increasing complexity of scanpaths (order of eye fixations and saccades) over time. Additionally, terminologies and methods are lacking to accurately characterize the eye tracking data into simplified visual scanning strategies linguistically expressed by ATCs. As an intermediate step to automate the characterization classification process, we (1) defined and developed new concepts to systematically filter complex visual scanpaths into simpler and more manageable forms and (2) developed procedures to map visual scanpaths with linguistic inputs to reduce the human judgement bias during interrater agreement. The developed concepts and procedures were applied to investigating the visual scanpaths of expert ATCs using scenarios with different aircraft congestion levels. Furthermore, oculomotor trends were analyzed to identify the influence of aircraft congestion on scan time and number of comparisons among aircraft. The findings show that (1) the scanpaths filtered at the highest intensity led to more consistent mapping with the ATCs' linguistic inputs, (2) the pattern classification occurrences differed between scenarios, and (3) increasing aircraft congestion caused increased scan times and aircraft pairwise comparisons. The results provide a foundation for better characterizing complex scanpaths in a dynamic task and automating the analysis process.

  17. A Neurobehavioral Model of Flexible Spatial Language Behaviors

    ERIC Educational Resources Information Center

    Lipinski, John; Schneegans, Sebastian; Sandamirskaya, Yulia; Spencer, John P.; Schoner, Gregor

    2012-01-01

    We propose a neural dynamic model that specifies how low-level visual processes can be integrated with higher level cognition to achieve flexible spatial language behaviors. This model uses real-world visual input that is linked to relational spatial descriptions through a neural mechanism for reference frame transformations. We demonstrate that…

  18. Proprioceptive versus Visual Control in Autistic Children.

    ERIC Educational Resources Information Center

    Masterton, B. A.; Biederman, G. B.

    1983-01-01

    The autistic children's presumed preference for proximal over distal sensory input was studied by requiring that "autistic," retarded, and "normal" children (7-15 years old) adapt to lateral displacement of the visual field. Only autistic Ss demonstrated transfer of adaptation to the nonadapted hand, indicating reliance on proprioception rather…

  19. Programmable remapper for image processing

    NASA Technical Reports Server (NTRS)

    Juday, Richard D. (Inventor); Sampsell, Jeffrey B. (Inventor)

    1991-01-01

    A video-rate coordinate remapper includes a memory for storing a plurality of transformations on look-up tables for remapping input images from one coordinate system to another. Such transformations are operator selectable. The remapper includes a collective processor by which certain input pixels of an input image are transformed to a portion of the output image in a many-to-one relationship. The remapper includes an interpolative processor by which the remaining input pixels of the input image are transformed to another portion of the output image in a one-to-many relationship. The invention includes certain specific transforms for creating output images useful for certain defects of visually impaired people. The invention also includes means for shifting input pixels and means for scrolling the output matrix.

  20. Visual Predictions in the Orbitofrontal Cortex Rely on Associative Content

    PubMed Central

    Chaumon, Maximilien; Kveraga, Kestutis; Barrett, Lisa Feldman; Bar, Moshe

    2014-01-01

    Predicting upcoming events from incomplete information is an essential brain function. The orbitofrontal cortex (OFC) plays a critical role in this process by facilitating recognition of sensory inputs via predictive feedback to sensory cortices. In the visual domain, the OFC is engaged by low spatial frequency (LSF) and magnocellular-biased inputs, but beyond this, we know little about the information content required to activate it. Is the OFC automatically engaged to analyze any LSF information for meaning? Or is it engaged only when LSF information matches preexisting memory associations? We tested these hypotheses and show that only LSF information that could be linked to memory associations engages the OFC. Specifically, LSF stimuli activated the OFC in 2 distinct medial and lateral regions only if they resembled known visual objects. More identifiable objects increased activity in the medial OFC, known for its function in affective responses. Furthermore, these objects also increased the connectivity of the lateral OFC with the ventral visual cortex, a crucial region for object identification. At the interface between sensory, memory, and affective processing, the OFC thus appears to be attuned to the associative content of visual information and to play a central role in visuo-affective prediction. PMID:23771980

  1. Serial dependence promotes object stability during occlusion

    PubMed Central

    Liberman, Alina; Zhang, Kathy; Whitney, David

    2016-01-01

    Object identities somehow appear stable and continuous over time despite eye movements, disruptions in visibility, and constantly changing visual input. Recent results have demonstrated that the perception of orientation, numerosity, and facial identity is systematically biased (i.e., pulled) toward visual input from the recent past. The spatial region over which current orientations or face identities are pulled by previous orientations or identities, respectively, is known as the continuity field, which is temporally tuned over the past several seconds (Fischer & Whitney, 2014). This perceptual pull could contribute to the visual stability of objects over short time periods, but does it also address how perceptual stability occurs during visual discontinuities? Here, we tested whether the continuity field helps maintain perceived object identity during occlusion. Specifically, we found that the perception of an oriented Gabor that emerged from behind an occluder was significantly pulled toward the random (and unrelated) orientation of the Gabor that was seen entering the occluder. Importantly, this serial dependence was stronger for predictable, continuously moving trajectories, compared to unpredictable ones or static displacements. This result suggests that our visual system takes advantage of expectations about a stable world, helping to maintain perceived object continuity despite interrupted visibility. PMID:28006066

  2. Octopus vulgaris uses visual information to determine the location of its arm.

    PubMed

    Gutnick, Tamar; Byrne, Ruth A; Hochner, Binyamin; Kuba, Michael

    2011-03-22

    Octopuses are intelligent, soft-bodied animals with keen senses that perform reliably in a variety of visual and tactile learning tasks. However, researchers have found them disappointing in that they consistently fail in operant tasks that require them to combine central nervous system reward information with visual and peripheral knowledge of the location of their arms. Wells claimed that in order to filter and integrate an abundance of multisensory inputs that might inform the animal of the position of a single arm, octopuses would need an exceptional computing mechanism, and "There is no evidence that such a system exists in Octopus, or in any other soft bodied animal." Recent electrophysiological experiments, which found no clear somatotopic organization in the higher motor centers, support this claim. We developed a three-choice maze that required an octopus to use a single arm to reach a visually marked goal compartment. Using this operant task, we show for the first time that Octopus vulgaris is capable of guiding a single arm in a complex movement to a location. Thus, we claim that octopuses can combine peripheral arm location information with visual input to control goal-directed complex movements. Copyright © 2011 Elsevier Ltd. All rights reserved.

  3. Embodied attention and word learning by toddlers

    PubMed Central

    Yu, Chen; Smith, Linda B.

    2013-01-01

    Many theories of early word learning begin with the uncertainty inherent to learning a word from its co-occurrence with a visual scene. However, the relevant visual scene for infant word learning is neither from the adult theorist’s view nor the mature partner’s view, but is rather from the learner’s personal view. Here we show that when 18-month-old infants interacted with objects in play with their parents, they created moments in which a single object was visually dominant. If parents named the object during these moments of bottom-up selectivity, later forced-choice tests showed that infants learned the name, but did not when naming occurred during a less visually selective moment. The momentary visual input for parents and toddlers was captured via head cameras placed low on each participant’s forehead as parents played with and named objects for their infant. Frame-by-frame analyses of the head camera images at and around naming moments were conducted to determine the visual properties at input that were associated with learning. The analyses indicated that learning occurred when bottom-up visual information was clean and uncluttered. The sensory-motor behaviors of infants and parents were also analyzed to determine how their actions on the objects may have created these optimal visual moments for learning. The results are discussed with respect to early word learning, embodied attention, and the social role of parents in early word learning. PMID:22878116

  4. "Visual" Cortex of Congenitally Blind Adults Responds to Syntactic Movement.

    PubMed

    Lane, Connor; Kanjlia, Shipra; Omaki, Akira; Bedny, Marina

    2015-09-16

    Human cortex is comprised of specialized networks that support functions, such as visual motion perception and language processing. How do genes and experience contribute to this specialization? Studies of plasticity offer unique insights into this question. In congenitally blind individuals, "visual" cortex responds to auditory and tactile stimuli. Remarkably, recent evidence suggests that occipital areas participate in language processing. We asked whether in blindness, occipital cortices: (1) develop domain-specific responses to language and (2) respond to a highly specialized aspect of language-syntactic movement. Nineteen congenitally blind and 18 sighted participants took part in two fMRI experiments. We report that in congenitally blind individuals, but not in sighted controls, "visual" cortex is more active during sentence comprehension than during a sequence memory task with nonwords, or a symbolic math task. This suggests that areas of occipital cortex become selective for language, relative to other similar higher-cognitive tasks. Crucially, we find that these occipital areas respond more to sentences with syntactic movement but do not respond to the difficulty of math equations. We conclude that regions within the visual cortex of blind adults are involved in syntactic processing. Our findings suggest that the cognitive function of human cortical areas is largely determined by input during development. Human cortex is made up of specialized regions that perform different functions, such as visual motion perception and language processing. How do genes and experience contribute to this specialization? Studies of plasticity show that cortical areas can change function from one sensory modality to another. Here we demonstrate that input during development can alter cortical function even more dramatically. In blindness a subset of "visual" areas becomes specialized for language processing. Crucially, we find that the same "visual" areas respond to a highly specialized and uniquely human aspect of language-syntactic movement. These data suggest that human cortex has broad functional capacity during development, and input plays a major role in determining functional specialization. Copyright © 2015 the authors 0270-6474/15/3512859-10$15.00/0.

  5. Directed nucleation assembly of DNA tile complexes for barcode-patterned lattices

    NASA Astrophysics Data System (ADS)

    Yan, Hao; Labean, Thomas H.; Feng, Liping; Reif, John H.

    2003-07-01

    The programmed self-assembly of patterned aperiodic molecular structures is a major challenge in nanotechnology and has numerous potential applications for nanofabrication of complex structures and useful devices. Here we report the construction of an aperiodic patterned DNA lattice (barcode lattice) by a self-assembly process of directed nucleation of DNA tiles around a scaffold DNA strand. The input DNA scaffold strand, constructed by ligation of shorter synthetic oligonucleotides, provides layers of the DNA lattice with barcode patterning information represented by the presence or absence of DNA hairpin loops protruding out of the lattice plane. Self-assembly of multiple DNA tiles around the scaffold strand was shown to result in a patterned lattice containing barcode information of 01101. We have also demonstrated the reprogramming of the system to another patterning. An inverted barcode pattern of 10010 was achieved by modifying the scaffold strands and one of the strands composing each tile. A ribbon lattice, consisting of repetitions of the barcode pattern with expected periodicity, was also constructed by the addition of sticky ends. The patterning of both classes of lattices was clearly observable via atomic force microscopy. These results represent a step toward implementation of a visual readout system capable of converting information encoded on a 1D DNA strand into a 2D form readable by advanced microscopic techniques. A functioning visual output method would not only increase the readout speed of DNA-based computers, but may also find use in other sequence identification techniques such as mutation or allele mapping.

  6. Directed nucleation assembly of DNA tile complexes for barcode-patterned lattices.

    PubMed

    Yan, Hao; LaBean, Thomas H; Feng, Liping; Reif, John H

    2003-07-08

    The programmed self-assembly of patterned aperiodic molecular structures is a major challenge in nanotechnology and has numerous potential applications for nanofabrication of complex structures and useful devices. Here we report the construction of an aperiodic patterned DNA lattice (barcode lattice) by a self-assembly process of directed nucleation of DNA tiles around a scaffold DNA strand. The input DNA scaffold strand, constructed by ligation of shorter synthetic oligonucleotides, provides layers of the DNA lattice with barcode patterning information represented by the presence or absence of DNA hairpin loops protruding out of the lattice plane. Self-assembly of multiple DNA tiles around the scaffold strand was shown to result in a patterned lattice containing barcode information of 01101. We have also demonstrated the reprogramming of the system to another patterning. An inverted barcode pattern of 10010 was achieved by modifying the scaffold strands and one of the strands composing each tile. A ribbon lattice, consisting of repetitions of the barcode pattern with expected periodicity, was also constructed by the addition of sticky ends. The patterning of both classes of lattices was clearly observable via atomic force microscopy. These results represent a step toward implementation of a visual readout system capable of converting information encoded on a 1D DNA strand into a 2D form readable by advanced microscopic techniques. A functioning visual output method would not only increase the readout speed of DNA-based computers, but may also find use in other sequence identification techniques such as mutation or allele mapping.

  7. Neural Dynamics Underlying Target Detection in the Human Brain

    PubMed Central

    Bansal, Arjun K.; Madhavan, Radhika; Agam, Yigal; Golby, Alexandra; Madsen, Joseph R.

    2014-01-01

    Sensory signals must be interpreted in the context of goals and tasks. To detect a target in an image, the brain compares input signals and goals to elicit the correct behavior. We examined how target detection modulates visual recognition signals by recording intracranial field potential responses from 776 electrodes in 10 epileptic human subjects. We observed reliable differences in the physiological responses to stimuli when a cued target was present versus absent. Goal-related modulation was particularly strong in the inferior temporal and fusiform gyri, two areas important for object recognition. Target modulation started after 250 ms post stimulus, considerably after the onset of visual recognition signals. While broadband signals exhibited increased or decreased power, gamma frequency power showed predominantly increases during target presence. These observations support models where task goals interact with sensory inputs via top-down signals that influence the highest echelons of visual processing after the onset of selective responses. PMID:24553944

  8. Parametric embedding for class visualization.

    PubMed

    Iwata, Tomoharu; Saito, Kazumi; Ueda, Naonori; Stromsten, Sean; Griffiths, Thomas L; Tenenbaum, Joshua B

    2007-09-01

    We propose a new method, parametric embedding (PE), that embeds objects with the class structure into a low-dimensional visualization space. PE takes as input a set of class conditional probabilities for given data points and tries to preserve the structure in an embedding space by minimizing a sum of Kullback-Leibler divergences, under the assumption that samples are generated by a gaussian mixture with equal covariances in the embedding space. PE has many potential uses depending on the source of the input data, providing insight into the classifier's behavior in supervised, semisupervised, and unsupervised settings. The PE algorithm has a computational advantage over conventional embedding methods based on pairwise object relations since its complexity scales with the product of the number of objects and the number of classes. We demonstrate PE by visualizing supervised categorization of Web pages, semisupervised categorization of digits, and the relations of words and latent topics found by an unsupervised algorithm, latent Dirichlet allocation.
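    The objective described in this abstract — matching the given class posteriors with the posteriors induced by an equal-covariance Gaussian mixture in the embedding space, via a sum of Kullback-Leibler divergences — can be written compactly. A minimal sketch, assuming unit covariance and uniform class priors (both simplifications not stated in the abstract):

    ```python
    import numpy as np

    def pe_objective(Z, Phi, P, eps=1e-12):
        """Parametric-embedding loss (sketch): sum_n KL(P_n || Q_n), where
        Q is the class posterior induced by a unit-variance Gaussian
        mixture in the embedding space.
        Z:   (N, d) embedded data points
        Phi: (C, d) embedded class centers
        P:   (N, C) given class conditional probabilities p(c|x_n)
        """
        d2 = ((Z[:, None, :] - Phi[None, :, :]) ** 2).sum(-1)  # (N, C)
        logits = -0.5 * d2
        Q = np.exp(logits - logits.max(axis=1, keepdims=True))
        Q /= Q.sum(axis=1, keepdims=True)                      # induced q(c|x_n)
        return float((P * (np.log(P + eps) - np.log(Q + eps))).sum())

    # Hypothetical example: two points, two classes, 2-D embedding
    Z = np.array([[0.0, 0.0], [1.0, 0.0]])
    Phi = np.array([[0.0, 0.0], [1.0, 0.0]])
    P = np.array([[0.9, 0.1], [0.1, 0.9]])
    loss = pe_objective(Z, Phi, P)
    ```

    Minimizing this quantity over the point coordinates `Z` and class centers `Phi` (e.g. by gradient descent) would yield the embedding; swapping the points so each sits far from its dominant class raises the loss.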

  9. Deep learning of orthographic representations in baboons.

    PubMed

    Hannagan, Thomas; Ziegler, Johannes C; Dufau, Stéphane; Fagot, Joël; Grainger, Jonathan

    2014-01-01

    What is the origin of our ability to learn orthographic knowledge? We use deep convolutional networks to emulate the primate's ventral visual stream and explore the recent finding that baboons can be trained to discriminate English words from nonwords. The networks were exposed to the exact same sequence of stimuli and reinforcement signals as the baboons in the experiment, and learned to map real visual inputs (pixels) of letter strings onto binary word/nonword responses. We show that the networks' highest levels of representations were indeed sensitive to letter combinations as postulated in our previous research. The model also captured the key empirical findings, such as generalization to novel words, along with some intriguing inter-individual differences. The present work shows the merits of deep learning networks that can simulate the whole processing chain all the way from the visual input to the response while allowing researchers to analyze the complex representations that emerge during the learning process.

  10. Thalamic projections to visual and visuomotor areas (V6 and V6A) in the Rostral Bank of the parieto-occipital sulcus of the Macaque.

    PubMed

    Gamberini, Michela; Bakola, Sophia; Passarelli, Lauretta; Burman, Kathleen J; Rosa, Marcello G P; Fattori, Patrizia; Galletti, Claudio

    2016-04-01

    The medial posterior parietal cortex of the primate brain includes different functional areas, which have been defined based on the functional properties, cyto- and myeloarchitectural criteria, and cortico-cortical connections. Here, we describe the thalamic projections to two of these areas (V6 and V6A), based on 14 retrograde neuronal tracer injections in 11 hemispheres of 9 Macaca fascicularis. The injections were placed either by direct visualisation or using electrophysiological guidance, and the location of injection sites was determined post mortem based on cyto- and myeloarchitectural criteria. We found that the majority of the thalamic afferents to the visual area V6 originate in subdivisions of the lateral and inferior pulvinar nuclei, with weaker inputs originating from the central densocellular, paracentral, lateral posterior, lateral geniculate, ventral anterior and mediodorsal nuclei. In contrast, injections in both the dorsal and ventral parts of the visuomotor area V6A revealed strong inputs from the lateral posterior and medial pulvinar nuclei, as well as smaller inputs from the ventrolateral complex and from the central densocellular, paracentral, and mediodorsal nuclei. These projection patterns are in line with the functional properties of injected areas: "dorsal stream" extrastriate area V6 receives information from visuotopically organised subdivisions of the thalamus; whereas visuomotor area V6A, which is involved in the sensory guidance of arm movement, receives its primary afferents from thalamic nuclei that provide high-order somatic and visual input.

  11. Influence of both cutaneous input from the foot soles and visual information on the control of postural stability in dyslexic children.

    PubMed

    Goulème, Nathalie; Villeneuve, Philippe; Gérard, Christophe-Loïc; Bucci, Maria Pia

    2017-07-01

    Dyslexic children show impaired postural stability. The aim of our study was to test the influence of foot soles and visual information on the postural control of dyslexic children, compared to non-dyslexic children. Postural stability was evaluated with the TechnoConcept® platform in twenty-four dyslexic children (mean age: 9.3±0.29years) and in twenty-four non-dyslexic children, gender- and age-matched, in two postural conditions (with and without foam: a 4-mm foam was put under their feet or not) and in two visual conditions (eyes open and eyes closed). We measured the surface area, the length and the mean velocity of the center of pressure (CoP). Moreover, we calculated the Romberg Quotient (RQ). Our results showed that the surface area, length and mean velocity of the CoP were significantly greater in the dyslexic children compared to the non-dyslexic children, particularly with foam and eyes closed. Furthermore, the RQ was significantly smaller in the dyslexic children and significantly greater without foam than with foam. All these findings suggest that dyslexic children are not able to compensate with other available inputs when sensorial inputs are less informative (with foam, or eyes closed), which results in poor postural stability. We suggest that the impairment of the cerebellar integration of all the sensorial inputs is responsible for the postural deficits observed in dyslexic children. Copyright © 2017 Elsevier B.V. All rights reserved.
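    The Romberg Quotient used in this study is conventionally the ratio of a CoP measure recorded with eyes closed to the same measure recorded with eyes open. A minimal sketch with hypothetical surface-area values (the numbers are illustrative, not the study's data):

    ```python
    def romberg_quotient(eyes_closed, eyes_open):
        """Romberg Quotient: ratio of a CoP measure (e.g. sway surface area)
        with eyes closed to the same measure with eyes open. A value near 1
        suggests vision contributes little extra stabilization; a value well
        above 1 suggests strong reliance on visual input."""
        return eyes_closed / eyes_open

    # Hypothetical sway surface areas in mm^2
    rq = romberg_quotient(eyes_closed=250.0, eyes_open=125.0)  # -> 2.0
    ```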

  12. Computer vision-based method for classification of wheat grains using artificial neural network.

    PubMed

    Sabanci, Kadir; Kayabasi, Ahmet; Toktas, Abdurrahim

    2017-06-01

    A simplified computer vision-based application using artificial neural network (ANN) depending on multilayer perceptron (MLP) for accurately classifying wheat grains into bread or durum is presented. The images of 100 bread and 100 durum wheat grains are taken via a high-resolution camera and subjected to pre-processing. The main visual features of four dimensions, three colors and five textures are acquired using image-processing techniques (IPTs). A total of 21 visual features are reproduced from the 12 main features to diversify the input population for training and testing the ANN model. The data sets of visual features are considered as input parameters of the ANN model. The ANN with four different input data subsets is modelled to classify the wheat grains into bread or durum. The ANN model is trained with 180 grains and its accuracy tested with 20 grains from a total of 200 wheat grains. Seven input parameters that are most effective on the classifying results are determined using the correlation-based CfsSubsetEval algorithm to simplify the ANN model. The results of the ANN model are compared in terms of accuracy rate. The best result is achieved with a mean absolute error (MAE) of 9.8 × 10⁻⁶ by the simplified ANN model. This shows that the proposed classifier based on computer vision can be successfully exploited to automatically classify a variety of grains. © 2016 Society of Chemical Industry.
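    The classifier described here — an MLP mapping a handful of selected visual features to a bread/durum decision, evaluated by mean absolute error — can be sketched as follows. The feature values, network size, and weights below are random placeholders, not the trained model or data from the study:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def mlp_forward(X, W1, b1, W2, b2):
        """Single-hidden-layer perceptron: visual features -> durum score."""
        h = np.tanh(X @ W1 + b1)                      # hidden layer
        return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))   # sigmoid output in (0, 1)

    # Hypothetical stand-ins: 20 test grains x 7 selected features
    X = rng.random((20, 7))
    W1, b1 = rng.normal(size=(7, 5)), np.zeros(5)     # untrained random weights
    W2, b2 = rng.normal(size=5), 0.0
    scores = mlp_forward(X, W1, b1, W2, b2)           # bread ~ 0, durum ~ 1
    labels = rng.integers(0, 2, size=20)              # hypothetical true classes
    mae = np.abs(scores - labels).mean()              # metric reported in the study
    ```

    With trained weights, the study reports this MAE dropping to the 10⁻⁶ range on its 20-grain test set.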

  13. Perceptual learning improves visual performance in juvenile amblyopia.

    PubMed

    Li, Roger W; Young, Karen G; Hoenig, Pia; Levi, Dennis M

    2005-09-01

    To determine whether practicing a position-discrimination task improves visual performance in children with amblyopia and to determine the mechanism(s) of improvement. Five children (age range, 7-10 years) with amblyopia practiced a positional acuity task in which they had to judge which of three pairs of lines was misaligned. Positional noise was produced by distributing the individual patches of each line segment according to a Gaussian probability function. Observers were trained at three noise levels (including 0), with each observer performing between 3000 and 4000 responses in 7 to 10 sessions. Trial-by-trial feedback was provided. Four of the five observers showed significant improvement in positional acuity. In those four observers, on average, positional acuity with no noise improved by approximately 32% and with high noise by approximately 26%. A position-averaging model was used to parse the improvement into an increase in efficiency or a decrease in equivalent input noise. Two observers showed increased efficiency (51% and 117% improvements) with no significant change in equivalent input noise across sessions. The other two observers showed both a decrease in equivalent input noise (18% and 29%) and an increase in efficiency (17% and 71%). All five observers showed substantial improvement in Snellen acuity (approximately 26%) after practice. Perceptual learning can improve visual performance in amblyopic children. The improvement can be parsed into two important factors: decreased equivalent input noise and increased efficiency. Perceptual learning techniques may add an effective new method to the armamentarium of amblyopia treatments.
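    The parsing of learning into decreased equivalent input noise and increased efficiency follows the standard equivalent-noise framework, in which squared threshold grows with external plus equivalent input noise and shrinks with calculation efficiency. A hedged sketch (the constant k and all numbers are illustrative, not the study's fitted values):

    ```python
    import math

    def threshold(n_ext, n_eq, efficiency, k=1.0):
        """Equivalent-noise model (sketch): threshold^2 = k * (N_ext + N_eq) / E,
        where N_ext is external stimulus noise, N_eq the observer's equivalent
        input noise, and E the calculation efficiency."""
        return math.sqrt(k * (n_ext + n_eq) / efficiency)

    # Improvement can come from lower equivalent noise, higher efficiency, or both
    before = threshold(n_ext=0.5, n_eq=2.0, efficiency=0.2)
    after = threshold(n_ext=0.5, n_eq=1.5, efficiency=0.3)  # lower threshold
    ```

    Fitting this relation at several external noise levels (as in the training conditions above) is what lets the two factors be separated: equivalent noise dominates thresholds at low external noise, efficiency at high external noise.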

  14. Algorithm for Simulating Atmospheric Turbulence and Aeroelastic Effects on Simulator Motion Systems

    NASA Technical Reports Server (NTRS)

    Ercole, Anthony V.; Cardullo, Frank M.; Kelly, Lon C.; Houck, Jacob A.

    2012-01-01

    Atmospheric turbulence produces high frequency accelerations in aircraft, typically greater than the response to pilot input. Motion system equipped flight simulators must present cues representative of the aircraft response to turbulence in order to maintain the integrity of the simulation. Currently, turbulence motion cueing produced by flight simulator motion systems has been less than satisfactory because the turbulence profiles have been attenuated by the motion cueing algorithms. This report presents a new turbulence motion cueing algorithm, referred to as the augmented turbulence channel. Like the previous turbulence algorithms, the output of the channel only augments the vertical degree of freedom of motion. This algorithm employs a parallel aircraft model and an optional high bandwidth cueing filter. Simulation of aeroelastic effects is also an area where frequency content must be preserved by the cueing algorithm. The current aeroelastic implementation uses a similar secondary channel that supplements the primary motion cue. Two studies were conducted using the NASA Langley Visual Motion Simulator and Cockpit Motion Facility to evaluate the effect of the turbulence channel and aeroelastic model on pilot control input. Results indicate that the pilot is better correlated with the aircraft response, when the augmented channel is in place.

  15. Fiber Laser Welding-Brazing Characteristics of Dissimilar Metals AZ31B Mg Alloys to Copper with Mg-Based Filler

    NASA Astrophysics Data System (ADS)

    Zhao, Xiaoye; Tan, Caiwang; Meng, Shenghao; Chen, Bo; Song, Xiaoguo; Li, Liqun; Feng, Jicai

    2018-03-01

    Fiber laser welding-brazing of 1-mm-thick AZ31B Mg alloys to 1.5-mm-thick copper (T2) with Mg-based filler was performed in a lap configuration. The weld appearance, interfacial microstructure and mechanical properties were investigated with different heat inputs. The results indicated that processing windows for optimizing appropriate welding parameters were relatively narrow in this case. Visually acceptable joints with certain strength were achieved at appropriate welding parameters. The maximum tensile-shear fracture load of laser-welded-brazed Mg/Cu joint could reach 1730 N at the laser power of 1200 W, representing 64.1% joint efficiency relative to AZ31Mg base metal. The eutectic structure (α-Mg + Mg2Cu) and an Mg-Cu intermetallic compound were observed at the Mg/Cu interface, and Mg-Al-Cu ternary intermetallic compounds were identified between the intermetallics and the eutectic structure at high heat input. All the joints fractured at the Mg-Cu interface. However, the fracture mode was found to differ. For laser power of 1200 W, the surface was characterized by tearing edge, while that with poor joint strength was almost dominated by smooth surface or flat tear pattern.

  16. Visual processing in anorexia nervosa and body dysmorphic disorder: similarities, differences, and future research directions

    PubMed Central

    Madsen, Sarah K.; Bohon, Cara; Feusner, Jamie D.

    2013-01-01

    Anorexia nervosa (AN) and body dysmorphic disorder (BDD) are psychiatric disorders that involve distortion of the experience of one’s physical appearance. In AN, individuals believe that they are overweight, perceive their body as “fat,” and are preoccupied with maintaining a low body weight. In BDD, individuals are preoccupied with misperceived defects in physical appearance, most often of the face. Distorted visual perception may contribute to these cardinal symptoms, and may be a common underlying phenotype. This review surveys the current literature on visual processing in AN and BDD, addressing lower- to higher-order stages of visual information processing and perception. We focus on peer-reviewed studies of AN and BDD that address ophthalmologic abnormalities, basic neural processing of visual input, integration of visual input with other systems, neuropsychological tests of visual processing, and representations of whole percepts (such as images of faces, bodies, and other objects). The literature suggests a pattern in both groups of over-attention to detail, reduced processing of global features, and a tendency to focus on symptom-specific details in their own images (body parts in AN, facial features in BDD), with cognitive strategy at least partially mediating the abnormalities. Visuospatial abnormalities were also evident when viewing images of others and for non-appearance related stimuli. Unfortunately no study has directly compared AN and BDD, and most studies were not designed to disentangle disease-related emotional responses from lower-order visual processing. We make recommendations for future studies to improve the understanding of visual processing abnormalities in AN and BDD. PMID:23810196

  17. The impact of visual sequencing of graphic symbols on the sentence construction output of children who have acquired language.

    PubMed

    Alant, Erna; du Plooy, Amelia; Dada, Shakila

    2007-01-01

    Although the sequence of graphic or pictorial symbols displayed on a communication board can have an impact on the language output of children, very little research has been conducted to describe this. Research in this area is particularly relevant for prioritising the importance of specific visual and graphic features in providing more effective and user-friendly access to communication boards. This study is concerned with understanding the impact of specific sequences of graphic symbol input on the graphic and spoken output of children who have acquired language. Forty participants were divided into two comparable groups. Each group was exposed to graphic symbol input with a certain word order sequence. The structure of input was either in the typical English word order sequence Subject-Verb-Object (SVO) or in the word order sequence Subject-Object-Verb (SOV). Both input groups had to answer six questions by using graphic output as well as speech. The findings indicated that there are significant differences in the PCS graphic output patterns of children who are exposed to graphic input in the SOV and SVO sequences. Furthermore, the output produced in the graphic mode differed considerably from the output produced in the spoken mode. Clinical implications of these findings are discussed.

  18. Working with and Visualizing Big Data Efficiently with Python for the DARPA XDATA Program

    DTIC Science & Technology

    2017-08-01

    same function to be used with scalar inputs, input arrays of the same shape, or even input arrays of differing dimensionality in some cases. Most of the math ...
    ... math operations on values
    ● Split-apply-combine: similar to group-by operations in databases
    ● Join: combine two datasets using common columns
    4.3.3 ... Numba: continue to increase SIMD performance with support for fast math flags and improved support for AVX, Intel's large vector ...
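
    Split-apply-combine, one of the operations the report lists, can be sketched in a few lines of stdlib Python; the record and field names here are illustrative, not taken from the XDATA codebase:

```python
from collections import defaultdict

def split_apply_combine(records, key, value, agg):
    """Group records by `key`, apply `agg` to each group's `value`s, combine."""
    groups = defaultdict(list)
    for rec in records:
        groups[rec[key]].append(rec[value])
    return {k: agg(vs) for k, vs in groups.items()}

data = [
    {"site": "A", "count": 3},
    {"site": "B", "count": 5},
    {"site": "A", "count": 7},
]
totals = split_apply_combine(data, "site", "count", sum)  # {"A": 10, "B": 5}
```

    In pandas this corresponds to `df.groupby("site")["count"].sum()`; the sketch only shows the underlying split/apply/combine steps.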

  19. The locus of impairment in English developmental letter position dyslexia

    PubMed Central

    Kezilas, Yvette; Kohnen, Saskia; McKague, Meredith; Castles, Anne

    2014-01-01

    Many children with reading difficulties display phonological deficits and struggle to acquire non-lexical reading skills. However, not all children with reading difficulties have these problems, such as children with selective letter position dyslexia (LPD), who make excessive migration errors (such as reading slime as “smile”). Previous research has explored three possible loci for the deficit – the phonological output buffer, the orthographic input lexicon, and the orthographic-visual analysis stage of reading. While there is compelling evidence against a phonological output buffer and orthographic input lexicon deficit account of English LPD, the evidence in support of an orthographic-visual analysis deficit is currently limited. In this multiple single-case study with three English-speaking children with developmental LPD, we aimed to both replicate and extend previous findings regarding the locus of impairment in English LPD. First, we ruled out a phonological output buffer and an orthographic input lexicon deficit by administering tasks that directly assess phonological processing and lexical guessing. We then went on to directly assess whether or not children with LPD have an orthographic-visual analysis deficit by modifying two tasks that have previously been used to localize processing at this level: a same-different decision task and a non-word reading task. The results from these tasks indicate that LPD is most likely caused by a deficit specific to the coding of letter positions at the orthographic-visual analysis stage of reading. These findings provide further evidence for the heterogeneity of dyslexia and its underlying causes. PMID:24917802

  20. Color opponent receptive fields self-organize in a biophysical model of visual cortex via spike-timing dependent plasticity

    PubMed Central

    Eguchi, Akihiro; Neymotin, Samuel A.; Stringer, Simon M.

    2014-01-01

    Although many computational models have been proposed to explain orientation maps in primary visual cortex (V1), it is not yet known how similar clusters of color-selective neurons in macaque V1/V2 are connected and develop. In this work, we address the problem of understanding the cortical processing of color information and propose a possible mechanism for the development of the patchy distribution of color selectivity via computational modeling. Each color input is decomposed into a red, green, and blue representation and transmitted to the visual cortex via a simulated optic nerve in a luminance channel and red–green and blue–yellow opponent color channels. Our model of the early visual system consists of multiple topographically-arranged layers of excitatory and inhibitory neurons, with sparse intra-layer connectivity and feed-forward connectivity between layers. Layers are arranged based on anatomy of early visual pathways, and include a retina, lateral geniculate nucleus, and layered neocortex. Each neuron in the V1 output layer makes synaptic connections to neighboring neurons and receives the three types of signals in the different channels from the corresponding photoreceptor position. Synaptic weights are randomized and learned using spike-timing-dependent plasticity (STDP). After training with natural images, the neurons display heightened sensitivity to specific colors. Information-theoretic analysis reveals mutual information between particular stimuli and responses, and that the information reaches a maximum with fewer neurons in the higher layers, indicating that estimates of the input colors can be made using the output of fewer cells in the later stages of cortical processing. In addition, cells with similar color receptive fields form clusters. Analysis of spiking activity reveals increased firing synchrony between neurons when particular color inputs are presented or removed (ON-cell/OFF-cell). PMID:24659956
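
    The pair-based STDP rule that such models rely on can be sketched as follows; the learning rates and time constants below are generic textbook values, not the parameters used in this particular model:

```python
import math

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Weight change for one pre/post spike pair; dt = t_post - t_pre in ms."""
    if dt >= 0:
        # Pre fires before post: potentiation, decaying with the spike gap.
        return a_plus * math.exp(-dt / tau_plus)
    # Post fires before pre: depression.
    return -a_minus * math.exp(dt / tau_minus)

# Apply two spike pairs to a weight, clipped to [0, 1] as in most models.
w = 0.5
for dt in (5.0, -5.0):
    w = min(1.0, max(0.0, w + stdp_dw(dt)))
```

    Because `a_minus` slightly exceeds `a_plus`, uncorrelated pre/post firing depresses weights on average, which is what drives competition among inputs during training.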

  1. Psychophysical and neuroimaging responses to moving stimuli in a patient with the Riddoch phenomenon due to bilateral visual cortex lesions.

    PubMed

    Arcaro, Michael J; Thaler, Lore; Quinlan, Derek J; Monaco, Simona; Khan, Sarah; Valyear, Kenneth F; Goebel, Rainer; Dutton, Gordon N; Goodale, Melvyn A; Kastner, Sabine; Culham, Jody C

    2018-05-09

    Patients with injury to early visual cortex or its inputs can display the Riddoch phenomenon: preserved awareness for moving but not stationary stimuli. We provide a detailed case report of a patient with the Riddoch phenomenon, MC. MC has extensive bilateral lesions to occipitotemporal cortex that include most early visual cortex and complete blindness in visual field perimetry testing with static targets. Nevertheless, she shows a remarkably robust preserved ability to perceive motion, enabling her to navigate through cluttered environments and perform actions like catching moving balls. Comparisons of MC's structural magnetic resonance imaging (MRI) data to a probabilistic atlas based on controls reveals that MC's lesions encompass the posterior, lateral, and ventral early visual cortex bilaterally (V1, V2, V3A/B, LO1/2, TO1/2, hV4 and VO1 in both hemispheres) as well as more extensive damage to right parietal (inferior parietal lobule) and left ventral occipitotemporal cortex (VO1, PHC1/2). She shows some sparing of anterior occipital cortex, which may account for her ability to see moving targets beyond ~15 degrees eccentricity during perimetry. Most strikingly, functional and structural MRI revealed robust and reliable spared functionality of the middle temporal motion complex (MT+) bilaterally. Moreover, consistent with her preserved ability to discriminate motion direction in psychophysical testing, MC also shows direction-selective adaptation in MT+. A variety of tests did not enable us to discern whether input to MT+ was driven by her spared anterior occipital cortex or subcortical inputs. Nevertheless, MC shows rich motion perception despite profoundly impaired static and form vision, combined with clear preservation of activation in MT+, thus supporting the role of MT+ in the Riddoch phenomenon. Copyright © 2018 Elsevier Ltd. All rights reserved.

  2. Throwing out the rules: anticipatory alpha-band oscillatory attention mechanisms during task-set reconfigurations.

    PubMed

    Foxe, John J; Murphy, Jeremy W; De Sanctis, Pierfilippo

    2014-06-01

    We assessed the role of alpha-band oscillatory activity during a task-switching design that required participants to switch between an auditory and a visual task, while task-relevant audiovisual inputs were simultaneously presented. Instructional cues informed participants which task to perform on a given trial and we assessed alpha-band power in the short 1.35-s period intervening between the cue and the task-imperative stimuli, on the premise that attentional biasing mechanisms would be deployed to resolve competition between the auditory and visual inputs. Prior work had shown that alpha-band activity was differentially deployed depending on the modality of the cued task. Here, we asked whether this activity would, in turn, be differentially deployed depending on whether participants had just made a switch of task or were being asked to simply repeat the task. It is well established that performance speed and accuracy are poorer on switch than on repeat trials. Here, however, the use of instructional cues completely mitigated these classic switch-costs. Measures of alpha-band synchronisation and desynchronisation showed that there was indeed greater and earlier differential deployment of alpha-band activity on switch vs. repeat trials. Contrary to our hypothesis, this differential effect was entirely due to changes in the amount of desynchronisation observed during switch and repeat trials of the visual task, with more desynchronisation over both posterior and frontal scalp regions during switch-visual trials. These data imply that particularly vigorous, and essentially fully effective, anticipatory biasing mechanisms resolved the competition between competing auditory and visual inputs when a rapid switch of task was required. © 2014 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  3. Robot Sequencing and Visualization Program (RSVP)

    NASA Technical Reports Server (NTRS)

    Cooper, Brian K.; Maxwell, Scott A.; Hartman, Frank R.; Wright, John R.; Yen, Jeng; Toole, Nicholas T.; Gorjian, Zareh; Morrison, Jack C.

    2013-01-01

    The Robot Sequencing and Visualization Program (RSVP) is being used in the Mars Science Laboratory (MSL) mission for downlink data visualization and command sequence generation. RSVP reads and writes downlink data products from the operations data server (ODS) and writes uplink data products to the ODS. The primary users of RSVP are members of the Rover Planner team (part of the Integrated Planning and Execution Team (IPE)), who use it to perform traversability/articulation analyses, take activity plan input from the Science and Mission Planning teams, and create a set of rover sequences to be sent to the rover every sol. The primary inputs to RSVP are downlink data products and activity plans in the ODS database. The primary outputs are command sequences to be placed in the ODS for further processing prior to uplink to each rover. RSVP is composed of two main subsystems. The first, called the Robot Sequence Editor (RoSE), understands the MSL activity and command dictionaries and takes care of converting incoming activity level inputs into command sequences. The Rover Planners use the RoSE component of RSVP to put together command sequences and to view and manage command level resources like time, power, temperature, etc. (via a transparent realtime connection to SEQGEN). The second component of RSVP is called HyperDrive, a set of high-fidelity computer graphics displays of the Martian surface in 3D and in stereo. The Rover Planners can explore the environment around the rover, create commands related to motion of all kinds, and see the simulated result of those commands via its underlying tight coupling with flight navigation, motor, and arm software. This software is the evolutionary replacement for the Rover Sequencing and Visualization software used to create command sequences (and visualize the Martian surface) for the Mars Exploration Rover mission.

  4. Do the Contents of Visual Working Memory Automatically Influence Attentional Selection during Visual Search?

    ERIC Educational Resources Information Center

    Woodman, Geoffrey F.; Luck, Steven J.

    2007-01-01

    In many theories of cognition, researchers propose that working memory and perception operate interactively. For example, in previous studies researchers have suggested that sensory inputs matching the contents of working memory will have an automatic advantage in the competition for processing resources. The authors tested this hypothesis by…

  5. Designing between Pedagogies and Cultures: Audio-Visual Chinese Language Resources for Australian Schools

    ERIC Educational Resources Information Center

    Yuan, Yifeng; Shen, Huizhong

    2016-01-01

    This design-based study examines the creation and development of audio-visual Chinese language teaching and learning materials for Australian schools by incorporating users' feedback and content writers' input that emerged in the designing process. Data were collected from workshop feedback of two groups of Chinese-language teachers from primary…

  6. Learning to Look for Language: Development of Joint Attention in Young Deaf Children

    ERIC Educational Resources Information Center

    Lieberman, Amy M.; Hatrak, Marla; Mayberry, Rachel I.

    2014-01-01

    Joint attention between hearing children and their caregivers is typically achieved when the adult provides spoken, auditory linguistic input that relates to the child's current visual focus of attention. Deaf children interacting through sign language must learn to continually switch visual attention between people and objects in order to achieve…

  7. Relationship between Odor Identification and Visual Distractors in Children with Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Kumazaki, Hirokazu; Kikuchi, Mitsuru; Yoshimura, Yuko; Miyao, Masutomo; Okada, Ken-ichi; Mimura, Masaru; Minabe, Yoshio

    2018-01-01

    Understanding the nature of olfactory abnormalities is crucial for optimal interventions in children with autism spectrum disorders (ASD). However, previous studies that have investigated odor identification in children with ASD have produced inconsistent results. The ability to correctly identify an odor relies heavily on visual inputs in the…

  8. Recent Visual Experience Shapes Visual Processing in Rats through Stimulus-Specific Adaptation and Response Enhancement.

    PubMed

    Vinken, Kasper; Vogels, Rufin; Op de Beeck, Hans

    2017-03-20

    From an ecological point of view, it is generally suggested that the main goal of vision in rats and mice is navigation and (aerial) predator evasion [1-3]. The latter requires fast and accurate detection of a change in the visual environment. An outstanding question is whether there are mechanisms in the rodent visual system that would support and facilitate visual change detection. An experimental protocol frequently used to investigate change detection in humans is the oddball paradigm, in which a rare, unexpected stimulus is presented in a train of stimulus repetitions [4]. A popular "predictive coding" theory of cortical responses states that neural responses should decrease for expected sensory input and increase for unexpected input [5, 6]. Despite evidence for response suppression and enhancement in noninvasive scalp recordings in humans with this paradigm [7, 8], it has proven challenging to observe both phenomena in invasive action potential recordings in other animals [9-11]. During a visual oddball experiment, we recorded multi-unit spiking activity in rat primary visual cortex (V1) and latero-intermediate area (LI), which is a higher area of the rodent ventral visual stream. In rat V1, there was only evidence for response suppression related to stimulus-specific adaptation, and not for response enhancement. However, higher up in area LI, spiking activity showed clear surprise-based response enhancement in addition to stimulus-specific adaptation. These results show that neural responses along the rat ventral visual stream become increasingly sensitive to changes in the visual environment, suggesting a system specialized in the detection of unexpected events. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. Vocabulary Learning through Viewing Video: The Effect of Two Enhancement Techniques

    ERIC Educational Resources Information Center

    Montero Perez, Maribel; Peters, Elke; Desmet, Piet

    2018-01-01

    While most studies on L2 vocabulary learning through input have addressed learners' vocabulary uptake from written text, this study focuses on audio-visual input. In particular, we investigate the effects of enhancing video by (1) adding different types of L2 subtitling (i.e. no captioning, full captioning, keyword captioning, and glossed keyword…

  10. Central Cross-Talk in Task Switching : Evidence from Manipulating Input-Output Modality Compatibility

    ERIC Educational Resources Information Center

    Stephan, Denise Nadine; Koch, Iring

    2010-01-01

    Two experiments examined the role of compatibility of input and output (I-O) modality mappings in task switching. We define I-O modality compatibility in terms of similarity of stimulus modality and modality of response-related sensory consequences. Experiment 1 included switching between 2 compatible tasks (auditory-vocal vs. visual-manual) and…

  11. Low-cost USB interface for operant research using Arduino and Visual Basic.

    PubMed

    Escobar, Rogelio; Pérez-Herrera, Carlos A

    2015-03-01

    This note describes the design of a low-cost interface using Arduino microcontroller boards and Visual Basic programming for operant conditioning research. The board executes one program in the Arduino programming language that polls the state of the inputs and generates outputs in an operant chamber. This program communicates through a USB port with another program written in Visual Basic 2010 Express Edition running on a laptop, desktop, netbook computer, or even a tablet running the Windows operating system. The Visual Basic program controls schedules of reinforcement and records real-time data. A single Arduino board can be used to control a total of 52 input/output lines, and multiple Arduino boards can be used to control multiple operant chambers. An external power supply and a series of micro relays are required to control 28-V DC devices commonly used in operant chambers. Instructions for downloading and using the programs to generate simple and concurrent schedules of reinforcement are provided. Testing suggests that the interface is reliable, accurate, and could serve as an inexpensive alternative to commercial equipment. © Society for the Experimental Analysis of Behavior.
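
    The division of labor described here (the board polls input lines, the host program implements the schedule of reinforcement) can be sketched, hardware aside, as a fixed-ratio schedule driven by a polling loop; the class and method names are hypothetical, not taken from the published programs:

```python
class FixedRatioSchedule:
    """Hardware-free sketch: reinforce after every `ratio` responses."""
    def __init__(self, ratio):
        self.ratio = ratio
        self.count = 0        # responses since last reinforcer
        self.reinforcers = 0  # total reinforcers delivered

    def poll(self, lever_pressed):
        # Mirrors the polling loop: read input state, drive output on criterion.
        if lever_pressed:
            self.count += 1
            if self.count >= self.ratio:
                self.count = 0
                self.reinforcers += 1
                return True   # signal the board to activate the feeder output
        return False

# Twelve consecutive lever presses on an FR-5 schedule yield two reinforcers.
fr5 = FixedRatioSchedule(5)
outputs = [fr5.poll(True) for _ in range(12)]
```

    In the actual system this logic runs on the host side, with the serial link carrying input states up and output commands down each polling cycle.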

  12. TMS of the occipital cortex induces tactile sensations in the fingers of blind Braille readers.

    PubMed

    Ptito, M; Fumal, A; de Noordhout, A Martens; Schoenen, J; Gjedde, A; Kupers, R

    2008-01-01

    Various non-visual inputs produce cross-modal responses in the visual cortex of early blind subjects. In order to determine the qualitative experience associated with these occipital activations, we systematically stimulated the entire occipital cortex using single pulse transcranial magnetic stimulation (TMS) in early blind subjects and in blindfolded seeing controls. Whereas blindfolded seeing controls reported only phosphenes following occipital cortex stimulation, some of the blind subjects reported tactile sensations in the fingers that were somatotopically organized onto the visual cortex. The number of cortical sites inducing tactile sensations appeared to be related to the number of hours of Braille reading per day, Braille reading speed and dexterity. These data, taken in conjunction with previous anatomical, behavioural and functional imaging results, suggest the presence of a polysynaptic cortical pathway between the somatosensory cortex and the visual cortex in early blind subjects. These results also add new evidence that the activity of the occipital lobe in the blind takes its qualitative expression from the character of its new input source, therefore supporting the cortical deference hypothesis.

  13. Visual analytics of inherently noisy crowdsourced data on ultra high resolution displays

    NASA Astrophysics Data System (ADS)

    Huynh, Andrew; Ponto, Kevin; Lin, Albert Yu-Min; Kuester, Falko

    The increasing prevalence of distributed human microtasking, crowdsourcing, has followed the exponential increase in data collection capabilities. The large scale and distributed nature of these microtasks produce overwhelming amounts of information that is inherently noisy due to the nature of human input. Furthermore, these inputs create a constantly changing dataset with additional information added on a daily basis. Methods to quickly visualize, filter, and understand this information over temporal and geospatial constraints are key to the success of crowdsourcing. This paper presents novel methods to visually analyze geospatial data collected through crowdsourcing on top of remote sensing satellite imagery. An ultra high resolution tiled display system is used to explore the relationship between human and satellite remote sensing data at scale. A case study is provided that evaluates the presented technique in the context of an archaeological field expedition. A team in the field communicated in real-time with and was guided by researchers in the remote visual analytics laboratory, swiftly sifting through incoming crowdsourced data to identify target locations that were identified as viable archaeological sites.
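
    One common way to filter such inherently noisy crowd input is consensus binning: snap each participant's tag to a grid cell and keep only the cells that several independent participants agree on. A minimal sketch (the cell size and vote threshold are illustrative, not the paper's parameters):

```python
from collections import Counter

def consensus_targets(points, cell=0.01, min_votes=3):
    """Bin noisy (lat, lon) tags into grid cells; keep cells with enough votes."""
    votes = Counter((round(lat / cell), round(lon / cell)) for lat, lon in points)
    return {(i * cell, j * cell): n for (i, j), n in votes.items() if n >= min_votes}

# Three nearby tags agree on one site; a lone outlier tag is filtered out.
tags = [(43.702, 104.051), (43.703, 104.052), (43.7025, 104.0515), (43.9, 104.3)]
sites = consensus_targets(tags)
```

    Tuning `cell` trades spatial precision against tolerance for tagging jitter, and `min_votes` sets how much inter-annotator agreement counts as consensus.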

  14. The impact of attentional, linguistic, and visual features during object naming

    PubMed Central

    Clarke, Alasdair D. F.; Coco, Moreno I.; Keller, Frank

    2013-01-01

    Object detection and identification are fundamental to human vision, and there is mounting evidence that objects guide the allocation of visual attention. However, the role of objects in tasks involving multiple modalities is less clear. To address this question, we investigate object naming, a task in which participants have to verbally identify objects they see in photorealistic scenes. We report an eye-tracking study that investigates which features (attentional, visual, and linguistic) influence object naming. We find that the amount of visual attention directed toward an object, its position and saliency, along with linguistic factors such as word frequency, animacy, and semantic proximity, significantly influence whether the object will be named or not. We then ask how features from different modalities are combined during naming, and find significant interactions between saliency and position, saliency and linguistic features, and attention and position. We conclude that when the cognitive system performs tasks such as object naming, it uses input from one modality to constrain or enhance the processing of other modalities, rather than processing each input modality independently. PMID:24379792

  15. Emotion and the Cardiovascular System: Postulated Role of Inputs From the Medial Prefrontal Cortex to the Dorsolateral Periaqueductal Gray.

    PubMed

    Dampney, Roger

    2018-01-01

    The midbrain periaqueductal gray (PAG) plays a major role in generating different types of behavioral responses to emotional stressors. This review focuses on the role of the dorsolateral (dl) portion of the PAG, which on the basis of anatomical and functional studies, appears to have a unique and distinctive role in generating behavioral, cardiovascular and respiratory responses to real and perceived emotional stressors. In particular, the dlPAG, but not other parts of the PAG, receives direct inputs from the primary auditory cortex and from the secondary visual cortex. In addition, there are strong direct inputs to the dlPAG, but not other parts of the PAG, from regions within the medial prefrontal cortex that in primates correspond to cortical areas 10m, 25 and 32. I first summarise the evidence that the inputs to the dlPAG arising from visual, auditory and olfactory signals trigger defensive behavioral responses supported by appropriate cardiovascular and respiratory effects, when such signals indicate the presence of a real external threat, such as the presence of a predator. I then consider the functional roles of the direct inputs from the medial prefrontal cortex, and propose the hypothesis that these inputs are activated by perceived threats that are generated as a consequence of complex cognitive processes. I further propose that the inputs from areas 10m, 25 and 32 are activated under different circumstances. The input from cortical area 10m is of special interest, because this cortical area exists only in primates and is much larger in the brain of humans than in all other primates.

  16. Emotion and the Cardiovascular System: Postulated Role of Inputs From the Medial Prefrontal Cortex to the Dorsolateral Periaqueductal Gray

    PubMed Central

    Dampney, Roger

    2018-01-01

    The midbrain periaqueductal gray (PAG) plays a major role in generating different types of behavioral responses to emotional stressors. This review focuses on the role of the dorsolateral (dl) portion of the PAG, which on the basis of anatomical and functional studies, appears to have a unique and distinctive role in generating behavioral, cardiovascular and respiratory responses to real and perceived emotional stressors. In particular, the dlPAG, but not other parts of the PAG, receives direct inputs from the primary auditory cortex and from the secondary visual cortex. In addition, there are strong direct inputs to the dlPAG, but not other parts of the PAG, from regions within the medial prefrontal cortex that in primates correspond to cortical areas 10m, 25 and 32. I first summarise the evidence that the inputs to the dlPAG arising from visual, auditory and olfactory signals trigger defensive behavioral responses supported by appropriate cardiovascular and respiratory effects, when such signals indicate the presence of a real external threat, such as the presence of a predator. I then consider the functional roles of the direct inputs from the medial prefrontal cortex, and propose the hypothesis that these inputs are activated by perceived threats that are generated as a consequence of complex cognitive processes. I further propose that the inputs from areas 10m, 25 and 32 are activated under different circumstances. The input from cortical area 10m is of special interest, because this cortical area exists only in primates and is much larger in the brain of humans than in all other primates. PMID:29881334

  17. The Synaptic and Morphological Basis of Orientation Selectivity in a Polyaxonal Amacrine Cell of the Rabbit Retina.

    PubMed

    Murphy-Baum, Benjamin L; Taylor, W Rowland

    2015-09-30

    Much of the computational power of the retina derives from the activity of amacrine cells, a large and diverse group of GABAergic and glycinergic inhibitory interneurons. Here, we identify an ON-type orientation-selective, wide-field, polyaxonal amacrine cell (PAC) in the rabbit retina and demonstrate how its orientation selectivity arises from the structure of the dendritic arbor and the pattern of excitatory and inhibitory inputs. Excitation from ON bipolar cells and inhibition arising from the OFF pathway converge to generate a quasi-linear integration of visual signals in the receptive field center. This serves to suppress responses to high spatial frequencies, thereby improving sensitivity to larger objects and enhancing orientation selectivity. Inhibition also regulates the magnitude and time course of excitatory inputs to this PAC through serial inhibitory connections onto the presynaptic terminals of ON bipolar cells. This presynaptic inhibition is driven by graded potentials within local microcircuits, similar in extent to the size of single bipolar cell receptive fields. Additional presynaptic inhibition is generated by spiking amacrine cells on a larger spatial scale covering several hundred microns. The orientation selectivity of this PAC may be a substrate for the inhibition that mediates orientation selectivity in some types of ganglion cells. Significance statement: The retina comprises numerous excitatory and inhibitory circuits that encode specific features in the visual scene, such as orientation, contrast, or motion. Here, we identify a wide-field inhibitory neuron that responds to visual stimuli of a particular orientation, a feature selectivity that is primarily due to the elongated shape of the dendritic arbor. Integration of convergent excitatory and inhibitory inputs from the ON and OFF visual pathways suppresses responses to small objects and fine textures, thus enhancing selectivity for larger objects. Feedback inhibition regulates the strength and speed of excitation on both local and wide-field spatial scales. This study demonstrates how different synaptic inputs are regulated to tune a neuron to respond to specific features in the visual scene. Copyright © 2015 the authors 0270-6474/15/3513336-15$15.00/0.

  18. Software Aids In Graphical Depiction Of Flow Data

    NASA Technical Reports Server (NTRS)

    Stegeman, J. D.

    1995-01-01

    Interactive Data Display System (IDDS) computer program is graphical-display program designed to assist in visualization of three-dimensional flow in turbomachinery. Grid and simulation data files in PLOT3D format required for input. Able to unwrap volumetric data cone associated with centrifugal compressor and display results in easy-to-understand two- or three-dimensional plots. IDDS provides majority of visualization and analysis capability for Integrated Computational Fluid Dynamics and Experiment (ICE) system. IDDS invoked from any subsystem, or used as stand-alone package of display software. Generates contour, vector, shaded, x-y, and carpet plots. Written in C language. Input file format used by IDDS is that of PLOT3D (COSMIC item ARC-12782).
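
    The PLOT3D grid format that IDDS expects stores the block dimensions followed by all x coordinates, then all y, then all z. A minimal reader for the single-block ASCII variant might look like the sketch below; note that real PLOT3D files are frequently multi-block and unformatted binary, which this sketch does not handle:

```python
def read_plot3d_ascii(text):
    """Minimal single-block ASCII PLOT3D grid reader (sketch).
    Expects: ni nj nk, then ni*nj*nk values each for x, y, z."""
    nums = text.split()
    ni, nj, nk = (int(v) for v in nums[:3])
    n = ni * nj * nk
    vals = [float(v) for v in nums[3:3 + 3 * n]]
    return (ni, nj, nk), vals[:n], vals[n:2 * n], vals[2 * n:3 * n]

# A trivial 2x1x1 grid: two points on the x axis.
sample = "2 1 1  0.0 1.0  0.0 0.0  0.0 0.0"
dims, x, y, z = read_plot3d_ascii(sample)
```

    Solution (q) files follow the same block layout but carry five flow variables per grid point instead of three coordinates.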

  19. Numerical simulation of human orientation perception during lunar landing

    NASA Astrophysics Data System (ADS)

    Clark, Torin K.; Young, Laurence R.; Stimpson, Alexander J.; Duda, Kevin R.; Oman, Charles M.

    2011-09-01

    In lunar landing it is necessary to select a suitable landing point and then control a stable descent to the surface. In manned landings, astronauts will play a critical role in monitoring systems and adjusting the descent trajectory through either supervisory control and landing point designations, or by direct manual control. For the astronauts to ensure vehicle performance and safety, they will have to accurately perceive vehicle orientation. A numerical model for human spatial orientation perception was simulated using input motions from lunar landing trajectories to predict the potential for misperceptions. Three representative trajectories were studied: an automated trajectory, a landing point designation trajectory, and a challenging manual control trajectory. These trajectories were studied under three cases with different cues activated in the model to study the importance of vestibular cues, visual cues, and the effect of the descent engine thruster creating dust blowback. The model predicts that spatial misperceptions are likely to occur as a result of the lunar landing motions, particularly with limited or incomplete visual cues. The powered descent acceleration profile creates a somatogravic illusion causing the astronauts to falsely perceive themselves and the vehicle as upright, even when the vehicle has a large pitch or roll angle. When visual pathways were activated within the model these illusions were mostly suppressed. Dust blowback, obscuring the visual scene out the window, was also found to create disorientation. These orientation illusions are likely to interfere with the astronauts' ability to effectively control the vehicle, potentially degrading performance and safety. Therefore suitable countermeasures, including disorientation training and advanced displays, are recommended.
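
    The somatogravic illusion described above has a simple geometric core: sustained linear acceleration tilts the gravito-inertial vector, and without visual cues the otolith organs cannot distinguish that tilt from a real pitch of the vehicle. A toy calculation, with an assumed thrust value rather than an actual descent profile:

```python
import math

def perceived_tilt_deg(a_forward, g=9.81):
    """Angle of the gravito-inertial vector from vertical, in degrees,
    for a sustained forward acceleration a_forward (m/s^2)."""
    return math.degrees(math.atan2(a_forward, g))

tilt = perceived_tilt_deg(3.0)  # roughly 17 degrees of illusory pitch-up
```

    During powered descent this illusory pitch can make an already tilted vehicle feel upright, which is why the model predicts misperception chiefly when the visual pathways are disabled or the scene is obscured by dust.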

  20. Blindsight and Unconscious Vision: What They Teach Us about the Human Visual System

    PubMed Central

    Ajina, Sara; Bridge, Holly

    2017-01-01

    Damage to the primary visual cortex removes the major input from the eyes to the brain, causing significant visual loss as patients are unable to perceive the side of the world contralateral to the damage. Some patients, however, retain the ability to detect visual information within this blind region; this is known as blindsight. By studying the visual pathways that underlie this residual vision in patients, we can uncover additional aspects of the human visual system that likely contribute to normal visual function but cannot be revealed under physiological conditions. In this review, we discuss the residual abilities and neural activity that have been described in blindsight and the implications of these findings for understanding the intact system. PMID:27777337

  1. Efficient encoding of motion is mediated by gap junctions in the fly visual system.

    PubMed

    Wang, Siwei; Borst, Alexander; Zaslavsky, Noga; Tishby, Naftali; Segev, Idan

    2017-12-01

    Understanding the computational implications of specific synaptic connectivity patterns is a fundamental goal in neuroscience. In particular, the computational role of ubiquitous electrical synapses operating via gap junctions remains elusive. In the fly visual system, the cells of the vertical-system (VS) network, which play a key role in visual processing, primarily connect to each other via axonal gap junctions. This network therefore provides a unique opportunity to explore the functional role of gap junctions in sensory information processing. Our information-theoretic analysis of a realistic VS network model shows that within 10 ms following the onset of the visual input, the presence of axonal gap junctions enables the VS system to efficiently encode the axis of rotation, θ, of the fly's ego motion. This encoding efficiency, measured in bits, is near-optimal with respect to the physical limits of performance determined by the statistical structure of the visual input itself. The VS network is known to be connected to downstream pathways via a subset of triplets of the VS cells; we found that, because of the axonal gap junctions, the efficiency of this subpopulation in encoding θ is superior to that of the whole VS network and is robust over a wide range of signal-to-noise ratios. We further demonstrate that this efficient encoding of motion by this subpopulation is necessary for the fly's visually guided behavior, such as banked turns in evasive maneuvers. Because gap junctions are formed among the axons of the VS cells, they only affect the system's readout while leaving the dendritic input intact, suggesting that the computational principles implemented by neural circuitries may be much richer than previously appreciated on the basis of point-neuron models. Our study provides new insights into how specific network connectivity leads to efficient encoding of sensory stimuli.
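
    The encoding efficiency "measured in bits" in this record is grounded in mutual information between stimulus and response. As a generic, illustrative sketch (not the authors' analysis; the discrete stimulus/response samples below are fabricated), a plug-in estimate can be computed as:

```python
import math
from collections import Counter

def mutual_information_bits(stimuli, responses):
    """Plug-in estimate of I(stimulus; response) in bits from paired
    discrete samples."""
    n = len(stimuli)
    p_s = Counter(stimuli)                   # marginal stimulus counts
    p_r = Counter(responses)                 # marginal response counts
    p_sr = Counter(zip(stimuli, responses))  # joint counts
    mi = 0.0
    for (s, r), c in p_sr.items():
        joint = c / n
        mi += joint * math.log2(joint / ((p_s[s] / n) * (p_r[r] / n)))
    return mi

# Toy case: a response that perfectly distinguishes two equally likely
# rotation axes carries exactly 1 bit about the stimulus.
stims = ["axis_0", "axis_90"] * 50
resps = ["burst", "silent"] * 50
print(mutual_information_bits(stims, resps))  # -> 1.0
```

    With real spike data the same quantity is typically estimated from binned responses and corrected for limited sampling; none of that machinery is shown here.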

  2. Altered white matter in early visual pathways of humans with amblyopia.

    PubMed

    Allen, Brian; Spiegel, Daniel P; Thompson, Benjamin; Pestilli, Franco; Rokers, Bas

    2015-09-01

    Amblyopia is a visual disorder caused by poorly coordinated binocular input during development. Little is known about the impact of amblyopia on the white matter within the visual system. We studied the properties of six major visual white-matter pathways in a group of adults with amblyopia (n=10) and matched controls (n=10) using diffusion weighted imaging (DWI) and fiber tractography. While we did not find significant differences in diffusion properties in cortico-cortical pathways, patients with amblyopia exhibited increased mean diffusivity in thalamo-cortical visual pathways. These findings suggest that amblyopia may systematically alter the white matter properties of early visual pathways. Copyright © 2015 Elsevier Ltd. All rights reserved.

  3. Segregation of Visual Response Properties in the Mouse Superior Colliculus and Their Modulation during Locomotion

    PubMed Central

    2017-01-01

    The superior colliculus (SC) receives direct input from the retina and integrates it with information about sound, touch, and state of the animal that is relayed from other parts of the brain to initiate specific behavioral outcomes. The superficial SC layers (sSC) contain cells that respond to visual stimuli, whereas the deep SC layers (dSC) contain cells that also respond to auditory and somatosensory stimuli. Here, we used a large-scale silicon probe recording system to examine the visual response properties of SC cells of head-fixed and alert male mice. We found cells with diverse response properties including: (1) orientation/direction-selective (OS/DS) cells with a firing rate that is suppressed by drifting sinusoidal gratings (negative OS/DS cells); (2) suppressed-by-contrast cells; (3) cells with complex-like spatial summation nonlinearity; and (4) cells with Y-like spatial summation nonlinearity. We also found specific response properties that are enriched in different depths of the SC. The sSC is enriched with cells with small RFs, high evoked firing rates (FRs), and sustained temporal responses, whereas the dSC is enriched with the negative OS/DS cells and with cells with large RFs, low evoked FRs, and transient temporal responses. Locomotion modulates the activity of the SC cells both additively and multiplicatively and changes the preferred spatial frequency of some SC cells. These results provide the first description of the negative OS/DS cells and demonstrate that the SC segregates cells with different response properties and that the behavioral state of a mouse affects SC activity. SIGNIFICANCE STATEMENT The superior colliculus (SC) receives visual input from the retina in its superficial layers (sSC) and induces eye/head-orientating movements and innate defensive responses in its deeper layers (dSC). Despite their importance, very little is known about the visual response properties of dSC neurons. 
Using high-density electrode recordings and model-based analysis, we identified several previously undescribed visual response properties of SC cells, including encoding of a cell's preferred orientation or direction by suppression of the firing rate. The sSC and the dSC are enriched with cells with different visual response properties, and locomotion modulates the cells in the SC. These findings contribute to our understanding of how the SC processes visual inputs, a critical step in comprehending visually guided behaviors. PMID:28760858

  4. Shape Similarity, Better than Semantic Membership, Accounts for the Structure of Visual Object Representations in a Population of Monkey Inferotemporal Neurons

    PubMed Central

    DiCarlo, James J.; Zecchina, Riccardo; Zoccolan, Davide

    2013-01-01

    The anterior inferotemporal cortex (IT) is the highest stage along the hierarchy of visual areas that, in primates, processes visual objects. Although several lines of evidence suggest that IT primarily represents visual shape information, some recent studies have argued that neuronal ensembles in IT code the semantic membership of visual objects (i.e., represent conceptual classes such as animate and inanimate objects). In this study, we investigated to what extent semantic, rather than purely visual, information is represented in IT by performing a multivariate analysis of IT responses to a set of visual objects. By relying on a variety of machine-learning approaches (including a cutting-edge clustering algorithm recently developed in the domain of statistical physics), we found that, in most instances, the IT representation of visual objects is accounted for by their similarity at the level of shape or, more surprisingly, of low-level visual properties. Only in a few cases did we observe IT representations of semantic classes that were not explainable by the visual similarity of their members. Overall, these findings reassert the primary function of IT as a conveyor of explicit visual shape information, and reveal that low-level visual properties are represented in IT to a greater extent than previously appreciated. In addition, our work demonstrates how combining a variety of state-of-the-art multivariate approaches, and carefully estimating the contribution of shape similarity to the representation of object categories, can substantially advance our understanding of the neuronal coding of visual objects in cortex. PMID:23950700

  5. Object perception is selectively slowed by a visually similar working memory load.

    PubMed

    Robinson, Alan; Manzi, Alberto; Triesch, Jochen

    2008-12-22

    The capacity of visual working memory has been extensively characterized, but little work has investigated how occupying visual memory influences other aspects of cognition and perception. Here we show a novel effect: maintaining an item in visual working memory slows the processing of similar visual stimuli during the maintenance period. Subjects judged the gender of computer-rendered faces or the naturalness of body postures while maintaining different visual memory loads. We found that when stimuli of the same class (faces or bodies) were maintained in memory, perceptual judgments were slowed. Interestingly, this is the opposite of what would be predicted from traditional priming. Our results suggest that there is interference between visual working memory and perception, caused by visual similarity between new perceptual input and items already encoded in memory.

  6. Deep imitation learning for 3D navigation tasks.

    PubMed

    Hussein, Ahmed; Elyan, Eyad; Gaber, Mohamed Medhat; Jayne, Chrisina

    2018-01-01

    Deep learning techniques have shown success in learning from raw high-dimensional data in various applications. While deep reinforcement learning has recently gained popularity as a method to train intelligent agents, the use of deep learning in imitation learning has been scarcely explored. Imitation learning can be an efficient way to teach intelligent agents by providing a set of demonstrations to learn from. However, generalizing to situations that are not represented in the demonstrations can be challenging, especially in 3D environments. In this paper, we propose a deep imitation learning method to learn navigation tasks from demonstrations in a 3D environment. The supervised policy is refined using active learning in order to generalize to unseen situations. This approach is compared to two popular deep reinforcement learning techniques: deep Q-networks (DQN) and asynchronous advantage actor-critic (A3C). The proposed method, as well as the reinforcement learning methods, employs deep convolutional neural networks and learns directly from raw visual input. Methods for combining learning from demonstrations and learning from experience are also investigated; this combination aims to join the generalization ability of learning by experience with the efficiency of learning by imitation. The proposed methods are evaluated on four navigation tasks in a 3D simulated environment. Navigation tasks are a typical problem relevant to many real applications; they pose the challenge of requiring demonstrations of long trajectories to reach the target while providing only delayed (usually terminal) rewards to the agent. The experiments show that the proposed method can successfully learn navigation tasks from raw visual input, whereas the learning-from-experience methods fail to learn an effective policy. Moreover, active learning is shown to significantly improve the performance of the initially learned policy using a small number of active samples.
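
    At its core, learning from demonstrations reduces to supervised learning of a state-to-action mapping. The sketch below is a deliberately minimal, tabular version of that idea; the maze states and actions are invented, and stand in for the paper's deep convolutional policies over raw pixels:

```python
from collections import Counter, defaultdict

def clone_policy(demonstrations):
    """Tabular behaviour cloning: for each observed state, pick the action
    the demonstrator chose most often in that state."""
    by_state = defaultdict(Counter)
    for state, action in demonstrations:
        by_state[state][action] += 1
    return {s: counts.most_common(1)[0][0] for s, counts in by_state.items()}

# Hypothetical maze demonstrations: go forward in corridors, turn at walls.
demos = ([("corridor", "forward")] * 8
         + [("wall_ahead", "turn_left")] * 3
         + [("wall_ahead", "turn_right")] * 1)
policy = clone_policy(demos)
print(policy["corridor"], policy["wall_ahead"])  # -> forward turn_left
```

    A deep policy replaces the lookup table with a network that generalizes across unseen observations, which is exactly where the abstract notes plain imitation struggles and active learning helps.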

  7. Anodal transcranial direct current stimulation transiently improves contrast sensitivity and normalizes visual cortex activation in individuals with amblyopia.

    PubMed

    Spiegel, Daniel P; Byblow, Winston D; Hess, Robert F; Thompson, Benjamin

    2013-10-01

    Amblyopia is a neurodevelopmental disorder of vision that is associated with abnormal patterns of neural inhibition within the visual cortex. This disorder is often considered to be untreatable in adulthood because of insufficient visual cortex plasticity. There is increasing evidence that interventions that target inhibitory interactions within the visual cortex, including certain types of noninvasive brain stimulation, can improve visual function in adults with amblyopia. We tested the hypothesis that anodal transcranial direct current stimulation (a-tDCS) would improve visual function in adults with amblyopia by enhancing the neural response to inputs from the amblyopic eye. Thirteen adults with amblyopia participated, and contrast sensitivity in the amblyopic and fellow fixing eyes was assessed before, during, and after a-tDCS or cathodal tDCS (c-tDCS). Five participants also completed a functional magnetic resonance imaging (fMRI) study designed to investigate the effect of a-tDCS on the blood oxygen level-dependent response within the visual cortex to inputs from the amblyopic versus the fellow fixing eye. A subgroup of 8/13 participants showed a transient improvement in amblyopic eye contrast sensitivity for at least 30 minutes after a-tDCS. fMRI measurements indicated that the characteristic cortical response asymmetry in amblyopes, which favors the fellow eye, was reduced by a-tDCS. These preliminary results suggest that a-tDCS deserves further investigation as a potential tool to enhance amblyopia treatment outcomes in adults.

  8. Gravity influences top-down signals in visual processing.

    PubMed

    Cheron, Guy; Leroy, Axelle; Palmero-Soler, Ernesto; De Saedeleer, Caty; Bengoetxea, Ana; Cebolla, Ana-Maria; Vidal, Manuel; Dan, Bernard; Berthoz, Alain; McIntyre, Joseph

    2014-01-01

    Visual perception is not only based on incoming visual signals but also on information about a multimodal reference frame that incorporates vestibulo-proprioceptive input and motor signals. In addition, top-down modulation of visual processing has previously been demonstrated during cognitive operations including selective attention and working memory tasks. In the absence of a stable gravitational reference, the updating of salient stimuli becomes crucial for successful visuo-spatial behavior by humans in weightlessness. Here we found that visually-evoked potentials triggered by the image of a tunnel just prior to an impending 3D movement in a virtual navigation task were altered in weightlessness aboard the International Space Station, while those evoked by a classical 2D-checkerboard were not. Specifically, the analysis of event-related spectral perturbations and inter-trial phase coherency of these EEG signals recorded in the frontal and occipital areas showed that phase-locking of theta-alpha oscillations was suppressed in weightlessness, but only for the 3D tunnel image. Moreover, analysis of the phase of the coherency demonstrated the existence on Earth of a directional flux in the EEG signals from the frontal to the occipital areas mediating a top-down modulation during the presentation of the image of the 3D tunnel. In weightlessness, this fronto-occipital, top-down control was transformed into a diverging flux from the central areas toward the frontal and occipital areas. These results demonstrate that gravity-related sensory inputs modulate primary visual areas depending on the affordances of the visual scene.
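
    The inter-trial phase coherency used in this analysis reduces, in its simplest form, to the magnitude of the mean unit phasor across trials: 1 for perfect phase-locking, near 0 for random phases. A generic sketch of that quantity, not the study's EEG pipeline:

```python
import cmath
import math

def inter_trial_phase_coherence(phases):
    """ITC at one time-frequency point: magnitude of the mean unit phasor
    over trials. `phases` holds one phase value (radians) per trial."""
    return abs(sum(cmath.exp(1j * p) for p in phases) / len(phases))

# Tightly clustered phases give ITC near 1; uniformly spread phases near 0.
locked = [0.10, 0.12, 0.09, 0.11]
spread = [0.0, math.pi / 2, math.pi, 3 * math.pi / 2]
print(inter_trial_phase_coherence(locked) > 0.99)     # -> True
print(round(inter_trial_phase_coherence(spread), 6))  # -> 0.0
```

    In practice the per-trial phases come from a time-frequency decomposition (e.g. wavelets) of the EEG, which is omitted here.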

  9. Contrast normalization contributes to a biologically-plausible model of receptive-field development in primary visual cortex (V1)

    PubMed Central

    Willmore, Ben D.B.; Bulstrode, Harry; Tolhurst, David J.

    2012-01-01

    Neuronal populations in the primary visual cortex (V1) of mammals exhibit contrast normalization. Neurons that respond strongly to simple visual stimuli – such as sinusoidal gratings – respond less well to the same stimuli when they are presented as part of a more complex stimulus which also excites other, neighboring neurons. This phenomenon is generally attributed to generalized patterns of inhibitory connections between nearby V1 neurons. The Bienenstock, Cooper and Munro (BCM) rule is a neural network learning rule that, when trained on natural images, produces model neurons which, individually, have many tuning properties in common with real V1 neurons. However, when viewed as a population, a BCM network is very different from V1 – each member of the BCM population tends to respond to the same dominant features of visual input, producing an incomplete, highly redundant code for visual information. Here, we demonstrate that, by adding contrast normalization into the BCM rule, we arrive at a neurally-plausible Hebbian learning rule that can learn an efficient sparse, overcomplete representation that is a better model for stimulus selectivity in V1. This suggests that one role of contrast normalization in V1 is to guide the neonatal development of receptive fields, so that neurons respond to different features of visual input. PMID:22230381
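
    The proposed modification can be sketched in a few lines: divisively normalize each input patch by its overall contrast, then apply the standard BCM update with its sliding modification threshold. The learning rate, averaging constant, and semi-saturation constant below are illustrative choices, not values fitted in the paper:

```python
import random

def normalize_contrast(x, sigma=1.0):
    """Divisive contrast normalization: scale a patch by its RMS energy,
    so strong and weak stimuli drive the unit comparably."""
    energy = (sum(v * v for v in x) / len(x)) ** 0.5
    return [v / (sigma + energy) for v in x]

def bcm_step(w, x, theta, lr=0.01, tau=0.1):
    """One BCM update: dw_i = lr * y * (y - theta) * x_i, where the sliding
    threshold theta tracks a running average of y squared."""
    y = sum(wi * xi for wi, xi in zip(w, x))
    w = [wi + lr * y * (y - theta) * xi for wi, xi in zip(w, x)]
    theta = (1 - tau) * theta + tau * y * y
    return w, theta

random.seed(0)
w, theta = [0.1] * 8, 0.0
for _ in range(500):
    x = normalize_contrast([random.gauss(0.0, 1.0) for _ in range(8)])
    w, theta = bcm_step(w, x, theta)
print(len(w), theta >= 0.0)  # -> 8 True
```

    In the paper's setting the inputs are natural-image patches and many units learn jointly; a single unit driven by noise is shown here only to illustrate the form of the update.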

  11. Deep Motif Dashboard: Visualizing and Understanding Genomic Sequences Using Deep Neural Networks

    PubMed Central

    Lanchantin, Jack; Singh, Ritambhara; Wang, Beilun; Qi, Yanjun

    2018-01-01

    Deep neural network (DNN) models have recently obtained state-of-the-art prediction accuracy for the transcription factor binding site (TFBS) classification task. However, it remains unclear how these approaches identify meaningful DNA sequence signals and give insights as to why TFs bind to certain locations. In this paper, we propose a toolkit called the Deep Motif Dashboard (DeMo Dashboard) which provides a suite of visualization strategies to extract motifs, or sequence patterns, from deep neural network models for TFBS classification. We demonstrate how to visualize and understand three important DNN models: convolutional, recurrent, and convolutional-recurrent networks. Our first visualization method finds a test sequence's saliency map, which uses first-order derivatives to describe the importance of each nucleotide in making the final prediction. Second, considering that recurrent models make predictions in a temporal manner (from one end of a TFBS sequence to the other), we introduce temporal output scores, indicating the prediction score of a model over time for a sequential input. Lastly, a class-specific visualization strategy finds the optimal input sequence for a given TFBS positive class via stochastic gradient optimization. Our experimental results indicate that a convolutional-recurrent architecture performs the best among the three architectures. The visualization techniques indicate that the CNN-RNN makes predictions by modeling both motifs as well as dependencies among them. PMID:27896980
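
    The first of these visualizations, a saliency map from first-order derivatives, can be illustrated without a trained network. Below, a linear position-weight scorer followed by a sigmoid stands in for the DNN and the weights are invented; the gradient of the output with respect to each one-hot nucleotide gives the per-position saliency:

```python
import math

BASES = "ACGT"

def one_hot(seq):
    return [[1.0 if b == base else 0.0 for base in BASES] for b in seq]

def score(x, w):
    """Sigmoid of a linear position-weight score: a stand-in for a DNN."""
    z = sum(wij * xij for wi, xi in zip(w, x)
            for wij, xij in zip(wi, xi))
    return 1.0 / (1.0 + math.exp(-z))

def saliency(seq, w):
    """First-order saliency at the observed one-hot input. For sigmoid(z)
    with z linear in x, the gradient for the observed base at position i
    is p * (1 - p) * w[i][base]."""
    p = score(one_hot(seq), w)
    return [abs(p * (1 - p) * w[i][BASES.index(b)])
            for i, b in enumerate(seq)]

# Hypothetical weights that strongly favor "TA" at positions 2-3:
w = [[0.0] * 4 for _ in range(6)]
w[2][BASES.index("T")] = 3.0
w[3][BASES.index("A")] = 3.0
sal = saliency("GGTACC", w)
print(max(range(6), key=lambda i: sal[i]) in (2, 3))  # -> True
```

    For a real DNN the same gradient is obtained by backpropagation rather than in closed form, but the interpretation of the resulting map is identical.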

  12. CAMBerVis: visualization software to support comparative analysis of multiple bacterial strains.

    PubMed

    Woźniak, Michał; Wong, Limsoon; Tiuryn, Jerzy

    2011-12-01

    A number of inconsistencies in genome annotations have been documented among bacterial strains. Visualization of the differences may help biologists to make correct decisions in spurious cases. We have developed a visualization tool, CAMBerVis, to support comparative analysis of multiple bacterial strains. The software manages simultaneous visualization of multiple bacterial genomes, enabling visual analysis focused on genome structure annotations. The CAMBerVis software is freely available at the project website: http://bioputer.mimuw.edu.pl/camber. Input datasets for Mycobacterium tuberculosis and Staphylococcus aureus are integrated with the software as examples. Contact: m.wozniak@mimuw.edu.pl. Supplementary data are available at Bioinformatics online.

  13. Structural organization of parallel information processing within the tectofugal visual system of the pigeon.

    PubMed

    Hellmann, B; Güntürkün, O

    2001-01-01

    Visual information processing within the ascending tectofugal pathway to the forebrain undergoes essential rearrangements between the mesencephalic tectum opticum and the diencephalic nucleus rotundus of birds. The outer tectal layers constitute a two-dimensional map of the visual surroundings, whereas the nucleus rotundus is characterized by functional domains in which different visual features such as movement, color, or luminance are processed in parallel. Morphologic correlates of this reorganization were investigated by means of focal injections of the neuronal tracer choleratoxin subunit B into different regions of the nuclei rotundus and triangularis of the pigeon. Depending on the thalamic injection site, variations in the retrograde labeling pattern of ascending tectal efferents were observed. All rotundal-projecting neurons were located within the deep tectal layer 13. Five different cell populations were distinguished that could be differentiated according to their dendritic ramifications within different retinorecipient laminae and their axons projecting to different subcomponents of the nucleus rotundus. Because retinorecipient tectal layers differ in their input from distinct classes of retinal ganglion cells, each tectorotundal cell type probably processes different aspects of the visual surroundings. Therefore, the differential input/output connections of the five tectorotundal cell groups might constitute the structural basis for spatially segregated parallel processing of different stimulus aspects within the tectofugal visual system. Because two of the five rotundal-projecting cell groups additionally exhibited quantitative shifts along the dorsoventral extension of the tectum, the data also indicate visual field-dependent alterations in information processing for particular visual features. Copyright 2001 Wiley-Liss, Inc.

  14. Three-Dimensional Analysis of Spiny Dendrites Using Straightening and Unrolling Transforms

    PubMed Central

    Morales, Juan; Benavides-Piccione, Ruth; Pastor, Luis; Yuste, Rafael; DeFelipe, Javier

    2014-01-01

    Current understanding of the synaptic organization of the brain depends to a large extent on knowledge about the synaptic inputs to the neurons. Indeed, the dendritic surfaces of pyramidal cells (the most common neuron in the cerebral cortex) are covered by thin protrusions named dendritic spines. These represent the targets of most excitatory synapses in the cerebral cortex and therefore, dendritic spines prove critical in learning, memory and cognition. This paper presents a new method that facilitates the analysis of the 3D structure of spine insertions in dendrites, providing insight on spine distribution patterns. This method is based both on the implementation of straightening and unrolling transformations to move the analysis process to a planar, unfolded arrangement, and on the design of DISPINE, an interactive environment that supports the visual analysis of 3D patterns. PMID:22644869

  15. Spatial asymmetry in tactile sensor skin deformation aids perception of edge orientation during haptic exploration.

    PubMed

    Ponce Wong, Ruben D; Hellman, Randall B; Santos, Veronica J

    2014-01-01

    Upper-limb amputees rely primarily on visual feedback when using their prostheses to interact with others or objects in their environment. A constant reliance upon visual feedback can be mentally exhausting and does not suffice for many activities when line-of-sight is unavailable. Upper-limb amputees could greatly benefit from the ability to perceive edges, one of the most salient features of 3D shape, through touch alone. We present an approach for estimating edge orientation with respect to an artificial fingertip through haptic exploration using a multimodal tactile sensor on a robot hand. Key parameters from the tactile signals for each of four exploratory procedures were used as inputs to a support vector regression model. Edge orientation angles ranging from -90 to 90 degrees were estimated with an 85-input model having an R² of 0.99 and an RMS error of 5.08 degrees. Electrode impedance signals provided the most useful inputs by encoding spatially asymmetric skin deformation across the entire fingertip. Interestingly, sensor regions that were not in direct contact with the stimulus provided particularly useful information. The methods described here could pave the way for semi-autonomous capabilities in prosthetic or robotic hands during haptic exploration, especially when visual feedback is unavailable.
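
    As a much-simplified stand-in for the 85-input support vector regression above, the sketch below fits a one-feature least-squares model mapping a synthetic, noise-free skin-deformation asymmetry index to edge angle. The feature values and their linear relationship to angle are fabricated for illustration:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a0 + a1 * x (a stand-in for SVR)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    a1 = sxy / sxx
    return my - a1 * mx, a1  # intercept, slope

# Synthetic calibration data: asymmetry index grows linearly with angle.
angles = list(range(-90, 91, 10))
asym = [0.011 * a for a in angles]   # idealized, noise-free sensor feature
a0, a1 = fit_line(asym, angles)
est = a0 + a1 * (0.011 * 45)         # predict for a 45-degree edge
print(round(est, 1))  # -> 45.0
```

    The real model maps 85 tactile features (including electrode impedances) to angle with a kernel SVR; only the regression framing carries over.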

  16. A supramodal brain substrate of word form processing--an fMRI study on homonym finding with auditory and visual input.

    PubMed

    Balthasar, Andrea J R; Huber, Walter; Weis, Susanne

    2011-09-02

    Homonym processing in German is of theoretical interest because homonyms specifically involve word form information. In a previous study (Weis et al., 2001), we found inferior parietal activation as a correlate of successfully finding a homonym from written stimuli. The present study tries to clarify the underlying mechanism and to examine to what extent the previous homonym effect depends on visual as opposed to auditory input modality. 18 healthy subjects were examined using an event-related functional magnetic resonance imaging paradigm. Participants had to find and articulate a homonym in relation to two spoken or written words. A semantic-lexical task, oral naming from two-word definitions, was used as a control condition. When comparing brain activation for solved homonym trials to brain activation for both unsolved homonyms and solved definition trials, we obtained two activation patterns that characterised both auditory and visual processing. Semantic-lexical processing was related to bilateral inferior frontal activation, whereas left inferior parietal activation was associated with finding the correct homonym. As the inferior parietal activation during successful access to the word form of a homonym was independent of input modality, it might be the substrate of access to word form knowledge. Copyright © 2011 Elsevier B.V. All rights reserved.

  17. Dynamic Target Match Signals in Perirhinal Cortex Can Be Explained by Instantaneous Computations That Act on Dynamic Input from Inferotemporal Cortex

    PubMed Central

    Pagan, Marino

    2014-01-01

    Finding sought objects requires the brain to combine visual and target signals to determine when a target is in view. To investigate how the brain implements these computations, we recorded neural responses in inferotemporal cortex (IT) and perirhinal cortex (PRH) as macaque monkeys performed a delayed-match-to-sample target search task. Our data suggest that visual and target signals were combined within or before IT in the ventral visual pathway and then passed onto PRH, where they were reformatted into a more explicit target match signal over ∼10–15 ms. Accounting for these dynamics in PRH did not require proposing dynamic computations within PRH itself but, rather, could be attributed to instantaneous PRH computations performed upon an input representation from IT that changed with time. We found that the dynamics of the IT representation arose from two commonly observed features: individual IT neurons whose response preferences were not simply rescaled with time and variable response latencies across the population. Our results demonstrate that these types of time-varying responses have important consequences for downstream computation and suggest that dynamic representations can arise within a feedforward framework as a consequence of instantaneous computations performed upon time-varying inputs. PMID:25122904
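
    The paper's central point, that time-varying downstream signals need not imply time-varying downstream computation, can be caricatured with a fixed linear readout applied independently at each time step. The weights and population responses below are invented:

```python
def instantaneous_readout(weights, population_over_time):
    """Apply the same fixed linear readout at every time step; any dynamics
    in the output are inherited entirely from the time-varying input."""
    return [sum(w * x for w, x in zip(weights, xt))
            for xt in population_over_time]

# A static "match minus distractor" readout over an IT-like population
# response whose pattern evolves across three time bins:
readout = [1.0, -1.0]
it_response = [[0.25, 0.25], [0.75, 0.25], [0.5, 0.5]]
print(instantaneous_readout(readout, it_response))  # -> [0.0, 0.5, 0.0]
```

    The output rises and falls even though the readout itself never changes, mirroring the abstract's account of PRH dynamics inherited from IT.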

  18. Ankylosing Spondylitis and Posture Control: The Role of Visual Input

    PubMed Central

    De Nunzio, Alessandro Marco; Iervolino, Salvatore; Zincarelli, Carmela; Di Gioia, Luisa; Rengo, Giuseppe; Multari, Vincenzo; Peluso, Rosario; Di Minno, Matteo Nicola Dario; Pappone, Nicola

    2015-01-01

    Objectives. To assess motor control during quiet stance in patients with established ankylosing spondylitis (AS) and to evaluate the effect of visual input on the maintenance of a quiet posture. Methods. 12 male AS patients (mean age 50.1 ± 13.2 years) and 12 matched healthy subjects performed 2 sessions of 3 trials in quiet stance, with eyes open (EO) and with eyes closed (EC), on a baropodometric platform. The oscillation of the centre of feet pressure (CoP) was acquired. Indices of stability and balance control were assessed by the sway path (SP) of the CoP, the frequency bandwidth (FB1) that includes 80% of the area under the amplitude spectrum, the mean amplitude of the peaks (MP) of the sway density curve (SDC), and the mean distance (MD) between 2 peaks of the SDC. Results. In severe AS patients, the MD between two peaks of the SDC and the SP of the CoP were significantly higher than in controls during both EO and EC conditions. The MP was significantly reduced only on EC. Conclusions. Ankylosing spondylitis exerts a negative effect on postural stability that cannot be compensated for by visual input. Our findings may be useful in the rehabilitative management of the increased risk of falling in AS. PMID:25821831
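
    Of the indices above, the sway path is the simplest to state: the cumulative distance travelled by the CoP over a trial. A generic sketch with fabricated CoP traces (units of metres):

```python
import math

def sway_path(cop):
    """Sway path (SP): cumulative distance travelled by the centre of
    pressure over the trial; larger values indicate poorer stability."""
    return sum(math.dist(p, q) for p, q in zip(cop, cop[1:]))

# Fabricated CoP traces: larger oscillations give a longer sway path.
steady = [(0.0005 * (i % 2), 0.0) for i in range(100)]
unsteady = [(0.01 * (i % 2), 0.005 * (i % 3)) for i in range(100)]
print(sway_path(unsteady) > sway_path(steady))  # -> True
```

    The spectral and sway-density indices (FB1, MP, MD) require a frequency decomposition and peak detection on top of the raw CoP trace and are not sketched here.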

  19. Decoding Visual Location From Neural Patterns in the Auditory Cortex of the Congenitally Deaf

    PubMed Central

    Almeida, Jorge; He, Dongjun; Chen, Quanjing; Mahon, Bradford Z.; Zhang, Fan; Gonçalves, Óscar F.; Fang, Fang; Bi, Yanchao

    2016-01-01

    Sensory cortices of individuals who are congenitally deprived of a sense can exhibit considerable plasticity and be recruited to process information from the senses that remain intact. Here, we explored whether the auditory cortex of congenitally deaf individuals represents visual field location of a stimulus—a dimension that is represented in early visual areas. We used functional MRI to measure neural activity in auditory and visual cortices of congenitally deaf and hearing humans while they observed stimuli typically used for mapping visual field preferences in visual cortex. We found that the location of a visual stimulus can be successfully decoded from the patterns of neural activity in auditory cortex of congenitally deaf but not hearing individuals. This is particularly true for locations within the horizontal plane and within peripheral vision. These data show that the representations stored within neuroplastically changed auditory cortex can align with dimensions that are typically represented in visual cortex. PMID:26423461
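
    Decoding of this kind can be illustrated with a nearest-centroid classifier, a common simple choice for fMRI pattern analysis (not necessarily the authors' exact decoder). The voxel patterns and location labels below are invented:

```python
import math

def nearest_centroid_decode(train, test_pattern):
    """Decode a stimulus label from an activity pattern by assigning it to
    the nearest class centroid in voxel space."""
    centroids = {}
    for label, patterns in train.items():
        n = len(patterns)
        centroids[label] = [sum(col) / n for col in zip(*patterns)]
    return min(centroids,
               key=lambda lb: math.dist(centroids[lb], test_pattern))

# Toy 3-voxel patterns for two visual-field locations:
train = {
    "left":  [[1.0, 0.1, 0.0], [0.9, 0.2, 0.1]],
    "right": [[0.0, 0.1, 1.0], [0.1, 0.2, 0.9]],
}
print(nearest_centroid_decode(train, [0.8, 0.15, 0.05]))  # -> left
```

    In practice such decoders are cross-validated across runs, and above-chance accuracy in auditory cortex of deaf (but not hearing) participants is what supports the paper's conclusion.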

  20. Using Scientific Visualization to Represent Soil Hydrology Dynamics

    ERIC Educational Resources Information Center

    Dolliver, H. A. S.; Bell, J. C.

    2006-01-01

    Understanding the relationships between soil, landscape, and hydrology is important for making sustainable land management decisions. In this study, scientific visualization was explored as a means to visually represent the complex spatial and temporal variations in the hydrologic status of soils. Soil hydrology data was collected at seven…
