Registering Ground and Satellite Imagery for Visual Localization
2012-08-01
reckoning, inertial, stereo, light detection and ranging (LIDAR), cellular radio, and visual. As no sensor or algorithm provides perfect localization in...by metric localization approaches to confine the region of a map that needs to be searched. Simultaneous Localization and Mapping (SLAM) (5, 6), using...estimate the metric location of the camera. Se et al. (7) use SIFT features for both appearance-based global localization and incremental 3D SLAM. Johns and
A novel visual-inertial monocular SLAM
NASA Astrophysics Data System (ADS)
Yue, Xiaofeng; Zhang, Wenjuan; Xu, Li; Liu, JiangGuo
2018-02-01
With the development of sensors and the computer vision research community, cameras, which are accurate, compact, well understood and, most importantly, cheap and ubiquitous today, have gradually moved to the center of robot localization. Simultaneous localization and mapping (SLAM) using visual features obtains motion information from image acquisition equipment and reconstructs the structure of an unknown environment. We provide an analysis of bioinspired flights in insects, employing a novel technique based on SLAM, and then combine visual and inertial measurements to obtain high accuracy and robustness. We present a novel tightly-coupled visual-inertial simultaneous localization and mapping system that makes a new attempt to address two challenges: the initialization problem and the calibration problem. Experimental results and analysis show that the proposed approach gives a more accurate quantitative simulation of insect navigation and can reach centimeter-level positioning accuracy.
[Multifocal visual electrophysiology in visual function evaluation].
Peng, Shu-Ya; Chen, Jie-Min; Liu, Rui-Jue; Zhou, Shu; Liu, Dong-Mei; Xia, Wen-Tao
2013-08-01
Multifocal visual electrophysiology, consisting of multifocal electroretinography (mfERG) and multifocal visual evoked potential (mfVEP), can objectively evaluate retinal function and the status of the retina-cortical conduction pathway by stimulating many local retinal regions and obtaining each local response simultaneously. Having many advantages such as short testing time and high sensitivity, it has been widely used in clinical ophthalmology, especially in the diagnosis of retinal disease and glaucoma. It is a new objective technique in clinical forensic medicine, particularly for visual function evaluation after ocular trauma. This article summarizes the stimulation methods, electrode positions, analysis methods, and visual function evaluation with mfERG and mfVEP, and discusses the value of multifocal visual electrophysiology in forensic medicine.
Riedl, Valentin; Bienkowska, Katarzyna; Strobel, Carola; Tahmasian, Masoud; Grimmer, Timo; Förster, Stefan; Friston, Karl J; Sorg, Christian; Drzezga, Alexander
2014-04-30
Over the last decade, synchronized resting-state fluctuations of blood oxygenation level-dependent (BOLD) signals between remote brain areas [so-called BOLD resting-state functional connectivity (rs-FC)] have gained enormous relevance in systems and clinical neuroscience. However, the neural underpinnings of rs-FC are still incompletely understood. Using simultaneous positron emission tomography/magnetic resonance imaging, we here directly investigated the relationship between rs-FC and local neuronal activity in humans. Computational models suggest a mechanistic link between the dynamics of local neuronal activity and the functional coupling among distributed brain regions. Therefore, we hypothesized that the local activity (LA) of a region at rest determines its rs-FC. To test this hypothesis, we simultaneously measured both LA (glucose metabolism) and rs-FC (via synchronized BOLD fluctuations) during conditions of eyes closed or eyes open. During eyes open, LA increased in the visual system and the salience network (i.e., cingulate and insular cortices), and the pattern of elevated LA coincided almost exactly with the spatial pattern of increased rs-FC. Specifically, the voxelwise regional profile of LA in these areas strongly correlated with the regional pattern of rs-FC among the same regions (e.g., LA in primary visual cortex accounts for ∼ 50%, and LA in anterior cingulate accounts for ∼ 20% of rs-FC with the visual system). These data provide the first direct evidence in humans that local neuronal activity determines BOLD FC at rest. Beyond its relevance for the neuronal basis of coherent BOLD signal fluctuations, our procedure may translate into clinical research particularly to investigate potentially aberrant links between local dynamics and remote functional coupling in patients with neuropsychiatric disorders.
NASA Astrophysics Data System (ADS)
Misra, S. K.; Mukherjee, P.; Ohoka, A.; Schwartz-Duval, A. S.; Tiwari, S.; Bhargava, R.; Pan, D.
2016-01-01
Simultaneous tracking of nanoparticles and encapsulated payload is of great importance and visualizing their activity is arduous. Here we use vibrational spectroscopy to study the in vitro tracking of co-localized lipid nanoparticles and encapsulated drug employing a model system derived from doxorubicin-encapsulated deuterated phospholipid (dodecyl phosphocholine-d38) single tailed phospholipid vesicles. Electronic supplementary information (ESI) available: Raman and confocal images of the Deuto-DOX-NPs in cells, materials and details of methods. See DOI: 10.1039/c5nr07975f
Heers, Marcel; Hirschmann, Jan; Jacobs, Julia; Dümpelmann, Matthias; Butz, Markus; von Lehe, Marec; Elger, Christian E; Schnitzler, Alfons; Wellmer, Jörg
2014-09-01
Spike-based magnetoencephalography (MEG) source localization is an established method in the presurgical evaluation of epilepsy patients. Focal cortical dysplasias (FCDs) are associated with focal epileptic discharges of variable morphologies in the beta frequency band in addition to single epileptic spikes. Therefore, we investigated the potential diagnostic value of MEG-based localization of spike-independent beta band (12-30Hz) activity generated by epileptogenic lesions. Five patients with FCD IIB underwent MEG. In one patient, invasive EEG (iEEG) was recorded simultaneously with MEG. In two patients, iEEG succeeded MEG, and two patients had MEG only. MEG and iEEG were evaluated for epileptic spikes. Two minutes of iEEG data and MEG epochs with no spikes as well as MEG epochs with epileptic spikes were analyzed in the frequency domain. MEG oscillatory beta band activity was localized using Dynamic Imaging of Coherent Sources. Intralesional beta band activity was coherent between simultaneous MEG and iEEG recordings. Continuous 14Hz beta band power correlated with the rate of interictal epileptic discharges detected in iEEG. In cases where visual MEG evaluation revealed epileptic spikes, the sources of beta band activity localized within <2cm of the epileptogenic lesion as shown on magnetic resonance imaging. This result held even when visually marked epileptic spikes were deselected. When epileptic spikes were detectable in iEEG but not MEG, MEG beta band activity source localization failed. Source localization of beta band activity has the potential to contribute to the identification of epileptic foci in addition to source localization of visually marked epileptic spikes. Thus, this technique may assist in the localization of epileptic foci in patients with suspected FCD. Copyright © 2014 Elsevier B.V. All rights reserved.
Localization Using Visual Odometry and a Single Downward-Pointing Camera
NASA Technical Reports Server (NTRS)
Swank, Aaron J.
2012-01-01
Stereo imaging is a technique commonly employed for vision-based navigation. For such applications, two images are acquired from different vantage points and then compared using transformations to extract depth information. The technique is commonly used in robotics for obstacle avoidance or for Simultaneous Localization And Mapping (SLAM). Yet, the process requires a number of image processing steps and therefore tends to be CPU-intensive, which limits the real-time data rate and use in power-limited applications. Evaluated here is a technique where a monocular camera is used for vision-based odometry. In this work, an optical flow technique with feature recognition is performed to generate odometry measurements. The visual odometry sensor measurements are intended to be used as control inputs or measurements in a sensor fusion algorithm using low-cost MEMS-based inertial sensors to provide improved localization information. Presented here are visual odometry results which demonstrate the challenges associated with using ground-pointing cameras for visual odometry. The focus is on rover-based robotic applications for localization within GPS-denied environments.
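The abstract above does not include code, but the kind of sparse optical-flow odometry step it describes can be sketched roughly as follows, assuming OpenCV and NumPy; the focal-length and camera-height constants are illustrative assumptions, not values from the NASA work.

```python
# Minimal monocular ground-pointing visual odometry sketch (not the report's implementation).
import cv2
import numpy as np

FOCAL_PX = 700.0      # focal length in pixels (assumed from calibration)
HEIGHT_M = 0.30       # camera height above the ground plane, metres (assumed)

def odometry_step(prev_gray, curr_gray):
    """Estimate planar (dx, dy, dtheta) between two downward-looking frames."""
    # 1. Detect sparse features in the previous frame.
    pts_prev = cv2.goodFeaturesToTrack(prev_gray, maxCorners=300,
                                       qualityLevel=0.01, minDistance=8)
    if pts_prev is None:
        return 0.0, 0.0, 0.0
    # 2. Track them into the current frame with pyramidal Lucas-Kanade optical flow.
    pts_curr, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts_prev, None)
    good_prev = pts_prev[status.flatten() == 1]
    good_curr = pts_curr[status.flatten() == 1]
    if len(good_prev) < 10:
        return 0.0, 0.0, 0.0
    # 3. Fit a 2D rigid motion (rotation + translation) robustly.
    M, _ = cv2.estimateAffinePartial2D(good_prev, good_curr, method=cv2.RANSAC)
    if M is None:
        return 0.0, 0.0, 0.0
    dtheta = np.arctan2(M[1, 0], M[0, 0])
    # 4. Convert pixel translation to metres using the flat-ground assumption.
    scale = HEIGHT_M / FOCAL_PX
    return M[0, 2] * scale, M[1, 2] * scale, dtheta
```

The resulting (dx, dy, dtheta) increments would then serve as the measurements fed into the MEMS-based inertial sensor fusion the abstract mentions.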
Curveslam: Utilizing Higher Level Structure In Stereo Vision-Based Navigation
2012-01-01
consider their application to SLAM. The work of [31] [32] develops a spline-based SLAM framework, but this is only for application to LIDAR-based SLAM...Existing approaches to visual Simultaneous Localization and Mapping (SLAM) typically utilize points as visual feature primitives to represent landmarks...regions of interest. Further, previous SLAM techniques that propose the use of higher level structures often place constraints on the environment, such as
Comparative analysis and visualization of multiple collinear genomes
2012-01-01
Background Genome browsers are a common tool used by biologists to visualize genomic features including genes, polymorphisms, and many others. However, existing genome browsers and visualization tools are not well-suited to perform meaningful comparative analysis among a large number of genomes. With the increasing quantity and availability of genomic data, there is an increased burden to provide useful visualization and analysis tools for comparison of multiple collinear genomes such as the large panels of model organisms which are the basis for much of the current genetic research. Results We have developed a novel web-based tool for visualizing and analyzing multiple collinear genomes. Our tool illustrates genome-sequence similarity through a mosaic of intervals representing local phylogeny, subspecific origin, and haplotype identity. Comparative analysis is facilitated through reordering and clustering of tracks, which can vary throughout the genome. In addition, we provide local phylogenetic trees as an alternate visualization to assess local variations. Conclusions Unlike previous genome browsers and viewers, ours allows for simultaneous and comparative analysis. Our browser provides intuitive selection and interactive navigation about features of interest. Dynamic visualizations adjust to scale and data content making analysis at variable resolutions and of multiple data sets more informative. We demonstrate our genome browser for an extensive set of genomic data sets composed of almost 200 distinct mouse laboratory strains. PMID:22536897
Christmann, V; Rosenberg, J; Seega, J; Lehr, C M
1997-08-01
Bioavailability of orally administered drugs is much influenced by the behavior, performance and fate of the dosage form within the gastrointestinal (GI) tract. Therefore, MRI in vivo methods that allow for the simultaneous visualization of solid oral dosage forms and anatomical structures of the GI tract have been investigated. Oral contrast agents containing Gd-DTPA were used to depict the lumen of the digestive organs. Solid oral dosage forms were visualized in a rat model by a 1H-MRI double contrast technique (magnetite-labelled microtablets) and a combination of 1H- and 19F-MRI (fluorine-labelled minicapsules). Simultaneous visualization of solid oral dosage forms and the GI environment in the rat was possible using MRI. Microtablets could reproducibly be monitored in the rat stomach and in the intestines using a 1H-MRI double contrast technique. Fluorine-labelled minicapsules were detectable in the rat stomach by a combination of 1H- and 19F-MRI in vivo. The in vivo 1H-MRI double contrast technique described allows solid oral dosage forms in the rat GI tract to be depicted. Solid dosage forms can easily be labelled by incorporating trace amounts of non-toxic iron oxide (magnetite) particles. 1H-MRI is a promising tool for observing such pharmaceutical dosage forms in humans. Combined 1H- and 19F-MRI offer a means of unambiguously localizing solid oral dosage forms in more distal parts of the GI tract. Studies correlating MRI examinations with drug plasma levels could provide valuable information for the development of pharmaceutical dosage forms.
L. Linsen; B.J. Karis; E.G. McPherson; B. Hamann
2005-01-01
In computer graphics, models describing the fractal branching structure of trees typically exploit the modularity of tree structures. The models are based on local production rules, which are applied iteratively and simultaneously to create a complex branching system. The objective is to generate three-dimensional scenes of often many realistic-looking and non-...
Do rats use shape to solve “shape discriminations”?
Minini, Loredana; Jeffery, Kathryn J.
2006-01-01
Visual discrimination tasks are increasingly used to explore the neurobiology of vision in rodents, but it remains unclear how the animals solve these tasks: Do they process shapes holistically, or by using low-level features such as luminance and angle acuity? In the present study we found that when discriminating triangles from squares, rats did not use shape but instead relied on local luminance differences in the lower hemifield. A second experiment prevented this strategy by using stimuli—squares and rectangles—that varied in size and location, and for which the only constant predictor of reward was aspect ratio (ratio of height to width: a simple descriptor of “shape”). Rats eventually learned to use aspect ratio but only when no other discriminand was available, and performance remained very poor even at asymptote. These results suggest that although rats can process both dimensions simultaneously, they do not naturally solve shape discrimination tasks this way. This may reflect either a failure to visually process global shape information or a failure to discover shape as the discriminative stimulus in a simultaneous discrimination. Either way, our results suggest that simultaneous shape discrimination is not a good task for studies of visual perception in rodents. PMID:16705141
Stevenson, Ryan A; Fister, Juliane Krueger; Barnett, Zachary P; Nidiffer, Aaron R; Wallace, Mark T
2012-05-01
In natural environments, human sensory systems work in a coordinated and integrated manner to perceive and respond to external events. Previous research has shown that the spatial and temporal relationships of sensory signals are paramount in determining how information is integrated across sensory modalities, but in ecologically plausible settings, these factors are not independent. In the current study, we provide a novel exploration of the impact on behavioral performance for systematic manipulations of the spatial location and temporal synchrony of a visual-auditory stimulus pair. Simple auditory and visual stimuli were presented across a range of spatial locations and stimulus onset asynchronies (SOAs), and participants performed both a spatial localization and simultaneity judgment task. Response times in localizing paired visual-auditory stimuli were slower in the periphery and at larger SOAs, but most importantly, an interaction was found between the two factors, in which the effect of SOA was greater in peripheral as opposed to central locations. Simultaneity judgments also revealed a novel interaction between space and time: individuals were more likely to judge stimuli as synchronous when occurring in the periphery at large SOAs. The results of this study provide novel insights into (a) how the speed of spatial localization of an audiovisual stimulus is affected by location and temporal coincidence and the interaction between these two factors and (b) how the location of a multisensory stimulus impacts judgments concerning the temporal relationship of the paired stimuli. These findings provide strong evidence for a complex interdependency between spatial location and temporal structure in determining the ultimate behavioral and perceptual outcome associated with a paired multisensory (i.e., visual-auditory) stimulus.
NASA Astrophysics Data System (ADS)
Hautot, Felix; Dubart, Philippe; Bacri, Charles-Olivier; Chagneau, Benjamin; Abou-Khalil, Roger
2017-09-01
New developments in the fields of robotics and computer vision make it possible to merge sensors so that radiological measurements can be localized in space in near real time, together with near-real-time identification and characterization of radioactive sources. These capabilities lead nuclear investigations toward more efficient operator dosimetry evaluation, intervention scenarios, and risk mitigation and simulation, for example for accidents in unknown, potentially contaminated areas or during dismantling operations.
A survey of simultaneous localization and mapping on unstructured lunar complex environment
NASA Astrophysics Data System (ADS)
Wang, Yiqiao; Zhang, Wei; An, Pei
2017-10-01
Simultaneous localization and mapping (SLAM) technology is the key to realizing a lunar rover's intelligent perception and autonomous navigation. It embodies the autonomous ability of a mobile robot and has attracted considerable attention from researchers over the past thirty years. Visual sensors are valuable to SLAM research because they provide a wealth of information. Visual SLAM uses images alone as external information to estimate the location of the robot and construct the environment map. Nowadays, SLAM technology still faces problems when applied in large-scale, unstructured and complex environments. Based on the latest technology in the field of visual SLAM, this paper investigates and summarizes SLAM technology for the unstructured, complex environment of the lunar surface. In particular, we focus on summarizing and comparing feature detection and matching with SIFT, SURF and ORB, while discussing their advantages and disadvantages. We have analyzed the three main methods: SLAM based on the extended Kalman filter, SLAM based on the particle filter and SLAM based on graph optimization (EKF-SLAM, PF-SLAM and Graph-based SLAM). Finally, this article summarizes and discusses the key scientific and technical difficulties that visual SLAM faces in the lunar context. At the same time, we explore frontier issues such as multi-sensor fusion SLAM and multi-robot cooperative SLAM technology, predict the development trend of lunar rover SLAM technology, and put forward some ideas for further research.
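As a point of reference for the SIFT/SURF/ORB comparison discussed in the survey, a minimal ORB detection-and-matching sketch might look like the following, assuming OpenCV; it is illustrative only and not code from the paper.

```python
# Detect ORB keypoints in two grayscale images and return the best matches.
import cv2

def match_orb(img1_gray, img2_gray, max_matches=100):
    orb = cv2.ORB_create(nfeatures=1000)          # FAST keypoints + binary descriptors
    kp1, des1 = orb.detectAndCompute(img1_gray, None)
    kp2, des2 = orb.detectAndCompute(img2_gray, None)
    # Hamming distance is the appropriate metric for binary descriptors such as ORB.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    return kp1, kp2, matches[:max_matches]
```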
Stimulus-dependent spiking relationships with the EEG
Snyder, Adam C.
2015-01-01
The development and refinement of noninvasive techniques for imaging neural activity is of paramount importance for human neuroscience. Currently, the most accessible and popular technique is electroencephalography (EEG). However, nearly all of what we know about the neural events that underlie EEG signals is based on inference, because of the dearth of studies that have simultaneously paired EEG recordings with direct recordings of single neurons. From the perspective of electrophysiologists there is growing interest in understanding how spiking activity coordinates with large-scale cortical networks. Evidence from recordings at both scales highlights that sensory neurons operate in very distinct states during spontaneous and visually evoked activity, which appear to form extremes in a continuum of coordination in neural networks. We hypothesized that individual neurons have idiosyncratic relationships to large-scale network activity indexed by EEG signals, owing to the neurons' distinct computational roles within the local circuitry. We tested this by recording neuronal populations in visual area V4 of rhesus macaques while we simultaneously recorded EEG. We found substantial heterogeneity in the timing and strength of spike-EEG relationships and that these relationships became more diverse during visual stimulation compared with the spontaneous state. The visual stimulus apparently shifts V4 neurons from a state in which they are relatively uniformly embedded in large-scale network activity to a state in which their distinct roles within the local population are more prominent, suggesting that the specific way in which individual neurons relate to EEG signals may hold clues regarding their computational roles. PMID:26108954
Multiple-stage ambiguity in motion perception reveals global computation of local motion directions.
Rider, Andrew T; Nishida, Shin'ya; Johnston, Alan
2016-12-01
The motion of a 1D image feature, such as a line, seen through a small aperture, or the small receptive field of a neural motion sensor, is underconstrained, and it is not possible to derive the true motion direction from a single local measurement. This is referred to as the aperture problem. How the visual system solves the aperture problem is a fundamental question in visual motion research. In the estimation of motion vectors through integration of ambiguous local motion measurements at different positions, conventional theories assume that the object motion is a rigid translation, with motion signals sharing a common motion vector within the spatial region over which the aperture problem is solved. However, this strategy fails for global rotation. Here we show that the human visual system can estimate global rotation directly through spatial pooling of locally ambiguous measurements, without an intervening step that computes local motion vectors. We designed a novel ambiguous global flow stimulus, which is globally as well as locally ambiguous. The global ambiguity implies that the stimulus is simultaneously consistent with both a global rigid translation and an infinite number of global rigid rotations. By the standard view, the motion should always be seen as a global translation, but it appears to shift from translation to rotation as observers shift fixation. This finding indicates that the visual system can estimate local vectors using a global rotation constraint, and suggests that local motion ambiguity may not be resolved until consistencies with multiple global motion patterns are assessed.
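A small numerical illustration (not the authors' model) of how pooling locally ambiguous aperture measurements under a global constraint recovers the true motion: each aperture reports only the component of the velocity along its normal, yet a least-squares pool over many apertures identifies the global translation. Under the rotation constraint discussed in the paper, the pooled unknown would instead be an angular velocity about a candidate center.

```python
# Each aperture i reports only the normal component c_i = n_i . v of the true
# velocity v; pooling many apertures by least squares recovers v, which a
# single aperture cannot.
import numpy as np

rng = np.random.default_rng(0)
v_true = np.array([1.0, 0.5])                      # hidden global translation
normals = rng.normal(size=(50, 2))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
c = normals @ v_true + 0.01 * rng.normal(size=50)  # noisy normal-flow measurements

v_hat, *_ = np.linalg.lstsq(normals, c, rcond=None)
print(v_hat)   # close to [1.0, 0.5]
```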
Frequency spectrum might act as communication code between retina and visual cortex I
Yang, Xu; Gong, Bo; Lu, Jian-Wei
2015-01-01
AIM To explore changes in, and the possible communication relationship between, local potential signals recorded simultaneously from the retina and visual cortex I (V1). METHODS Fourteen C57BL/6J mice were measured with pattern electroretinogram (PERG) and pattern visually evoked potential (PVEP), and a fast Fourier transform was used to analyze the frequency components of those signals. RESULTS The amplitudes of PERG and PVEP were about 36.7 µV and 112.5 µV respectively; the dominant frequencies of PERG and PVEP, however, stayed unchanged, and neither signal showed second or higher harmonic generation. CONCLUSION The results suggest that the retina encodes visual information in the frequency spectrum and then transfers it to the primary visual cortex. The primary visual cortex accepts and deciphers the input visual information coded by the retina. The frequency spectrum may act as a communication code between the retina and V1. PMID:26682156
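The Fourier analysis described in the record above can be sketched as follows, assuming NumPy; the sampling rate and the synthetic trace are illustrative and are not the study's data.

```python
# Take the FFT of a recorded PERG/PVEP-like trace and report its dominant frequency.
import numpy as np

def dominant_frequency(signal, fs):
    """Return the frequency (Hz) of the largest non-DC spectral peak."""
    spectrum = np.abs(np.fft.rfft(signal - np.mean(signal)))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return freqs[np.argmax(spectrum[1:]) + 1]      # skip the DC bin

fs = 1000.0                                        # Hz, assumed sampling rate
t = np.arange(0, 1.0, 1.0 / fs)
perg = 36.7e-6 * np.sin(2 * np.pi * 4.0 * t)       # toy 4 Hz response, ~36.7 uV amplitude
print(dominant_frequency(perg, fs))                # ~4.0
```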
Fusion of multichannel local and global structural cues for photo aesthetics evaluation.
Luming Zhang; Yue Gao; Zimmermann, Roger; Qi Tian; Xuelong Li
2014-03-01
Photo aesthetic quality evaluation is a fundamental yet under-addressed task in the computer vision and image processing fields. Conventional approaches are frustrated by the following two drawbacks. First, both the local and global spatial arrangements of image regions play an important role in photo aesthetics. However, existing rules, e.g., visual balance, heuristically define which spatial distribution among the salient regions of a photo is aesthetically pleasing. Second, it is difficult to adjust visual cues from multiple channels automatically in photo aesthetics assessment. To solve these problems, we propose a new photo aesthetics evaluation framework, focusing on learning image descriptors that characterize local and global structural aesthetics from multiple visual channels. In particular, to describe the spatial structure of the local image regions, we construct graphlets (small connected graphs) by connecting spatially adjacent atomic regions. Since spatially adjacent graphlets distribute closely in their feature space, we project them onto a manifold and subsequently propose an embedding algorithm. The embedding algorithm encodes the photo's global spatial layout into graphlets. Simultaneously, the importance of graphlets from multiple visual channels is dynamically adjusted. Finally, these post-embedding graphlets are integrated for photo aesthetics evaluation using a probabilistic model. Experimental results show that: 1) the visualized graphlets explicitly capture the aesthetically arranged atomic regions; 2) the proposed approach generalizes and improves four prominent aesthetic rules; and 3) our approach significantly outperforms state-of-the-art algorithms in photo aesthetics prediction.
NASA Astrophysics Data System (ADS)
Vedyaykin, A. D.; Gorbunov, V. V.; Sabantsev, A. V.; Polinovskaya, V. S.; Vishnyakov, I. E.; Melnikov, A. S.; Serdobintsev, P. Yu; Khodorkovskii, M. A.
2015-11-01
Localization microscopy allows visualization of biological structures with resolution well below the diffraction limit. Localization microscopy has previously been used to study FtsZ organization in Escherichia coli in combination with fluorescent protein labeling, but the fact that the fluorescent chimeric protein was unable to rescue temperature-sensitive ftsZ mutants suggests that the obtained images may not represent native FtsZ structures faithfully. Indirect immunolabeling of FtsZ not only overcomes this problem, but also allows the use of the powerful arsenal of visualization methods available for different structures in fixed cells. In this work we simultaneously obtained super-resolution images of FtsZ structures and diffraction-limited or super-resolution images of DNA and the cell surface in E. coli, which allows the study of the spatial arrangement of FtsZ structures with respect to nucleoid positions and septum formation.
Kottlow, Mara; Jann, Kay; Dierks, Thomas; Koenig, Thomas
2012-08-01
Gamma zero-lag phase synchronization has been measured in the animal brain during visual binding. Human scalp EEG studies used a phase locking factor (trial-to-trial phase-shift consistency) or gamma amplitude to measure binding but did not analyze common-phase signals so far. This study introduces a method to identify networks oscillating with near zero-lag phase synchronization in human subjects. We presented unpredictably moving face parts (NOFACE) which - during some periods - produced a complete schematic face (FACE). The amount of zero-lag phase synchronization was measured using global field synchronization (GFS). GFS provides global information on the amount of instantaneous coincidences in specific frequencies throughout the brain. Gamma GFS was increased during the FACE condition. To localize the underlying areas, we correlated gamma GFS with simultaneously recorded BOLD responses. Positive correlates comprised the bilateral middle fusiform gyrus and the left precuneus. These areas may form a network of areas transiently synchronized during face integration, including face-specific as well as binding-specific regions and regions for visual processing in general. Thus, the amount of zero-lag phase synchronization between remote regions of the human visual system can be measured with simultaneously acquired EEG/fMRI. Copyright © 2012 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
Auditory and visual interactions between the superior and inferior colliculi in the ferret.
Stitt, Iain; Galindo-Leon, Edgar; Pieper, Florian; Hollensteiner, Karl J; Engler, Gerhard; Engel, Andreas K
2015-05-01
The integration of visual and auditory spatial information is important for building an accurate perception of the external world, but the fundamental mechanisms governing such audiovisual interaction have only partially been resolved. The earliest interface between auditory and visual processing pathways is in the midbrain, where the superior (SC) and inferior colliculi (IC) are reciprocally connected in an audiovisual loop. Here, we investigate the mechanisms of audiovisual interaction in the midbrain by recording neural signals from the SC and IC simultaneously in anesthetized ferrets. Visual stimuli reliably produced band-limited phase locking of IC local field potentials (LFPs) in two distinct frequency bands: 6-10 and 15-30 Hz. These visual LFP responses co-localized with robust auditory responses that were characteristic of the IC. Imaginary coherence analysis confirmed that visual responses in the IC were not volume-conducted signals from the neighboring SC. Visual responses in the IC occurred later than retinally driven superficial SC layers and earlier than deep SC layers that receive indirect visual inputs, suggesting that retinal inputs do not drive visually evoked responses in the IC. In addition, SC and IC recording sites with overlapping visual spatial receptive fields displayed stronger functional connectivity than sites with separate receptive fields, indicating that visual spatial maps are aligned across both midbrain structures. Reciprocal coupling between the IC and SC therefore probably serves the dynamic integration of visual and auditory representations of space. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Using neuronal populations to study the mechanisms underlying spatial and feature attention
Cohen, Marlene R.; Maunsell, John H.R.
2012-01-01
Summary Visual attention affects both perception and neuronal responses. Whether the same neuronal mechanisms mediate spatial attention, which improves perception of attended locations, and non-spatial forms of attention has been a subject of considerable debate. Spatial and feature attention have similar effects on individual neurons. Because visual cortex is retinotopically organized, however, spatial attention can co-modulate local neuronal populations, while feature attention generally requires more selective modulation. We compared the effects of feature and spatial attention on local and spatially separated populations by recording simultaneously from dozens of neurons in both hemispheres of V4. Feature and spatial attention affect the activity of local populations similarly, modulating both firing rates and correlations between pairs of nearby neurons. However, while spatial attention appears to act on local populations, feature attention is coordinated across hemispheres. Our results are consistent with a unified attentional mechanism that can modulate the responses of arbitrary subgroups of neurons. PMID:21689604
Compression and reflection of visually evoked cortical waves
Xu, Weifeng; Huang, Xiaoying; Takagaki, Kentaroh; Wu, Jian-young
2007-01-01
Summary Neuronal interactions between primary and secondary visual cortical areas are important for visual processing, but the spatiotemporal patterns of the interaction are not well understood. We used voltage-sensitive dye imaging to visualize neuronal activity in rat visual cortex and found novel visually evoked waves propagating from V1 to other visual areas. A primary wave originated in the monocular area of V1 and was “compressed” when propagating to V2. A reflected wave initiated after compression and propagated backward into V1. The compression occurred at the V1/V2 border, and local GABAA inhibition is important for the compression. The compression/reflection pattern provides a two-phase modulation: V1 is first depolarized by the primary wave and then V1 and V2 are simultaneously depolarized by the reflected and primary waves, respectively. The compression/reflection pattern only occurred for evoked but not for spontaneous waves, suggesting that it is organized by an internal mechanism associated with visual processing. PMID:17610821
A neural network model of ventriloquism effect and aftereffect.
Magosso, Elisa; Cuppini, Cristiano; Ursino, Mauro
2012-01-01
Presenting simultaneous but spatially discrepant visual and auditory stimuli induces a perceptual translocation of the sound towards the visual input, the ventriloquism effect. General explanation is that vision tends to dominate over audition because of its higher spatial reliability. The underlying neural mechanisms remain unclear. We address this question via a biologically inspired neural network. The model contains two layers of unimodal visual and auditory neurons, with visual neurons having higher spatial resolution than auditory ones. Neurons within each layer communicate via lateral intra-layer synapses; neurons across layers are connected via inter-layer connections. The network accounts for the ventriloquism effect, ascribing it to a positive feedback between the visual and auditory neurons, triggered by residual auditory activity at the position of the visual stimulus. Main results are: i) the less localized stimulus is strongly biased toward the most localized stimulus and not vice versa; ii) amount of the ventriloquism effect changes with visual-auditory spatial disparity; iii) ventriloquism is a robust behavior of the network with respect to parameter value changes. Moreover, the model implements Hebbian rules for potentiation and depression of lateral synapses, to explain ventriloquism aftereffect (that is, the enduring sound shift after exposure to spatially disparate audio-visual stimuli). By adaptively changing the weights of lateral synapses during cross-modal stimulation, the model produces post-adaptive shifts of auditory localization that agree with in-vivo observations. The model demonstrates that two unimodal layers reciprocally interconnected may explain ventriloquism effect and aftereffect, even without the presence of any convergent multimodal area. The proposed study may provide advancement in understanding neural architecture and mechanisms at the basis of visual-auditory integration in the spatial realm.
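A deliberately minimal sketch of the mechanism the abstract describes, not the authors' recurrent network: a spatially sharp visual input added to a broadly tuned auditory layer shifts the decoded sound position toward the visual stimulus. The tuning widths and the cross-modal weight are assumed values for illustration only.

```python
import numpy as np

positions = np.arange(0, 180.0)        # 1 deg spacing across azimuth (illustrative)
aud_pos, vis_pos = 90.0, 100.0         # true auditory and visual locations
sigma_aud, sigma_vis = 15.0, 3.0       # auditory tuning is coarser than visual

aud_input = np.exp(-(positions - aud_pos) ** 2 / (2 * sigma_aud ** 2))
vis_input = np.exp(-(positions - vis_pos) ** 2 / (2 * sigma_vis ** 2))

w_cross = 0.6                          # strength of visual->auditory cross-modal drive
aud_activity = aud_input + w_cross * vis_input

# Decode the perceived sound position as the centre of mass of auditory activity.
decoded = np.sum(positions * aud_activity) / np.sum(aud_activity)
print(decoded)   # between 90 and 100: the sound is "captured" toward the visual stimulus
```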
Two subdivisions of macaque LIP process visual-oculomotor information differently.
Chen, Mo; Li, Bing; Guang, Jing; Wei, Linyu; Wu, Si; Liu, Yu; Zhang, Mingsha
2016-10-11
Although the cerebral cortex is thought to be composed of functionally distinct areas, the actual parcellation of areas and assignment of function are still highly controversial. An example is the much-studied lateral intraparietal cortex (LIP). Despite the general agreement that LIP plays an important role in visual-oculomotor transformation, it remains unclear whether the area is primarily sensory- or motor-related (the attention-intention debate). Although LIP has been considered a functionally unitary area, its dorsal (LIPd) and ventral (LIPv) parts differ in local morphology and long-distance connectivity. In particular, LIPv has much stronger connections with two oculomotor centers, the frontal eye field and the deep layers of the superior colliculus, than does LIPd. Such anatomical distinctions imply that compared with LIPd, LIPv might be more involved in oculomotor processing. We tested this hypothesis physiologically with a memory saccade task and a gap saccade task. We found that LIP neurons with persistent memory activities in memory saccade are primarily provoked either by visual stimulation (vision-related) or by both visual and saccadic events (vision-saccade-related) in gap saccade. The distribution changes from predominantly vision-related to predominantly vision-saccade-related as the recording depth increases along the dorsal-ventral dimension. Consistently, the simultaneously recorded local field potential also changes from visual evoked to saccade evoked. Finally, local injection of muscimol (a GABA agonist) in LIPv, but not in LIPd, dramatically decreases the proportion of express saccades. With these results, we conclude that LIPd and LIPv are more involved in visual and visual-saccadic processing, respectively.
Lallier, Marie; Donnadieu, Sophie; Valdois, Sylviane
2013-07-01
The simultaneous auditory processing skills of 17 dyslexic children and 17 skilled readers were measured using a dichotic listening task. Results showed that the dyslexic children exhibited difficulties reporting syllabic material when presented simultaneously. As a measure of simultaneous visual processing, visual attention span skills were assessed in the dyslexic children. We presented the dyslexic children with a phonological short-term memory task and a phonemic awareness task to quantify their phonological skills. Visual attention spans correlated positively with individual scores obtained on the dichotic listening task while phonological skills did not correlate with either dichotic scores or visual attention span measures. Moreover, all the dyslexic children with a dichotic listening deficit showed a simultaneous visual processing deficit, and a substantial number of dyslexic children exhibited phonological processing deficits whether or not they exhibited low dichotic listening scores. These findings suggest that processing simultaneous auditory stimuli may be impaired in dyslexic children regardless of phonological processing difficulties and be linked to similar problems in the visual modality.
3D visualization of unsteady 2D airplane wake vortices
NASA Technical Reports Server (NTRS)
Ma, Kwan-Liu; Zheng, Z. C.
1994-01-01
Air flowing around the wing tips of an airplane forms horizontal tornado-like vortices that can be dangerous to following aircraft. The dynamics of such vortices, including ground and atmospheric effects, can be predicted by numerical simulation, allowing the safety and capacity of airports to be improved. In this paper, we introduce three-dimensional techniques for visualizing time-dependent, two-dimensional wake vortex computations, and the hazard strength of such vortices near the ground. We describe a vortex core tracing algorithm and a local tiling method to visualize the vortex evolution. The tiling method converts time-dependent, two-dimensional vortex cores into three-dimensional vortex tubes. Finally, a novel approach calculates the induced rolling moment on the following airplane at each grid point within a region near the vortex tubes and thus allows three-dimensional visualization of the hazard strength of the vortices. We also suggest ways of combining multiple visualization methods to present more information simultaneously.
Theta coupling between V4 and prefrontal cortex predicts visual short-term memory performance.
Liebe, Stefanie; Hoerzer, Gregor M; Logothetis, Nikos K; Rainer, Gregor
2012-01-29
Short-term memory requires communication between multiple brain regions that collectively mediate the encoding and maintenance of sensory information. It has been suggested that oscillatory synchronization underlies intercortical communication. Yet, whether and how distant cortical areas cooperate during visual memory remains elusive. We examined neural interactions between visual area V4 and the lateral prefrontal cortex using simultaneous local field potential (LFP) recordings and single-unit activity (SUA) in monkeys performing a visual short-term memory task. During the memory period, we observed enhanced between-area phase synchronization in theta frequencies (3-9 Hz) of LFPs together with elevated phase locking of SUA to theta oscillations across regions. In addition, we found that the strength of intercortical locking was predictive of the animals' behavioral performance. This suggests that theta-band synchronization coordinates action potential communication between V4 and prefrontal cortex that may contribute to the maintenance of visual short-term memories.
Learning Rotation-Invariant Local Binary Descriptor.
Duan, Yueqi; Lu, Jiwen; Feng, Jianjiang; Zhou, Jie
2017-08-01
In this paper, we propose a rotation-invariant local binary descriptor (RI-LBD) learning method for visual recognition. Compared with hand-crafted local binary descriptors, such as local binary pattern and its variants, which require strong prior knowledge, local binary feature learning methods are more efficient and data-adaptive. Unlike existing learning-based local binary descriptors, such as compact binary face descriptor and simultaneous local binary feature learning and encoding, which are susceptible to rotations, our RI-LBD first categorizes each local patch into a rotational binary pattern (RBP), and then jointly learns the orientation for each pattern and the projection matrix to obtain RI-LBDs. As all the rotation variants of a patch belong to the same RBP, they are rotated into the same orientation and projected into the same binary descriptor. Then, we construct a codebook by a clustering method on the learned binary codes, and obtain a histogram feature for each image as the final representation. In order to exploit higher order statistical information, we extend our RI-LBD to the triple rotation-invariant co-occurrence local binary descriptor (TRICo-LBD) learning method, which learns a triple co-occurrence binary code for each local patch. Extensive experimental results on four different visual recognition tasks, including image patch matching, texture classification, face recognition, and scene classification, show that our RI-LBD and TRICo-LBD outperform most existing local descriptors.
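For readers unfamiliar with the hand-crafted baseline the abstract contrasts against, the classic 8-neighbour local binary pattern can be sketched as below; this is the conventional LBP, not the proposed RI-LBD or TRICo-LBD.

```python
# Compute the 8-bit LBP code of a 3x3 grayscale patch (minimal NumPy sketch).
import numpy as np

def lbp_code(patch3x3):
    """Return the 8-bit LBP code of a 3x3 grayscale patch."""
    center = patch3x3[1, 1]
    # Neighbours in a fixed clockwise order starting at the top-left corner.
    neighbours = patch3x3[[0, 0, 0, 1, 2, 2, 2, 1], [0, 1, 2, 2, 2, 1, 0, 0]]
    bits = (neighbours >= center).astype(np.uint8)
    return int(np.sum(bits << np.arange(8, dtype=np.uint8)))

patch = np.array([[10, 20, 30],
                  [40, 50, 60],
                  [70, 80, 90]])
print(lbp_code(patch))   # neighbours >= 50 set their corresponding bit
```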
NASA Astrophysics Data System (ADS)
Hu, Jin; Tian, Jie; Pan, Xiaohong; Liu, Jiangang
2007-03-01
The purpose of this paper is to compare EEG source localization and fMRI during emotional processing. 108 pictures for EEG (categorized as positive, negative and neutral) and 72 pictures for fMRI were presented to 24 healthy, right-handed subjects. The fMRI data were analyzed using statistical parametric mapping with SPM2. LORETA was applied to grand-averaged ERP data to localize intracranial sources. Statistical analysis was implemented to compare the spatiotemporal activation of fMRI and EEG. The fMRI results are in accordance with the EEG source localization to some extent, while some mismatch in localization between the two methods was also observed. In the future we should apply the method of simultaneous recording of EEG and fMRI to our study.
Visual adaptation dominates bimodal visual-motor action adaptation
de la Rosa, Stephan; Ferstl, Ylva; Bülthoff, Heinrich H.
2016-01-01
A long standing debate revolves around the question whether visual action recognition primarily relies on visual or motor action information. Previous studies mainly examined the contribution of either visual or motor information to action recognition. Yet, the interaction of visual and motor action information is particularly important for understanding action recognition in social interactions, where humans often observe and execute actions at the same time. Here, we behaviourally examined the interaction of visual and motor action recognition processes when participants simultaneously observe and execute actions. We took advantage of behavioural action adaptation effects to investigate behavioural correlates of neural action recognition mechanisms. In line with previous results, we find that prolonged visual exposure (visual adaptation) and prolonged execution of the same action with closed eyes (non-visual motor adaptation) influence action recognition. However, when participants simultaneously adapted visually and motorically – akin to simultaneous execution and observation of actions in social interactions - adaptation effects were only modulated by visual but not motor adaptation. Action recognition, therefore, relies primarily on vision-based action recognition mechanisms in situations that require simultaneous action observation and execution, such as social interactions. The results suggest caution when associating social behaviour in social interactions with motor based information. PMID:27029781
David, Nicole; R Schneider, Till; Vogeley, Kai; Engel, Andreas K
2011-10-01
Individuals suffering from autism spectrum disorders (ASD) often show a tendency for detail- or feature-based perception (also referred to as "local processing bias") instead of more holistic stimulus processing typical for unaffected people. This local processing bias has been demonstrated for the visual and auditory domains and there is evidence that multisensory processing may also be affected in ASD. Most multisensory processing paradigms used social-communicative stimuli, such as human speech or faces, probing the processing of simultaneously occurring sensory signals. Multisensory processing, however, is not limited to simultaneous stimulation. In this study, we investigated whether multisensory processing deficits in ASD persist when semantically complex but nonsocial stimuli are presented in succession. Fifteen adult individuals with Asperger syndrome and 15 control persons participated in a visual-audio priming task, which required the classification of sounds that were either primed by semantically congruent or incongruent preceding pictures of objects. As expected, performance on congruent trials was faster and more accurate compared with incongruent trials (crossmodal priming effect). The Asperger group, however, did not differ significantly from the control group. Our results do not support a general multisensory processing deficit that is universal to the entire autism spectrum. Copyright © 2011, International Society for Autism Research, Wiley-Liss, Inc.
Enhanced Access to Early Visual Processing of Perceptual Simultaneity in Autism Spectrum Disorders
ERIC Educational Resources Information Center
Falter, Christine M.; Braeutigam, Sven; Nathan, Roger; Carrington, Sarah; Bailey, Anthony J.
2013-01-01
We compared judgements of the simultaneity or asynchrony of visual stimuli in individuals with autism spectrum disorders (ASD) and typically-developing controls using Magnetoencephalography (MEG). Two vertical bars were presented simultaneously or non-simultaneously with two different stimulus onset delays. Participants with ASD distinguished…
Real-time catheter localization and visualization using three-dimensional echocardiography
NASA Astrophysics Data System (ADS)
Kozlowski, Pawel; Bandaru, Raja Sekhar; D'hooge, Jan; Samset, Eigil
2017-03-01
Real-time three-dimensional transesophageal echocardiography (RT3D-TEE) is increasingly used during minimally invasive cardiac surgeries (MICS). In many cath labs, RT3D-TEE is already one of the requisite tools for image guidance during MICS. However, the visualization of the catheter is not always satisfactory, making 3D-TEE challenging to use as the only modality for guidance. We propose a novel technique for better visualization of the catheter along with the cardiac anatomy using TEE alone, exploiting both beamforming and post-processing methods. We extended our earlier method, called Delay and Standard Deviation (DASD) beamforming, to 3D in order to enhance specular reflections. The beamformed image was further post-processed with the Frangi filter to segment the catheter. Multi-variate visualization techniques enabled us to render both the standard tissue image and the DASD-beamformed image on a clinical ultrasound scanner simultaneously. A frame rate of 15 FPS was achieved.
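The Frangi-filter post-processing step mentioned above can be sketched roughly as follows, assuming scikit-image and a synthetic volume; the parameters and the toy data are illustrative, and this is not the authors' DASD pipeline.

```python
# Enhance tubular (catheter-like) structures in a 3D volume and threshold them.
import numpy as np
from skimage.filters import frangi

volume = np.zeros((64, 64, 64), dtype=float)
volume[32, 32, 10:54] = 1.0                 # toy bright "catheter" along one axis
volume += 0.05 * np.random.default_rng(0).normal(size=volume.shape)

# black_ridges=False because the catheter appears bright against darker tissue.
vesselness = frangi(volume, sigmas=(1, 2, 3), black_ridges=False)
catheter_mask = vesselness > 0.5 * vesselness.max()
print(catheter_mask.sum(), "voxels flagged as catheter-like")
```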
Jahns, Anika C; Oprica, Cristina; Vassilaki, Ismini; Golovleva, Irina; Palmer, Ruth H; Alexeyev, Oleg A
2013-10-01
Propionibacterium acnes (P. acnes) and Propionibacterium granulosum (P. granulosum) are common skin colonizers that are implicated as possible contributing factors in acne vulgaris development. We have established direct visualization tools for the simultaneous detection of these closely related species with immunofluorescence assay and fluorescence in situ hybridization (FISH). As proof of principle, we were able to distinguish P. acnes and P. granulosum bacteria in multi-species populations in vitro as well as in a mock skin infection model upon labelling with 16S rRNA probes in combinatorial FISH as well as with antibodies. Furthermore, we report the co-localization of P. acnes and P. granulosum in the stratum corneum and hair follicles from patients with acne vulgaris as well as in healthy individuals. Further studies on the spatial distribution of these bacteria in skin structures in various skin disorders are needed. Copyright © 2013 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Noland, Mildred Jean
A study was conducted investigating whether a sequence of visuals presented in a serial manner differs in connotative meaning from the same set of visuals presented simultaneously. How the meanings of pairs of shots relate to their constituent visuals was also explored. Sixteen pairs of visuals were presented to both male and female subjects in…
Hereditary Angioedema Attacks: Local Swelling at Multiple Sites.
Hofman, Zonne L M; Relan, Anurag; Hack, C Erik
2016-02-01
Hereditary angioedema (HAE) patients experience recurrent local swelling in various parts of the body including painful swelling of the intestine and life-threatening laryngeal oedema. Most HAE literature is about attacks located in one anatomical site, though it is mentioned that HAE attacks may also involve multiple anatomical sites simultaneously. A detailed description of such multi-location attacks is currently lacking. This study investigated the occurrence, severity and clinical course of HAE attacks with multiple anatomical locations. HAE patients included in a clinical database of recombinant human C1-inhibitor (rhC1INH) studies were evaluated. Visual analog scale scores filled out by the patients for various symptoms at various locations and investigator symptoms scores during the attack were analysed. Data of 219 eligible attacks in 119 patients was analysed. Thirty-three patients (28%) had symptoms at multiple locations in anatomically unrelated regions at the same time during their first attack. Up to five simultaneously affected locations were reported. The observation that severe HAE attacks often affect multiple sites in the body suggests that HAE symptoms result from a systemic rather than from a local process as is currently believed.
Integrated photovoltaic (PV) monitoring system
NASA Astrophysics Data System (ADS)
Mahinder Singh, Balbir Singh; Husain, NurSyahidah; Mohamed, Norani Muti
2012-09-01
The main aim of this research work is to design an accurate and reliable monitoring system to be integrated with a solar electricity generating system. The performance monitoring system is required to ensure that the PVEGS is operating at an optimum level. The PV monitoring system is able to measure all the important parameters that determine optimum performance. The measured values are recorded continuously, as the data acquisition system is connected to a computer, and data are stored at fixed intervals. The data can be used locally and can also be transmitted via the internet. The data that appear directly on the local monitoring system are displayed via a graphical user interface created using Visual Basic, while Apache software was used for data transmission. The accuracy and reliability of the developed monitoring system were tested against data captured simultaneously by a standard power quality analyzer; the high correlation of 97% indicates the level of accuracy of the monitoring system. The aim of a system for continuous monitoring is thus achieved, both locally and, simultaneously, on a remote system.
Drawing Connections Across Conceptually Related Visual Representations in Science
NASA Astrophysics Data System (ADS)
Hansen, Janice
This dissertation explored beliefs about learning from multiple related visual representations in science, and compared beliefs to learning outcomes. Three research questions were explored: 1) What beliefs do pre-service teachers, non-educators and children have about learning from visual representations? 2) What format of presenting those representations is most effective for learning? And, 3) Can children's ability to process conceptually related science diagrams be enhanced with added support? Three groups of participants, 89 pre-service teachers, 211 adult non-educators, and 385 middle school children, were surveyed about whether they felt related visual representations presented serially or simultaneously would lead to better learning outcomes. Two experiments, one with adults and one with child participants, explored the validity of these beliefs. Pre-service teachers did not endorse either serial or simultaneous related visual representations for their own learning. They were, however, significantly more likely to indicate that children would learn better from serially presented diagrams. In direct contrast to the educators, middle school students believed they would learn better from related visual representations presented simultaneously. Experimental data indicated that the beliefs adult non-educators held about their own learning needs matched learning outcomes. These participants endorsed simultaneous presentation of related diagrams for their own learning. Comparing learning from related diagrams presented simultaneously with learning from the same diagrams presented serially indicated that those in the simultaneous condition were able to create more complex mental models. A second experiment compared children's learning from related diagrams across four randomly-assigned conditions: serial, simultaneous, simultaneous with signaling, and simultaneous with structure mapping support. Providing middle school students with simultaneous related diagrams with support for structure mapping led to a lessened reliance on surface features, and a better understanding of the science concepts presented. These findings suggest that presenting diagrams serially in an effort to reduce cognitive load may not be preferable for learning if making connections across representations, and by extension across science concepts, is desired. Instead, providing simultaneous diagrams with structure mapping support may result in greater attention to the salient relationships between related visual representations as well as between the representations and the science concepts they depict.
Simultaneous mapping of pan and sentinel lymph nodes for real-time image-guided surgery.
Ashitate, Yoshitomo; Hyun, Hoon; Kim, Soon Hee; Lee, Jeong Heon; Henary, Maged; Frangioni, John V; Choi, Hak Soo
2014-01-01
The resection of regional lymph nodes in the basin of a primary tumor is of paramount importance in surgical oncology. Although sentinel lymph node mapping is now the standard of care in breast cancer and melanoma, over 20% of patients require a completion lymphadenectomy. Yet, there is currently no technology available that can image all lymph nodes in the body in real time, or assess both the sentinel node and all nodes simultaneously. In this study, we report an optical fluorescence technology that is capable of simultaneous mapping of pan lymph nodes (PLNs) and sentinel lymph nodes (SLNs) in the same subject. We developed near-infrared fluorophores, which have fluorescence emission maxima either at 700 nm or at 800 nm. One was injected intravenously for identification of all regional lymph nodes in a basin, and the other was injected locally for identification of the SLN. Using the dual-channel FLARE intraoperative imaging system, we could identify and resect all PLNs and SLNs simultaneously. The technology we describe enables simultaneous, real-time visualization of both PLNs and SLNs in the same subject.
Visual EKF-SLAM from Heterogeneous Landmarks †
Esparza-Jiménez, Jorge Othón; Devy, Michel; Gordillo, José L.
2016-01-01
Many applications require the localization of a moving object, e.g., a robot, using sensory data acquired from embedded devices. Simultaneous localization and mapping from vision performs both the spatial and temporal fusion of these data on a map when a camera moves in an unknown environment. Such a SLAM process executes two interleaved functions: the front-end detects and tracks features from images, while the back-end interprets features as landmark observations and estimates both the landmarks and the robot positions with respect to a selected reference frame. This paper describes a complete visual SLAM solution, combining both point and line landmarks on a single map. The proposed method has an impact on both the back-end and the front-end. The contributions comprise the use of heterogeneous landmark-based EKF-SLAM (the management of a map composed of both point and line landmarks) and, from this perspective, a comparison between landmark parametrizations and an evaluation of how heterogeneity improves camera localization accuracy; the development of a front-end active-search process for linear landmarks integrated into SLAM; and the experimental methodology. PMID:27070602
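The back-end idea of a single EKF state holding the camera pose together with a mixture of point and line landmarks can be sketched as follows. This is a generic illustration under assumed parametrizations (points as 3D coordinates, lines as two 3D endpoints) rather than whichever parametrizations the paper compares, and the observation model is left abstract.

```python
import numpy as np

class HeterogeneousEKFMap:
    """Minimal EKF-SLAM bookkeeping for a map mixing point and line landmarks."""

    POSE_DIM = 6                          # camera pose: 3 translation + 3 rotation parameters
    SIZES = {"point": 3, "line": 6}       # 3D point; line stored as two 3D endpoints (illustrative)

    def __init__(self):
        self.x = np.zeros(self.POSE_DIM)          # state mean: pose followed by landmarks
        self.P = np.eye(self.POSE_DIM) * 1e-3     # full joint covariance
        self.slices = {}                          # landmark id -> slice into the state vector

    def add_landmark(self, lid, kind, init_value, init_cov):
        """Augment the state with a new point or line landmark."""
        size = self.SIZES[kind]
        start = self.x.size
        self.x = np.concatenate([self.x, np.asarray(init_value, dtype=float)])
        P_new = np.zeros((start + size, start + size))
        P_new[:start, :start] = self.P
        P_new[start:, start:] = init_cov
        self.P = P_new
        self.slices[lid] = slice(start, start + size)

    def update(self, z, h, H, R):
        """Standard EKF correction: z measurement, h predicted measurement,
        H Jacobian of h w.r.t. the full state, R measurement noise."""
        y = z - h                                   # innovation
        S = H @ self.P @ H.T + R                    # innovation covariance
        K = self.P @ H.T @ np.linalg.inv(S)         # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(self.x.size) - K @ H) @ self.P
```

Per observation, the front-end would supply h and H from either a point projection model or a line projection model; that choice is the only place where the landmark heterogeneity enters the filter.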
Information extraction during simultaneous motion processing.
Rideaux, Reuben; Edwards, Mark
2014-02-01
When confronted with multiple moving objects the visual system can process them in two stages: an initial stage in which a limited number of signals are processed in parallel (i.e. simultaneously) followed by a sequential stage. We previously demonstrated that during the simultaneous stage, observers could discriminate between presentations containing up to 5 vs. 6 spatially localized motion signals (Edwards & Rideaux, 2013). Here we investigate what information is actually extracted during the simultaneous stage and whether the simultaneous limit varies with the detail of information extracted. This was achieved by measuring the ability of observers to extract varied information from low detail, i.e. the number of signals presented, to high detail, i.e. the actual directions present and the direction of a specific element, during the simultaneous stage. The results indicate that the resolution of simultaneous processing varies as a function of the information which is extracted, i.e. as the information extraction becomes more detailed, from the number of moving elements to the direction of a specific element, the capacity to process multiple signals is reduced. Thus, when assigning a capacity to simultaneous motion processing, this must be qualified by designating the degree of information extraction. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.
Tanahashi, Shigehito; Ashihara, Kaoru; Ujike, Hiroyasu
2015-01-01
Recent studies have found that self-motion perception induced by simultaneous presentation of visual and auditory motion is facilitated when the directions of visual and auditory motion stimuli are identical. They did not, however, examine possible contributions of auditory motion information for determining direction of self-motion perception. To examine this, a visual stimulus projected on a hemisphere screen and an auditory stimulus presented through headphones were presented separately or simultaneously, depending on experimental conditions. The participant continuously indicated the direction and strength of self-motion during the 130-s experimental trial. When the visual stimulus with a horizontal shearing rotation and the auditory stimulus with a horizontal one-directional rotation were presented simultaneously, the duration and strength of self-motion perceived in the opposite direction of the auditory rotation stimulus were significantly longer and stronger than those perceived in the same direction of the auditory rotation stimulus. However, the auditory stimulus alone could not sufficiently induce self-motion perception, and if it did, its direction was not consistent within each experimental trial. We concluded that auditory motion information can determine perceived direction of self-motion during simultaneous presentation of visual and auditory motion information, at least when visual stimuli moved in opposing directions (around the yaw-axis). We speculate that the contribution of auditory information depends on the plausibility and information balance of visual and auditory information. PMID:26113828
NASA Astrophysics Data System (ADS)
Li, En; Makita, Shuichi; Hong, Young-Joo; Kasaragod, Deepa; Yasuno, Yoshiaki
2017-02-01
A customized 1310-nm Jones-matrix optical coherence tomography (JM-OCT) for dermatological investigation was constructed and used for in vivo normal human skin tissue imaging. This system can simultaneously measure the three-dimensional depth-resolved local birefringence, complex-correlation based OCT angiography (OCT-A), degree-of-polarization-uniformity (DOPU) and scattering OCT intensity. By obtaining these optical properties of tissue, the morphology, vasculature, and collagen content of skin can be deduced and visualized. Structures in the deep layers of the epithelium were observed with depth-resolved local birefringence and polarization uniformity images. These results suggest high diagnostic and investigative potential of JM-OCT for dermatology.
NASA Astrophysics Data System (ADS)
Zhu, Yi; Cai, Zhonghou; Chen, Pice; Zhang, Qingteng; Highland, Matthew J.; Jung, Il Woong; Walko, Donald A.; Dufresne, Eric M.; Jeong, Jaewoo; Samant, Mahesh G.; Parkin, Stuart S. P.; Freeland, John W.; Evans, Paul G.; Wen, Haidan
2016-02-01
Dynamical phase separation during a solid-solid phase transition poses a challenge for understanding the fundamental processes in correlated materials. Critical information underlying a phase transition, such as localized phase competition, is difficult to reveal by measurements that are spatially averaged over many phase separated regions. The ability to simultaneously track the spatial and temporal evolution of such systems is essential to understanding mesoscopic processes during a phase transition. Using state-of-the-art time-resolved hard x-ray diffraction microscopy, we directly visualize the structural phase progression in a VO2 film upon photoexcitation. Following a homogenous in-plane optical excitation, the phase transformation is initiated at discrete sites and completed by the growth of one lattice structure into the other, instead of a simultaneous isotropic lattice symmetry change. The time-dependent x-ray diffraction spatial maps show that the in-plane phase progression in laser-superheated VO2 is via a displacive lattice transformation as a result of relaxation from an excited monoclinic phase into a rutile phase. The speed of the phase front progression is quantitatively measured, and is faster than the process driven by in-plane thermal diffusion but slower than the sound speed in VO2. The direct visualization of localized structural changes in the time domain opens a new avenue to study mesoscopic processes in driven systems.
A theta rhythm in macaque visual cortex and its attentional modulation
Spyropoulos, Georgios; Fries, Pascal
2018-01-01
Theta rhythms govern rodent sniffing and whisking, and human language processing. Human psychophysics suggests a role for theta also in visual attention. However, little is known about theta in visual areas and its attentional modulation. We used electrocorticography (ECoG) to record local field potentials (LFPs) simultaneously from areas V1, V2, V4, and TEO of two macaque monkeys performing a selective visual attention task. We found a ≈4-Hz theta rhythm within both the V1–V2 and the V4–TEO region, and theta synchronization between them, with a predominantly feedforward directed influence. ECoG coverage of large parts of these regions revealed a surprising spatial correspondence between theta and visually induced gamma. Furthermore, gamma power was modulated with theta phase. Selective attention to the respective visual stimulus strongly reduced these theta-rhythmic processes, leading to an unusually strong attention effect for V1. Microsaccades (MSs) were partly locked to theta. However, neuronal theta rhythms tended to be even more pronounced for epochs devoid of MSs. Thus, we find an MS-independent theta rhythm specific to visually driven parts of V1–V2, which rhythmically modulates local gamma and entrains V4–TEO, and which is strongly reduced by attention. We propose that the less theta-rhythmic and thereby more continuous processing of the attended stimulus serves the exploitation of this behaviorally most relevant information. The theta-rhythmic and thereby intermittent processing of the unattended stimulus likely reflects the ecologically important exploration of less relevant sources of information. PMID:29848632
Alterations in audiovisual simultaneity perception in amblyopia
Richards, Michael D; Goltz, Herbert C; Wong, Agnes M F
2017-01-01
Amblyopia is a developmental visual impairment that is increasingly recognized to affect higher-level perceptual and multisensory processes. To further investigate the audiovisual (AV) perceptual impairments associated with this condition, we characterized the temporal interval in which asynchronous auditory and visual stimuli are perceived as simultaneous 50% of the time (i.e., the AV simultaneity window). Adults with unilateral amblyopia (n = 17) and visually normal controls (n = 17) judged the simultaneity of a flash and a click presented with both eyes viewing. The signal onset asynchrony (SOA) varied from 0 ms to 450 ms for auditory-lead and visual-lead conditions. A subset of participants with amblyopia (n = 6) was tested monocularly. Compared to the control group, the auditory-lead side of the AV simultaneity window was widened by 48 ms (36%; p = 0.002), whereas that of the visual-lead side was widened by 86 ms (37%; p = 0.02). The overall mean window width was 500 ms, compared to 366 ms among controls (37% wider; p = 0.002). Among participants with amblyopia, the simultaneity window parameters were unchanged by viewing condition, but subgroup analysis revealed differential effects on the parameters by amblyopia severity, etiology, and foveal suppression status. Possible mechanisms to explain these findings include visual temporal uncertainty, interocular perceptual latency asymmetry, and disruption of normal developmental tuning of sensitivity to audiovisual asynchrony. PMID:28598996
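The window parameters reported above are typically estimated by fitting a psychometric function to the proportion of "simultaneous" responses across SOAs and reading off the 50% crossings. The sketch below fits a Gaussian-shaped curve for that purpose; the functional form, the sign convention (negative SOA = auditory lead), and the sample data are illustrative assumptions, not the analysis used in this study.

```python
import numpy as np
from scipy.optimize import curve_fit

def simultaneity_curve(soa, amp, center, width):
    """Gaussian-shaped proportion of 'simultaneous' responses vs. SOA (ms).
    Sign convention (assumed): negative SOA = auditory lead, positive = visual lead."""
    return amp * np.exp(-0.5 * ((soa - center) / width) ** 2)

# Illustrative data: SOAs in ms and proportion of trials judged simultaneous.
soas = np.array([-450, -300, -150, -50, 0, 50, 150, 300, 450], dtype=float)
p_sim = np.array([0.05, 0.20, 0.60, 0.90, 0.95, 0.90, 0.70, 0.30, 0.10])

(amp, center, width), _ = curve_fit(simultaneity_curve, soas, p_sim, p0=[1.0, 0.0, 150.0])

# The simultaneity window is bounded by the SOAs where the fitted curve crosses 50%.
half = width * np.sqrt(-2.0 * np.log(0.5 / amp))   # offset from the center at p = 0.5
print(f"auditory-lead bound: {center - half:.0f} ms, "
      f"visual-lead bound: {center + half:.0f} ms, width: {2 * half:.0f} ms")
```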
Piao, Jin-Chun; Kim, Shin-Dug
2017-11-07
Simultaneous localization and mapping (SLAM) is emerging as a prominent issue in computer vision and a next-generation core technology for robots, autonomous navigation, and augmented reality. In augmented reality applications, fast camera pose estimation and true scale are important. In this paper, we present an adaptive monocular visual-inertial SLAM method for real-time augmented reality applications on mobile devices. First, the SLAM system is implemented based on a visual-inertial odometry method that combines data from a mobile device camera and an inertial measurement unit sensor. Second, we present an optical-flow-based fast visual odometry method for real-time camera pose estimation. Finally, an adaptive monocular visual-inertial SLAM is implemented by presenting an adaptive execution module that dynamically selects visual-inertial odometry or optical-flow-based fast visual odometry. Experimental results show that the average translation root-mean-square error of the keyframe trajectory is approximately 0.0617 m on the EuRoC dataset. The average tracking time is reduced by 7.8%, 12.9%, and 18.8% when different level-set adaptive policies are applied. Moreover, we conducted experiments with real mobile device sensors, and the results demonstrate the effectiveness of the performance improvement achieved by the proposed method.
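The adaptive execution module is the interesting architectural piece here: per frame, the system chooses between full visual-inertial odometry and the cheaper optical-flow-based visual odometry. The selection rule and thresholds below are a plausible illustration only; the paper's actual policy is not specified in the abstract.

```python
from dataclasses import dataclass

@dataclass
class FrameStats:
    tracked_features: int    # features successfully tracked from the previous frame
    gyro_rate: float         # inter-frame angular rate from the IMU (rad/s)
    time_budget_ms: float    # processing time remaining for this frame

def select_odometry(stats: FrameStats,
                    min_features: int = 60,
                    max_gyro_rate: float = 1.0,
                    min_budget_ms: float = 15.0) -> str:
    """Use the cheap optical-flow VO only when tracking is easy and time is short.

    All thresholds are illustrative; an adaptive policy would tune them per device.
    """
    easy_tracking = (stats.tracked_features >= min_features
                     and stats.gyro_rate <= max_gyro_rate)
    tight_budget = stats.time_budget_ms <= min_budget_ms
    if easy_tracking and tight_budget:
        return "optical_flow_vo"           # fast pose-only update
    return "visual_inertial_odometry"      # full VIO with IMU fusion

# Example: slow motion, plenty of features, little time left -> fast path.
print(select_odometry(FrameStats(tracked_features=120, gyro_rate=0.3, time_budget_ms=10.0)))
```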
Dave, Hreem; Phoenix, Vidya; Becker, Edmund R.; Lambert, Scott R.
2015-01-01
OBJECTIVES To compare the incidence of adverse events, visual outcomes and economic costs of sequential versus simultaneous bilateral cataract surgery for infants with congenital cataracts. METHODS We retrospectively reviewed the incidence of adverse events, visual outcomes and medical payments associated with simultaneous versus sequential bilateral cataract surgery for infants with congenital cataracts who underwent cataract surgery when 6 months of age or younger at our institution. RESULTS Records were available for 10 children who underwent sequential surgery at a mean age of 49 days for the first eye and 17 children who underwent simultaneous surgery at a mean age of 68 days (p=.25). We found a similar incidence of adverse events between the two treatment groups. Intraoperative or postoperative complications occurred in 14 eyes. The most common postoperative complication was glaucoma. No eyes developed endophthalmitis. The mean absolute interocular difference in logMAR visual acuities between the two treatment groups was 0.47±0.76 for the sequential group and 0.44±0.40 for the simultaneous group (p=.92). Hospital, drugs, supplies and professional payments were on average 21.9% lower per patient in the simultaneous group. CONCLUSIONS Simultaneous bilateral cataract surgery for infants with congenital cataracts was associated with a 21.9% reduction in medical payments and no discernible difference in the incidence of adverse events or visual outcome. PMID:20697007
Visual discrimination of local surface structure: slant, tilt, and curvedness.
Norman, J Farley; Todd, James T; Norman, Hideko F; Clayton, Anna Marie; McBride, T Ryan
2006-03-01
In four experiments, observers were required to discriminate interval or ordinal differences in slant, tilt, or curvedness between designated probe points on randomly shaped curved surfaces defined by shading, texture, and binocular disparity. The results reveal that discrimination thresholds for judgments of slant or tilt typically range between 4 degrees and 10 degrees; that judgments of one component are unaffected by simultaneous variations in the other; and that the individual thresholds for either the slant or tilt components of orientation are approximately equal to those obtained for judgments of the total orientation difference between two probed regions. Performance was much worse, however, for judgments of curvedness, and these judgments were significantly impaired when there were simultaneous variations in the shape index parameter of curvature.
An evaluation of attention models for use in SLAM
NASA Astrophysics Data System (ADS)
Dodge, Samuel; Karam, Lina
2013-12-01
In this paper we study the application of visual saliency models for the simultaneous localization and mapping (SLAM) problem. We consider visual SLAM, where the location of the camera and a map of the environment can be generated using images from a single moving camera. In visual SLAM, the interest point detector is of key importance. This detector must be invariant to certain image transformations so that features can be matched across different frames. Recent work has used a model of human visual attention to detect interest points, however it is unclear as to what is the best attention model for this purpose. To this aim, we compare the performance of interest points from four saliency models (Itti, GBVS, RARE, and AWS) with the performance of four traditional interest point detectors (Harris, Shi-Tomasi, SIFT, and FAST). We evaluate these detectors under several different types of image transformation and find that the Itti saliency model, in general, achieves the best performance in terms of keypoint repeatability.
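Keypoint repeatability of the kind evaluated here can be computed by warping the reference detections through the known transformation and counting how many fall within a small radius of a detection in the transformed image. The sketch below uses OpenCV's FAST detector and a synthetic rotation as stand-ins; the saliency-model detectors (Itti, GBVS, RARE, AWS) are not part of OpenCV and would have to supply their own keypoint lists. The file name and radius are illustrative.

```python
import cv2
import numpy as np

def repeatability(img_ref, img_warped, H, detector, radius=3.0):
    """Fraction of reference keypoints re-detected after a known homography H
    (border handling omitted for brevity)."""
    kp_ref = detector.detect(img_ref, None)
    kp_wrp = detector.detect(img_warped, None)
    if not kp_ref or not kp_wrp:
        return 0.0
    pts_ref = np.float32([k.pt for k in kp_ref]).reshape(-1, 1, 2)
    pts_proj = cv2.perspectiveTransform(pts_ref, H).reshape(-1, 2)
    pts_wrp = np.float32([k.pt for k in kp_wrp])
    repeated = sum(np.linalg.norm(pts_wrp - p, axis=1).min() <= radius for p in pts_proj)
    return repeated / len(kp_ref)

img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)          # hypothetical test image
# Illustrative transformation: a 10-degree in-plane rotation about the image center.
R = cv2.getRotationMatrix2D((img.shape[1] / 2, img.shape[0] / 2), 10, 1.0)
H = np.vstack([R, [0.0, 0.0, 1.0]]).astype(np.float64)
warped = cv2.warpPerspective(img, H, (img.shape[1], img.shape[0]))

fast = cv2.FastFeatureDetector_create()
print("FAST repeatability:", repeatability(img, warped, H, fast))
```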
Falcão, Manuel Sousa; Freitas-Costa, Paulo; Beato, João Nuno; Pinheiro-Costa, João; Rocha-Sousa, Amândio; Carneiro, Ângela; Brandão, Elisete Maria; Falcão-Reis, Fernando
2017-02-27
To evaluate the safety and the impact on visual acuity and on retinal and choroidal morphology of simultaneous cataract surgery and intravitreal anti-vascular endothelial growth factor therapy in patients with visually significant cataracts and previously treated exudative age-related macular degeneration. Prospective study, which included 21 eyes of 20 patients with exudative age-related macular degeneration submitted to simultaneous phacoemulsification and intravitreal ranibizumab or bevacizumab. The patients were followed for 12 months after surgery using a pro re nata strategy. Visual acuity, foveal and choroidal thickness changes were evaluated 1, 6 and 12 months post-operatively. There was a statistically significant increase in mean visual acuity at one (13.4 letters, p < 0.05), six (11.5 letters, p < 0.05) and twelve months (11.3 letters, p < 0.05) without significant changes in retinal or choroidal morphology. At 12 months, 86% of eyes were able to maintain visual acuity improvement. There were no significant differences between the two anti-vascular endothelial growth factor drugs and no complications developed during follow-up. Simultaneous phacoemulsification and intravitreal anti-vascular endothelial growth factor is safe and allows improvement in visual acuity in patients with visually significant cataracts and exudative age-related macular degeneration. Visual acuity gains were maintained with a pro re nata strategy showing that in this subset of patients, phacoemulsification may be beneficial. Cataract surgery and simultaneous anti-vascular endothelial growth factor therapy improve visual acuity in patients with exudative age-related macular degeneration.
Sowpati, Divya Tej; Srivastava, Surabhi; Dhawan, Jyotsna; Mishra, Rakesh K
2017-09-13
Comparative epigenomic analysis across multiple genes presents a bottleneck for bench biologists working with NGS data. Despite the development of standardized peak analysis algorithms, the identification of novel epigenetic patterns and their visualization across gene subsets remains a challenge. We developed a fast and interactive web app, C-State (Chromatin-State), to query and plot chromatin landscapes across multiple loci and cell types. C-State has an interactive, JavaScript-based graphical user interface and runs locally in modern web browsers that are pre-installed on all computers, thus eliminating the need for cumbersome data transfer, pre-processing and prior programming knowledge. C-State is unique in its ability to extract and analyze multi-gene epigenetic information. It allows for powerful GUI-based pattern searching and visualization. We include a case study to demonstrate its potential for identifying user-defined epigenetic trends in context of gene expression profiles.
Relationship between visual binding, reentry and awareness.
Koivisto, Mika; Silvanto, Juha
2011-12-01
Visual feature binding has been suggested to depend on reentrant processing. We addressed the relationship between binding, reentry, and visual awareness by asking the participants to discriminate the color and orientation of a colored bar (presented either alone or simultaneously with a white distractor bar) and to report their phenomenal awareness of the target features. The success of reentry was manipulated with object substitution masking and backward masking. The results showed that late reentrant processes are necessary for successful binding but not for phenomenal awareness of the bound features. Binding errors were accompanied by phenomenal awareness of the misbound feature conjunctions, demonstrating that they were experienced as real properties of the stimuli (i.e., illusory conjunctions). Our results suggest that early preattentive binding and local recurrent processing enable features to reach phenomenal awareness, while later attention-related reentrant iterations modulate the way in which the features are bound and experienced in awareness. Copyright © 2011 Elsevier Inc. All rights reserved.
Zhang, Zhuang; Zhao, Rujin; Liu, Enhai; Yan, Kun; Ma, Yuebo
2018-06-15
This article presents a new sensor fusion method for visual simultaneous localization and mapping (SLAM) through the integration of a monocular camera and a 1D laser range finder. Such a fusion method provides scale estimation and drift correction; it is not limited by physical volume (e.g., a stereo camera is constrained by its baseline) and it overcomes the limited depth range associated with RGB-D SLAM. We first present the analytical feasibility of estimating the absolute scale through the fusion of 1D distance information and image information. Next, the analytical derivation of the laser-vision fusion is described in detail, based on local dense reconstruction of the image sequences. We also correct the scale drift of the monocular SLAM using the laser distance information, which is independent of the drift error. Finally, the application of this approach to both indoor and outdoor scenes is verified on the Technical University of Munich (TUM) RGB-D dataset and on self-collected data. We compare the scale estimation and drift correction of the proposed method with monocular SLAM and RGB-D SLAM.
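The absolute-scale estimation enabled by this fusion can be illustrated simply: a monocular reconstruction is correct only up to an unknown scale, and metric ranges along a known ray fix that scale. The sketch below assumes the laser-illuminated pixel has already been identified and that the monocular SLAM provides an up-to-scale depth there; taking the median over several frames is an illustrative robustness choice, not the paper's derivation.

```python
import numpy as np

def estimate_scale(laser_ranges_m, monocular_depths):
    """Absolute scale of a monocular reconstruction from paired 1D-laser ranges.

    laser_ranges_m   : metric distances from the 1D range finder
    monocular_depths : up-to-scale depths from monocular SLAM at the laser-illuminated pixel
    """
    ratios = np.asarray(laser_ranges_m, float) / np.asarray(monocular_depths, float)
    return float(np.median(ratios))      # the median is robust to bad associations

# Illustrative values: the true scale is ~2.5 and the fourth pair is an outlier.
laser = [2.50, 3.10, 1.98, 4.05, 2.52]
mono = [1.00, 1.24, 0.80, 0.55, 1.01]
s = estimate_scale(laser, mono)
print(f"estimated scale: {s:.2f}")       # multiply the map and trajectory by s
```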
ERIC Educational Resources Information Center
Baldwin, Thomas F.
Man seems unable to retain different information from different senses or channels simultaneously; one channel gains full attention. However, it is hypothesized that if the message elements arriving simultaneously from audio and visual channels are redundant, man will retain the information. An attempt was made to measure redundancy in the audio…
Reduced response cluster size in early visual areas explains the acuity deficit in amblyopia.
Huang, Yufeng; Feng, Lixia; Zhou, Yifeng
2017-05-03
Focal visual stimulation typically results in the activation of a large portion of the early visual cortex. This spread of activity is attributed to long-range lateral interactions. Such long-range interactions may serve to stabilize a visual representation or simply to modulate incoming signals, and any associated dysfunction in long-range activation may reduce sensitivity to visual information in conditions such as amblyopia. We sought to measure the dispersion of cortical activity following local visual stimulation in a group of patients with amblyopia and matched normal controls. Twenty adult anisometropic amblyopes and 10 normal controls participated in this study. Using multifocal stimulation, we simultaneously measured cluster sizes for multiple stimulation points in the visual field. We found that the functional MRI (fMRI) response cluster size corresponding to the fellow eye was significantly larger than that corresponding to the amblyopic eye, and that the fMRI response cluster size at the two more central retinotopic locations correlated with the acuity deficit in amblyopia. Our results suggest that the amblyopic visual cortex has diminished long-range communication, as evidenced by significantly smaller clusters of activity measured with fMRI. These results have important implications for models of amblyopia and approaches to treatment.
Wahn, Basil; König, Peter
2015-01-01
Humans continuously receive and integrate information from several sensory modalities. However, attentional resources limit the amount of information that can be processed. It is not yet clear how attentional resources and multisensory processing are interrelated. Specifically, the following questions arise: (1) Are there distinct spatial attentional resources for each sensory modality? and (2) Does attentional load affect multisensory integration? We investigated these questions using a dual task paradigm: participants performed two spatial tasks (a multiple object tracking task and a localization task), either separately (single task condition) or simultaneously (dual task condition). In the multiple object tracking task, participants visually tracked a small subset of several randomly moving objects. In the localization task, participants received either visual, auditory, or redundant visual and auditory location cues. In the dual task condition, we found a substantial decrease in participants' performance relative to the results of the single task condition. Importantly, participants performed equally well in the dual task condition regardless of the location cues' modality. This result suggests that having spatial information coming from different modalities does not facilitate performance, thereby indicating shared spatial attentional resources for the auditory and visual modality. Furthermore, we found that participants integrated redundant multisensory information similarly even when they experienced additional attentional load in the dual task condition. Overall, findings suggest that (1) visual and auditory spatial attentional resources are shared and that (2) audiovisual integration of spatial information occurs in a pre-attentive processing stage.
Dave, Hreem; Phoenix, Vidya; Becker, Edmund R; Lambert, Scott R
2010-08-01
To compare the incidence of adverse events and visual outcomes and to compare the economic costs of sequential vs simultaneous bilateral cataract surgery for infants with congenital cataracts. Retrospective review of simultaneous vs sequential bilateral cataract surgery for infants with congenital cataracts who underwent cataract surgery when 6 months or younger at our institution. Records were available for 10 children who underwent sequential surgery at a mean age of 49 days for the first eye and 17 children who underwent simultaneous surgery at a mean age of 68 days (P = .25). We found a similar incidence of adverse events between the 2 treatment groups. Intraoperative or postoperative complications occurred in 14 eyes. The most common postoperative complication was glaucoma. No eyes developed endophthalmitis. The mean (SD) absolute interocular difference in logMAR visual acuities between the 2 treatment groups was 0.47 (0.76) for the sequential group and 0.44 (0.40) for the simultaneous group (P = .92). Payments for the hospital, drugs, supplies, and professional services were on average 21.9% lower per patient in the simultaneous group. Simultaneous bilateral cataract surgery for infants with congenital cataracts is associated with a 21.9% reduction in medical payments and no discernible difference in the incidence of adverse events or visual outcomes. However, our small sample size limits our ability to make meaningful comparisons of the relative risks and visual benefits of the 2 procedures.
Valente, Daniel L.; Braasch, Jonas; Myrbeck, Shane A.
2012-01-01
Despite many studies investigating auditory spatial impressions in rooms, few have addressed the impact of simultaneous visual cues on localization and the perception of spaciousness. The current research presents an immersive audiovisual environment in which participants were instructed to make auditory width judgments in dynamic bi-modal settings. The results of these psychophysical tests suggest the importance of congruent audio visual presentation to the ecological interpretation of an auditory scene. Supporting data were accumulated in five rooms of ascending volumes and varying reverberation times. Participants were given an audiovisual matching test in which they were instructed to pan the auditory width of a performing ensemble to a varying set of audio and visual cues in rooms. Results show that both auditory and visual factors affect the collected responses and that the two sensory modalities coincide in distinct interactions. The greatest differences between the panned audio stimuli given a fixed visual width were found in the physical space with the largest volume and the greatest source distance. These results suggest, in this specific instance, a predominance of auditory cues in the spatial analysis of the bi-modal scene. PMID:22280585
Simultaneous Multi-Slice fMRI using Spiral Trajectories
Zahneisen, Benjamin; Poser, Benedikt A.; Ernst, Thomas; Stenger, V. Andrew
2014-01-01
Parallel imaging methods using multi-coil receiver arrays have been shown to be effective for increasing MRI acquisition speed. However parallel imaging methods for fMRI with 2D sequences show only limited improvements in temporal resolution because of the long echo times needed for BOLD contrast. Recently, Simultaneous Multi-Slice (SMS) imaging techniques have been shown to increase fMRI temporal resolution by factors of four and higher. In SMS fMRI multiple slices can be acquired simultaneously using Echo Planar Imaging (EPI) and the overlapping slices are un-aliased using a parallel imaging reconstruction with multiple receivers. The slice separation can be further improved using the “blipped-CAIPI” EPI sequence that provides a more efficient sampling of the SMS 3D k-space. In this paper a blipped-spiral SMS sequence for ultra-fast fMRI is presented. The blipped-spiral sequence combines the sampling efficiency of spiral trajectories with the SMS encoding concept used in blipped-CAIPI EPI. We show that blipped spiral acquisition can achieve almost whole brain coverage at 3 mm isotropic resolution in 168 ms. It is also demonstrated that the high temporal resolution allows for dynamic BOLD lag time measurement using visual/motor and retinotopic mapping paradigms. The local BOLD lag time within the visual cortex following the retinotopic mapping stimulation of expanding flickering rings is directly measured and easily translated into an eccentricity map of the cortex. PMID:24518259
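The BOLD lag-time measurement mentioned above can be approximated per voxel by cross-correlating the voxel time series with the stimulus regressor and taking the delay that maximizes the correlation. The sketch below uses synthetic data at the stated 168-ms volume rate; it illustrates the idea rather than the authors' analysis pipeline.

```python
import numpy as np

def bold_lag(voxel_ts, stimulus_ts, tr_s, max_lag_s=10.0):
    """Lag (in seconds) at which the voxel time series best correlates with the stimulus."""
    v = (voxel_ts - voxel_ts.mean()) / voxel_ts.std()
    s = (stimulus_ts - stimulus_ts.mean()) / stimulus_ts.std()
    lags = np.arange(0, int(max_lag_s / tr_s) + 1)
    corr = [np.corrcoef(v[k:], s[:len(s) - k])[0, 1] for k in lags]
    return lags[int(np.argmax(corr))] * tr_s

# Synthetic check: a stimulus response delayed by 12 volumes (~2 s at a 168-ms volume time).
tr = 0.168
n = 600
stim = (np.sin(2 * np.pi * np.arange(n) * tr / 24.0) > 0).astype(float)   # 24-s block cycle
voxel = np.roll(stim, 12) + 0.3 * np.random.randn(n)
print(f"estimated lag: {bold_lag(voxel, stim, tr):.2f} s")                # ~2.0 s expected
```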
Farnum, C E; Wilsman, N J
1984-06-01
A postembedment method for the localization of lectin-binding glycoconjugates was developed using Epon-embedded growth plate cartilage from Yucatan miniature swine. By testing a variety of etching, blocking, and incubation procedures, a standard protocol was developed for 1 micron thick sections that allowed visualization of both intracellular and extracellular glycoconjugates with affinity for wheat germ agglutinin and concanavalin A. Both fluorescent and peroxidase techniques were used, and comparisons were made between direct methods and indirect methods using the biotin-avidin bridging system. Differential extracellular lectin binding allowed visualization of interterritorial, territorial, and pericellular matrices. Double labeling experiments showed the precision with which intracellular binding could be localized to specific cytoplasmic compartments, with resolution of binding to the Golgi apparatus, endoplasmic reticulum, and nuclear membrane at the light microscopic level. This method allows the localization of both intracellular and extracellular lectin-binding glycoconjugates using fixation and embedment procedures that are compatible with simultaneous ultrastructural analysis. As such it should have applicability both to the morphological analysis of growth plate organization during normal endochondral ossification, as well as to the diagnostic pathology of matrix abnormalities in disease states of growing cartilage.
NASA Astrophysics Data System (ADS)
Ranjeva, Minna; Thompson, Lee; Perlitz, Daniel; Bonness, William; Capone, Dean; Elbing, Brian
2011-11-01
Cavitation is a major concern for the US Navy since it can cause ship damage and produce unwanted noise. The ability to precisely locate cavitation onset in laboratory-scale experiments is essential for proper design that will minimize this undesired phenomenon. Cavitation onset is determined more accurately acoustically than visually. However, if other parts of the model begin to cavitate prior to the component of interest, the acoustic data is contaminated with spurious noise. Consequently, cavitation onset is widely determined by optically locating the event of interest. The current research effort aims at developing an acoustic localization scheme for reverberant environments such as water tunnels. Currently, cavitation bubbles are being induced in a static water tank with a laser, allowing the localization techniques to be refined with the bubble at a known location. The source is located with the use of acoustic data collected with hydrophones and analyzed using signal processing techniques. To verify the accuracy of the acoustic scheme, the events are simultaneously monitored visually with a high-speed camera. Once refined, the technique will be tested in a water tunnel. This research was sponsored by the Naval Engineering Education Center (NEEC).
NASA Astrophysics Data System (ADS)
Moriyoshi, Yasuo; Kobayashi, Shigemi; Enomoto, Yoshiteru
The knock phenomenon in SI engines is regarded as auto-ignition of the unburned end-gas, and it has been widely examined using rapid compression machines (RCMs), shock tubes, or test engines. Recent research points out the importance of the low-temperature chemical reaction and the negative temperature coefficient (NTC). To investigate these effects, analyses of instantaneous local gas temperature, flow visualization, and gas pressure were conducted in this study. As measurements in real engines are too difficult to analyze, the authors made measurements in a constant-volume vessel under knock conditions, in which a propagating flame exists during the induction time of auto-ignition. Adopting the two-wire thermocouple method enabled us to measure the instantaneous local gas temperature until the moment when the flame front passes by. High-speed images inside the unburned region were also recorded simultaneously using an endoscope. As a result, it was found that when knock occurs, the auto-ignition initiation time seems slightly earlier than in the results without knock. This leads to a higher volume ratio of unburned mixture and the existence of many hot spots, and stochastically leads to the initiation of knock.
Chen, Yi-Chuan; Lewis, Terri L; Shore, David I; Maurer, Daphne
2017-02-20
Temporal simultaneity provides an essential cue for integrating multisensory signals into a unified perception. Early visual deprivation, in both animals and humans, leads to abnormal neural responses to audiovisual signals in subcortical and cortical areas [1-5]. Behavioral deficits in integrating complex audiovisual stimuli in humans are also observed [6, 7]. It remains unclear whether early visual deprivation affects visuotactile perception similarly to audiovisual perception and whether the consequences for either pairing differ after monocular versus binocular deprivation [8-11]. Here, we evaluated the impact of early visual deprivation on the perception of simultaneity for audiovisual and visuotactile stimuli in humans. We tested patients born with dense cataracts in one or both eyes that blocked all patterned visual input until the cataractous lenses were removed and the affected eyes fitted with compensatory contact lenses (mean duration of deprivation = 4.4 months; range = 0.3-28.8 months). Both monocularly and binocularly deprived patients demonstrated lower precision in judging audiovisual simultaneity. However, qualitatively different outcomes were observed for the two patient groups: the performance of monocularly deprived patients matched that of young children at immature stages, whereas that of binocularly deprived patients did not match any stage in typical development. Surprisingly, patients performed normally in judging visuotactile simultaneity after either monocular or binocular deprivation. Therefore, early binocular input is necessary to develop normal neural substrates for simultaneity perception of visual and auditory events but not visual and tactile events. Copyright © 2017 Elsevier Ltd. All rights reserved.
Puckett, Yana; Baronia, Benedicto C
2016-09-20
With the recent advances in eye tracking technology, it is now possible to track surgeons' eye movements while engaged in a surgical task or when surgical residents practice their surgical skills. Several studies have compared eye movements of surgical experts and novices and developed techniques to assess surgical skill on the basis of eye movement utilizing simulators and live surgery. None have evaluated simultaneous visual tracking between an expert and a novice during live surgery. Here, we describe a successful simultaneous deployment of visual tracking of an expert and a novice during live laparoscopic cholecystectomy. One expert surgeon and one chief surgical resident at an accredited surgical program in Lubbock, TX, USA performed a live laparoscopic cholecystectomy while simultaneously wearing the visual tracking devices. Their visual attitudes and movements were monitored via video recordings. The recordings were then analyzed for correlation between the expert and the novice. The visual attitudes and movements correlated approximately 85% between an expert surgeon and a chief surgical resident. The surgery was carried out uneventfully, and the data was abstracted with ease. We conclude that simultaneous deployment of visual tracking during live laparoscopic surgery is a possibility. More studies and subjects are needed to verify the success of our results and obtain data analysis.
Temporal characteristics of audiovisual information processing.
Fuhrmann Alpert, Galit; Hein, Grit; Tsai, Nancy; Naumer, Marcus J; Knight, Robert T
2008-05-14
In complex natural environments, auditory and visual information often have to be processed simultaneously. Previous functional magnetic resonance imaging (fMRI) studies focused on the spatial localization of brain areas involved in audiovisual (AV) information processing, but the temporal characteristics of AV information flow in these regions remained unclear. In this study, we used fMRI and a novel information-theoretic approach to study the flow of AV sensory information. Subjects passively perceived sounds and images of objects presented either alone or simultaneously. Applying the measure of mutual information, we computed for each voxel the latency in which the blood oxygenation level-dependent signal had the highest information content about the preceding stimulus. The results indicate that, after AV stimulation, the earliest informative activity occurs in right Heschl's gyrus, left primary visual cortex, and the posterior portion of the superior temporal gyrus, which is known as a region involved in object-related AV integration. Informative activity in the anterior portion of superior temporal gyrus, middle temporal gyrus, right occipital cortex, and inferior frontal cortex was found at a later latency. Moreover, AV presentation resulted in shorter latencies in multiple cortical areas compared with isolated auditory or visual presentation. The results provide evidence for bottom-up processing from primary sensory areas into higher association areas during AV integration in humans and suggest that AV presentation shortens processing time in early sensory cortices.
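The information-theoretic latency analysis can be sketched as follows: for each voxel, estimate the mutual information between the stimulus category and the BOLD response sampled at successive post-stimulus latencies, and report the latency with the highest information content. The binning scheme, trial structure, and synthetic data below are illustrative assumptions.

```python
import numpy as np
from sklearn.metrics import mutual_info_score

def most_informative_latency(trial_responses, stimulus_labels, tr_s, n_bins=8):
    """trial_responses: (n_trials, n_latencies) BOLD samples for one voxel.
    Returns the post-stimulus latency (s) whose response carries the most
    mutual information about the stimulus label, plus the MI curve."""
    _, n_latencies = trial_responses.shape
    mi = np.zeros(n_latencies)
    for t in range(n_latencies):
        # Discretize the continuous BOLD values so a histogram MI estimate applies.
        edges = np.quantile(trial_responses[:, t], np.linspace(0, 1, n_bins + 1)[1:-1])
        mi[t] = mutual_info_score(stimulus_labels, np.digitize(trial_responses[:, t], edges))
    return float(np.argmax(mi)) * tr_s, mi

# Synthetic voxel: two stimulus categories differ most at the 4-s post-stimulus sample.
rng = np.random.default_rng(0)
labels = np.repeat([0, 1], 50)
resp = rng.normal(size=(100, 6))          # samples every 2 s from 0 to 10 s (assumed)
resp[labels == 1, 2] += 1.5               # category effect at the third sample (4 s)
best_latency, _ = most_informative_latency(resp, labels, tr_s=2.0)
print(f"most informative latency: {best_latency:.1f} s")
```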
NASA Technical Reports Server (NTRS)
Zhang, Neng-Li; Chao, David F.
2001-01-01
A new hybrid optical system, consisting of reflection-refracted shadowgraphy and top-view photography, is used to visualize flow phenomena and simultaneously measure the spreading and instant dynamic contact angle in a volatile-liquid drop on a nontransparent substrate. Thermocapillary convection in the drop, induced by evaporation, and the drop real-time profile data are synchronously recorded by video recording systems. Experimental results obtained from this unique technique clearly reveal that thermocapillary convection strongly affects the spreading process and the characteristics of dynamic contact angle of the drop. Comprehensive information of a sessile drop, including the local contact angle along the periphery, the instability of the three-phase contact line, and the deformation of the drop shape is obtained and analyzed.
Three-dimensional mapping of microcircuit correlation structure
Cotton, R. James; Froudarakis, Emmanouil; Storer, Patrick; Saggau, Peter; Tolias, Andreas S.
2013-01-01
Great progress has been made toward understanding the properties of single neurons, yet the principles underlying interactions between neurons remain poorly understood. Given that connectivity in the neocortex is locally dense through both horizontal and vertical connections, it is of particular importance to characterize the activity structure of local populations of neurons arranged in three dimensions. However, techniques for simultaneously measuring microcircuit activity are lacking. We developed an in vivo 3D high-speed, random-access two-photon microscope that is capable of simultaneous 3D motion tracking. This allows imaging from hundreds of neurons at several hundred Hz, while monitoring tissue movement. Given that motion will induce common artifacts across the population, accurate motion tracking is absolutely necessary for studying population activity with random-access based imaging methods. We demonstrate the potential of this imaging technique by measuring the correlation structure of large populations of nearby neurons in the mouse visual cortex, and find that the microcircuit correlation structure is stimulus-dependent. Three-dimensional random access multiphoton imaging with concurrent motion tracking provides a novel, powerful method to characterize the microcircuit activity in vivo. PMID:24133414
Smelling directions: Olfaction modulates ambiguous visual motion perception
Kuang, Shenbing; Zhang, Tao
2014-01-01
The sense of smell is often accompanied by simultaneous visual sensations. Previous studies have documented enhanced olfactory performance in the concurrent presence of congruent color- or shape-related visual cues, and facilitated visual object perception when congruent smells are simultaneously present. These visual object-olfaction interactions suggest the existence of couplings between the olfactory pathway and the visual ventral processing stream. However, it is not known whether olfaction can modulate visual motion perception, a function that is related to the visual dorsal stream. We tested this possibility by examining the influence of olfactory cues on the perception of ambiguous visual motion signals. We showed that, after introducing an association between motion directions and olfactory cues, olfaction could indeed bias ambiguous visual motion perception. Our result that olfaction modulates visual motion processing adds to the current knowledge of cross-modal interactions and implies a possible functional linkage between the olfactory system and the visual dorsal pathway. PMID:25052162
Microperimetry in patients with central serous retinopathy.
Toonen, F; Remky, A; Janssen, V; Wolf, S; Reim, M
1995-09-01
In patients with acute central serous retinopathy (CSR), evaluation of visual acuity alone may not represent visual function. In patients with acute CSR, visual function may be disturbed by localized scotomas, distortion, and waviness. For the assessment of localized light sensitivity and stability of fixation, patients with CSR were evaluated by fundus perimetry with a scanning laser ophthalmoscope (SLO 101, Rodenstock Instruments). In all, 21 patients with acute CSR and 19 healthy volunteers were included in the study. Diagnosis of CSR was established by ophthalmoscopy and digital video fluorescein angiography. All patients and volunteers underwent static suprathreshold perimetry with the SLO. Light sensitivity was quantified by presenting stimuli with different light intensities (intensity, 0-27.9 dB above background; size, Goldmann III; wavelength, 633 nm) using an automatic staircase strategy. Stimuli were presented with simultaneous real-time monitoring of the retina. Fixation stability was quantified by measuring the area encompassing 75% of all points of fixation. Light sensitivity was 18-20 dB in affected areas, whereas in healthy eyes and outside the affected area, values of 22-24 dB were obtained. Fixation stability was significantly decreased in the affected eye as compared with normal eyes (33 +/- 12 versus 21 +/- 4 min of arc; P < 0.01). Static perimetry with an SLO is a useful technique for the assessment of localized light sensitivity and fixation stability in patients with macular disease. This technique could provide helpful information in the management of CSR.
Visual brain activity patterns classification with simultaneous EEG-fMRI: A multimodal approach.
Ahmad, Rana Fayyaz; Malik, Aamir Saeed; Kamel, Nidal; Reza, Faruque; Amin, Hafeez Ullah; Hussain, Muhammad
2017-01-01
Classification of visual information from brain activity data is a challenging task. Many studies reported in the literature are based on brain activity patterns from either fMRI or EEG/MEG alone. EEG and fMRI are considered complementary neuroimaging modalities in terms of the temporal and spatial resolution with which they map brain activity. To obtain high spatial and temporal resolution at the same time, simultaneous EEG-fMRI appears fruitful. In this article, we propose a new method based on simultaneous EEG-fMRI data and a machine learning approach to classify visual brain activity patterns. We acquired EEG-fMRI data simultaneously from ten healthy human participants while showing them visual stimuli. A data fusion approach is used to merge the EEG and fMRI data, and a machine learning classifier is used for classification. Results showed that superior classification performance was achieved with simultaneous EEG-fMRI data compared with EEG or fMRI data alone. This shows that the multimodal approach improved classification accuracy compared with other approaches reported in the literature. The proposed simultaneous EEG-fMRI approach for classifying brain activity patterns can be helpful for predicting or fully decoding brain activity patterns.
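A common way to realize the fusion-plus-classifier pipeline summarized above is to concatenate per-trial EEG and fMRI feature vectors and cross-validate a standard classifier on the combined representation. The sketch below uses scikit-learn with synthetic features; the specific fusion method and classifier used in the paper are not stated in the abstract, so everything beyond the general pattern is an assumption.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n_trials = 200
labels = rng.integers(0, 2, size=n_trials)              # two visual stimulus classes
eeg_features = rng.normal(size=(n_trials, 64))          # e.g., band power per EEG channel
fmri_features = rng.normal(size=(n_trials, 300))        # e.g., beta values per fMRI ROI
eeg_features[labels == 1, :5] += 0.4                    # weak class signal in each modality
fmri_features[labels == 1, :10] += 0.4

# Feature-level fusion: simple concatenation; scaling is handled inside the CV pipeline.
fused = np.hstack([eeg_features, fmri_features])
clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))

for name, X in [("EEG only", eeg_features), ("fMRI only", fmri_features), ("EEG + fMRI", fused)]:
    acc = cross_val_score(clf, X, labels, cv=5).mean()
    print(f"{name}: {acc:.2f} cross-validated accuracy")
```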
New Clinically Feasible 3T MRI Protocol to Discriminate Internal Brain Stem Anatomy.
Hoch, M J; Chung, S; Ben-Eliezer, N; Bruno, M T; Fatterpekar, G M; Shepherd, T M
2016-06-01
Two new 3T MR imaging contrast methods, track density imaging and echo modulation curve T2 mapping, were combined with simultaneous multisection acquisition to reveal exquisite anatomic detail at 7 canonical levels of the brain stem. Compared with conventional MR imaging contrasts, many individual brain stem tracts and nuclear groups were directly visualized for the first time at 3T. This new approach is clinically practical and feasible (total scan time = 20 minutes), allowing better brain stem anatomic localization and characterization. © 2016 by American Journal of Neuroradiology.
Coarse-graining time series data: Recurrence plot of recurrence plots and its application for music
NASA Astrophysics Data System (ADS)
Fukino, Miwa; Hirata, Yoshito; Aihara, Kazuyuki
2016-02-01
We propose a nonlinear time series method for characterizing two layers of regularity simultaneously. The key of the method is using the recurrence plots hierarchically, which allows us to preserve the underlying regularities behind the original time series. We demonstrate the proposed method with musical data. The proposed method enables us to visualize both the local and the global musical regularities or two different features at the same time. Furthermore, the determinism scores imply that the proposed method may be useful for analyzing emotional response to the music.
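One plausible reading of the hierarchical construction is: build an ordinary recurrence plot inside each short window of the series, then build a second-level recurrence plot whose entries compare whole windows by the distance between their first-level plots. The sketch below follows that reading with arbitrary window and threshold choices; it illustrates the idea rather than reproducing the authors' exact algorithm.

```python
import numpy as np

def recurrence_plot(x, eps):
    """Binary recurrence matrix of a 1D series: R[i, j] = 1 iff |x[i] - x[j]| <= eps."""
    return (np.abs(x[:, None] - x[None, :]) <= eps).astype(float)

def rp_of_rps(series, window, eps_local, eps_global):
    """Second-level recurrence plot comparing the first-level RPs of successive windows."""
    starts = range(0, len(series) - window + 1, window)
    local_rps = [recurrence_plot(series[i:i + window], eps_local) for i in starts]
    n = len(local_rps)
    global_rp = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            # Distance between windows = Frobenius norm of the difference of their RPs.
            global_rp[i, j] = float(np.linalg.norm(local_rps[i] - local_rps[j]) <= eps_global)
    return global_rp

# Illustrative signal with two alternating regimes (a crude stand-in for musical sections).
t = np.arange(2000)
signal = np.where((t // 500) % 2 == 0, np.sin(0.2 * t), np.sin(0.05 * t))
coarse = rp_of_rps(signal, window=100, eps_local=0.2, eps_global=30.0)
print(coarse.shape)   # one row/column per window; block structure marks the regimes
```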
Edge directed image interpolation with Bamberger pyramids
NASA Astrophysics Data System (ADS)
Rosiles, Jose Gerardo
2005-08-01
Image interpolation is a standard feature in digital image editing software, digital camera systems, and printers. Classical methods for resizing produce blurred images of unacceptable quality. Bamberger pyramids and filter banks have been successfully used for texture and image analysis; they provide excellent multiresolution and directional selectivity. In this paper we present an edge-directed image interpolation algorithm which takes advantage of simultaneous spatial-directional edge localization at the subband level. The proposed algorithm outperforms classical schemes such as bilinear and bicubic interpolation from both the visual and numerical points of view.
Baronia, Benedicto C
2016-01-01
With the recent advances in eye tracking technology, it is now possible to track surgeons’ eye movements while engaged in a surgical task or when surgical residents practice their surgical skills. Several studies have compared eye movements of surgical experts and novices and developed techniques to assess surgical skill on the basis of eye movement utilizing simulators and live surgery. None have evaluated simultaneous visual tracking between an expert and a novice during live surgery. Here, we describe a successful simultaneous deployment of visual tracking of an expert and a novice during live laparoscopic cholecystectomy. One expert surgeon and one chief surgical resident at an accredited surgical program in Lubbock, TX, USA performed a live laparoscopic cholecystectomy while simultaneously wearing the visual tracking devices. Their visual attitudes and movements were monitored via video recordings. The recordings were then analyzed for correlation between the expert and the novice. The visual attitudes and movements correlated approximately 85% between an expert surgeon and a chief surgical resident. The surgery was carried out uneventfully, and the data was abstracted with ease. We conclude that simultaneous deployment of visual tracking during live laparoscopic surgery is a possibility. More studies and subjects are needed to verify the success of our results and obtain data analysis. PMID:27774359
Investigating the role of visual and auditory search in reading and developmental dyslexia
Lallier, Marie; Donnadieu, Sophie; Valdois, Sylviane
2013-01-01
It has been suggested that auditory and visual sequential processing deficits contribute to phonological disorders in developmental dyslexia. As an alternative explanation to a phonological deficit as the proximal cause for reading disorders, the visual attention span hypothesis (VA Span) suggests that difficulties in processing visual elements simultaneously lead to dyslexia, regardless of the presence of a phonological disorder. In this study, we assessed whether deficits in processing simultaneously displayed visual or auditory elements is linked to dyslexia associated with a VA Span impairment. Sixteen children with developmental dyslexia and 16 age-matched skilled readers were assessed on visual and auditory search tasks. Participants were asked to detect a target presented simultaneously with 3, 9, or 15 distracters. In the visual modality, target detection was slower in the dyslexic children than in the control group on a “serial” search condition only: the intercepts (but not the slopes) of the search functions were higher in the dyslexic group than in the control group. In the auditory modality, although no group difference was observed, search performance was influenced by the number of distracters in the control group only. Within the dyslexic group, not only poor visual search (high reaction times and intercepts) but also low auditory search performance (d′) strongly correlated with poor irregular word reading accuracy. Moreover, both visual and auditory search performance was associated with the VA Span abilities of dyslexic participants but not with their phonological skills. The present data suggests that some visual mechanisms engaged in “serial” search contribute to reading and orthographic knowledge via VA Span skills regardless of phonological skills. The present results further open the question of the role of auditory simultaneous processing in reading as well as its link with VA Span skills. PMID:24093014
Estimation of Visual Maps with a Robot Network Equipped with Vision Sensors
Gil, Arturo; Reinoso, Óscar; Ballesta, Mónica; Juliá, Miguel; Payá, Luis
2010-01-01
In this paper we present an approach to the Simultaneous Localization and Mapping (SLAM) problem using a team of autonomous vehicles equipped with vision sensors. The SLAM problem considers the case in which a mobile robot is equipped with a particular sensor, moves along the environment, obtains measurements with its sensors and uses them to construct a model of the space where it evolves. In this paper we focus on the case where several robots, each equipped with its own sensor, are distributed in a network and view the space from different vantage points. In particular, each robot is equipped with a stereo camera that allows it to extract visual landmarks and obtain relative measurements to them. We propose an algorithm that uses the measurements obtained by the robots to build a single accurate map of the environment. The map is represented by the three-dimensional position of the visual landmarks. In addition, we consider that each landmark is accompanied by a visual descriptor that encodes its visual appearance. The solution is based on a Rao-Blackwellized particle filter that estimates the paths of the robots and the position of the visual landmarks. The validity of our proposal is demonstrated by means of experiments with a team of real robots in an office-like indoor environment. PMID:22399930
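For readers unfamiliar with the Rao-Blackwellized particle filter mentioned above, the following is a minimal FastSLAM-style sketch in Python (range-bearing landmarks, known data association, single robot), not the authors' multi-robot implementation; all function names, noise parameters, and measurements are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def motion(pose, v, w, dt, noise):
    """Sample a new particle pose from a noisy unicycle motion model."""
    v_n = v + rng.normal(0.0, noise[0])
    w_n = w + rng.normal(0.0, noise[1])
    x, y, th = pose
    return np.array([x + v_n * dt * np.cos(th),
                     y + v_n * dt * np.sin(th),
                     th + w_n * dt])

def h(pose, lm):
    """Range-bearing measurement model and its Jacobian w.r.t. the landmark."""
    dx, dy = lm[0] - pose[0], lm[1] - pose[1]
    q = dx * dx + dy * dy
    z_hat = np.array([np.sqrt(q), np.arctan2(dy, dx) - pose[2]])
    H = np.array([[dx / np.sqrt(q), dy / np.sqrt(q)],
                  [-dy / q,          dx / q]])
    return z_hat, H

def update_particle(p, z, lm_id, R):
    """Per-particle EKF update of one landmark; returns the importance weight."""
    if lm_id not in p["lm"]:
        # Initialise the landmark from the inverse measurement model.
        r, b = z
        mu = p["pose"][:2] + r * np.array([np.cos(b + p["pose"][2]),
                                           np.sin(b + p["pose"][2])])
        _, H = h(p["pose"], mu)
        Hinv = np.linalg.inv(H)
        p["lm"][lm_id] = (mu, Hinv @ R @ Hinv.T)
        return 1.0
    mu, S = p["lm"][lm_id]
    z_hat, H = h(p["pose"], mu)
    Q = H @ S @ H.T + R
    K = S @ H.T @ np.linalg.inv(Q)
    innov = z - z_hat
    innov[1] = np.arctan2(np.sin(innov[1]), np.cos(innov[1]))  # wrap bearing error
    p["lm"][lm_id] = (mu + K @ innov, (np.eye(2) - K @ H) @ S)
    return float(np.exp(-0.5 * innov @ np.linalg.inv(Q) @ innov)
                 / np.sqrt(np.linalg.det(2 * np.pi * Q)))

# Toy run: N particles, two control steps, two observations of landmark 0.
N, R = 50, np.diag([0.1**2, np.deg2rad(2.0)**2])
particles = [{"pose": np.zeros(3), "lm": {}, "w": 1.0 / N} for _ in range(N)]
for z in (np.array([4.0, 0.3]), np.array([3.9, 0.32])):
    for p in particles:
        p["pose"] = motion(p["pose"], v=1.0, w=0.1, dt=0.1, noise=(0.05, 0.02))
        p["w"] *= update_particle(p, z, lm_id=0, R=R)
    w = np.array([p["w"] for p in particles]); w /= w.sum()
    idx = rng.choice(N, size=N, p=w)                    # resample by weight
    particles = [dict(pose=particles[i]["pose"].copy(), lm=dict(particles[i]["lm"]),
                      w=1.0 / N) for i in idx]
```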
Figure-ground modulation in awake primate thalamus.
Jones, Helen E; Andolina, Ian M; Shipp, Stewart D; Adams, Daniel L; Cudeiro, Javier; Salt, Thomas E; Sillito, Adam M
2015-06-02
Figure-ground discrimination refers to the perception of an object, the figure, against a nondescript background. Neural mechanisms of figure-ground detection have been associated with feedback interactions between higher centers and primary visual cortex and have been held to index the effect of global analysis on local feature encoding. Here, in recordings from visual thalamus of alert primates, we demonstrate a robust enhancement of neuronal firing when the figure, as opposed to the ground, component of a motion-defined figure-ground stimulus is located over the receptive field. In this paradigm, visual stimulation of the receptive field and its near environs is identical across both conditions, suggesting the response enhancement reflects higher integrative mechanisms. It thus appears that cortical activity generating the higher-order percept of the figure is simultaneously reentered into the lowest level that is anatomically possible (the thalamus), so that the signature of the evolving representation of the figure is imprinted on the input driving it in an iterative process. PMID:25901330
Processing Stages Underlying Word Recognition in the Anteroventral Temporal Lobe
Halgren, Eric; Wang, Chunmao; Schomer, Donald L.; Knake, Susanne; Marinkovic, Ksenija; Wu, Julian; Ulbert, Istvan
2006-01-01
The anteroventral temporal lobe integrates visual, lexical, semantic and mnestic aspects of word-processing, through its reciprocal connections with the ventral visual stream, language areas, and the hippocampal formation. We used linear microelectrode arrays to probe population synaptic currents and neuronal firing in different cortical layers of the anteroventral temporal lobe, during semantic judgments with implicit priming, and overt word recognition. Since different extrinsic and associative inputs preferentially target different cortical layers, this method can help reveal the sequence and nature of local processing stages at a higher resolution than was previously possible. The initial response in inferotemporal and perirhinal cortices is a brief current sink beginning at ~120ms, and peaking at ~170ms. Localization of this initial sink to middle layers suggests that it represents feedforward input from lower visual areas, and simultaneously increased firing implies that it represents excitatory synaptic currents. Until ~800ms, the main focus of transmembrane current sinks alternates between middle and superficial layers, with the superficial focus becoming increasingly dominant after ~550ms. Since superficial layers are the target of local and feedback associative inputs, this suggests an alternation in predominant synaptic input between feedforward and feedback modes. Word repetition does not affect the initial perirhinal and inferotemporal middle layer sink, but does decrease later activity. Entorhinal activity begins later (~200ms), with greater apparent excitatory postsynaptic currents and multiunit activity in neocortically-projecting than hippocampal-projecting layers. In contrast to perirhinal and inferotemporal responses, entorhinal responses are larger to repeated words during memory retrieval. These results identify a sequence of physiological activation, beginning with a sharp activation from lower level visual areas carrying specific information to middle layers. This is followed by feedback and associative interactions involving upper cortical layers, which are abbreviated to repeated words. Following bottom-up and associative stages, top-down recollective processes may be driven by entorhinal cortex. Word processing involves a systematic sequence of fast feedforward information transfer from visual areas to anteroventral temporal cortex, followed by prolonged interactions of this feedforward information with local associations, and feedback mnestic information from the medial temporal lobe. PMID:16488158
Improvement in conduction velocity after optic neuritis measured with the multifocal VEP.
Yang, E Bo; Hood, Donald C; Rodarte, Chris; Zhang, Xian; Odel, Jeffrey G; Behrens, Myles M
2007-02-01
To test the efficacy of the multifocal visual evoked potential (mfVEP) technique for detecting long-term latency changes in optic neuritis (ON)/multiple sclerosis (MS), mfVEPs were recorded from both eyes of 12 patients with ON/MS; sixty local VEP responses were recorded simultaneously. Patients were tested twice after recovery from acute ON episodes, which occurred in 14 of the 24 eyes. After recovery, all eyes had 20/20 or better visual acuity and normal visual fields as measured with static automated perimetry (SAP). The time between the two post-recovery tests varied from 6 to 56 months. Between test days, the visual fields obtained with SAP remained normal. Ten of the 14 affected eyes showed improvement in median latency on the mfVEP. Six of these eyes fell at or below (improved latency) the 96% confidence interval for the control eyes. None of the 10 initially unaffected eyes fell below the 96% lower limit. Although the improvement was widespread across the field, it did not include all regions. For the six eyes showing clear improvement, on average, 78% of the points had latencies that were shorter on test 2 than on test 1. A substantial percentage of ON/MS patients show a long-term improvement in conduction velocity. Because this improvement can be local, the mfVEP should allow these improvements to be monitored in patients with ON/MS.
Temporal Influence on Awareness
1995-12-01
[List-of-figures fragment only; recoverable entries: Test setup timing, measured vs. expected modal delays (ms); Experiment I, visual and auditory stimuli presented simultaneously (visual-auditory delay = 0 ms, visual-visual delay = 0 ms); Experiment II, visual and auditory stimuli presented in order (visual-auditory delay = 0 ms, visual-visual delay = variable).]
Simultaneous Visual Discrimination in Asian Elephants
ERIC Educational Resources Information Center
Nissani, Moti; Hoefler-Nissani, Donna; Lay, U. Tin; Htun, U. Wan
2005-01-01
Two experiments explored the behavior of 20 Asian elephants ("Elephas maximus") in simultaneous visual discrimination tasks. In Experiment 1, 7 Burmese logging elephants acquired a white+/black- discrimination, reaching criterion in a mean of 2.6 sessions and 117 discrete trials, whereas 4 elephants acquired a black+/white- discrimination in 5.3…
Self-interference 3D super-resolution microscopy for deep tissue investigations.
Bon, Pierre; Linarès-Loyez, Jeanne; Feyeux, Maxime; Alessandri, Kevin; Lounis, Brahim; Nassoy, Pierre; Cognet, Laurent
2018-06-01
Fluorescence localization microscopy has achieved near-molecular resolution capable of revealing ultra-structures, with a broad range of applications, especially in cellular biology. However, it remains challenging to attain such resolution in three dimensions and inside biological tissues beyond the first cell layer. Here we introduce SELFI, a framework for 3D single-molecule localization within multicellular specimens and tissues. The approach relies on self-interference generated within the microscope's point spread function (PSF) to simultaneously encode equiphase and intensity fluorescence signals, which together provide the 3D position of an emitter. We combined SELFI with conventional localization microscopy to visualize F-actin 3D filament networks and reveal the spatial distribution of the transcription factor OCT4 in human induced pluripotent stem cells at depths up to 50 µm inside uncleared tissue spheroids. SELFI paves the way to nanoscale investigations of native cellular processes in intact tissues.
Nanoscale visualization of redox activity at lithium-ion battery cathodes.
Takahashi, Yasufumi; Kumatani, Akichika; Munakata, Hirokazu; Inomata, Hirotaka; Ito, Komachi; Ino, Kosuke; Shiku, Hitoshi; Unwin, Patrick R; Korchev, Yuri E; Kanamura, Kiyoshi; Matsue, Tomokazu
2014-11-17
Intercalation and deintercalation of lithium ions at electrode surfaces are central to the operation of lithium-ion batteries. Yet, on the most important composite cathode surfaces, this is a rather complex process involving spatially heterogeneous reactions that have proved difficult to resolve with existing techniques. Here we report a scanning electrochemical cell microscope based approach to define a mobile electrochemical cell that is used to quantitatively visualize electrochemical phenomena at the battery cathode material LiFePO4, with resolution of ~100 nm. The technique measures electrode topography and different electrochemical properties simultaneously, and the information can be combined with complementary microscopic techniques to reveal new perspectives on structure and activity. These electrodes exhibit highly spatially heterogeneous electrochemistry at the nanoscale, both within secondary particles and at individual primary nanoparticles, which is highly dependent on the local structure and composition.
Spatio-Temporal Metabolite Profiling of the Barley Germination Process by MALDI MS Imaging
Gorzolka, Karin; Kölling, Jan; Nattkemper, Tim W.; Niehaus, Karsten
2016-01-01
MALDI mass spectrometry imaging was performed to localize metabolites during the first seven days of the barley germination. Up to 100 mass signals were detected of which 85 signals were identified as 48 different metabolites with highly tissue-specific localizations. Oligosaccharides were observed in the endosperm and in parts of the developed embryo. Lipids in the endosperm co-localized in dependency on their fatty acid compositions with changes in the distributions of diacyl phosphatidylcholines during germination. 26 potentially antifungal hordatines were detected in the embryo with tissue-specific localizations of their glycosylated, hydroxylated, and O-methylated derivates. In order to reveal spatio-temporal patterns in local metabolite compositions, multiple MSI data sets from a time series were analyzed in one batch. This requires a new preprocessing strategy to achieve comparability between data sets as well as a new strategy for unsupervised clustering. The resulting spatial segmentation for each time point sample is visualized in an interactive cluster map and enables simultaneous interactive exploration of all time points. Using this new analysis approach and visualization tool germination-dependent developments of metabolite patterns with single MS position accuracy were discovered. This is the first study that presents metabolite profiling of a cereals’ germination process over time by MALDI MSI with the identification of a large number of peaks of agronomically and industrially important compounds such as oligosaccharides, lipids and antifungal agents. Their detailed localization as well as the MS cluster analyses for on-tissue metabolite profile mapping revealed important information for the understanding of the germination process, which is of high scientific interest. PMID:26938880
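As a rough illustration of batch preprocessing plus joint unsupervised clustering across a time series of MSI datasets (not the authors' pipeline), a sketch using TIC normalisation and k-means follows; the function name segment_time_series, the cluster count, and all data are hypothetical.

```python
import numpy as np
from sklearn.cluster import KMeans

def segment_time_series(datasets, n_clusters=8, seed=0):
    """Cluster TIC-normalised pixel spectra from all time points jointly,
    so cluster labels are comparable across the series (a simplified stand-in
    for the batch preprocessing/clustering described in the abstract)."""
    flat, shapes = [], []
    for cube in datasets:                       # each cube: (rows, cols, n_mz)
        rows, cols, n_mz = cube.shape
        spectra = cube.reshape(-1, n_mz).astype(float)
        tic = spectra.sum(axis=1, keepdims=True)
        tic[tic == 0] = 1.0
        flat.append(spectra / tic)              # total-ion-current normalisation
        shapes.append((rows, cols))
    X = np.vstack(flat)
    labels = KMeans(n_clusters=n_clusters, random_state=seed, n_init=10).fit_predict(X)
    # Split the joint labelling back into one segmentation map per time point.
    maps, start = [], 0
    for rows, cols in shapes:
        maps.append(labels[start:start + rows * cols].reshape(rows, cols))
        start += rows * cols
    return maps

# Example with random placeholder data for three time points.
maps = segment_time_series([np.random.rand(40, 40, 200) for _ in range(3)])
```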
2011-01-01
Background: To make sense out of gene expression profiles, such analyses must be pushed beyond the mere listing of affected genes. For example, if a group of genes persistently display similar changes in expression levels under particular experimental conditions, and the proteins encoded by these genes interact and function in the same cellular compartments, this could be taken as very strong indicators for co-regulated protein complexes. One of the key requirements is having appropriate tools to detect such regulatory patterns. Results: We have analyzed the global adaptations in gene expression patterns in the budding yeast when the Hsp90 molecular chaperone complex is perturbed either pharmacologically or genetically. We integrated these results with publicly accessible expression, protein-protein interaction and intracellular localization data. But most importantly, all experimental conditions were simultaneously and dynamically visualized with an animation. This critically facilitated the detection of patterns of gene expression changes that suggested underlying regulatory networks that a standard analysis by pairwise comparison and clustering could not have revealed. Conclusions: The results of the animation-assisted detection of changes in gene regulatory patterns make predictions about the potential roles of Hsp90 and its co-chaperone p23 in regulating whole sets of genes. The simultaneous dynamic visualization of microarray experiments, represented in networks built by integrating one's own experimental data with publicly accessible data, represents a powerful discovery tool that allows the generation of new interpretations and hypotheses. PMID:21672238
Sellers, Kristin K; Bennett, Davis V; Fröhlich, Flavio
2015-02-19
Neuronal firing responses in visual cortex reflect the statistics of visual input and emerge from the interaction with endogenous network dynamics. Artificial visual stimuli presented to animals in which the network dynamics were constrained by anesthetic agents or trained behavioral tasks have provided fundamental understanding of how individual neurons in primary visual cortex respond to input. In contrast, very little is known about the mesoscale network dynamics and their relationship to microscopic spiking activity in the awake animal during free viewing of naturalistic visual input. To address this gap in knowledge, we recorded local field potential (LFP) and multiunit activity (MUA) simultaneously in all layers of primary visual cortex (V1) of awake, freely viewing ferrets presented with naturalistic visual input (nature movie clips). We found that naturalistic visual stimuli modulated the entire oscillation spectrum; low frequency oscillations were mostly suppressed whereas higher frequency oscillations were enhanced. On average across all cortical layers, stimulus-induced change in delta and alpha power negatively correlated with the MUA responses, whereas sensory-evoked increases in gamma power positively correlated with MUA responses. The time-course of the band-limited power in these frequency bands provided evidence for a model in which naturalistic visual input switched V1 between two distinct, endogenously present activity states defined by the power of low (delta, alpha) and high (gamma) frequency oscillatory activity. Therefore, the two mesoscale activity states delineated in this study may define the degree of engagement of the circuit with the processing of sensory input.
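A common way to obtain the band-limited power time courses referred to above is a zero-phase band-pass filter followed by a Hilbert envelope; the sketch below illustrates that generic approach (not the authors' exact analysis), with a hypothetical signal and band edges.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def band_power(lfp, fs, low, high, order=4):
    """Time-resolved band-limited power: zero-phase band-pass filter
    followed by the squared Hilbert envelope."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="bandpass")
    filtered = filtfilt(b, a, lfp)
    return np.abs(hilbert(filtered)) ** 2

# Hypothetical 10 s LFP trace sampled at 1 kHz.
fs = 1000.0
t = np.arange(0, 10, 1 / fs)
lfp = (np.sin(2 * np.pi * 3 * t) + 0.3 * np.sin(2 * np.pi * 55 * t)
       + 0.1 * np.random.randn(t.size))

delta = band_power(lfp, fs, 1.0, 4.0)    # band reported as suppressed in the study
gamma = band_power(lfp, fs, 30.0, 80.0)  # band reported as enhanced in the study
```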
NASA Astrophysics Data System (ADS)
Nakakita, K.
2017-02-01
A technique for simultaneous visualization combining unsteady pressure-sensitive paint (PSP) and Schlieren measurements was introduced. It was applied to a wind tunnel test of a rocket fairing model in the JAXA 2 m x 2 m transonic wind tunnel. The quantitative unsteady pressure field was acquired by the unsteady PSP measurement, which consisted of a high-speed camera, a high-power laser diode, and related equipment. The qualitative flow structure was acquired by the Schlieren measurement using a high-speed camera and a xenon lamp with a blue optical filter. Simultaneous visualization was achieved at a frame rate of 1.6 kfps and revealed the detailed structure of the unsteady flow field caused by shock-wave oscillation due to shock-wave/boundary-layer interaction around the juncture between the cone and the cylinder on the model. The simultaneous measurement results were merged into a movie showing the surface pressure distribution on the rocket fairing and the spatial structure of the shock-wave system associated with transonic buffet. The constructed movie gives a time-series, global view of the transonic buffet flow field on the rocket fairing model.
Choice-reaction time to visual motion with varied levels of simultaneous rotary motion
NASA Technical Reports Server (NTRS)
Clark, B.; Stewart, J. D.
1974-01-01
Twelve airline pilots were studied to determine the effects of whole-body rotation on choice-reaction time to the horizontal motion of a line on a cathode-ray tube. On each trial, one of five levels of visual acceleration and five corresponding proportions of rotary acceleration were presented simultaneously. Reaction time to the visual motion decreased with increasing levels of visual motion and increased with increasing proportions of rotary acceleration. The results conflict with general theories of facilitation during double stimulation but are consistent with a neural-clock model of sensory interaction in choice-reaction time.
Long-term music training modulates the recalibration of audiovisual simultaneity.
Jicol, Crescent; Proulx, Michael J; Pollick, Frank E; Petrini, Karin
2018-07-01
To overcome differences in physical transmission time and neural processing, the brain adaptively recalibrates the point of simultaneity between auditory and visual signals by adapting to audiovisual asynchronies. Here, we examine whether the prolonged recalibration process of passively sensed visual and auditory signals is affected by naturally occurring multisensory training known to enhance audiovisual perceptual accuracy. Hence, we asked a group of drummers, a group of non-drummer musicians and a group of non-musicians to judge the audiovisual simultaneity of musical and non-musical audiovisual events, before and after adaptation with two fixed audiovisual asynchronies. We found that the recalibration for the musicians and drummers was in the opposite direction (sound leading vision) to that of non-musicians (vision leading sound), and changed with both increased music training and increased perceptual accuracy (i.e. ability to detect asynchrony). Our findings demonstrate that long-term musical training reshapes the way humans adaptively recalibrate simultaneity between auditory and visual signals.
Williams, James K.; Entenberg, David; Wang, Yarong; Avivar-Valderas, Alvaro; Padgen, Michael; Clark, Ashley; Aguirre-Ghiso, Julio A.; Castracane, James; Condeelis, John S.
2016-01-01
The tumor microenvironment is recognized as playing a significant role in the behavior of tumor cells and their progression to metastasis. However, tools to manipulate the tumor microenvironment directly, and to image the consequences of this manipulation with single-cell resolution in real time in vivo, are lacking. We describe here a method for the direct, local manipulation of microenvironmental parameters through the use of an implantable Induction Nano Intravital Device (iNANIVID) and simultaneous in vivo visualization of the results at single-cell resolution. As a proof of concept, we deliver a sustained dose of EGF to tumor cells while intravitally imaging their chemotactic response, and we locally induce hypoxia in defined microenvironments in solid tumors. PMID:27790386
Robust feature tracking for endoscopic pose estimation and structure recovery
NASA Astrophysics Data System (ADS)
Speidel, S.; Krappe, S.; Röhl, S.; Bodenstedt, S.; Müller-Stich, B.; Dillmann, R.
2013-03-01
Minimally invasive surgery is a highly complex medical discipline with several difficulties for the surgeon. To alleviate these difficulties, augmented reality can be used for intraoperative assistance. For visualization, the endoscope pose must be known which can be acquired with a SLAM (Simultaneous Localization and Mapping) approach using the endoscopic images. In this paper we focus on feature tracking for SLAM in minimally invasive surgery. Robust feature tracking and minimization of false correspondences is crucial for localizing the endoscope. As sensory input we use a stereo endoscope and evaluate different feature types in a developed SLAM framework. The accuracy of the endoscope pose estimation is validated with synthetic and ex vivo data. Furthermore we test the approach with in vivo image sequences from da Vinci interventions.
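As an illustration of the kind of robust feature tracking with outlier rejection discussed above (using generic OpenCV building blocks rather than the authors' framework, and ignoring the stereo channel), a minimal monocular sketch follows; the intrinsic matrix K, the thresholds, and the function name are assumptions.

```python
import cv2
import numpy as np

def track_and_estimate_pose(img1, img2, K):
    """Match ORB features between two endoscopic frames, reject false
    correspondences with a ratio test and RANSAC, and recover the relative
    camera pose (sketch only)."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    knn = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in knn if m.distance < 0.75 * n.distance]  # Lowe ratio test

    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

    # RANSAC on the essential matrix removes remaining outliers.
    E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                      prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)
    return R, t, int(np.count_nonzero(inliers))
```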
Simultaneous EEG/fMRI analysis of the resonance phenomena in steady-state visual evoked responses.
Bayram, Ali; Bayraktaroglu, Zubeyir; Karahan, Esin; Erdogan, Basri; Bilgic, Basar; Ozker, Muge; Kasikci, Itir; Duru, Adil D; Ademoglu, Ahmet; Oztürk, Cengizhan; Arikan, Kemal; Tarhan, Nevzat; Demiralp, Tamer
2011-04-01
The stability of the steady-state visual evoked potentials (SSVEPs) across trials and subjects makes them a suitable tool for the investigation of the visual system. The reproducible pattern of the frequency characteristics of SSVEPs shows a global amplitude maximum around 10 Hz and additional local maxima around 20 and 40 Hz, which have been argued to represent resonant behavior of damped neuronal oscillators. Simultaneous electroencephalogram/functional magnetic resonance imaging (EEG/fMRI) measurement allows testing of the resonance hypothesis about the frequency-selective increases in SSVEP amplitudes in human subjects, because the total synaptic activity that is represented in the fMRI-Blood Oxygen Level Dependent (fMRI-BOLD) response would not increase but get synchronized at the resonance frequency. For this purpose, 40 healthy volunteers were visually stimulated with flickering light at systematically varying frequencies between 6 and 46 Hz, and the correlations between SSVEP amplitudes and the BOLD responses were computed. The SSVEP frequency characteristics of all subjects showed 3 frequency ranges with an amplitude maximum in each of them, which roughly correspond to alpha, beta and gamma bands of the EEG. The correlation maps between BOLD responses and SSVEP amplitude changes across the different stimulation frequencies within each frequency band showed no significant correlation in the alpha range, while significant correlations were obtained in the primary visual area for the beta and gamma bands. This non-linear relationship between the surface recorded SSVEP amplitudes and the BOLD responses of the visual cortex at stimulation frequencies around the alpha band supports the view that a resonance at the tuning frequency of the thalamo-cortical alpha oscillator in the visual system is responsible for the global amplitude maximum of the SSVEP around 10 Hz. Information gained from the SSVEP/fMRI analyses in the present study might be extrapolated to the EEG/fMRI analysis of the transient event-related potentials (ERPs) in terms of expecting more reliable and consistent correlations between EEG and fMRI responses, when the analyses are carried out on evoked or induced oscillations (spectral perturbations) in separate frequency bands instead of the time-domain ERP peaks.
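A minimal sketch of the core computation implied above, extracting the SSVEP amplitude at each stimulation frequency and correlating it with BOLD response estimates across frequencies, is given below; it is not the authors' pipeline, and all arrays are random placeholders.

```python
import numpy as np
from scipy.stats import pearsonr

def ssvep_amplitude(eeg, fs, stim_freq):
    """Amplitude of the SSVEP at the stimulation frequency, taken from the
    FFT of an averaged EEG segment."""
    spectrum = np.abs(np.fft.rfft(eeg)) / eeg.size
    freqs = np.fft.rfftfreq(eeg.size, d=1 / fs)
    return spectrum[np.argmin(np.abs(freqs - stim_freq))]

# Hypothetical data: one averaged EEG segment and one BOLD beta per frequency.
fs = 500.0
stim_freqs = np.arange(6, 47, 2)
eeg_segments = [np.random.randn(int(2 * fs)) for _ in stim_freqs]   # placeholders
bold_betas = np.random.randn(stim_freqs.size)                       # placeholders

amps = np.array([ssvep_amplitude(seg, fs, f) for seg, f in zip(eeg_segments, stim_freqs)])
r, p = pearsonr(amps, bold_betas)   # correlation across stimulation frequencies
```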
Interactions of Prosthetic and Natural Vision in Animals With Local Retinal Degeneration
Lorach, Henri; Lei, Xin; Galambos, Ludwig; Kamins, Theodore; Mathieson, Keith; Dalal, Roopa; Huie, Philip; Harris, James; Palanker, Daniel
2015-01-01
Purpose Prosthetic restoration of partial sensory loss leads to interactions between artificial and natural inputs. Ideally, the rehabilitation should allow perceptual fusion of the two modalities. Here we studied the interactions between normal and prosthetic vision in a rodent model of local retinal degeneration. Methods Implantation of a photovoltaic array in the subretinal space of normally sighted rats induced local degeneration of the photoreceptors above the chip, and the inner retinal neurons in this area were electrically stimulated by the photovoltaic implant powered by near-infrared (NIR) light. We studied prosthetic and natural visually evoked potentials (VEP) in response to simultaneous stimulation by NIR and visible light patterns. Results We demonstrate that electrical and natural VEPs summed linearly in the visual cortex, and both responses decreased under brighter ambient light. Responses to visible light flashes increased over 3 orders of magnitude of contrast (flash/background), while for electrical stimulation the contrast range was limited to 1 order of magnitude. The maximum amplitude of the prosthetic VEP was three times lower than the maximum response to a visible flash over the same area on the retina. Conclusions Ambient light affects prosthetic responses, albeit much less than responses to visible stimuli. Prosthetic representation of contrast in the visual scene can be encoded, to a limited extent, by the appropriately calibrated stimulus intensity, which also depends on the ambient light conditions. Such calibration will be important for patients combining central prosthetic vision with natural peripheral sight, such as in age-related macular degeneration. PMID:26618643
Rudkin, Adam K; Gray, Tim L; Awadalla, Mona; Craig, Jamie E
2010-10-01
We present a case of a 63-year-old woman who presented to an ED with bifrontal headache, nausea and vomiting and reduced visual acuity. Examination revealed bilateral elevated intraocular pressures, corneal haze, shallow anterior chambers and poorly reactive, mid-dilated pupils. A diagnosis was made of simultaneous bilateral acute angle closure glaucoma. A complete drug history revealed that she had been using an over-the-counter cold and flu remedy whose active ingredients included Atropa belladonna, an herb with anticholinergic properties. It is likely that drug-induced dilatation of the individual's pupils precipitated this angle closure emergency. In the report we discuss the risk factors for angle closure glaucoma, and review the local and systemic drugs known to trigger this sight-threatening emergency.
Wilson, Tony W; McDermott, Timothy J; Mills, Mackenzie S; Coolidge, Nathan M; Heinrichs-Graham, Elizabeth
2018-05-01
Transcranial direct-current stimulation (tDCS) is now a widely used method for modulating the human brain, but the resulting physiological effects are not understood. Recent studies have combined magnetoencephalography (MEG) with simultaneous tDCS to evaluate online changes in occipital alpha and gamma oscillations, but no study to date has quantified the offline (i.e., after tDCS) alterations in these responses. Thirty-five healthy adults received active or sham anodal tDCS to the occipital cortices, and then completed a visual stimulation paradigm during MEG that is known to elicit robust gamma and alpha oscillations. The resulting MEG data were imaged and peak voxel time series were extracted to evaluate tDCS effects. We found that tDCS to the occipital increased the amplitude of local gamma oscillations, and basal alpha levels during the baseline. tDCS was also associated with network-level effects, including increased gamma oscillations in the prefrontal cortex, parietal, and other visual attention regions. Finally, although tDCS did not modulate peak gamma frequency, this variable was inversely correlated with gamma amplitude, which is consistent with a GABA-gamma link. In conclusion, tDCS alters gamma oscillations and basal alpha levels. The net offline effects on gamma activity are consistent with the view that anodal tDCS decreases local GABA.
ERIC Educational Resources Information Center
Marks, William J.; Jones, W. Paul; Loe, Scott A.
2013-01-01
This study investigated the use of compressed speech as a modality for assessment of the simultaneous processing function for participants with visual impairment. A 24-item compressed speech test was created using a sound editing program to randomly remove sound elements from aural stimuli, holding pitch constant, with the objective to emulate the…
Knoblauch, Andreas; Palm, Günther
2002-09-01
To investigate scene segmentation in the visual system we present a model of two reciprocally connected visual areas using spiking neurons. Area P corresponds to the orientation-selective subsystem of the primary visual cortex, while the central visual area C is modeled as associative memory representing stimulus objects according to Hebbian learning. Without feedback from area C, a single stimulus results in relatively slow and irregular activity, synchronized only for neighboring patches (slow state), while in the complete model activity is faster with an enlarged synchronization range (fast state). When presenting a superposition of several stimulus objects, scene segmentation happens on a time scale of hundreds of milliseconds by alternating epochs of the slow and fast states, where neurons representing the same object are simultaneously in the fast state. Correlation analysis reveals synchronization on different time scales as found in experiments (designated as tower, castle, and hill peaks). On the fast time scale (tower peaks, gamma frequency range), recordings from two sites coding either different or the same object lead to correlograms that are either flat or exhibit oscillatory modulations with a central peak. This is in agreement with experimental findings, whereas standard phase-coding models would predict shifted peaks in the case of different objects.
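The correlation analysis mentioned above is typically based on cross-correlograms of spike trains; below is a generic sketch of such a correlogram (not the model code from the paper), with hypothetical spike times.

```python
import numpy as np

def cross_correlogram(spikes_a, spikes_b, max_lag=0.1, bin_size=0.002):
    """Histogram of pairwise spike-time differences (b minus a) within +/-max_lag,
    the standard way to reveal synchronization on different time scales."""
    edges = np.arange(-max_lag, max_lag + bin_size, bin_size)
    diffs = []
    for t in spikes_a:
        d = spikes_b - t
        diffs.append(d[np.abs(d) <= max_lag])
    counts, _ = np.histogram(np.concatenate(diffs), bins=edges)
    return counts, edges

# Hypothetical spike trains (seconds) from two recording sites.
rng = np.random.default_rng(1)
site1 = np.sort(rng.uniform(0, 10, 400))
site2 = np.sort(site1 + rng.normal(0.0, 0.005, site1.size))  # near-synchronous partner
counts, edges = cross_correlogram(site1, site2)
```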
On the effects of multimodal information integration in multitasking.
Stock, Ann-Kathrin; Gohil, Krutika; Huster, René J; Beste, Christian
2017-07-07
There have recently been considerable advances in our understanding of the neuronal mechanisms underlying multitasking, but the role of multimodal integration for this faculty has remained rather unclear. We examined this issue by comparing different modality combinations in a multitasking (stop-change) paradigm. In-depth neurophysiological analyses of event-related potentials (ERPs) were conducted to complement the obtained behavioral data. Specifically, we applied signal decomposition using second order blind identification (SOBI) to the multi-subject ERP data and source localization. We found that both general multimodal information integration and modality-specific aspects (potentially related to task difficulty) modulate behavioral performance and associated neurophysiological correlates. Simultaneous multimodal input generally increased early attentional processing of visual stimuli (i.e. P1 and N1 amplitudes) as well as measures of cognitive effort and conflict (i.e. central P3 amplitudes). Yet, tactile-visual input caused larger impairments in multitasking than audio-visual input. General aspects of multimodal information integration modulated the activity in the premotor cortex (BA 6) as well as different visual association areas concerned with the integration of visual information with input from other modalities (BA 19, BA 21, BA 37). On top of this, differences in the specific combination of modalities also affected performance and measures of conflict/effort originating in prefrontal regions (BA 6).
Three main paradigms of simultaneous localization and mapping (SLAM) problem
NASA Astrophysics Data System (ADS)
Imani, Vandad; Haataja, Keijo; Toivanen, Pekka
2018-04-01
Simultaneous Localization and Mapping (SLAM) is one of the most challenging research areas within computer and machine vision for automated scene commentary and explanation, and it has been a developing research area in robotics in recent years. By utilizing SLAM, a robot can estimate its position at distinct points in time, which indicates the robot's trajectory, while simultaneously generating a map of the environment. SLAM's distinctive trait is that it estimates the location of the robot while building a map, and it is applicable in various types of environment: indoor, outdoor, aerial, underwater, underground and space. Several approaches have been investigated for using SLAM in these distinct environments. The purpose of this paper is to provide an accurate, perceptive review of SLAM case studies relying on laser/ultrasonic sensors and cameras as perception input data. In addition, we mainly focus on three paradigms of the SLAM problem, with their pros and cons. In future work, intelligent methods and new ideas will be applied to visual SLAM to estimate the motion of an intelligent underwater robot and to build a feature map of the marine environment.
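The three paradigms usually distinguished are EKF-based, particle-filter-based and graph-based SLAM. As a toy illustration of the graph-based formulation only (not drawn from the paper), the sketch below optimises a 1-D pose graph with odometry and one loop-closure constraint as a nonlinear least-squares problem; all measurements are invented.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy 1-D pose graph: poses x0..x3, odometry constraints between successive
# poses and one loop-closure constraint between x3 and x0.
odometry = [(0, 1, 1.0), (1, 2, 1.1), (2, 3, 0.9)]   # (i, j, measured x_j - x_i)
loop_closures = [(3, 0, -3.05)]

def residuals(x):
    res = [x[0]]                                      # anchor the first pose at 0
    for i, j, z in odometry + loop_closures:
        res.append((x[j] - x[i]) - z)
    return np.array(res)

x0 = np.array([0.0, 1.0, 2.0, 3.0])                   # initial guess from odometry
sol = least_squares(residuals, x0)
print("optimised poses:", np.round(sol.x, 3))
```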
The Pivotal Role of the Right Parietal Lobe in Temporal Attention.
Agosta, Sara; Magnago, Denise; Tyler, Sarah; Grossman, Emily; Galante, Emanuela; Ferraro, Francesco; Mazzini, Nunzia; Miceli, Gabriele; Battelli, Lorella
2017-05-01
The visual system is extremely efficient at detecting events across time even at very fast presentation rates; however, discriminating the identity of those events is much slower and requires attention over time, a mechanism with a much coarser resolution [Cavanagh, P., Battelli, L., & Holcombe, A. O. Dynamic attention. In A. C. Nobre & S. Kastner (Eds.), The Oxford handbook of attention (pp. 652-675). Oxford: Oxford University Press, 2013]. Patients affected by right parietal lesion, including the TPJ, are severely impaired in discriminating events across time in both visual fields [Battelli, L., Cavanagh, P., & Thornton, I. M. Perception of biological motion in parietal patients. Neuropsychologia, 41, 1808-1816, 2003]. One way to test this ability is to use a simultaneity judgment task, whereby participants are asked to indicate whether two events occurred simultaneously or not. We psychophysically varied the frequency rate of four flickering disks, and on most of the trials, one disk (either in the left or right visual field) was flickering out-of-phase relative to the others. We asked participants to report whether two left-or-right-presented disks were simultaneous or not. We tested a total of 23 right and left parietal lesion patients in Experiment 1, and only right parietal patients showed impairment in both visual fields while their low-level visual functions were normal. Importantly, to causally link the right TPJ to the relative timing processing, we ran a TMS experiment on healthy participants. Participants underwent three stimulation sessions and performed the same simultaneity judgment task before and after 20 min of low-frequency inhibitory TMS over right TPJ, left TPJ, or early visual area as a control. rTMS over the right TPJ caused a bilateral impairment in the simultaneity judgment task, whereas rTMS over left TPJ or over early visual area did not affect performance. Altogether, our results directly link the right TPJ to the processing of relative time.
Kobayashi, Takuma; Haruta, Makito; Sasagawa, Kiyotaka; Matsumata, Miho; Eizumi, Kawori; Kitsumoto, Chikara; Motoyama, Mayumi; Maezawa, Yasuyo; Ohta, Yasumi; Noda, Toshihiko; Tokuda, Takashi; Ishikawa, Yasuyuki; Ohta, Jun
2016-01-01
To better understand the brain function based on neural activity, a minimally invasive analysis technology in a freely moving animal is necessary. Such technology would provide new knowledge in neuroscience and contribute to regenerative medical techniques and prosthetic care. An application that combines optogenetics for voluntarily stimulating nerves, imaging to visualize neural activity, and a wearable micro-instrument for implantation into the brain could meet the abovementioned demand. To this end, a micro-device that can be applied to the brain less invasively and a system for controlling the device have been newly developed in this study. Since the novel implantable device has dual LEDs and a CMOS image sensor, photostimulation and fluorescence imaging can be performed simultaneously. The device enables bidirectional communication with the brain by means of light. In the present study, the device was evaluated in an in vitro experiment using a new on-chip 3D neuroculture with an extracellular matrix gel and an in vivo experiment involving regenerative medical transplantation and gene delivery to the brain by using both photosensitive channel and fluorescent Ca2+ indicator. The device succeeded in activating cells locally by selective photostimulation, and the physiological Ca2+ dynamics of neural cells were visualized simultaneously by fluorescence imaging. PMID:26878910
NASA Astrophysics Data System (ADS)
Lareau, Etienne; Lesage, Frederic; Pouliot, Philippe; Nguyen, Dang; Le Lan, Jerome; Sawan, Mohamad
2011-09-01
Functional neuroimaging is becoming a valuable tool in cognitive research and clinical applications. The clinical context brings specific constraints that include the requirement of a high channel count to cover the whole head, high sensitivity for single event detection, and portability for long-term bedside monitoring. For epilepsy and stroke monitoring, the combination of electroencephalography (EEG) and functional near-infrared spectroscopy (NIRS) is expected to provide useful clinical information, and efforts have been deployed to create prototypes able to simultaneously acquire both measurement modalities. However, to the best of our knowledge, existing systems lack portability, NIRS sensitivity, or have low channel count. We present a battery-powered, portable system with potentially up to 32 EEG channels, 32 NIRS light sources, and 32 detectors. Avalanche photodiodes allow for high NIRS sensitivity and the autonomy of the system is over 24 h. A reduced channel count prototype with 8 EEG channels, 8 sources, and 8 detectors was tested on phantoms. Further validation was done on five healthy adults using a visual stimulation protocol to detect local hemodynamic changes and visually evoked potentials. Results show good concordance with literature regarding functional activations and suggest sufficient performance for clinical use, provided some minor adjustments were made.
Image fusion for visualization of hepatic vasculature and tumors
NASA Astrophysics Data System (ADS)
Chou, Jin-Shin; Chen, Shiuh-Yung J.; Sudakoff, Gary S.; Hoffmann, Kenneth R.; Chen, Chin-Tu; Dachman, Abraham H.
1995-05-01
We have developed segmentation and simultaneous display techniques to facilitate the visualization of the three-dimensional spatial relationships between organ structures and organ vasculature. We concentrate on the visualization of the liver based on spiral computed tomography images. Surface-based 3-D rendering and maximal intensity projection (MIP) algorithms are used for data visualization. To extract the liver from the series of images accurately and efficiently, we have developed a user-friendly interactive program with a deformable-model segmentation. Surface rendering techniques are used to visualize the extracted structures; adjacent contours are aligned and fitted with a Bezier surface to yield a smooth surface. Visualization of the vascular structures, portal and hepatic veins, is achieved by applying a MIP technique to the extracted liver volume. To integrate the extracted structures, they are surface-rendered, their MIP images are aligned, and a color table is designed for simultaneous display of the combined liver/tumor and vasculature images. By combining the 3-D surface rendering and MIP techniques, portal veins, hepatic veins, and hepatic tumor can be inspected simultaneously and their spatial relationships can be more easily perceived. The proposed technique will be useful for visualization of both hepatic neoplasm and vasculature in surgical planning for tumor resection or living-donor liver transplantation.
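The maximum intensity projection step described above reduces, in essence, to a per-voxel maximum along one axis of the volume; a minimal sketch follows (not the authors' software), with a random placeholder volume and a hypothetical liver mask.

```python
import numpy as np

def maximum_intensity_projection(volume, axis=0):
    """Project a CT volume by keeping the brightest voxel along one axis,
    which makes contrast-filled vessels stand out."""
    return volume.max(axis=axis)

# Hypothetical liver sub-volume (slices x rows x cols) in Hounsfield units.
volume = np.random.randint(-100, 300, size=(60, 256, 256)).astype(np.int16)
mip_axial = maximum_intensity_projection(volume, axis=0)

# Restrict the MIP to an extracted liver region (placeholder segmentation mask).
liver_mask = np.zeros_like(volume, dtype=bool)
liver_mask[10:50, 60:200, 60:200] = True
masked = np.where(liver_mask, volume, volume.min())
mip_liver = maximum_intensity_projection(masked, axis=0)
```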
López, Elena; García, Sergio; Barea, Rafael; Bergasa, Luis M.; Molinos, Eduardo J.; Arroyo, Roberto; Romera, Eduardo; Pardo, Samuel
2017-01-01
One of the main challenges of aerial robot navigation in indoor or GPS-denied environments is position estimation using only the available onboard sensors. This paper presents a Simultaneous Localization and Mapping (SLAM) system that remotely calculates the pose and environment map of different low-cost commercial aerial platforms, whose onboard computing capacity is usually limited. The proposed system adapts to the sensory configuration of the aerial robot, by integrating different state-of-the-art SLAM methods based on vision, laser and/or inertial measurements using an Extended Kalman Filter (EKF). To do this, a minimum onboard sensory configuration is assumed, consisting of a monocular camera, an Inertial Measurement Unit (IMU) and an altimeter. This makes it possible to improve the results of well-known monocular visual SLAM methods (LSD-SLAM and ORB-SLAM are tested and compared in this work) by solving scale ambiguity and providing additional information to the EKF. When payload and computational capabilities permit, a 2D laser sensor can be easily incorporated into the SLAM system, obtaining a local 2.5D map and a footprint estimation of the robot position that improves the 6D pose estimation through the EKF. We present some experimental results with two different commercial platforms, and validate the system by applying it to their position control. PMID:28397758
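One simple way to resolve the monocular scale ambiguity mentioned above is to estimate a metric scale factor from the altimeter; the sketch below uses a scalar Kalman filter for that purpose as a simplified stand-in for the full EKF described in the paper, with invented measurements and noise parameters.

```python
import numpy as np

def estimate_scale(z_mono, z_alt, q=1e-4, r=0.05**2, lam0=1.0, p0=1.0):
    """Scalar Kalman filter on the metric scale factor lam, using the model
    z_alt[k] ~ lam * z_mono[k] (random-walk state, altimeter noise variance r)."""
    lam, p, history = lam0, p0, []
    for zm, za in zip(z_mono, z_alt):
        p += q                               # predict (random walk on lam)
        h = zm                               # measurement Jacobian d(z_alt)/d(lam)
        k = p * h / (h * h * p + r)          # Kalman gain
        lam += k * (za - h * lam)            # update with altimeter residual
        p *= (1.0 - k * h)
        history.append(lam)
    return lam, history

# Hypothetical flight segment: monocular (scale-ambiguous) and altimeter heights.
rng = np.random.default_rng(2)
true_scale = 2.3
z_mono = np.linspace(0.2, 1.5, 200)
z_alt = true_scale * z_mono + rng.normal(0.0, 0.05, z_mono.size)
lam, _ = estimate_scale(z_mono, z_alt)
print(f"estimated scale: {lam:.2f} (true {true_scale})")
```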
Local Versus Global Effects of Isoflurane Anesthesia on Visual Processing in the Fly Brain.
Cohen, Dror; Zalucki, Oressia H; van Swinderen, Bruno; Tsuchiya, Naotsugu
2016-01-01
What characteristics of neural activity distinguish the awake and anesthetized brain? Drugs such as isoflurane abolish behavioral responsiveness in all animals, implying evolutionarily conserved mechanisms. However, it is unclear whether this conservation is reflected at the level of neural activity. Studies in humans have shown that anesthesia is characterized by spatially distinct spectral and coherence signatures that have also been implicated in the global impairment of cortical communication. We questioned whether anesthesia has similar effects on global and local neural processing in one of the smallest brains, that of the fruit fly (Drosophila melanogaster). Using a recently developed multielectrode technique, we recorded local field potentials from different areas of the fly brain simultaneously, while manipulating the concentration of isoflurane. Flickering visual stimuli ('frequency tags') allowed us to track evoked responses in the frequency domain and measure the effects of isoflurane throughout the brain. We found that isoflurane reduced power and coherence at the tagging frequency (13 or 17 Hz) in central brain regions. Unexpectedly, isoflurane increased power and coherence at twice the tag frequency (26 or 34 Hz) in the optic lobes of the fly, but only for specific stimulus configurations. By modeling the periodic responses, we show that the increase in power in peripheral areas can be attributed to local neuroanatomy. We further show that the effects on coherence can be explained by impacted signal-to-noise ratios. Together, our results show that general anesthesia has distinct local and global effects on neuronal processing in the fruit fly brain. PMID:27517084
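The frequency-tagging measures discussed above, power and coherence at the tag frequency and its second harmonic, can be computed with standard spectral estimators; a generic sketch (not the authors' analysis) with simulated signals follows.

```python
import numpy as np
from scipy.signal import coherence, welch

fs, tag = 1000.0, 13.0                       # sampling rate (Hz) and tag frequency
t = np.arange(0, 20, 1 / fs)

# Hypothetical LFPs from a peripheral and a central electrode, both driven
# by the flickering stimulus plus independent noise.
drive = np.sin(2 * np.pi * tag * t)
lfp_periph = drive + 0.5 * np.random.randn(t.size)
lfp_central = 0.6 * drive + 0.5 * np.random.randn(t.size)

f, pxx = welch(lfp_central, fs=fs, nperseg=2048)          # power spectrum
f, cxy = coherence(lfp_periph, lfp_central, fs=fs, nperseg=2048)

idx_tag = np.argmin(np.abs(f - tag))
idx_harm = np.argmin(np.abs(f - 2 * tag))                 # second harmonic (26 Hz)
print(f"power@tag={pxx[idx_tag]:.3f}, coherence@tag={cxy[idx_tag]:.2f}, "
      f"coherence@2xtag={cxy[idx_harm]:.2f}")
```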
Ostrowski, Anja; Nordmeyer, Daniel; Boreham, Alexander; Holzhausen, Cornelia; Mundhenk, Lars; Graf, Christina; Meinke, Martina C; Vogt, Annika; Hadam, Sabrina; Lademann, Jürgen; Rühl, Eckart; Alexiev, Ulrike
2015-01-01
The increasing interest and recent developments in nanotechnology pose previously unparalleled challenges in understanding the effects of nanoparticles on living tissues. Despite significant progress in in vitro cell and tissue culture technologies, observations on particle distribution and tissue responses in whole organisms are still indispensable. In addition to a thorough understanding of complex tissue responses, which is the domain of expert pathologists, the localization of particles at their sites of interaction with living structures is essential to complete the picture. In this review we will describe and compare different imaging techniques for localizing inorganic as well as organic nanoparticles in tissues, cells and subcellular compartments. The visualization techniques include well-established methods, such as standard light, fluorescence, transmission electron and scanning electron microscopy, as well as more recent developments, such as light and electron microscopic autoradiography, fluorescence lifetime imaging, spectral imaging and linear unmixing, superresolution structured illumination, Raman microspectroscopy and X-ray microscopy. Importantly, all methodologies described allow for the simultaneous visualization of nanoparticles and evaluation of cell and tissue changes that are of prime interest for toxicopathologic studies. However, the different approaches vary in terms of applicability for specific particles, sensitivity, optical resolution, technical requirements and thus availability, and effects of labeling on particle properties. Specific bottlenecks of each technology are discussed in detail. Interpretation of particle localization data from any of these techniques should therefore respect their specific merits and limitations, as no single approach combines all desired properties. PMID:25671170
Nyland, Jennifer F.; Bai, Jennifer J. K.; Katz, Howard E.; Silbergeld, Ellen K.
2009-01-01
Engineered nanoparticles (NPs) possess a range of biological activity. In vitro methods for assessing toxicity and efficacy would be enhanced by simultaneous quantitative information on the behavior of NPs in culture systems and signals of cell response. We have developed a method for visualizing NPs within cells using standard flow cytometric techniques and uniquely designed spherical siloxane NPs with an embedded (covalently bound) dansylamide dye. This method allowed NP visualization without obscuring detection of relevant biomarkers of cell subtype, activation state, and other events relevant to assessing bioactivity. We determined that NPs penetrated cells and induced a range of biological signals consistent with activation and costimulation. These results indicate that NPs may affect cell function at concentrations below those inducing cytotoxicity or apoptosis and demonstrate a novel method to image both localization of NPs and cell-level effects. PMID:19523425
Hamker, Fred H; Wiltschut, Jan
2007-09-01
Most computational models of coding are based on a generative model according to which the feedback signal aims to reconstruct the visual scene as closely as possible. We here explore an alternative model of feedback. It is derived from studies of attention and is thus probably more flexible with respect to attentive processing in higher brain areas. According to this model, feedback implements a gain increase of the feedforward signal. We use a dynamic model with presynaptic inhibition and Hebbian learning to simultaneously learn feedforward and feedback weights. The weights converge to localized, oriented, and bandpass filters similar to those found in V1. Due to presynaptic inhibition the model predicts the organization of receptive fields within the feedforward pathway, whereas feedback primarily serves to tune early visual processing according to the needs of the task.
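The model above combines Hebbian learning with presynaptic inhibition and feedback gain modulation; as a loose stand-in for the Hebbian component only (not the authors' model), the sketch below applies a generic Oja-rule weight update to placeholder input vectors. On its own this rule would not reproduce the diversity of localized, oriented filters reported for the full model.

```python
import numpy as np

def oja_update(W, x, lr=0.01):
    """One normalized Hebbian (Oja) step for a bank of linear units.
    W: (n_units, n_inputs) weights, x: (n_inputs,) input vector."""
    y = W @ x                                 # feedforward responses
    return W + lr * (np.outer(y, x) - (y ** 2)[:, None] * W)

# Hypothetical training loop on random vectors standing in for whitened
# natural-image patches (12x12 pixels, 16 units).
rng = np.random.default_rng(3)
W = rng.normal(0, 0.1, size=(16, 144))
for _ in range(5000):
    patch = rng.normal(0, 1, 144)             # placeholder for a whitened patch
    W = oja_update(W, patch, lr=0.005)
```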
Autonomous assistance navigation for robotic wheelchairs in confined spaces.
Cheein, Fernando Auat; Carelli, Ricardo; De la Cruz, Celso; Muller, Sandra; Bastos Filho, Teodiano F
2010-01-01
In this work, a visual interface for assisting the navigation of a robotic wheelchair is presented. The visual interface is developed for navigation in confined spaces such as narrow corridors or corridor ends. The interface provides two navigation modes: non-autonomous and autonomous. In the non-autonomous mode, the robotic wheelchair is driven by means of a hand joystick, which directs the motion of the vehicle within the environment. The autonomous mode is used when the user of the wheelchair has to turn (90, 90 or 180 degrees) within the environment. The turning strategy is performed by a maneuverability algorithm compatible with the kinematics of the wheelchair and by a SLAM (Simultaneous Localization and Mapping) algorithm. The SLAM algorithm provides the interface with information concerning the layout of the environment and the pose (position and orientation) of the wheelchair within it. Experimental and statistical results for the interface are also shown in this work.
Dadaev, Tokhir; Leongamornlert, Daniel A; Saunders, Edward J; Eeles, Rosalind; Kote-Jarai, Zsofia
2016-03-15
In this article, we present LocusExplorer, a data visualization and exploration tool for genetic association data. LocusExplorer is written in R using the Shiny library, providing access to powerful R-based functions through a simple user interface. LocusExplorer allows users to simultaneously display genetic, statistical and biological data for humans in a single image and allows dynamic zooming and customization of the plot features. Publication quality plots may then be produced in a variety of file formats. LocusExplorer is open source and runs through R and a web browser. It is available at www.oncogenetics.icr.ac.uk/LocusExplorer/ or can be installed locally and the source code accessed from https://github.com/oncogenetics/LocusExplorer. Contact: tokhir.dadaev@icr.ac.uk. © The Author 2015. Published by Oxford University Press.
Davidesco, Ido; Harel, Michal; Ramot, Michal; Kramer, Uri; Kipervasser, Svetlana; Andelman, Fani; Neufeld, Miri Y; Goelman, Gadi; Fried, Itzhak; Malach, Rafael
2013-01-16
One of the puzzling aspects in the visual attention literature is the discrepancy between electrophysiological and fMRI findings: whereas fMRI studies reveal strong attentional modulation in the earliest visual areas, single-unit and local field potential studies yielded mixed results. In addition, it is not clear to what extent spatial attention effects extend from early to high-order visual areas. Here we addressed these issues using electrocorticography recordings in epileptic patients. The patients performed a task that allowed simultaneous manipulation of both spatial and object-based attention. They were presented with composite stimuli, consisting of a small object (face or house) superimposed on a large one, and in separate blocks, were instructed to attend one of the objects. We found a consistent increase in broadband high-frequency (30-90 Hz) power, but not in visual evoked potentials, associated with spatial attention starting with V1/V2 and continuing throughout the visual hierarchy. The magnitude of the attentional modulation was correlated with the spatial selectivity of each electrode and its distance from the occipital pole. Interestingly, the latency of the attentional modulation showed a significant decrease along the visual hierarchy. In addition, electrodes placed over high-order visual areas (e.g., fusiform gyrus) showed both effects of spatial and object-based attention. Overall, our results help to reconcile previous observations of discrepancy between fMRI and electrophysiology. They also imply that spatial attention effects can be found both in early and high-order visual cortical areas, in parallel with their stimulus tuning properties.
Action-outcome learning and prediction shape the window of simultaneity of audiovisual outcomes.
Desantis, Andrea; Haggard, Patrick
2016-08-01
To form a coherent representation of the objects around us, the brain must group the different sensory features composing these objects. Here, we investigated whether actions contribute to this grouping process. In particular, we assessed whether action-outcome learning and prediction contribute to audiovisual temporal binding. Participants were presented with two audiovisual pairs: one pair was triggered by a left action, and the other by a right action. In a later test phase, the audio and visual components of these pairs were presented at different onset times. Participants judged whether they were simultaneous or not. To assess the role of action-outcome prediction on audiovisual simultaneity, each action triggered either the same audiovisual pair as in the learning phase ('predicted' pair), or the pair that had previously been associated with the other action ('unpredicted' pair). We found that the time window within which auditory and visual events appeared simultaneous increased for predicted compared to unpredicted pairs. However, no change in audiovisual simultaneity was observed when audiovisual pairs followed visual cues, rather than voluntary actions. This suggests that only action-outcome learning promotes temporal grouping of audio and visual effects. In a second experiment we observed that changes in audiovisual simultaneity depend not only on our ability to predict which outcomes our actions generate, but also on learning the delay between the action and the multisensory outcome. When participants learned that the delay between action and audiovisual pair was variable, the window of audiovisual simultaneity for predicted pairs increased, relative to a fixed action-outcome pair delay. This suggests that participants learn action-based predictions of audiovisual outcome, and adapt their temporal perception of outcome events based on such predictions. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
Audio-Visual Speaker Diarization Based on Spatiotemporal Bayesian Fusion.
Gebru, Israel D; Ba, Sileye; Li, Xiaofei; Horaud, Radu
2018-05-01
Speaker diarization consists of assigning speech signals to people engaged in a dialogue. An audio-visual spatiotemporal diarization model is proposed. The model is well suited for challenging scenarios that consist of several participants engaged in multi-party interaction while they move around and turn their heads towards the other participants rather than facing the cameras and the microphones. Multiple-person visual tracking is combined with multiple speech-source localization in order to tackle the speech-to-person association problem. The latter is solved within a novel audio-visual fusion method on the following grounds: binaural spectral features are first extracted from a microphone pair, then a supervised audio-visual alignment technique maps these features onto an image, and finally a semi-supervised clustering method assigns binaural spectral features to visible persons. The main advantage of this method over previous work is that it processes, in a principled way, speech signals uttered simultaneously by multiple persons. The diarization itself is cast into a latent-variable temporal graphical model that infers speaker identities and speech turns, based on the output of an audio-visual association process, executed at each time slice, and on the dynamics of the diarization variable itself. The proposed formulation yields an efficient exact inference procedure. A novel dataset, which contains audio-visual training data as well as a number of scenarios involving several participants engaged in formal and informal dialogue, is introduced. The proposed method is thoroughly tested and benchmarked with respect to several state-of-the-art diarization algorithms.
Visualization of stress wave propagation via air-coupled acoustic emission sensors
NASA Astrophysics Data System (ADS)
Rivey, Joshua C.; Lee, Gil-Yong; Yang, Jinkyu; Kim, Youngkey; Kim, Sungchan
2017-02-01
We experimentally demonstrate the feasibility of visualizing stress waves propagating in plates using air-coupled acoustic emission sensors. Specifically, we employ a device that embeds arrays of microphones around an optical lens in a helical pattern. By implementing a beamforming technique, this remote sensing system allows us to record wave propagation events in situ via a single-shot and full-field measurement. This is a significant improvement over the conventional wave propagation tracking approaches based on laser Doppler vibrometry or digital image correlation techniques. In this paper, we focus on demonstrating the feasibility and efficacy of this air-coupled acoustic emission technique by using large metallic plates exposed to external impacts. The visualization results of stress wave propagation will be shown under various impact scenarios. The proposed technique can be used to characterize and localize damage by detecting the attenuation, reflection, and scattering of stress waves that occur at damage locations. This can ultimately lead to the development of new structural health monitoring and nondestructive evaluation methods for identifying hidden cracks or delaminations in metallic or composite plate structures, simultaneously negating the need for mounted contact sensors.
Classification of EEG abnormalities in partial epilepsy with simultaneous EEG-fMRI recordings.
Pedreira, C; Vaudano, A E; Thornton, R C; Chaudhary, U J; Vulliemoz, S; Laufs, H; Rodionov, R; Carmichael, D W; Lhatoo, S D; Guye, M; Quian Quiroga, R; Lemieux, L
2014-10-01
Scalp EEG recordings and the classification of interictal epileptiform discharges (IED) in patients with epilepsy provide valuable information about the epileptogenic network, particularly by defining the boundaries of the "irritative zone" (IZ), and hence are helpful during pre-surgical evaluation of patients with severe refractory epilepsies. The current detection and classification of epileptiform signals essentially rely on expert observers. This is a very time-consuming procedure, which also leads to inter-observer variability. Here, we propose a novel approach to automatically classify epileptic activity and show how this method provides critical and reliable information related to the IZ localization beyond the one provided by previous approaches. We applied Wave_clus, an automatic spike sorting algorithm, for the classification of IED visually identified from pre-surgical simultaneous Electroencephalogram-functional Magnetic Resonance Imaging (EEG-fMRI) recordings in 8 patients affected by refractory partial epilepsy who were candidates for surgery. For each patient, two fMRI analyses were performed: one based on the visual classification and one based on the algorithmic sorting. This novel approach successfully identified a total of 29 IED classes (compared to 26 for visual identification). The general concordance between methods was good, providing a full match of EEG patterns in 2 cases, additional EEG information in 2 other cases and, in general, covering EEG patterns of the same areas as expert classification in 7 of the 8 cases. Most notably, evaluation of the method with EEG-fMRI data analysis showed hemodynamic maps related to the majority of IED classes, representing improved performance compared with the visual IED classification-based analysis (72% versus 50%). Furthermore, the IED-related BOLD changes revealed by using the algorithm were localized within the presumed IZ for a larger number of IED classes (9) in a greater number of patients than the expert classification (7 and 5, respectively). In contrast, in only one case did the new algorithm result in fewer classes and activation areas. We propose that the use of automated spike sorting algorithms to classify IED provides an efficient tool for mapping IED-related fMRI changes and increases the EEG-fMRI clinical value for the pre-surgical assessment of patients with severe epilepsy. Copyright © 2014 Elsevier Inc. All rights reserved.
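This entry applies Wave_clus, a wavelet-feature spike-sorting algorithm, to group visually marked IEDs into classes. The sketch below is only a hedged stand-in for that idea, not the authors' pipeline: it extracts wavelet coefficients from equal-length IED epochs, keeps the most variable coefficients (Wave_clus instead uses a Kolmogorov-Smirnov criterion and superparamagnetic clustering), and groups events with k-means. The epoch array, class count, and parameters are illustrative assumptions.

```python
# Simplified illustration of wavelet-feature IED clustering (not Wave_clus itself).
import numpy as np
import pywt
from sklearn.cluster import KMeans

def wavelet_feature_matrix(epochs, wavelet="haar", level=4, n_keep=10):
    """epochs: (n_events, n_samples) array of equal-length IED waveforms."""
    rows = [np.concatenate(pywt.wavedec(x, wavelet, level=level)) for x in epochs]
    C = np.vstack(rows)
    # keep the coefficients that vary most across events; Wave_clus uses a
    # Kolmogorov-Smirnov criterion here, variance ranking is a simplification
    keep = np.sort(np.argsort(C.var(axis=0))[::-1][:n_keep])
    return C[:, keep]

def classify_ieds(epochs, n_classes=3, seed=0):
    feats = wavelet_feature_matrix(epochs)
    return KMeans(n_clusters=n_classes, n_init=10, random_state=seed).fit_predict(feats)

rng = np.random.default_rng(0)
toy_epochs = rng.standard_normal((200, 128))   # 200 detected events, 128 samples each
print(classify_ieds(toy_epochs)[:10])          # class label per event
```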
Detecting and Remembering Simultaneous Pictures in a Rapid Serial Visual Presentation
ERIC Educational Resources Information Center
Potter, Mary C.; Fox, Laura F.
2009-01-01
Viewers can easily spot a target picture in a rapid serial visual presentation (RSVP), but can they do so if more than 1 picture is presented simultaneously? Up to 4 pictures were presented on each RSVP frame, for 240 to 720 ms/frame. In a detection task, the target was verbally specified before each trial (e.g., "man with violin"); in a…
AM Herculis - Simultaneous X-ray, optical, and near-IR coverage
NASA Technical Reports Server (NTRS)
Szkody, P.; Tuohy, I. R.; Cordova, F. A.; Stockman, H. S.; Angel, J. R. P.; Wisniewski, W.
1980-01-01
A 6 hour X-ray pointing at AM Her using the HEAO 1 satellite is correlated with simultaneous broadband V and I photometry and visual circular polarimetry. The absence of correlations on either a flickering or an orbital time scale implies distinct regions for the visual and X-ray emission. Significant changes in the light curves are observed from one binary cycle to the next.
Simultaneous two-photon imaging and two-photon optogenetics of cortical circuits in three dimensions
Carrillo-Reid, Luis; Bando, Yuki; Peterka, Darcy S
2018-01-01
The simultaneous imaging and manipulation of neural activity could enable the functional dissection of neural circuits. Here we have combined two-photon optogenetics with simultaneous volumetric two-photon calcium imaging to measure and manipulate neural activity in mouse neocortex in vivo in three dimensions (3D) with cellular resolution. Using a hybrid holographic approach, we photostimulate more than 80 neurons over 150 μm in depth in layer 2/3 of the mouse visual cortex while simultaneously imaging the activity of the surrounding neurons. We validate the usefulness of the method by photoactivating in 3D selected groups of interneurons, suppressing the response of nearby pyramidal neurons to visual stimuli in awake animals. Our all-optical approach could be used as a general platform to read and write neuronal activity. PMID:29412138
Multimap formation in visual cortex
Jain, Rishabh; Millin, Rachel; Mel, Bartlett W.
2015-01-01
An extrastriate visual area such as V2 or V4 contains neurons selective for a multitude of complex shapes, all sharing a common topographic organization. Simultaneously developing multiple interdigitated maps—hereafter a “multimap”—is challenging in that neurons must compete to generate a diversity of response types locally, while cooperating with their dispersed same-type neighbors to achieve uniform visual field coverage for their response type at all orientations, scales, etc. Previously proposed map development schemes have relied on smooth spatial interaction functions to establish both topography and columnar organization, but by locally homogenizing cells' response properties, local smoothing mechanisms effectively rule out multimap formation. We found in computer simulations that the key requirements for multimap development are that neurons are enabled for plasticity only within highly active regions of cortex designated “learning eligibility regions” (LERs), but within an LER, each cell's learning rate is determined only by its activity level with no dependence on location. We show that a hybrid developmental rule that combines spatial and activity-dependent learning criteria in this way successfully produces multimaps when the input stream contains multiple distinct feature types, or in the degenerate case of a single feature type, produces a V1-like map with “salt-and-pepper” structure. Our results support the hypothesis that cortical maps containing a fine mixture of different response types, whether in monkey extrastriate cortex, mouse V1 or elsewhere in the cortex, rather than signaling a breakdown of map formation mechanisms at the fine scale, are a product of a generic cortical developmental scheme designed to map cells with a diversity of response properties across a shared topographic space. PMID:26641946
Erdmann, Roman S; Toomre, Derek; Schepartz, Alanna
2017-01-01
Long time-lapse super-resolution imaging in live cells requires a labeling strategy that combines a bright, photostable fluorophore with a high-density localization probe. Lipids are ideal high-density localization probes, as they are >100 times more abundant than most membrane-bound proteins and simultaneously demark the boundaries of cellular organelles. Here, we describe Cer-SiR, a two-component, high-density lipid probe that is exceptionally photostable. Cer-SiR is generated in cells via a bioorthogonal reaction of two components: a ceramide lipid tagged with trans-cyclooctene (Cer-TCO) and a reactive, photostable Si-rhodamine dye (SiR-Tz). These components assemble within the Golgi apparatus of live cells to form Cer-SiR. Cer-SiR is benign to cellular function, localizes within the Golgi at a high density, and is sufficiently photostable to enable visualization of Golgi structure and dynamics by 3D confocal or long time-lapse STED microscopy.
Multiscale wavelet representations for mammographic feature analysis
NASA Astrophysics Data System (ADS)
Laine, Andrew F.; Song, Shuwu
1992-12-01
This paper introduces a novel approach for accomplishing mammographic feature analysis through multiresolution representations. We show that efficient (nonredundant) representations may be identified from digital mammography and used to enhance specific mammographic features within a continuum of scale space. The multiresolution decomposition of wavelet transforms provides a natural hierarchy in which to embed an interactive paradigm for accomplishing scale space feature analysis. Choosing wavelets (or analyzing functions) that are simultaneously localized in both space and frequency results in a powerful methodology for image analysis. Multiresolution and orientation selectivity, known biological mechanisms in primate vision, are ingrained in wavelet representations and inspire the techniques presented in this paper. Our approach includes local analysis of complete multiscale representations. Mammograms are reconstructed from wavelet coefficients, enhanced by linear, exponential and constant weight functions localized in scale space. By improving the visualization of breast pathology we can improve the chances of early detection of breast cancers (improve quality) while requiring less time to evaluate mammograms for most patients (lower costs).
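The core operation described here is reweighting wavelet coefficients across scales before reconstructing the image. A minimal sketch of that idea using PyWavelets; the wavelet family, decomposition depth, and linear gain schedule are illustrative assumptions rather than the enhancement functions used in the paper.

```python
# Multiscale enhancement by reweighting wavelet detail coefficients before
# reconstruction; wavelet, levels, and gains are illustrative assumptions.
import numpy as np
import pywt

def enhance(image, wavelet="db4", levels=4, gains=(1.0, 1.5, 2.0, 2.5)):
    coeffs = pywt.wavedec2(image, wavelet, level=levels)   # [cA, (cH, cV, cD), ...]
    out = [coeffs[0]]                                      # leave the approximation untouched
    # coeffs[1:] runs from coarsest to finest detail; gains are listed finest-first
    for (cH, cV, cD), g in zip(coeffs[1:], reversed(gains)):
        out.append((g * cH, g * cV, g * cD))               # amplify details per scale
    return pywt.waverec2(out, wavelet)

img = np.random.rand(256, 256)      # stand-in for a digitized mammogram
print(enhance(img).shape)
```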
NASA Astrophysics Data System (ADS)
Müller, M. S.; Urban, S.; Jutzi, B.
2017-08-01
The number of unmanned aerial vehicles (UAVs) is increasing since low-cost airborne systems are available for a wide range of users. The outdoor navigation of such vehicles is mostly based on global navigation satellite system (GNSS) methods to obtain the vehicle's trajectory. The drawbacks of satellite-based navigation are failures caused by occlusions and multi-path interferences. Besides this, local image-based solutions like Simultaneous Localization and Mapping (SLAM) and Visual Odometry (VO) can, for example, be used to support the GNSS solution by closing trajectory gaps, but they are computationally expensive. However, if the trajectory estimation is interrupted or not available, a re-localization is mandatory. In this paper we provide a novel method for GNSS-free and fast image-based pose regression in a known area by utilizing a small convolutional neural network (CNN). With on-board processing in mind, we employ a lightweight CNN called SqueezeNet and use transfer learning to adapt the network to pose regression. Our experiments show promising results for GNSS-free and fast localization.
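One plausible reading of this approach is a PoseNet-style regression head on top of SqueezeNet features. The sketch below shows such an adaptation in PyTorch under explicit assumptions: a 7-dimensional output (translation plus unit quaternion), a frozen backbone, and an average-pooled linear head; the paper's exact architecture, loss, and training setup are not reproduced.

```python
# Hypothetical SqueezeNet adaptation for camera pose regression (not the
# authors' exact network); output = 3 translation values + 4 quaternion values.
import torch
import torch.nn as nn
from torchvision import models

class SqueezePoseNet(nn.Module):
    def __init__(self, freeze_features=True):
        super().__init__()
        backbone = models.squeezenet1_1(weights=None)   # load pretrained weights in practice
        self.features = backbone.features               # final feature map has 512 channels
        if freeze_features:
            for p in self.features.parameters():
                p.requires_grad = False
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.head = nn.Linear(512, 7)

    def forward(self, x):
        f = self.pool(self.features(x)).flatten(1)
        out = self.head(f)
        t, q = out[:, :3], out[:, 3:]
        q = q / q.norm(dim=1, keepdim=True).clamp_min(1e-8)   # normalize quaternion
        return t, q

model = SqueezePoseNet()
t, q = model(torch.randn(2, 3, 224, 224))
print(t.shape, q.shape)   # torch.Size([2, 3]) torch.Size([2, 4])
```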
Spontaneous Activity Drives Local Synaptic Plasticity In Vivo.
Winnubst, Johan; Cheyne, Juliette E; Niculescu, Dragos; Lohmann, Christian
2015-07-15
Spontaneous activity fine-tunes neuronal connections in the developing brain. To explore the underlying synaptic plasticity mechanisms, we monitored naturally occurring changes in spontaneous activity at individual synapses with whole-cell patch-clamp recordings and simultaneous calcium imaging in the mouse visual cortex in vivo. Analyzing activity changes across large populations of synapses revealed a simple and efficient local plasticity rule: synapses that exhibit low synchronicity with nearby neighbors (<12 μm) become depressed in their transmission frequency. Asynchronous electrical stimulation of individual synapses in hippocampal slices showed that this is due to a decrease in synaptic transmission efficiency. Accordingly, experimentally increasing local synchronicity, by stimulating synapses in response to spontaneous activity at neighboring synapses, stabilized synaptic transmission. Finally, blockade of the high-affinity proBDNF receptor p75(NTR) prevented the depression of asynchronously stimulated synapses. Thus, spontaneous activity drives local synaptic plasticity at individual synapses in an "out-of-sync, lose-your-link" fashion through proBDNF/p75(NTR) signaling to refine neuronal connectivity. VIDEO ABSTRACT. Copyright © 2015 Elsevier Inc. All rights reserved.
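As a purely illustrative toy model of the reported "out-of-sync, lose-your-link" rule (not the authors' analysis), the snippet below depresses the weight of synapses whose activity is poorly synchronized with neighbors located within 12 μm. All distances, thresholds, and the depression factor are invented for demonstration.

```python
# Toy simulation of a local "out-of-sync, lose-your-link" plasticity rule.
# Positions, activity raster, synchrony threshold, and depression factor are
# invented assumptions, not values from the study.
import numpy as np

rng = np.random.default_rng(0)
n_syn, n_events = 50, 200
pos = rng.uniform(0, 100, n_syn)                    # 1-D dendritic positions (um)
activity = rng.random((n_syn, n_events)) < 0.1      # binary activity raster
weights = np.ones(n_syn)

dist = np.abs(pos[:, None] - pos[None, :])
neighbors = (dist < 12) & (dist > 0)                # neighbors within 12 um

for s in range(n_syn):
    if not neighbors[s].any():
        continue
    neigh_act = activity[neighbors[s]].any(axis=0)
    sync = (activity[s] & neigh_act).sum() / max(activity[s].sum(), 1)
    if sync < 0.5:                                  # "out of sync" threshold
        weights[s] *= 0.8                           # depress transmission

print(f"depressed {np.sum(weights < 1)} of {n_syn} synapses")
```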
NASA Astrophysics Data System (ADS)
Zhang, Yang; Wang, Hao; Tomar, Vikas
2018-04-01
This work presents direct measurements of stress and temperature distribution during the mesoscale microstructural deformation of Inconel-617 (IN-617) during 3-point bending tests as a function of temperature. A novel nanomechanical Raman spectroscopy (NMRS)-based measurement platform was designed for simultaneous in situ temperature and stress mapping as a function of microstructure during deformation. The temperature distribution was found to be directly correlated to stress distribution for the analyzed microstructures. Stress concentration locations are shown to be directly related to higher heat conduction and result in microstructural hot spots with significant local temperature variation.
Splitting Attention across the Two Visual Fields in Visual Short-Term Memory
ERIC Educational Resources Information Center
Delvenne, Jean-Francois; Holt, Jessica L.
2012-01-01
Humans have the ability to attentionally select the most relevant visual information from their extrapersonal world and to retain it in a temporary buffer, known as visual short-term memory (VSTM). Research suggests that at least two non-contiguous items can be selected simultaneously when they are distributed across the two visual hemifields. In…
ERIC Educational Resources Information Center
Imhof, Birgit; Scheiter, Katharina; Edelmann, Jorg; Gerjets, Peter
2012-01-01
Two studies investigated the effectiveness of dynamic and static visualizations for a perceptual learning task (locomotion pattern classification). In Study 1, seventy-five students viewed either dynamic, static-sequential, or static-simultaneous visualizations. For tasks of intermediate difficulty, dynamic visualizations led to better…
Learning about Locomotion Patterns from Visualizations: Effects of Presentation Format and Realism
ERIC Educational Resources Information Center
Imhof, Birgit; Scheiter, Katharina; Gerjets, Peter
2011-01-01
The rapid development of computer graphics technology has made possible an easy integration of dynamic visualizations into computer-based learning environments. This study examines the relative effectiveness of dynamic visualizations, compared either to sequentially or simultaneously presented static visualizations. Moreover, the degree of realism…
Shih, Arthur Chun-Chieh; Lee, DT; Peng, Chin-Lin; Wu, Yu-Wei
2007-01-01
Background: When aligning several hundreds or thousands of sequences, such as epidemic virus sequences or homologous/orthologous sequences of some big gene families, to reconstruct the epidemiological history or their phylogenies, how to analyze and visualize the alignment results of many sequences has become a new challenge for computational biologists. Although there are several tools available for visualization of very long sequence alignments, few of them are applicable to the alignments of many sequences. Results: A multiple-logo alignment visualization tool, called Phylo-mLogo, is presented in this paper. Phylo-mLogo calculates the variabilities and homogeneities of alignment sequences by base frequencies or entropies. Different from the traditional representations of sequence logos, Phylo-mLogo not only displays the global logo patterns of the whole alignment of multiple sequences, but also demonstrates their local homologous logos for each clade hierarchically. In addition, Phylo-mLogo also allows the user to focus only on the analysis of some important, structurally or functionally constrained sites in the alignment selected by the user or by built-in automatic calculation. Conclusion: With Phylo-mLogo, the user can symbolically and hierarchically visualize hundreds of aligned sequences simultaneously and easily check the changes of their amino acid sites when analyzing many homologous/orthologous or influenza virus sequences. More information about Phylo-mLogo can be found at URL. PMID:17319966
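A small sketch of the per-column variability score mentioned here (base frequencies or entropies): Shannon entropy computed for each alignment column, with gaps ignored. The toy alignment is made up; low entropy marks conserved columns that a logo-style viewer can highlight.

```python
# Per-column Shannon entropy of a multiple sequence alignment (toy data only).
import math
from collections import Counter

def column_entropies(alignment):
    """alignment: list of equal-length aligned sequences (strings)."""
    n_cols = len(alignment[0])
    ents = []
    for j in range(n_cols):
        col = [seq[j] for seq in alignment if seq[j] != "-"]   # ignore gap characters
        counts = Counter(col)
        total = sum(counts.values())
        h = -sum((c / total) * math.log2(c / total) for c in counts.values()) if total else 0.0
        ents.append(h)
    return ents

aln = ["ACGTAC", "ACGTTC", "ACGAAC", "ACG-AC"]
print([round(h, 2) for h in column_entropies(aln)])   # low entropy = conserved column
```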
[Sound improves distinction of low intensities of light in the visual cortex of a rabbit].
Polianskiĭ, V B; Alymkulov, D E; Evtikhin, D V; Chernyshev, B V
2011-01-01
Electrodes were implanted into the cranium above the primary visual cortex of four rabbits (Oryctolagus cuniculus). At the first stage, visual evoked potentials (VEPs) were recorded in response to substitution of threshold visual stimuli (0.28 and 0.31 cd/m2). Then the sound (2000 Hz, 84 dB, duration 40 ms) was added simultaneously to every visual stimulus. Single sounds (without visual stimuli) did not produce a VEP response. It was found that the amplitude of VEP component N1 (85-110 ms) in response to complex stimuli (visual and sound) increased 1.6 times as compared to "simple" visual stimulation. At the second stage, paired substitutions of 8 different visual stimuli (range 0.38-20.2 cd/m2) by each other were performed. Sensory spaces of intensity were reconstructed on the basis of factor analysis. Sensory spaces of complexes were reconstructed in a similar way for simultaneous visual and sound stimulation. Comparison of vectors representing the stimuli in the spaces showed that the addition of a sound led to a 1.4-fold expansion of the space occupied by smaller intensities (0.28; 1.02; 3.05; 6.35 cd/m2). Also, the addition of the sound led to an arrangement of intensities in an ascending order. At the same time, the sound narrowed the space of larger intensities (8.48; 13.7; 16.8; 20.2 cd/m2) 1.33-fold. It is suggested that the addition of a sound improves the distinction of smaller intensities and impairs the distinction of larger intensities. Sensory spaces revealed by complex stimuli were two-dimensional. This fact can be a consequence of integration of sound and light in a unified complex at simultaneous stimulation.
The prevalence and risk factors of visual impairment among the elderly in Eastern Taiwan.
Wang, Wen-Li; Chen, Nancy; Sheu, Min-Muh; Wang, Jen-Hung; Hsu, Wen-Lin; Hu, Yih-Jin
2016-09-01
Visual impairment is associated with disability and poor quality of life. This study aimed to investigate the prevalence and associated risk factors of visual impairment among the suburban elderly in Eastern Taiwan. The cross-sectional research was conducted from April 2012 to August 2012. The ocular condition examination took place in suburban areas of Hualien County. Medical records from local infirmaries and questionnaires were utilized to collect demographic data and systemic disease status. Logistic regression models were used for the simultaneous analysis of the association between the prevalence of visual impairment and risk factors. Six hundred and eighty-one residents participated in this project. The mean age of the participants was 71.4±7.3 years. The prevalence of vision impairment (better eye<6/18) was 11.0%. Refractive error and cataract were the main causes of vision impairment. Logistic regression analysis showed that people aged 65-75 years had a 3.8 times higher risk of developing visual impairment (p=0.021), while the odds ratio of people aged > 75 years was 10.0 (p<0.001). In addition, patients with diabetic retinopathy had a 3.7 times higher risk of developing visual impairment (p=0.002), while the odds ratio of refractive error was 0.36 (p<0.001). The prevalence of visual impairment was relatively high compared with previous studies. Diabetic retinopathy was an important risk factor of visual impairment; by contrast, refractive error was beneficial to resist visual impairment. Therefore, regular screening of ocular condition and early intervention might aid in the prevention of avoidable vision loss. Copyright © 2016. Published by Elsevier Taiwan.
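For readers unfamiliar with how such odds ratios are obtained, the sketch below fits a logistic regression with statsmodels and exponentiates the coefficients and confidence bounds. The synthetic data and variable names are placeholders, not the study's records.

```python
# Generic logistic-regression odds-ratio calculation on synthetic data
# (placeholders, not the Hualien County dataset).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 681
df = pd.DataFrame({
    "age_over_75": rng.integers(0, 2, n),
    "diabetic_retinopathy": rng.integers(0, 2, n),
    "refractive_error": rng.integers(0, 2, n),
})
# simulate an outcome with assumed coefficients so the fit has structure
logit = -2.5 + 2.3 * df.age_over_75 + 1.3 * df.diabetic_retinopathy - 1.0 * df.refractive_error
df["visual_impairment"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = sm.add_constant(df[["age_over_75", "diabetic_retinopathy", "refractive_error"]])
res = sm.Logit(df["visual_impairment"], X).fit(disp=0)
odds_ratios = np.exp(res.params)          # exponentiated coefficients = ORs
ci = np.exp(res.conf_int())               # exponentiated bounds = 95% CI of ORs
print(pd.concat([odds_ratios.rename("OR"), ci], axis=1))
```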
Picchioni, Dante; Schmidt, Kathleen C; McWhirter, Kelly K; Loutaev, Inna; Pavletic, Adriana J; Speer, Andrew M; Zametkin, Alan J; Miao, Ning; Bishu, Shrinivas; Turetsky, Kate M; Morrow, Anne S; Nadel, Jeffrey L; Evans, Brittney C; Vesselinovitch, Diana M; Sheeler, Carrie A; Balkin, Thomas J; Smith, Carolyn B
2018-05-15
If protein synthesis during sleep is required for sleep-dependent memory consolidation, we might expect rates of cerebral protein synthesis (rCPS) to increase during sleep in the local brain circuits that support performance on a particular task following training on that task. To measure circuit-specific brain protein synthesis during a daytime nap opportunity, we used the L-[1-(11)C]leucine positron emission tomography (PET) method with simultaneous polysomnography. We trained subjects on the visual texture discrimination task (TDT). This was followed by a nap opportunity during the PET scan, and we retested them later in the day after the scan. The TDT is considered retinotopically specific, so we hypothesized that higher rCPS in primary visual cortex would be observed in the trained hemisphere compared to the untrained hemisphere in subjects who were randomized to a sleep condition. Our results indicate that the changes in rCPS in primary visual cortex depended on whether subjects were in the wakefulness or sleep condition but were independent of the side of the visual field trained. That is, only in the subjects randomized to sleep, rCPS in the right primary visual cortex was higher than the left regardless of side trained. Other brain regions examined were not so affected. In the subjects who slept, performance on the TDT improved similarly regardless of the side trained. Results indicate a regionally selective and sleep-dependent effect that occurs with improved performance on the TDT.
Shape perception simultaneously up- and downregulates neural activity in the primary visual cortex.
Kok, Peter; de Lange, Floris P
2014-07-07
An essential part of visual perception is the grouping of local elements (such as edges and lines) into coherent shapes. Previous studies have shown that this grouping process modulates neural activity in the primary visual cortex (V1) that is signaling the local elements [1-4]. However, the nature of this modulation is controversial. Some studies find that shape perception reduces neural activity in V1 [2, 5, 6], while others report increased V1 activity during shape perception [1, 3, 4, 7-10]. Neurocomputational theories that cast perception as a generative process [11-13] propose that feedback connections carry predictions (i.e., the generative model), while feedforward connections signal the mismatch between top-down predictions and bottom-up inputs. Within this framework, the effect of feedback on early visual cortex may be either enhancing or suppressive, depending on whether the feedback signal is met by congruent bottom-up input. Here, we tested this hypothesis by quantifying the spatial profile of neural activity in V1 during the perception of illusory shapes using population receptive field mapping. We find that shape perception concurrently increases neural activity in regions of V1 that have a receptive field on the shape but do not receive bottom-up input and suppresses activity in regions of V1 that receive bottom-up input that is predicted by the shape. These effects were not modulated by task requirements. Together, these findings suggest that shape perception changes lower-order sensory representations in a highly specific and automatic manner, in line with theories that cast perception in terms of hierarchical generative models. Copyright © 2014 Elsevier Ltd. All rights reserved.
Vidal, Juan R.; Perrone-Bertolotti, Marcela; Kahane, Philippe; Lachaux, Jean-Philippe
2015-01-01
If conscious perception requires global information integration across active distant brain networks, how does the loss of conscious perception affect neural processing in these distant networks? Pioneering studies on perceptual suppression (PS) described specific local neural network responses in primary visual cortex, thalamus and lateral prefrontal cortex of the macaque brain. Yet the neural effects of PS have rarely been studied with intracerebral recordings outside these cortices and simultaneously across distant brain areas. Here, we combined (1) a novel experimental paradigm in which we produced a similar perceptual disappearance and also re-appearance by using visual adaptation with transient contrast changes, with (2) electrophysiological observations from human intracranial electrodes sampling wide brain areas. We focused on broadband high-frequency (50–150 Hz, i.e., gamma) and low-frequency (8–24 Hz) neural activity amplitude modulations related to target visibility and invisibility. We report that low-frequency amplitude modulations reflected stimulus visibility in a larger ensemble of recording sites as compared to broadband gamma responses, across distinct brain regions including occipital, temporal and frontal cortices. Moreover, the dynamics of the broadband gamma response distinguished stimulus visibility from stimulus invisibility earlier in anterior insula and inferior frontal gyrus than in temporal regions, suggesting a possible role of fronto-insular cortices in top–down processing for conscious perception. Finally, we report that in primary visual cortex only low-frequency amplitude modulations correlated directly with perceptual status. Interestingly, in this sensory area broadband gamma was not modulated during PS but became positively modulated after 300 ms when stimuli were rendered visible again, suggesting that local networks could be ignited by top–down influences during conscious perception. PMID:25642199
Simultaneous shape repulsion and global assimilation in the perception of aspect ratio
Sweeny, Timothy D.; Grabowecky, Marcia; Suzuki, Satoru
2012-01-01
Although local interactions involving orientation and spatial frequency are well understood, less is known about spatial interactions involving higher level pattern features. We examined interactive coding of aspect ratio, a prevalent two-dimensional feature. We measured perception of two simultaneously flashed ellipses by randomly post-cueing one of them and having observers indicate its aspect ratio. Aspect ratios interacted in two ways. One manifested as an aspect-ratio-repulsion effect. For example, when a slightly tall ellipse and a taller ellipse were simultaneously flashed, the less tall ellipse appeared flatter and the taller ellipse appeared even taller. This repulsive interaction was long range, occurring even when the ellipses were presented in different visual hemifields. The other interaction manifested as a global assimilation effect. An ellipse appeared taller when it was a part of a global vertical organization than when it was a part of a global horizontal organization. The repulsion and assimilation effects temporally dissociated as the former slightly strengthened, and the latter disappeared when the ellipse-to-mask stimulus onset asynchrony was increased from 40 to 140 ms. These results are consistent with the idea that shape perception emerges from rapid lateral and hierarchical neural interactions. PMID:21248223
Rocinante, a virtual collaborative visualizer
DOE Office of Scientific and Technical Information (OSTI.GOV)
McDonald, M.J.; Ice, L.G.
1996-12-31
With the goal of improving the ability of people around the world to share the development and use of intelligent systems, Sandia National Laboratories' Intelligent Systems and Robotics Center is developing new Virtual Collaborative Engineering (VCE) and Virtual Collaborative Control (VCC) technologies. A key area of VCE and VCC research is in shared visualization of virtual environments. This paper describes a Virtual Collaborative Visualizer (VCV), named Rocinante, that Sandia developed for VCE and VCC applications. Rocinante allows multiple participants to simultaneously view dynamic geometrically-defined environments. Each viewer can exclude extraneous detail or include additional information in the scene as desired. Shared information can be saved and later replayed in a stand-alone mode. Rocinante automatically scales visualization requirements with computer system capabilities. Models with 30,000 polygons and 4 Megabytes of texture display at 12 to 15 frames per second (fps) on an SGI Onyx and at 3 to 8 fps (without texture) on Indigo 2 Extreme computers. In its networked mode, Rocinante synchronizes its local geometric model with remote simulators and sensory systems by monitoring data transmitted through UDP packets. Rocinante's scalability and performance make it an ideal VCC tool. Users throughout the country can monitor robot motions and the thinking behind their motion planners and simulators.
Topological visual mapping in robotics.
Romero, Anna; Cazorla, Miguel
2012-08-01
A key problem in robotics is the construction of a map of the robot's environment. This map could be used in different tasks, like localization, recognition, obstacle avoidance, etc. Besides, the simultaneous localization and mapping (SLAM) problem has attracted a lot of interest in the robotics community. This paper presents a new method for visual mapping, using topological instead of metric information. For that purpose, we propose prior image segmentation into regions in order to group the extracted invariant features in a graph so that each graph defines a single region of the image. Although other methods have been proposed for visual SLAM, our method is complete, in the sense that it covers the whole process: it presents a new method for image matching; it defines a way to build the topological map; and it also defines a matching criterion for loop-closing. The matching process takes into account visual features and their structure using the graph transformation matching (GTM) algorithm, which allows us to perform the matching and to remove the outliers. Then, using this image comparison method, we propose an algorithm for constructing topological maps. During the experimentation phase, we test the robustness of the method and its ability to construct topological maps. We have also introduced a new hysteresis behavior in order to solve some problems found when building the graph.
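As a hedged illustration of only the front end of this pipeline, the snippet below matches ORB descriptors between two frames with a cross-checked brute-force matcher in OpenCV; the GTM outlier-removal step and the topological-map construction are not implemented, and the file names are assumptions.

```python
# Feature matching between two frames with ORB + cross-checked brute force.
# Front end only: GTM outlier removal and topological map building are omitted.
import cv2

img1 = cv2.imread("frame_a.png", cv2.IMREAD_GRAYSCALE)   # assumed file names
img2 = cv2.imread("frame_b.png", cv2.IMREAD_GRAYSCALE)
assert img1 is not None and img2 is not None, "provide two grayscale frames"

orb = cv2.ORB_create(nfeatures=1000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(f"{len(matches)} cross-checked matches; best distance {matches[0].distance}")
```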
Bastos, Andre M; Briggs, Farran; Alitto, Henry J; Mangun, George R; Usrey, W Martin
2014-05-28
Oscillatory synchronization of neuronal activity has been proposed as a mechanism to modulate effective connectivity between interacting neuronal populations. In the visual system, oscillations in the gamma-frequency range (30-100 Hz) are thought to subserve corticocortical communication. To test whether a similar mechanism might influence subcortical-cortical communication, we recorded local field potential activity from retinotopically aligned regions in the lateral geniculate nucleus (LGN) and primary visual cortex (V1) of alert macaque monkeys viewing stimuli known to produce strong cortical gamma-band oscillations. As predicted, we found robust gamma-band power in V1. In contrast, visual stimulation did not evoke gamma-band activity in the LGN. Interestingly, an analysis of oscillatory phase synchronization of LGN and V1 activity identified synchronization in the alpha (8-14 Hz) and beta (15-30 Hz) frequency bands. Further analysis of directed connectivity revealed that alpha-band interactions mediated corticogeniculate feedback processing, whereas beta-band interactions mediated geniculocortical feedforward processing. These results demonstrate that although the LGN and V1 display functional interactions in the lower frequency bands, gamma-band activity in the alert monkey is largely an emergent property of cortex. Copyright © 2014 the authors.
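A simplified, undirected stand-in for the kind of frequency-specific coupling analysis described here is band-averaged magnitude-squared coherence between two field-potential channels; the directed (feedforward versus feedback) measures used in the paper are not reproduced, and the signals below are synthetic.

```python
# Band-limited coherence between two simulated LFP channels, as a simplified,
# undirected stand-in for an LGN-V1 synchronization analysis.
import numpy as np
from scipy.signal import coherence

fs = 1000.0                                  # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
shared = np.sin(2 * np.pi * 10 * t)          # common 10 Hz (alpha-band) component
lgn = shared + 0.5 * rng.standard_normal(t.size)
v1 = shared + 0.5 * rng.standard_normal(t.size)

f, Cxy = coherence(lgn, v1, fs=fs, nperseg=1024)

def band_mean(f, Cxy, lo, hi):
    m = (f >= lo) & (f < hi)
    return Cxy[m].mean()

for name, (lo, hi) in {"alpha": (8, 14), "beta": (15, 30), "gamma": (30, 100)}.items():
    print(name, round(band_mean(f, Cxy, lo, hi), 3))
```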
Visual Presentation Effects on Identification of Multiple Environmental Sounds
Masakura, Yuko; Ichikawa, Makoto; Shimono, Koichi; Nakatsuka, Reio
2016-01-01
This study examined how the contents and timing of a visual stimulus affect the identification of mixed sounds recorded in a daily life environment. For experiments, we presented four environment sounds as auditory stimuli for 5 s along with a picture or a written word as a visual stimulus that might or might not denote the source of one of the four sounds. Three conditions of temporal relations between the visual stimuli and sounds were used. The visual stimulus was presented either: (a) for 5 s simultaneously with the sound; (b) for 5 s, 1 s before the sound (SOA between the audio and visual stimuli was 6 s); or (c) for 33 ms, 1 s before the sound (SOA was 1033 ms). Participants reported all identifiable sounds for those audio–visual stimuli. To characterize the effects of visual stimuli on sound identification, the following were used: the identification rates of sounds for which the visual stimulus denoted its sound source, the rates of other sounds for which the visual stimulus did not denote the sound source, and the frequency of false hearing of a sound that was not presented for each sound set. Results of the four experiments demonstrated that a picture or a written word promoted identification of the sound when it was related to the sound, particularly when the visual stimulus was presented for 5 s simultaneously with the sounds. However, a visual stimulus preceding the sounds had a benefit only for the picture, not for the written word. Furthermore, presentation with a picture denoting a sound simultaneously with the sound reduced the frequency of false hearing. These results suggest three ways that presenting a visual stimulus affects identification of the auditory stimulus. First, activation of the visual representation extracted directly from the picture promotes identification of the denoted sound and suppresses the processing of sounds for which the visual stimulus did not denote the sound source. Second, effects based on processing of the conceptual information promote identification of the denoted sound and suppress the processing of sounds for which the visual stimulus did not denote the sound source. Third, processing of the concurrent visual representation suppresses false hearing. PMID:26973478
Desantis, Andrea; Haggard, Patrick
2016-01-01
To maintain a temporally-unified representation of audio and visual features of objects in our environment, the brain recalibrates audio-visual simultaneity. This process allows adjustment for both differences in time of transmission and time for processing of audio and visual signals. In four experiments, we show that the cognitive processes for controlling instrumental actions also have strong influence on audio-visual recalibration. Participants learned that right and left hand button-presses each produced a specific audio-visual stimulus. Following one action the audio preceded the visual stimulus, while for the other action audio lagged vision. In a subsequent test phase, left and right button-press generated either the same audio-visual stimulus as learned initially, or the pair associated with the other action. We observed recalibration of simultaneity only for previously-learned audio-visual outcomes. Thus, learning an action-outcome relation promotes temporal grouping of the audio and visual events within the outcome pair, contributing to the creation of a temporally unified multisensory object. This suggests that learning action-outcome relations and the prediction of perceptual outcomes can provide an integrative temporal structure for our experiences of external events. PMID:27982063
Rolfs, Martin; Carrasco, Marisa
2012-01-01
Humans and other animals with foveate vision make saccadic eye movements to prioritize the visual analysis of behaviorally relevant information. Even before movement onset, visual processing is selectively enhanced at the target of a saccade, presumably gated by brain areas controlling eye movements. Here we assess concurrent changes in visual performance and perceived contrast before saccades, and show that saccade preparation enhances perception rapidly, altering early visual processing in a manner akin to increasing the physical contrast of the visual input. Observers compared orientation and contrast of a test stimulus, appearing briefly before a saccade, to a standard stimulus, presented previously during a fixation period. We found simultaneous progressive enhancement in both orientation discrimination performance and perceived contrast as time approached saccade onset. These effects were robust as early as 60 ms after the eye movement was cued, much faster than the voluntary deployment of covert attention (without eye movements), which takes ~300 ms. Our results link the dynamics of saccade preparation, visual performance, and subjective experience and show that upcoming eye movements alter visual processing by increasing the signal strength. PMID:23035086
Wang, Wuyi; Viswanathan, Shivakumar; Lee, Taraz; Grafton, Scott T
2016-01-01
Cortical theta band oscillations (4-8 Hz) in EEG signals have been shown to be important for a variety of different cognitive control operations in visual attention paradigms. However, the synchronization source of these signals, as defined by fMRI BOLD activity, and the extent to which theta oscillations play a role in multimodal attention remain unknown. Here we investigated the extent to which cross-modal visual and auditory attention impacts theta oscillations. Using a simultaneous EEG-fMRI paradigm, healthy human participants performed an attentional vigilance task with six cross-modal conditions using naturalistic stimuli. To assess supramodal mechanisms, modulation of theta oscillation amplitude for attention to either visual or auditory stimuli was correlated with BOLD activity by conjunction analysis. Negative correlations were localized to cortical regions associated with the default mode network, and positive correlations to ventral premotor areas. Modality-associated attention to visual stimuli was marked by a positive correlation of theta and BOLD activity in fronto-parietal areas that was not observed in the auditory condition. A positive correlation of theta and BOLD activity was observed in auditory cortex, while a negative correlation of theta and BOLD activity was observed in visual cortex during auditory attention. The data support a supramodal interaction of theta activity with DMN function, and modality-associated processes within fronto-parietal networks related to top-down, theta-related cognitive control in cross-modal visual attention. On the other hand, in sensory cortices there are opposing effects of theta activity during cross-modal auditory attention.
Global-local visual biases correspond with visual-spatial orientation.
Basso, Michael R; Lowery, Natasha
2004-02-01
Within the past decade, numerous investigations have demonstrated reliable associations of global-local visual processing biases with right and left hemisphere function, respectively (cf. Van Kleeck, 1989). Yet the relevance of these biases to other cognitive functions is not well understood. Towards this end, the present research examined the relationship between global-local visual biases and perception of visual-spatial orientation. Twenty-six women and 23 men completed a global-local judgment task (Kimchi and Palmer, 1982) and the Judgment of Line Orientation Test (JLO; Benton, Sivan, Hamsher, Varney, and Spreen, 1994), a measure of visual-spatial orientation. As expected, men had better performance on JLO. Extending previous findings, global biases were related to better visual-spatial acuity on JLO. The findings suggest that global-local biases and visual-spatial orientation may share underlying cerebral mechanisms. Implications of these findings for other visually mediated cognitive outcomes are discussed.
Sun, Peng; Zhong, Liyun; Luo, Chunshu; Niu, Wenhu; Lu, Xiaoxu
2015-07-16
To visually measure the evaporation process of a sessile droplet, a dual-channel simultaneous phase-shifting interferometry (DCSPSI) method is proposed. Polarization components are used to simultaneously generate a pair of orthogonal interferograms with a phase shift of π/2, so that the real-time phase of a dynamic process can be retrieved with a two-step phase-shifting algorithm. Using the proposed DCSPSI system, the transient mass (TM) of evaporating sessile droplets with different initial masses was obtained by measuring the real-time 3D shape of the droplet. Moreover, the mass flux density (MFD) of the evaporating droplet and its regional distribution were also calculated and analyzed. The experimental results show that the proposed DCSPSI supplies a visual, accurate, noncontact, nondestructive, global tool for real-time multi-parameter measurement of droplet evaporation.
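With a π/2 shift, the two channels record I1 = A + B cos φ and I2 = A - B sin φ, so the wrapped phase follows from an arctangent once the background A is known or removed. A minimal numeric sketch under that simplifying assumption; background estimation, phase unwrapping, and conversion to droplet shape are not shown.

```python
# Two-step (pi/2) phase-shifting retrieval on synthetic interferograms.
# Assumes the background term A is known; a real system must estimate it and
# unwrap the phase before converting it to droplet height.
import numpy as np

x, y = np.meshgrid(np.linspace(-1, 1, 256), np.linspace(-1, 1, 256))
phi_true = 6 * np.exp(-(x**2 + y**2) / 0.3)       # synthetic droplet-like phase
A, B = 1.0, 0.8                                   # background and modulation

I1 = A + B * np.cos(phi_true)                     # phase shift 0
I2 = A + B * np.cos(phi_true + np.pi / 2)         # phase shift pi/2 -> A - B*sin(phi)

phi_wrapped = np.arctan2(-(I2 - A), I1 - A)       # recovers phi modulo 2*pi
err = np.angle(np.exp(1j * (phi_wrapped - phi_true)))
print(float(np.abs(err).max()))                   # ~0 up to numerical precision
```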
Fabrication of triple-labeled polyelectrolyte microcapsules for localized ratiometric pH sensing.
Song, Xiaoxue; Li, Huanbin; Tong, Weijun; Gao, Changyou
2014-02-15
Encapsulation of pH-sensitive fluorophores as reporting molecules provides a powerful approach to visualize the transportation of multilayer capsules. In this study, two pH-sensitive dyes (fluorescein and Oregon Green) and one pH-insensitive dye (rhodamine B) were simultaneously labeled on the microcapsules to fabricate ratiometric pH sensors. The fluorescence of the triple-labeled microcapsule sensors was robust and nearly independent of other intracellular species. With a dynamic pH measurement range of 3.3-6.5, the microcapsules can report their localized pH in real time. Cell culture experiments showed that the microcapsules could be internalized by RAW 264.7 cells naturally and finally accumulated in acidic organelles with a pH value of 5.08 ± 0.59 (mean ± s.d.; n=162). Copyright © 2013 Elsevier Inc. All rights reserved.
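The ratiometric read-out divides the pH-sensitive channel by the pH-insensitive reference and maps the ratio to pH through a calibration curve. A sketch with a sigmoidal calibration fitted by SciPy; the calibration points and the measured ratio are invented placeholders, not the paper's data.

```python
# Ratiometric pH read-out: fit a sigmoidal calibration of (sensitive/reference)
# intensity ratio vs. pH, then invert it for an unknown capsule.
# Calibration values are invented placeholders, not measurements from the study.
import numpy as np
from scipy.optimize import curve_fit

def boltzmann(pH, r_min, r_max, pKa, slope):
    return r_min + (r_max - r_min) / (1 + np.exp((pKa - pH) / slope))

cal_pH    = np.array([3.0, 4.0, 5.0, 5.5, 6.0, 6.5, 7.0])
cal_ratio = np.array([0.15, 0.25, 0.55, 0.75, 0.92, 1.02, 1.08])
popt, _ = curve_fit(boltzmann, cal_pH, cal_ratio, p0=[0.1, 1.1, 5.5, 0.5])

def ratio_to_pH(ratio, popt, grid=np.linspace(3.0, 7.0, 4001)):
    # numeric inversion of the fitted curve (monotonic over this range)
    return grid[np.argmin(np.abs(boltzmann(grid, *popt) - ratio))]

measured_ratio = 0.80      # e.g. green/red intensity ratio inside one capsule
print(round(ratio_to_pH(measured_ratio, popt), 2))
```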
NASA Astrophysics Data System (ADS)
Yazdanfar, Siavash; Kulkarni, Manish D.; Wong, Richard C. K.; Sivak, Michael J., Jr.; Willis, Joseph; Barton, Jennifer K.; Welch, Ashley J.; Izatt, Joseph A.
1998-04-01
A recently developed modality for blood flow measurement holds high promise in the management of bleeding ulcers. Color Doppler optical coherence tomography (CDOCT) uses low- coherence interferometry and digital signal processing to obtain precise localization of tissue microstructure simultaneous with bi-directional quantitation of blood flow. We discuss CDOCT as a diagnostic tool in the management of bleeding gastrointestinal lesions. Common treatments for bleeding ulcers include local injection of a vasoconstrictor, coagulation of blood via thermal contact or laser treatment, and necrosis of surrounding tissue with a sclerosant. We implemented these procedures in a rat dorsal skin flap model, and acquired CDOCT images before and after treatment. In these studies, CDOCT succeeded in identifying cessation of flow before it could be determined visually. Hence, we demonstrate the diagnostic capabilities of CDOCT in the regulation of bleeding in micron-scale vessels.
Vallianatou, Theodosia; Strittmatter, Nicole; Nilsson, Anna; Shariatgorji, Mohammadreza; Hamm, Gregory; Pereira, Marcela; Källback, Patrik; Svenningsson, Per; Karlgren, Maria; Goodwin, Richard J A; Andrén, Per E
2018-05-15
There is a high need to develop quantitative imaging methods capable of providing detailed brain localization information of several molecular species simultaneously. In addition, extensive information on the effect of the blood-brain barrier on the penetration, distribution and efficacy of neuroactive compounds is required. Thus, we have developed a mass spectrometry imaging method to visualize and quantify the brain distribution of drugs with varying blood-brain barrier permeability. With this approach, we were able to determine blood-brain barrier transport of different drugs and define the drug distribution in very small brain structures (e.g., choroid plexus) due to the high spatial resolution provided. Simultaneously, we investigated the effect of drug-drug interactions by inhibiting the membrane transporter multidrug resistance 1 protein. We propose that the described approach can serve as a valuable analytical tool during the development of neuroactive drugs, as it can provide physiologically relevant information often neglected by traditional imaging technologies. Copyright © 2018. Published by Elsevier Inc.
Toward a reliable gaze-independent hybrid BCI combining visual and natural auditory stimuli.
Barbosa, Sara; Pires, Gabriel; Nunes, Urbano
2016-03-01
Brain computer interfaces (BCIs) are one of the last communication options for patients in the locked-in state (LIS). For complete LIS patients, interfaces must be gaze-independent due to their eye impairment. However, unimodal gaze-independent approaches typically present levels of performance substantially lower than gaze-dependent approaches. The combination of multimodal stimuli has been pointed as a viable way to increase users' performance. A hybrid visual and auditory (HVA) P300-based BCI combining simultaneously visual and auditory stimulation is proposed. Auditory stimuli are based on natural meaningful spoken words, increasing stimuli discrimination and decreasing user's mental effort in associating stimuli to the symbols. The visual part of the interface is covertly controlled ensuring gaze-independency. Four conditions were experimentally tested by 10 healthy participants: visual overt (VO), visual covert (VC), auditory (AU) and covert HVA. Average online accuracy for the hybrid approach was 85.3%, which is more than 32% over VC and AU approaches. Questionnaires' results indicate that the HVA approach was the less demanding gaze-independent interface. Interestingly, the P300 grand average for HVA approach coincides with an almost perfect sum of P300 evoked separately by VC and AU tasks. The proposed HVA-BCI is the first solution simultaneously embedding natural spoken words and visual words to provide a communication lexicon. Online accuracy and task demand of the approach compare favorably with state-of-the-art. The proposed approach shows that the simultaneous combination of visual covert control and auditory modalities can effectively improve the performance of gaze-independent BCIs. Copyright © 2015 Elsevier B.V. All rights reserved.
Visual Task Demands and the Auditory Mismatch Negativity: An Empirical Study and a Meta-Analysis
Wiens, Stefan; Szychowska, Malina; Nilsson, Mats E.
2016-01-01
Because the auditory system is particularly useful in monitoring the environment, previous research has examined whether task-irrelevant, auditory distracters are processed even if subjects focus their attention on visual stimuli. This research suggests that attentionally demanding visual tasks decrease the auditory mismatch negativity (MMN) to simultaneously presented auditory distractors. Because a recent behavioral study found that high visual perceptual load decreased detection sensitivity of simultaneous tones, we used a similar task (n = 28) to determine if high visual perceptual load would reduce the auditory MMN. Results suggested that perceptual load did not decrease the MMN. At face value, these nonsignificant findings may suggest that effects of perceptual load on the MMN are smaller than those of other demanding visual tasks. If so, effect sizes should differ systematically between the present and previous studies. We conducted a selective meta-analysis of published studies in which the MMN was derived from the EEG, the visual task demands were continuous and varied between high and low within the same task, and the task-irrelevant tones were presented in a typical oddball paradigm simultaneously with the visual stimuli. Because the meta-analysis suggested that the present (null) findings did not differ systematically from previous findings, the available evidence was combined. Results of this meta-analysis confirmed that demanding visual tasks reduce the MMN to auditory distracters. However, because the meta-analysis was based on small studies and because of the risk for publication biases, future studies should be preregistered with large samples (n > 150) to provide confirmatory evidence for the results of the present meta-analysis. These future studies should also use control conditions that reduce confounding effects of neural adaptation, and use load manipulations that are defined independently from their effects on the MMN. PMID:26741815
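As a generic illustration of how effect sizes from such studies can be pooled (not the actual meta-analysis reported here), the sketch below implements DerSimonian-Laird random-effects pooling; the per-study effects and variances are placeholders.

```python
# DerSimonian-Laird random-effects pooling of per-study effect sizes.
# Effect sizes and variances are placeholders, not the MMN studies' values.
import numpy as np

y = np.array([0.45, 0.30, 0.62, 0.20, 0.55])   # per-study standardized effects
v = np.array([0.04, 0.06, 0.05, 0.03, 0.08])   # per-study sampling variances

w = 1 / v
ybar_fixed = np.sum(w * y) / np.sum(w)
Q = np.sum(w * (y - ybar_fixed) ** 2)           # heterogeneity statistic
df = len(y) - 1
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - df) / c)                   # between-study variance

w_star = 1 / (v + tau2)
mu = np.sum(w_star * y) / np.sum(w_star)        # pooled random-effects estimate
se = np.sqrt(1 / np.sum(w_star))
print(f"pooled effect = {mu:.3f}, 95% CI = [{mu - 1.96*se:.3f}, {mu + 1.96*se:.3f}], tau^2 = {tau2:.3f}")
```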
Organic light emitting board for dynamic interactive display
Kim, Eui Hyuk; Cho, Sung Hwan; Lee, Ju Han; Jeong, Beomjin; Kim, Richard Hahnkee; Yu, Seunggun; Lee, Tae-Woo; Shim, Wooyoung; Park, Cheolmin
2017-01-01
Interactive displays involve the interfacing of a stimuli-responsive sensor with a visual human-readable response. Here, we describe a polymeric electroluminescence-based stimuli-responsive display method that simultaneously detects external stimuli and visualizes the stimulant object. This organic light-emitting board is capable of both sensing and direct visualization of a variety of conductive information. Simultaneous sensing and visualization of the conductive substance is achieved when the conductive object is coupled with the light emissive material layer on application of alternating current. A variety of conductive materials can be detected regardless of their work functions, and thus information written by a conductive pen is clearly visualized, as is a human fingerprint with natural conductivity. Furthermore, we demonstrate that integration of the organic light-emitting board with a fluidic channel readily allows for dynamic monitoring of metallic liquid flow through the channel, which may be suitable for biological detection and imaging applications. PMID:28406151
47 CFR 80.293 - Check bearings by authorized ship personnel.
Code of Federal Regulations, 2010 CFR
2010-10-01
....293 Section 80.293 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL... comparison of simultaneous visual and radio direction finder bearings. At least one comparison bearing must... visual bearing relative to the ship's heading and the difference between the visual and radio direction...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, Yi; Cai, Zhonghou; Chen, Pice
Dynamical phase separation during a solid-solid phase transition poses a challenge for understanding the fundamental processes in correlated materials. Critical information underlying a phase transition, such as localized phase competition, is difficult to reveal by measurements that are spatially averaged over many phase-separated regions. The ability to simultaneously track the spatial and temporal evolution of such systems is essential to understanding mesoscopic processes during a phase transition. Using state-of-the-art time-resolved hard x-ray diffraction microscopy, we directly visualize the structural phase progression in a VO2 film upon photoexcitation. Following a homogeneous in-plane optical excitation, the phase transformation is initiated at discrete sites and completed by the growth of one lattice structure into the other, instead of a simultaneous isotropic lattice symmetry change. The time-dependent x-ray diffraction spatial maps show that the in-plane phase progression in laser-superheated VO2 is via a displacive lattice transformation as a result of relaxation from an excited monoclinic phase into a rutile phase. The speed of the phase front progression is quantitatively measured, which is faster than the process driven by in-plane thermal diffusion but slower than the sound speed in VO2. Lastly, the direct visualization of localized structural changes in the time domain opens a new avenue to study mesoscopic processes in driven systems.
Endophytic colonization of olive roots by the biocontrol strain Pseudomonas fluorescens PICF7.
Prieto, Pilar; Mercado-Blanco, Jesús
2008-05-01
Confocal microscopy combined with three-dimensional olive root tissue sectioning was used to provide evidence of the endophytic behaviour of Pseudomonas fluorescens PICF7, an effective biocontrol strain against Verticillium wilt of olive. Two derivatives of the green fluorescent protein (GFP), the enhanced green and the red fluorescent proteins, have been used to visualize simultaneously two differently fluorescently tagged populations of P. fluorescens PICF7 within olive root tissues at the single cell level. The time-course of colonization events of olive roots cv. Arbequina by strain PICF7 and the localization of tagged bacteria within olive root tissues are described. First, bacteria rapidly colonized root surfaces and were predominantly found in the differentiation zone. Thereafter, microscopy observations showed that PICF7-tagged populations eventually disappeared from the root surface, and increasingly colonized inner root tissues. Localized and limited endophytic colonization by the introduced bacteria was observed over time. Fluorescent-tagged bacteria were always visualized in the intercellular spaces of the cortex region, and no colonization of the root xylem vessels was detected at any time. To the best of our knowledge, this is the first time this approach has been used to demonstrate endophytism of a biocontrol Pseudomonas spp. strain in a woody host such as olive using a nongnotobiotic system.
A Preliminary Work on Layout SLAM for Reconstruction of Indoor Corridor Environments
NASA Astrophysics Data System (ADS)
Baligh Jahromi, A.; Sohn, G.; Shahbazi, M.; Kang, J.
2017-09-01
We propose a real-time indoor corridor layout estimation method based on visual Simultaneous Localization and Mapping (SLAM). The proposed method adopts the Manhattan World Assumption for indoor spaces and uses the detected single-image straight line segments and their corresponding orthogonal vanishing points to improve the feature matching scheme in the adopted visual SLAM system. Using the proposed real-time indoor corridor layout estimation method, the system is able to build an online sparse map of structural corner point features. The challenges presented by abrupt camera rotation in 3D space are successfully handled by matching the vanishing directions of consecutive video frames on the Gaussian sphere. Using the single-image indoor layout features to initialize the system permitted the proposed method to perform real-time layout estimation and camera localization in indoor corridor areas. For matching layout structural corner points, we adopted features that are invariant under scale, translation, and rotation. We propose a new feature-matching cost function that considers both local and global context information. The cost function consists of a unary term, which measures pixel-to-pixel orientation differences of the matched corners, and a binary term, which measures the angle differences between directly connected layout corner features. We performed experiments on real scenes in York University campus buildings and on the available RAWSEEDS dataset. The results indicate that the proposed method performs robustly, producing very limited position and orientation errors.
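To make the two-term cost concrete, here is a minimal Python sketch of a unary-plus-binary matching cost over layout corner orientations; all names, weights, and the circular-difference treatment are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def circular_diff(a, b):
    """Smallest signed angular difference between two angles (radians)."""
    return np.angle(np.exp(1j * (a - b)))

def corner_matching_cost(theta_ref, theta_cur, edges, w_unary=1.0, w_binary=1.0):
    """Toy unary + binary cost for a candidate matching of layout corners.

    theta_ref, theta_cur: 1-D arrays of corner orientations (radians) in the
        reference and current frames; entry i of each array is a candidate pair.
    edges: list of (i, j) index pairs of directly connected layout corners.
    Lower cost = better matching.
    """
    # Unary term: orientation difference of each matched corner pair.
    unary = np.abs(circular_diff(theta_ref, theta_cur)).sum()

    # Binary term: change in the relative angle between connected corners.
    binary = sum(
        abs(circular_diff(theta_ref[i] - theta_ref[j], theta_cur[i] - theta_cur[j]))
        for i, j in edges
    )
    return w_unary * unary + w_binary * binary
```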
Seeing tones and hearing rectangles - Attending to simultaneous auditory and visual events
NASA Technical Reports Server (NTRS)
Casper, Patricia A.; Kantowitz, Barry H.
1985-01-01
The allocation of attention in dual-task situations depends on both the overall and the momentary demands associated with both tasks. Subjects in an inclusive-OR reaction-time task responded to changes in simultaneous sequences of discrete auditory and visual stimuli. Performance on individual trials was affected by (1) the ratio of stimuli in the two tasks, (2) response demands of the two tasks, and (3) patterns inherent in the demands of one task.
Ruotolo, Francesco; Ruggiero, Gennaro; Vinciguerra, Michela; Iachini, Tina
2012-02-01
The aim of this research is to assess whether the crucial factor in determining the characteristics of blind people's spatial mental images is concerned with the visual impairment per se or the processing style that the dominant perceptual modalities used to acquire spatial information impose, i.e. simultaneous (vision) vs sequential (kinaesthesis). Participants were asked to learn six positions in a large parking area via movement alone (congenitally blind, adventitiously blind, blindfolded sighted) or with vision plus movement (simultaneous sighted, sequential sighted), and then to mentally scan between positions in the path. The crucial manipulation concerned the sequential sighted group. Their visual exploration was made sequential by putting visual obstacles within the pathway in such a way that they could not see simultaneously the positions along the pathway. The results revealed a significant time/distance linear relation in all tested groups. However, the linear component was lower in sequential sighted and blind participants, especially congenital. Sequential sighted and congenitally blind participants showed an almost overlapping performance. Differences between groups became evident when mentally scanning farther distances (more than 5m). This threshold effect could be revealing of processing limitations due to the need of integrating and updating spatial information. Overall, the results suggest that the characteristics of the processing style rather than the visual impairment per se affect blind people's spatial mental images. Copyright © 2011 Elsevier B.V. All rights reserved.
Concurrent Vision Dysfunctions in Convergence Insufficiency with Traumatic Brain Injury
Alvarez, Tara L.; Kim, Eun H.; Vicci, Vincent R.; Dhar, Sunil K.; Biswal, Bharat B.; Barrett, A. M.
2012-01-01
Purpose This study assessed the prevalence of convergence insufficiency (CI) with and without simultaneous vision dysfunctions within the traumatic brain injury (TBI) sample population because although CI is commonly reported with TBI, the prevalence of concurrent visual dysfunctions with CI in TBI is unknown. Methods A retrospective analysis of 557 medical records from TBI civilian patients was conducted. Patients were all evaluated by a single optometrist. Visual acuity, oculomotor, binocular vision function, accommodation, visual fields, ocular health and vestibular function were assessed. Statistical comparisons between the CI and non-CI, as well as in-patient and out-patient subgroups, were conducted using chi-squared and Z-tests. Results Approximately 9% of the TBI sample had CI without the following simultaneous diagnoses: saccade or pursuit dysfunction; 3rd, 4th, or 6th nerve palsy; visual field deficit; visual spatial inattention/neglect; vestibular dysfunction or nystagmus. Photophobia with CI was observed in 16.3% (N=21/130) and vestibular dysfunction with CI was observed in 18.5% (N=24/130) of the CI subgroup. CI and cranial nerve palsies were common and yielded prevalence rates of 23.3% (N=130/557) and 26.9% (N=150/557), respectively, within the TBI sample. Accommodative dysfunction was common within the non-presbyopic TBI sample with a prevalence of 24.4% (N=76/314). Visual field deficits or unilateral visual spatial inattention/neglect were observed within 29.6% (N=80/270) of the TBI in-patient subgroup and were significantly more prevalent compared to the out-patient subgroup (p<0.001). Most TBI patients had visual acuities of 20/60 or better in the TBI sample (85%;N=473/557). Conclusions CI without simultaneous visual or vestibular dysfunctions was observed in about 9% of the visually symptomatic TBI civilian population studied. A thorough visual and vestibular examination is recommended for all TBI patients. PMID:23190716
Dual reporter transgene driven by 2.3Col1a1 promoter is active in differentiated osteoblasts
NASA Technical Reports Server (NTRS)
Marijanovic, Inga; Jiang, Xi; Kronenberg, Mark S.; Stover, Mary Louise; Erceg, Ivana; Lichtler, Alexander C.; Rowe, David W.
2003-01-01
AIM: As quantitative and spatial analyses of promoter reporter constructs are not easily performed in intact bone, we designed a reporter gene specific to bone, which could be analyzed both visually and quantitatively by using chloramphenicol acetyltransferase (CAT) and a cyan version of green fluorescent protein (GFPcyan), driven by a 2.3-kb fragment of the rat collagen promoter (Col2.3). METHODS: The construct Col2.3CATiresGFPcyan was used for generating transgenic mice. Quantitative measurement of promoter activity was performed by CAT analysis of different tissues derived from transgenic animals; localization was performed by visualized GFP in frozen bone sections. To assess transgene expression during in vitro differentiation, marrow stromal cell and neonatal calvarial osteoblast cultures were analyzed for CAT and GFP activity. RESULTS: In mice, CAT activity was detected in the calvaria, long bone, teeth, and tendon, whereas histology showed that GFP expression was limited to osteoblasts and osteocytes. In cell culture, increased activity of CAT correlated with increased differentiation, and GFP activity was restricted to mineralized nodules. CONCLUSION: The concept of a dual reporter allows a simultaneous visual and quantitative analysis of transgene activity in bone.
A pose estimation method for unmanned ground vehicles in GPS denied environments
NASA Astrophysics Data System (ADS)
Tamjidi, Amirhossein; Ye, Cang
2012-06-01
This paper presents a pose estimation method based on the 1-Point RANSAC EKF (Extended Kalman Filter) framework. The method fuses the depth data from a LIDAR and the visual data from a monocular camera to estimate the pose of an Unmanned Ground Vehicle (UGV) in a GPS-denied environment. Its estimation framework continuously updates the vehicle's 6D pose state and temporary estimates of the extracted visual features' 3D positions. In contrast to the conventional EKF-SLAM (Simultaneous Localization And Mapping) frameworks, the proposed method discards feature estimates from the extended state vector once they are no longer observed for several steps. As a result, the extended state vector always maintains a reasonable size that is suitable for online calculation. The fusion of laser and visual data is performed both in the feature initialization part of the EKF-SLAM process and in the motion prediction stage. A RANSAC pose calculation procedure is devised to produce a pose estimate for the motion model. The proposed method has been successfully tested on the Ford campus LIDAR-Vision dataset. The results are compared with the ground truth data of the dataset and the estimation error is ~1.9% of the path length.
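The key bookkeeping step the abstract describes, dropping feature estimates from the extended state once they have gone unobserved for several steps, can be sketched as follows; the function and variable names are hypothetical and this is not the authors' code.

```python
import numpy as np

def prune_stale_features(x, P, feature_ids, missed_counts, max_missed=5, pose_dim=6):
    """Drop landmark estimates that have gone unobserved for max_missed steps.

    x: state vector [6D pose, f1(x,y,z), f2(x,y,z), ...]
    P: covariance matrix matching x
    feature_ids: ids of the features, in the order they appear in x
    missed_counts: dict id -> consecutive frames without an observation
    Returns the reduced state, covariance, and remaining feature ids.
    """
    keep = np.ones(len(x), dtype=bool)
    kept_ids = []
    for k, fid in enumerate(feature_ids):
        start = pose_dim + 3 * k
        if missed_counts.get(fid, 0) >= max_missed:
            keep[start:start + 3] = False      # remove this feature's 3D position
        else:
            kept_ids.append(fid)
    return x[keep], P[np.ix_(keep, keep)], kept_ids
```

Pruning in this way keeps both the state vector and the covariance update cost bounded, which is the property the abstract highlights for online operation.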
Noothalapati, Hemanth; Sasaki, Takahiro; Kaino, Tomohiro; Kawamukai, Makoto; Ando, Masahiro; Hamaguchi, Hiro-o; Yamamoto, Tatsuyuki
2016-01-01
Fungal cell walls are medically important since they represent a drug target site for antifungal medication. So far there is no method to directly visualize structurally similar cell wall components such as α-glucan, β-glucan and mannan with high specificity, especially in a label-free manner. In this study, we have developed a Raman spectroscopy based molecular imaging method and combined multivariate curve resolution analysis to enable detection and visualization of multiple polysaccharide components simultaneously at the single cell level. Our results show that vegetative cell and ascus walls are made up of both α- and β-glucans while spore wall is exclusively made of α-glucan. Co-localization studies reveal the absence of mannans in ascus wall but are distributed primarily in spores. Such detailed picture is believed to further enhance our understanding of the dynamic spore wall architecture, eventually leading to advancements in drug discovery and development in the near future. PMID:27278218
O'Reagan, Douglas; Fleming, Lee
2018-01-01
The "FinFET" design for transistors, developed at the University of California, Berkeley, in the 1990s, represented a major leap forward in the semiconductor industry. Understanding its origins and importance requires deep knowledge of local factors, such as the relationships among the lab's principal investigators, students, staff, and the institution. It also requires understanding this lab within the broader network of relationships that comprise the semiconductor industry-a much more difficult task using traditional historical methods, due to the paucity of sources on industrial research. This article is simultaneously 1) a history of an impactful technology and its social context, 2) an experiment in using data tools and visualizations as a complement to archival and oral history sources, to clarify and explore these "big picture" dimensions, and 3) an introduction to specific data visualization tools that we hope will be useful to historians of technology more generally.
Smets, Karolien; Moors, Pieter; Reynvoet, Bert
2016-01-01
Performance in a non-symbolic comparison task in which participants are asked to indicate the larger numerosity of two dot arrays, is assumed to be supported by the Approximate Number System (ANS). This system allows participants to judge numerosity independently from other visual cues. Supporting this idea, previous studies indicated that numerosity can be processed when visual cues are controlled for. Consequently, distinct types of visual cue control are assumed to be interchangeable. However, a previous study showed that the type of visual cue control affected performance using a simultaneous presentation of the stimuli in numerosity comparison. In the current study, we explored whether the influence of the type of visual cue control on performance disappeared when sequentially presenting each stimulus in numerosity comparison. While the influence of the applied type of visual cue control was significantly more evident in the simultaneous condition, sequentially presenting the stimuli did not completely exclude the influence of distinct types of visual cue control. Altogether, these results indicate that the implicit assumption that it is possible to compare performances across studies with a differential visual cue control is unwarranted and that the influence of the type of visual cue control partly depends on the presentation format of the stimuli. PMID:26869967
Local unitary equivalence of quantum states and simultaneous orthogonal equivalence
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jing, Naihuan, E-mail: jing@ncsu.edu; Yang, Min; Zhao, Hui, E-mail: zhaohui@bjut.edu.cn
2016-06-15
The correspondence between local unitary equivalence of bipartite quantum states and simultaneous orthogonal equivalence is thoroughly investigated and strengthened. It is proved that local unitary equivalence can be studied through simultaneous similarity under projective orthogonal transformations, and four parametrization-independent algorithms are proposed to judge when two density matrices on ℂ^{d_1} ⊗ ℂ^{d_2} are locally unitary equivalent, in connection with trace identities, Kronecker pencils, Albert determinants and Smith normal forms.
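For readers outside quantum information, the notion under study can be restated in standard textbook form; the following restatement is not quoted from the paper.

```latex
% Standard definition of local unitary (LU) equivalence of bipartite states:
% \rho and \sigma on \mathbb{C}^{d_1} \otimes \mathbb{C}^{d_2} are LU-equivalent
% iff there exist unitaries U_1 \in U(d_1) and U_2 \in U(d_2) such that
\sigma \;=\; (U_1 \otimes U_2)\,\rho\,(U_1 \otimes U_2)^{\dagger}.
```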
ERIC Educational Resources Information Center
Chen, Y.; Norton, D. J.; McBain, R.; Gold, J.; Frazier, J. A.; Coyle, J. T.
2012-01-01
An important issue for understanding visual perception in autism concerns whether individuals with this neurodevelopmental disorder possess an advantage in processing local visual information, and if so, what is the nature of this advantage. Perception of movement speed is a visual process that relies on computation of local spatiotemporal signals…
Modes of Power in Technical and Professional Visuals.
ERIC Educational Resources Information Center
Barton, Ben F.; Barton, Marthalee S.
1993-01-01
Treats visuals as sites of power inscription. Advances a Foucauldian design model based on the Panopticon--Jeremy Bentham's architectural figure for empowerment based on bimodal surveillance. Notes that numerous examples serve in demonstrating that maximum effectiveness results when visuals foster simultaneous viewing in the two panoptic modes,…
The Extraction of Information From Visual Persistence
ERIC Educational Resources Information Center
Erwin, Donald E.
1976-01-01
This research sought to distinguish among three concepts of visual persistence by substituting the physical presence of the target stimulus while simultaneously inhibiting the formation of a persisting representation. Reportability of information about the stimuli was compared to a condition in which visual persistence was allowed to fully develop…
Prado, Chloé; Dubois, Matthieu; Valdois, Sylviane
2007-09-01
The eye movements of 14 French dyslexic children having a VA span reduction and 14 normal readers were compared in two tasks of visual search and text reading. The dyslexic participants made a higher number of rightward fixations in reading only. They simultaneously processed the same low number of letters in both tasks whereas normal readers processed far more letters in reading. Importantly, the children's VA span abilities related to the number of letters simultaneously processed in reading. The atypical eye movements of some dyslexic readers in reading thus appear to reflect difficulties to increase their VA span according to the task request.
Wide field-of-view, multi-region two-photon imaging of neuronal activity in the mammalian brain
Stirman, Jeffrey N.; Smith, Ikuko T.; Kudenov, Michael W.; Smith, Spencer L.
2016-01-01
Two-photon calcium imaging provides an optical readout of neuronal activity in populations of neurons with subcellular resolution. However, conventional two-photon imaging systems are limited in their field of view to ~1 mm², precluding the visualization of multiple cortical areas simultaneously. Here, we demonstrate a two-photon microscope with an expanded field of view (>9.5 mm²) for rapidly reconfigurable simultaneous scanning of widely separated populations of neurons. We custom designed and assembled an optimized scan engine, objective, and two independently positionable, temporally multiplexed excitation pathways. We used this new microscope to measure activity correlations between two cortical visual areas in mice during visual processing. PMID:27347754
Scribl: an HTML5 Canvas-based graphics library for visualizing genomic data over the web.
Miller, Chase A; Anthony, Jon; Meyer, Michelle M; Marth, Gabor
2013-02-01
High-throughput biological research requires simultaneous visualization as well as analysis of genomic data, e.g. read alignments, variant calls and genomic annotations. Traditionally, such integrative analysis required desktop applications operating on locally stored data. Many current terabyte-size datasets generated by large public consortia projects, however, are already only feasibly stored at specialist genome analysis centers. As even small laboratories can afford very large datasets, local storage and analysis are becoming increasingly limiting, and it is likely that most such datasets will soon be stored remotely, e.g. in the cloud. These developments will require web-based tools that enable users to access, analyze and view vast remotely stored data with a level of sophistication and interactivity that approximates desktop applications. As rapidly dropping cost enables researchers to collect data intended to answer questions in very specialized contexts, developers must also provide software libraries that empower users to implement customized data analyses and data views for their particular application. Such specialized, yet lightweight, applications would empower scientists to better answer specific biological questions than possible with general-purpose genome browsers currently available. Using recent advances in core web technologies (HTML5), we developed Scribl, a flexible genomic visualization library specifically targeting coordinate-based data such as genomic features, DNA sequence and genetic variants. Scribl simplifies the development of sophisticated web-based graphical tools that approach the dynamism and interactivity of desktop applications. Software is freely available online at http://chmille4.github.com/Scribl/ and is implemented in JavaScript with all modern browsers supported.
Moshirfar, Majid; Fenzl, Carlton R; Meyer, Jay J; Neuffer, Marcus C; Espandar, Ladan; Mifflin, Mark D
2011-02-01
To evaluate the safety, efficacy, and visual outcomes of simultaneous and sequential implantation of Intacs (Addition Technology, Inc, Sunnyvale, CA) and Verisyse phakic intraocular lens (AMO, Santa Ana, CA) in selected cases of ectatic corneal disease. John A. Moran Eye Center, University of Utah, UT. Prospective data were collected from 19 eyes of 12 patients (5 eyes, post-laser in situ keratomileusis ectasia and 14 eyes, keratoconus). Intacs segments were implanted followed by insertion of a phakic Verisyse lens at the same session (12 eyes) in the simultaneous group or several months later (7 eyes) in the sequential group. The uncorrected visual acuity, best spectacle-corrected visual acuity (BSCVA), and manifest refraction were recorded at each visit. No intraoperative or postoperative complications were observed. At the last follow-up (19 ± 6 months), in the simultaneous group, mean spherical error was -0.79 ± 1.0 diopter (D) (range, -2.0 to +1.50 D) and cylindrical error +2.06 ± 1.21 D (range, +0.5 to +3.75 D). In the sequential group, at the last follow-up, at 36 ± 21 months, the mean spherical error was -1.64 ± 1.31 D (range, -3.25 to +1.0 D) and cylindrical error +2.07 ± 1.03 D (range, +0.75 to +3.25 D). There were no significant differences in mean uncorrected visual acuity or BSCVA between the 2 groups preoperatively or postoperatively. No eye lost lines of preoperative BSCVA. Combined insertion of Intacs and Verisyse was safe and effective in all cases. The outcomes of the simultaneous implantation of the Intacs and Verisyse lens in 1 surgery were similar to the results achieved with sequential implantation using 2 surgeries.
Moshirfar, Majid; Bean, Andrew E; Albarracin, Julio C; Rebenitsch, Ronald L; Wallace, Ryan T; Birdsong, Orry C
2018-05-01
To report a retrospective study of simultaneous LASIK versus photorefractive keratectomy (PRK) with accompanying small-aperture cornea inlay implantation (KAMRA; AcuFocus, Inc., Irvine, CA) in treating presbyopia. Simultaneous LASIK/inlay and simultaneous PRK/inlay were performed on 79 and 47 patients, respectively. Follow-up examinations were conducted at 1, 3, and 6 months postoperatively. The main outcome measures were safety, efficacy, predictability, and stability, with primary emphasis on monocular uncorrected near visual acuity (UNVA). Both groups met U.S. Food and Drug Administration criteria for efficacy, with 95% and 55% of the LASIK/inlay group and 83% and 52% of the PRK/inlay group having a monocular UNVA of 20/40 (J5) and 20/25 (J2), respectively, at 6-month follow-up. Ninety-two percent of the LASIK/inlay group and 95% of the PRK/inlay group had an uncorrected distance visual acuity (UDVA) of 20/40 or better at 6 months. Two eyes lost one line of corrected distance visual acuity (CDVA). Mild hyperopic shift was noted in both groups at 6 months. Simultaneous PRK/inlay and LASIK/inlay meet the U.S. Food and Drug Administration standards for efficacy and safety based on 6-month preliminary results and have similar outcomes to emmetropic eyes. [J Refract Surg. 2018;34(5):310-315.]. Copyright 2018, SLACK Incorporated.
Edelman, Bradley J; Meng, Jianjun; Gulachek, Nicholas; Cline, Christopher C; He, Bin
2018-05-01
EEG-based brain-computer interface (BCI) technology creates non-biological pathways for conveying a user's mental intent solely through noninvasively measured neural signals. While optimizing the performance of a single task has long been the focus of BCI research, in order to translate this technology into everyday life, realistic situations, in which multiple tasks are performed simultaneously, must be investigated. In this paper, we explore the concept of cognitive flexibility, or multitasking, within the BCI framework by utilizing a 2-D cursor control task, using sensorimotor rhythms (SMRs), and a four-target visual attention task, using steady-state visual evoked potentials (SSVEPs), both individually and simultaneously. We found no significant difference between the accuracy of the tasks when executing them alone (SMR-57.9% ± 15.4% and SSVEP-59.0% ± 14.2%) and simultaneously (SMR-54.9% ± 17.2% and SSVEP-57.5% ± 15.4%). These modest decreases in performance were supported by similar, non-significant changes in the electrophysiology of the SSVEP and SMR signals. In this sense, we report that multiple BCI tasks can be performed simultaneously without a significant deterioration in performance; this finding will help drive these systems toward realistic daily use in which a user's cognition will need to be involved in multiple tasks at once.
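A common way to score the SSVEP component of such a hybrid task is canonical correlation against sinusoidal references at each candidate flicker frequency; the sketch below illustrates that standard detector and is not necessarily the pipeline used in the study (function names and parameters are assumptions).

```python
import numpy as np
from sklearn.cross_decomposition import CCA

def ssvep_score(eeg, freq, fs, n_harmonics=2):
    """Canonical correlation between an EEG window and sin/cos references
    at a candidate flicker frequency (standard CCA-based SSVEP detector).

    eeg: (n_samples, n_channels) array, e.g. occipital channels.
    freq: candidate stimulation frequency in Hz; fs: sampling rate in Hz.
    """
    t = np.arange(eeg.shape[0]) / fs
    refs = np.column_stack(
        [f(2 * np.pi * (h + 1) * freq * t)
         for h in range(n_harmonics) for f in (np.sin, np.cos)]
    )
    cca = CCA(n_components=1)
    cca.fit(eeg, refs)
    u, v = cca.transform(eeg, refs)
    return np.corrcoef(u[:, 0], v[:, 0])[0, 1]

# The attended target is taken as the candidate frequency with the largest score:
# target = max(candidate_freqs, key=lambda f: ssvep_score(window, f, fs))
```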
Lee, Soomin; Uchiyama, Yuria; Shimomura, Yoshihiro; Katsuura, Tetsuo
2017-11-17
The simultaneous exposure to blue and green light was reported to result in less melatonin suppression than monochromatic exposure to blue or green light. Here, we conducted an experiment using extremely short blue- and green-pulsed light to examine their visual and nonvisual effects on visual evoked potentials (VEPs), pupillary constriction, electroretinograms (ERGs), and subjective evaluations. Twelve adult male subjects were exposed to three light conditions: blue-pulsed light (2.5-ms pulse width), green-pulsed light (2.5-ms pulse width), and simultaneous blue- and green-pulsed light with white background light. We measured the subject's pupil diameter three times in each condition. Then, after 10 min of rest, the subject was exposed to the same three light conditions. We measured the averaged ERG and VEP during 210 pulsed-light exposures in each condition. We also determined subjective evaluations using a visual analog scale (VAS) method. The pupillary constriction during the simultaneous exposure to blue- and green-pulsed light was significantly lower than that during the blue-pulsed light exposure despite the double irradiance intensity of the combination. We also found that the b/|a| wave of the ERGs during the simultaneous exposure to blue- and green-pulsed light was lower than that during the blue-pulsed light exposure. We confirmed the subadditive response to pulsed light on pupillary constriction and ERG. However, the P100 of the VEPs during the blue-pulsed light were smaller than those during the simultaneous blue- and green-pulsed light and green-pulsed light, indicating that the P100 amplitude might depend on the luminance of light. Our findings demonstrated the effect of the subadditive response to extremely short pulsed light on pupillary constriction and ERG responses. The effects on ipRGCs by the blue-pulsed light exposure are apparently reduced by the simultaneous irradiation of green light. The blue versus yellow (b/y) bipolar cells in the retina might be responsible for this phenomenon.
Peller, Michael; Willerding, Linus; Limmer, Simone; Hossann, Martin; Dietrich, Olaf; Ingrisch, Michael; Sroka, Ronald; Lindner, Lars H
2016-09-10
The efficacy of systemically applied, classical anti-cancer drugs is limited by insufficient selectivity to the tumor and the applicable dose is limited by side effects. Efficacy could be further improved by targeting of the drug to the tumor. Using thermosensitive liposomes (TSL) as a drug carrier, targeting is achieved by control of temperature in the target volume. In such an approach, effective local hyperthermia (40-43°C) (HT) of the tumor is considered essential but technically challenging. Thus, visualization of local heating and drug release using TSL is considered an important tool for further improvement. Visualization and feasibility of chemodosimetry by magnetic resonance imaging (MRI) has previously been demonstrated using TSL encapsulating both, contrast agent (CA) and doxorubicin (DOX) simultaneously in the same TSL. Dosimetry has been facilitated using T1-relaxation time change as a surrogate marker for DOX deposition in the tumor. To allow higher loading of the TSL and to simplify clinical development of new TSL formulations a new approach using a mixture of TSL either loaded with DOX or MRI-CA is suggested. This was successfully tested using phosphatidyldiglycerol-based TSL (DPPG2-TSL) in Brown Norway rats with syngeneic soft tissue sarcomas (BN175) implanted at both hind legs. After intravenous application of DOX-TSL and CA-TSL, heating of one tumor above 40°C for 1h using laser light resulted in highly selective DOX uptake. The DOX-concentration in the heated tumor tissue compared to the non-heated tumor showed an almost 10-fold increase. T1 and additional MRI surrogate parameters such as signal phase change were correlated to intratumoral DOX concentration. Visualization of DOX delivery in the sense of a chemodosimetry was demonstrated. Although phase-based MR-thermometry was affected by CA-TSL, phase information was found suitable for DOX concentration assessment. Local differences of DOX concentration in the tumors indicated the need for visualization of drug release for further improvement of targeting. Copyright © 2016 Elsevier B.V. All rights reserved.
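For context on why a T1 change can stand in for drug concentration: contrast-agent concentration is commonly assumed to shorten T1 according to the linear fast-exchange relaxivity model below, which is an assumption of this note rather than a formula given in the abstract. Because CA-TSL and DOX-TSL release their payloads under the same local heating, the measured change in 1/T1 then tracks the co-delivered DOX, which is the surrogate relationship the study exploits.

```latex
% Linear (fast-exchange) relaxivity model -- an assumption of this note,
% not a formula given in the abstract:
%   c   = local contrast-agent concentration
%   r_1 = longitudinal relaxivity of the agent
\frac{1}{T_1(c)} \;=\; \frac{1}{T_{1,0}} \;+\; r_1\, c
```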
Barriga-Rivera, Alejandro; Morley, John W; Lovell, Nigel H; Suaning, Gregg J
2016-08-01
Researchers continue to develop visual prostheses towards safer and more efficacious systems. However limitations still exist in the number of stimulating channels that can be integrated. Therefore there is a need for spatial and time multiplexing techniques to provide improved performance of the current technology. In particular, bright and high-contrast visual scenes may require simultaneous activation of several electrodes. In this research, a 24-electrode array was suprachoroidally implanted in three normally-sighted cats. Multi-unit activity was recorded from the primary visual cortex. Four stimulation strategies were contrasted to provide activation of seven electrodes arranged hexagonally: simultaneous monopolar, sequential monopolar, sequential bipolar and hexapolar. Both monopolar configurations showed similar cortical activation maps. Hexapolar and sequential bipolar configurations activated a lower number of cortical channels. Overall, the return configuration played a more relevant role in cortical activation than time multiplexing and thus, rapid sequential stimulation may assist in reducing the number of channels required to activate large retinal areas.
Ray-based acoustic localization of cavitation in a highly reverberant environment.
Chang, Natasha A; Dowling, David R
2009-05-01
Acoustic detection and localization of cavitation have inherent advantages over optical techniques because cavitation bubbles are natural sound sources, and acoustic transduction of cavitation sounds does not require optical access to the region of cavitating flow. In particular, near cavitation inception, cavitation bubbles may be visually small and occur infrequently, but may still emit audible sound pulses. In this investigation, direct-path acoustic recordings of cavitation events are made with 16 hydrophones mounted on the periphery of a water tunnel test section containing a low-cavitation-event-rate vortical flow. These recordings are used to localize the events in three dimensions via cross correlations to obtain arrival time differences. Here, bubble localization is hindered by reverberation, background noise, and the fact that both the pulse emission time and waveform are unknown. These hindrances are partially mitigated by a signal-processing scheme that incorporates straight-ray acoustic propagation and Monte-Carlo techniques for compensating ray-path, sound-speed, and hydrophone-location uncertainties. The acoustic localization results are compared to simultaneous optical localization results from dual-camera high-speed digital-video recordings. For 53 bubbles and a peak-signal to noise ratio frequency of 6.7 kHz, the root-mean-square spatial difference between optical and acoustic bubble location results was 1.94 cm. Parametric dependences in acoustic localization performance are also presented.
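The first processing step named in the abstract, turning pairs of hydrophone recordings into arrival-time differences via cross-correlation, can be sketched as below; the function name is an illustrative assumption, and the authors' full scheme additionally compensates ray-path, sound-speed, and sensor-location uncertainties with Monte-Carlo sampling.

```python
import numpy as np

def arrival_time_difference(sig_a, sig_b, fs):
    """Arrival-time difference (seconds) of the same cavitation pulse on two
    hydrophones, taken from the peak of their cross-correlation."""
    corr = np.correlate(sig_a, sig_b, mode="full")
    lag = np.argmax(corr) - (len(sig_b) - 1)   # lag in samples (np.correlate convention)
    return lag / fs

# With pairwise differences t_ij, hydrophone positions r_i, and sound speed c,
# the source position x satisfies |x - r_i| - |x - r_j| = c * t_ij for each pair,
# a system usually solved by nonlinear least squares.
```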
The Cloud-Based Integrated Data Viewer (IDV)
NASA Astrophysics Data System (ADS)
Fisher, Ward
2015-04-01
Maintaining software compatibility across new computing environments and the associated underlying hardware is a common problem for software engineers and scientific programmers. While there are a suite of tools and methodologies used in traditional software engineering environments to mitigate this issue, they are typically ignored by developers lacking a background in software engineering. The result is a large body of software which is simultaneously critical and difficult to maintain. Visualization software is particularly vulnerable to this problem, given the inherent dependency on particular graphics hardware and software API's. The advent of cloud computing has provided a solution to this problem, which was not previously practical on a large scale; Application Streaming. This technology allows a program to run entirely on a remote virtual machine while still allowing for interactivity and dynamic visualizations, with little-to-no re-engineering required. Through application streaming we are able to bring the same visualization to a desktop, a netbook, a smartphone, and the next generation of hardware, whatever it may be. Unidata has been able to harness Application Streaming to provide a tablet-compatible version of our visualization software, the Integrated Data Viewer (IDV). This work will examine the challenges associated with adapting the IDV to an application streaming platform, and include a brief discussion of the underlying technologies involved. We will also discuss the differences between local software and software-as-a-service.
A Discussion of Assessment Needs in Manual Communication for Pre-College Students.
ERIC Educational Resources Information Center
Cokely, Dennis R.
The paper reviews issues in evaluating the manual communications skills of pre-college hearing impaired students, including testing of visual discrimination and visual memory, simultaneous communication, and attention span. (CL)
A visual-environment simulator with variable contrast
NASA Astrophysics Data System (ADS)
Gusarova, N. F.; Demin, A. V.; Polshchikov, G. V.
1987-01-01
A visual-environment simulator is proposed in which the image contrast can be varied continuously up to the reversal of the image. Contrast variability can be achieved by using two independently adjustable light sources to simultaneously illuminate the carrier of visual information (e.g., a slide or a cinematographic film). It is shown that such a scheme makes it possible to adequately model a complex visual environment.
The bandwidth of consolidation into visual short-term memory (VSTM) depends on the visual feature
Miller, James R.; Becker, Mark W.; Liu, Taosheng
2014-01-01
We investigated the nature of the bandwidth limit in the consolidation of visual information into visual short-term memory. In the first two experiments, we examined whether previous results showing differential consolidation bandwidth for color and orientation resulted from methodological differences by testing the consolidation of color information with methods used in prior orientation experiments. We briefly presented two color patches with masks, either sequentially or simultaneously, followed by a location cue indicating the target. Participants identified the target color via button-press (Experiment 1) or by clicking a location on a color wheel (Experiment 2). Although these methods have previously demonstrated that two orientations are consolidated in a strictly serial fashion, here we found equivalent performance in the sequential and simultaneous conditions, suggesting that two colors can be consolidated in parallel. To investigate whether this difference resulted from different consolidation mechanisms or a common mechanism with different features consuming different amounts of bandwidth, Experiment 3 presented a color patch and an oriented grating either sequentially or simultaneously. We found a lower performance in the simultaneous than the sequential condition, with orientation showing a larger impairment than color. These results suggest that consolidation of both features share common mechanisms. However, it seems that color requires less information to be encoded than orientation. As a result two colors can be consolidated in parallel without exceeding the bandwidth limit, whereas two orientations or an orientation and a color exceed the bandwidth and appear to be consolidated serially. PMID:25317065
Non-linear imaging techniques visualize the lipid profile of C. elegans
NASA Astrophysics Data System (ADS)
Mari, Meropi; Petanidou, Barbara; Palikaras, Konstantinos; Fotakis, Costas; Tavernarakis, Nektarios; Filippidis, George
2015-07-01
The non-linear techniques Second and Third Harmonic Generation (SHG, THG) have been employed simultaneously to record three-dimensional (3D) images and localize the lipid content of the muscular areas (ectopic fat) of Caenorhabditis elegans (C. elegans). Simultaneously, Two-Photon Excitation Fluorescence (TPEF) was used initially to localize the lipids stained with Nile Red, but also to confirm the potential of THG to image lipids successfully. In addition, GFP labelling of the somatic muscles proves the initial suggestion of the existence of ectopic fat on the muscles and provides complementary information to the SHG imaging of the pharynx. The ectopic fat may be related to a complex of pathological conditions including type-2 diabetes, hypertension and cardiovascular diseases. The elucidation of the molecular path leading to the development of metabolic syndrome is a vital issue with high biological significance and necessitates accurate methods capable of monitoring lipid storage distribution and dynamics in vivo. THG microscopy was employed as a quantitative tool to monitor the lipid accumulation in non-adipose tissues in the pharyngeal muscles of 12 unstained specimens, while the SHG imaging revealed the anatomical structure of the muscles. The ectopic fat accumulation on the pharyngeal muscles increases in wild type (N2) C. elegans between 1 and 9 days of adulthood, suggesting a correlation of ectopic fat accumulation with aging. Our results can provide new evidence relating the deposition of ectopic fat with aging, but also validate SHG and THG microscopy modalities as new, non-invasive tools capable of selectively localizing and quantifying lipid accumulation and distribution.
RAMTaB: Robust Alignment of Multi-Tag Bioimages
Raza, Shan-e-Ahmed; Humayun, Ahmad; Abouna, Sylvie; Nattkemper, Tim W.; Epstein, David B. A.; Khan, Michael; Rajpoot, Nasir M.
2012-01-01
Background: In recent years, new microscopic imaging techniques have evolved to allow us to visualize several different proteins (or other biomolecules) in a visual field. Analysis of protein co-localization becomes viable because molecules can interact only when they are located close to each other. We present a novel approach to align images in a multi-tag fluorescence image stack. The proposed approach is applicable to multi-tag bioimaging systems which (a) acquire fluorescence images by sequential staining and (b) simultaneously capture a phase contrast image corresponding to each of the fluorescence images. To the best of our knowledge, there is no existing method in the literature which addresses simultaneous registration of multi-tag bioimages and selection of the reference image in order to maximize the overall overlap between the images. Methodology/Principal Findings: We employ a block-based method for registration, which yields a confidence measure to indicate the accuracy of our registration results. We derive a shift metric in order to select the Reference Image with Maximal Overlap (RIMO), in turn minimizing the total amount of non-overlapping signal for a given number of tags. Experimental results show that the Robust Alignment of Multi-Tag Bioimages (RAMTaB) framework is robust to variations in contrast and illumination, yields sub-pixel accuracy, and successfully selects the reference image resulting in maximum overlap. The registration results are also shown to significantly improve any follow-up protein co-localization studies. Conclusions: For the discovery of protein complexes and of functional protein networks within a cell, alignment of the tag images in a multi-tag fluorescence image stack is a key pre-processing step. The proposed framework is shown to produce accurate alignment results on both real and synthetic data. Our future work will use the aligned multi-channel fluorescence image data for normal and diseased tissue specimens to analyze molecular co-expression patterns and functional protein networks. PMID:22363510
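A toy version of the two ideas named in the abstract, estimating pairwise shifts between channels and then picking the reference image whose total shift to all others is minimal (the RIMO idea), might look like this; phase correlation is used here only as a stand-in for the paper's block-based estimator, and all names are assumptions.

```python
import numpy as np

def pairwise_shift(img_a, img_b):
    """Translation (dy, dx) between two same-size images, estimated by phase
    correlation (a stand-in for the block-based estimator used in the paper)."""
    f = np.fft.fft2(img_a) * np.conj(np.fft.fft2(img_b))
    corr = np.fft.ifft2(f / (np.abs(f) + 1e-12)).real
    peak = np.array(np.unravel_index(np.argmax(corr), corr.shape), dtype=float)
    shape = np.array(corr.shape)
    peak[peak > shape / 2] -= shape[peak > shape / 2]   # wrap negative shifts
    return peak

def reference_with_maximal_overlap(images):
    """Pick the image whose summed shift magnitude to all others is smallest,
    mirroring the idea behind the RIMO criterion."""
    n = len(images)
    total = np.zeros(n)
    for i in range(n):
        for j in range(n):
            if i != j:
                total[i] += np.linalg.norm(pairwise_shift(images[i], images[j]))
    return int(np.argmin(total))
```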
Differential Distraction Effects in Short-Term and Long-Term Retention of Pictures and Words
ERIC Educational Resources Information Center
Pellegrino, James W.; And Others
1976-01-01
Comparisons between recall levels following simple acoustic or visual tasks and the simultaneous visual-plus-acoustic task are not based upon equivalent amounts of interference within each modality. This research attempts to test more precisely the relationship between visual and acoustic interference by using a sequential rather than a…
Differential Age Effects on Spatial and Visual Working Memory
ERIC Educational Resources Information Center
Oosterman, Joukje M.; Morel, Sascha; Meijer, Lisette; Buvens, Cleo; Kessels, Roy P. C.; Postma, Albert
2011-01-01
The present study was intended to compare age effects on visual and spatial working memory by using two versions of the same task that differed only in presentation mode. The working memory task contained both a simultaneous and a sequential presentation mode condition, reflecting, respectively, visual and spatial working memory processes. Young…
Presence of strong harmonics during visual entrainment: a magnetoencephalography study.
Heinrichs-Graham, Elizabeth; Wilson, Tony W
2012-09-01
Visual neurons are known to synchronize their firing with stimuli that flicker at a constant rate (e.g. 12 Hz). These so-called visual steady-state responses (VSSR) are a well-studied phenomenon, yet the underlying mechanisms are widely disagreed upon. Furthermore, there is limited evidence that visual neurons may simultaneously synchronize at harmonics of the stimulation frequency. We utilized magnetoencephalography (MEG) to examine synchronization at harmonics of the visual stimulation frequency (18 Hz). MEG data were analyzed for event-related synchronization (ERS) at the fundamental frequency, 36, 54, and 72 Hz. We found strong ERS in all bands. Only 31% of participants showed maximum entrainment at the fundamental; others showed stronger entrainment at either 36 or 54 Hz. The cortical foci of these responses indicated that the harmonics involved cortices that were partially distinct from the fundamental. These findings suggest that spatially-overlapping subpopulations of neurons are simultaneously entrained at different harmonics of the stimulus frequency. Copyright © 2012 Elsevier B.V. All rights reserved.
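A simple frequency-domain proxy for the entrainment measure, power at the stimulation frequency and its harmonics, is sketched below; this is a generic illustration (names and bandwidth are assumptions), not the MEG source-space ERS pipeline used in the study.

```python
import numpy as np

def harmonic_power(signal, fs, fundamental=18.0, n_harmonics=4, half_bw=0.5):
    """Spectral power in narrow bands around the stimulation frequency and its
    harmonics (e.g. 18, 36, 54, 72 Hz) for a single sensor or source time series."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2
    powers = {}
    for k in range(1, n_harmonics + 1):
        f0 = k * fundamental
        band = (freqs >= f0 - half_bw) & (freqs <= f0 + half_bw)
        powers[f0] = psd[band].sum()
    return powers   # e.g. {18.0: ..., 36.0: ..., 54.0: ..., 72.0: ...}
```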
Modulation of early cortical processing during divided attention to non-contiguous locations
Frey, Hans-Peter; Schmid, Anita M.; Murphy, Jeremy W.; Molholm, Sophie; Lalor, Edmund C.; Foxe, John J.
2015-01-01
We often face the challenge of simultaneously attending to multiple non-contiguous regions of space. There is ongoing debate as to how spatial attention is divided under these situations. While for several years the predominant view was that humans could divide the attentional spotlight, several recent studies argue in favor of a unitary spotlight that rhythmically samples relevant locations. Here, this issue was addressed using high-density electrophysiology in concert with the multifocal m-sequence technique to examine visual evoked responses to multiple simultaneous streams of stimulation. Concurrently, we assayed the topographic distribution of alpha-band oscillatory mechanisms, a measure of attentional suppression. Participants performed a difficult detection task that required simultaneous attention to two stimuli in contiguous (undivided) or non-contiguous parts of space. In the undivided condition, the classical pattern of attentional modulation was observed, with increased amplitude of the early visual evoked response and increased alpha amplitude ipsilateral to the attended hemifield. For the divided condition, early visual responses to attended stimuli were also enhanced and the observed multifocal topographic distribution of alpha suppression was in line with the divided attention hypothesis. These results support the existence of divided attentional spotlights, providing evidence that the corresponding modulation occurs during initial sensory processing timeframes in hierarchically early visual regions and that suppressive mechanisms of visual attention selectively target distracter locations during divided spatial attention. PMID:24606564
Annibale, Paolo; Gratton, Enrico
2015-01-01
Multi-cell biochemical assays and single cell fluorescence measurements revealed that the elongation rate of Polymerase II (PolII) in eukaryotes varies largely across different cell types and genes. However, there is not yet a consensus whether intrinsic factors such as the position, local mobility or the engagement by an active molecular mechanism of a genetic locus could be the determinants of the observed heterogeneity. Here, by employing high-speed 3D fluorescence nanoimaging techniques, we resolve and track at the single cell level multiple, distinct regions of mRNA synthesis within the model system of a large transgene array. We demonstrate that these regions are active transcription sites that release mRNA molecules in the nucleoplasm. Using fluctuation spectroscopy and the phasor analysis approach, we were able to extract the local PolII elongation rate at each site as a function of time. We measured a four-fold variation in the average elongation rate between identical copies of the same gene measured simultaneously within the same cell, demonstrating a correlation between local transcription kinetics and the movement of the transcription site. Together these observations demonstrate that local factors, such as local chromatin mobility and the microenvironment of the transcription site, are an important source of variability in transcription kinetics. PMID:25788248
Ultrafast Microscopy of Energy and Charge Transport
NASA Astrophysics Data System (ADS)
Huang, Libai
The frontier in solar energy research now lies in learning how to integrate functional entities across multiple length scales to create optimal devices. Advancing the field requires transformative experimental tools that probe energy transfer processes from the nano to the meso lengthscales. To address this challenge, we aim to understand multi-scale energy transport across both multiple length and time scales, coupling simultaneous high spatial, structural, and temporal resolution. In my talk, I will focus on our recent progress on visualization of exciton and charge transport in solar energy harvesting materials from the nano to mesoscale employing ultrafast optical nanoscopy. With approaches that combine spatial and temporal resolutions, we have recently revealed a new singlet-mediated triplet transport mechanism in certain singlet fission materials. This work demonstrates a new triplet exciton transport mechanism leading to favorable long-range triplet exciton diffusion on the picosecond and nanosecond timescales for solar cell applications. We have also performed a direct measurement of carrier transport in space and in time by mapping carrier density with simultaneous ultrafast time resolution and 50 nm spatial precision in perovskite thin films using transient absorption microscopy. These results directly visualize long-range carrier transport of 220nm in 2 ns for solution-processed polycrystalline CH3NH3PbI3 thin films. The spatially and temporally resolved measurements reported here underscore the importance of the local morphology and establish an important first step towards discerning the underlying transport properties of perovskite materials.
Takeuchi, Kazuhito; Nagatani, Tetsuya; Watanabe, Tadashi; Okumura, Eriko; Sato, Yusuke; Wakabayashi, Toshihiko
2015-01-01
A combined transsphenoidal-transcranial approach for the resection of pituitary adenomas has previously been reported. While this approach is useful for specific types of pituitary adenomas, it is an invasive technique. To reduce the invasiveness of this approach, we adopted the keyhole concept for pituitary adenoma resection. A 23-year-old man presented at a local hospital with a 6-month history of bilateral hemianopia. Magnetic resonance imaging revealed a large pituitary adenoma extending from the sella turcica toward the right frontal lobe. Endoscopic transsphenoidal surgery was planned at a local hospital; however, the operation was abandoned at the start of the resection because of the firm and fibrous nature of the tumor. The patient was subsequently referred to our hospital for additional surgery. The tumor was removed purely endoscopically via a transsphenoidal and transcranial route. Keyhole craniotomy, 3 cm in diameter, was performed, and a tubular retractor was used to achieve a wider surgical corridor; this enabled better visualization and dissection from the surrounding brain and provided enough room for the use of surgical instruments under endoscopic view. The tumor was successfully removed without complication. This is the first case report to describe the resection of a giant pituitary adenoma using a purely endoscopic and simultaneous transsphenoidal and transcranial keyhole approach. PMID:28663976
Shenai, Mahesh B; Tubbs, R Shane; Guthrie, Barton L; Cohen-Gadol, Aaron A
2014-08-01
The shortage of surgeons compels the development of novel technologies that geographically extend the capabilities of individual surgeons and enhance surgical skills. The authors have developed "Virtual Interactive Presence" (VIP), a platform that allows remote participants to simultaneously view each other's visual field, creating a shared field of view for real-time surgical telecollaboration. The authors demonstrate the capability of VIP to facilitate long-distance telecollaboration during cadaveric dissection. Virtual Interactive Presence consists of local and remote workstations with integrated video capture devices and video displays. Each workstation mutually connects via commercial teleconferencing devices, allowing worldwide point-to-point communication. Software composites the local and remote video feeds, displaying a hybrid perspective to each participant. For demonstration, local and remote VIP stations were situated in Indianapolis, Indiana, and Birmingham, Alabama, respectively. A suboccipital craniotomy and microsurgical dissection of the pineal region was performed in a cadaveric specimen using VIP. Task and system performance were subjectively evaluated, while additional video analysis was used for objective assessment of delay and resolution. Participants at both stations were able to visually and verbally interact while identifying anatomical structures, guiding surgical maneuvers, and discussing overall surgical strategy. Video analysis of 3 separate video clips yielded a mean compositing delay of 760 ± 606 msec (when compared with the audio signal). Image resolution was adequate to visualize complex intracranial anatomy and provide interactive guidance. Virtual Interactive Presence is a feasible paradigm for real-time, long-distance surgical telecollaboration. Delay, resolution, scaling, and registration are parameters that require further optimization, but are within the realm of current technology. The paradigm potentially enables remotely located experts to mentor less experienced personnel located at the surgical site with applications in surgical training programs, remote proctoring for proficiency, and expert support for rural settings and across different countries.
NASA Astrophysics Data System (ADS)
Picazo-Bueno, José Ángel; Cojoc, Dan; Torre, Vincent; Micó, Vicente
2017-07-01
We present the combination of single-shot water-immersion digital holographic microscopy with broadband illumination for the simultaneous visualization of coherent and incoherent images, using microbeads and different biosamples.
Neji, Radhouene; Phinikaridou, Alkystis; Whitaker, John; Botnar, René M.; Prieto, Claudia
2017-01-01
Purpose To develop a 3D whole‐heart Bright‐blood and black‐blOOd phase SensiTive (BOOST) inversion recovery sequence for simultaneous noncontrast enhanced coronary lumen and thrombus/hemorrhage visualization. Methods The proposed sequence alternates the acquisition of two bright‐blood datasets preceded by different preparatory pulses to obtain variations in blood/myocardium contrast, which then are combined in a phase‐sensitive inversion recovery (PSIR)‐like reconstruction to obtain a third, coregistered, black‐blood dataset. The bright‐blood datasets are used for both visualization of the coronary lumen and motion estimation, whereas the complementary black‐blood dataset potentially allows for thrombus/hemorrhage visualization. Furthermore, integration with 2D image‐based navigation enables 100% scan efficiency and predictable scan times. The proposed sequence was compared to conventional coronary MR angiography (CMRA) and PSIR sequences in a standardized phantom and in healthy subjects. Feasibility for thrombus depiction was tested ex vivo. Results With BOOST, the coronary lumen is visualized with significantly higher (P < 0.05) contrast‐to‐noise ratio and vessel sharpness when compared to conventional CMRA. Furthermore, BOOST showed effective blood signal suppression as well as feasibility for thrombus visualization ex vivo. Conclusion A new PSIR sequence for noncontrast enhanced simultaneous coronary lumen and thrombus/hemorrhage detection was developed. The sequence provided improved coronary lumen depiction and showed potential for thrombus visualization. Magn Reson Med 79:1460–1472, 2018. © 2017 International Society for Magnetic Resonance in Medicine. This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited. PMID:28722267
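The BOOST record above relies on a phase-sensitive inversion recovery (PSIR)-like combination of two co-registered bright-blood datasets. As a rough illustration of the generic phase-sensitive idea only, not the authors' actual BOOST reconstruction, the following Python sketch combines a complex inversion-recovery image with a reference image that shares its background phase; all array values are invented toy data.

```python
import numpy as np

def psir_combine(ir_img: np.ndarray, ref_img: np.ndarray) -> np.ndarray:
    """Generic phase-sensitive (PSIR-style) combination of a complex
    inversion-recovery image with a complex reference image that shares
    its background phase (illustration only, not the BOOST method)."""
    # Unit phasor of the reference image (guard against division by zero).
    ref_phase = ref_img / np.maximum(np.abs(ref_img), 1e-12)
    # Remove the shared background phase and keep the signed real part,
    # which preserves the polarity of the inversion-recovery signal.
    return np.real(ir_img * np.conj(ref_phase))

# Toy data: a signed "true" IR signal modulated by an arbitrary background phase.
rng = np.random.default_rng(1)
true_signed = np.array([[1.0, -0.5], [2.0, -1.5]])
background = np.exp(1j * rng.uniform(0, 2 * np.pi, true_signed.shape))
ir = true_signed * background
ref = 0.8 * background          # reference shares only the phase
print(psir_combine(ir, ref))    # recovers the signed values
```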
Mucocele in an Onodi cell with simultaneous bilateral visual disturbance.
Fukuda, Yoichiro; Chikamatsu, Kazuaki; Ninomiya, Hiroshi; Yasuoka, Yoshihito; Miyashita, Motoaki; Furuya, Nobuhiko
2006-06-01
The Onodi cell is a large pneumatized posterior ethmoid cell that lies in close relation to the optic nerve. We present an extremely rare case of retrobulbar optic neuropathy caused by a mucocele in an Onodi cell. A 79-year-old man complained of headaches and simultaneous bilateral visual disturbance. A computed tomography (CT) scan demonstrated a mucocele in an Onodi cell that involved both optic nerves. Surgical treatment via a transnasal endoscopic approach was performed, resulting in improvement of visual acuity. Both optic nerves were identified along the lateral walls of the Onodi cell, accompanied by bone defects. Even an isolated and/or small lesion in an Onodi cell may be closely related to ocular symptoms. Imaging studies should be considered in the differential diagnosis, because early diagnosis and prompt surgical treatment of a mucocele are needed for recovery of visual function.
Shih, Wenting; Yamada, Soichiro
2011-12-22
Traditionally, cell migration has been studied on two-dimensional, stiff plastic surfaces. However, during important biological processes such as wound healing, tissue regeneration, and cancer metastasis, cells must navigate through complex, three-dimensional extracellular tissue. To better understand the mechanisms behind these biological processes, it is important to examine the roles of the proteins responsible for driving cell migration. Here, we outline a protocol to study the mechanisms of cell migration using the epithelial cell line (MDCK), and a three-dimensional, fibrous, self-polymerizing matrix as a model system. This optically clear extracellular matrix is easily amenable to live-cell imaging studies and better mimics the physiological, soft tissue environment. This report demonstrates a technique for directly visualizing protein localization and dynamics, and deformation of the surrounding three-dimensional matrix. Examination of protein localization and dynamics during cellular processes provides key insight into protein functions. Genetically encoded fluorescent tags provide a unique method for observing protein localization and dynamics. Using this technique, we can analyze the subcellular accumulation of key, force-generating cytoskeletal components in real-time as the cell maneuvers through the matrix. In addition, using multiple fluorescent tags with different wavelengths, we can examine the localization of multiple proteins simultaneously, thus allowing us to test, for example, whether different proteins have similar or divergent roles. Furthermore, the dynamics of fluorescently tagged proteins can be quantified using Fluorescent Recovery After Photobleaching (FRAP) analysis. This measurement assays the protein mobility and how stably bound the proteins are to the cytoskeletal network. By combining live-cell imaging with the treatment of protein function inhibitors, we can examine in real-time the changes in the distribution of proteins and morphology of migrating cells. Furthermore, we also combine live-cell imaging with the use of fluorescent tracer particles embedded within the matrix to visualize the matrix deformation during cell migration. Thus, we can visualize how a migrating cell distributes force-generating proteins, and where the traction forces are exerted to the surrounding matrix. Through these techniques, we can gain valuable insight into the roles of specific proteins and their contributions to the mechanisms of cell migration.
Arrestin 1 and Cone Arrestin 4 Have Unique Roles in Visual Function in an All-Cone Mouse Retina.
Deming, Janise D; Pak, Joseph S; Shin, Jung-A; Brown, Bruce M; Kim, Moon K; Aung, Moe H; Lee, Eun-Jin; Pardue, Machelle T; Craft, Cheryl Mae
2015-12-01
Previous studies discovered cone phototransduction shutoff occurs normally for Arr1-/- and Arr4-/-; however, it is defective when both visual arrestins are simultaneously not expressed (Arr1-/-Arr4-/-). We investigated the roles of visual arrestins in an all-cone retina (Nrl-/-) since each arrestin has differential effects on visual function, including ARR1 for normal light adaptation, and ARR4 for normal contrast sensitivity and visual acuity. We examined Nrl-/-, Nrl-/-Arr1-/-, Nrl-/-Arr4-/-, and Nrl-/-Arr1-/-Arr4-/- mice with photopic electroretinography (ERG) to assess light adaptation and retinal responses, immunoblot and immunohistochemical localization analysis to measure retinal expression levels of M- and S-opsin, and optokinetic tracking (OKT) to measure the visual acuity and contrast sensitivity. Study results indicated that Nrl-/- and Nrl-/-Arr4-/- mice light adapted normally, while Nrl-/-Arr1-/- and Nrl-/-Arr1-/-Arr4-/- mice did not. Photopic ERG a-wave, b-wave, and flicker amplitudes followed a general pattern in which Nrl-/-Arr4-/- amplitudes were higher than the amplitudes of Nrl-/-, while the amplitudes of Nrl-/-Arr1-/- and Nrl-/-Arr1-/-Arr4-/- were lower. All three visual arrestin knockouts had faster implicit times than Nrl-/- mice. M-opsin expression is lower when ARR1 is not expressed, while S-opsin expression is lower when ARR4 is not expressed. Although M-opsin expression is mislocalized throughout the photoreceptor cells, S-opsin is confined to the outer segments in all genotypes. Contrast sensitivity is decreased when ARR4 is not expressed, while visual acuity was normal except in Nrl-/-Arr1-/-Arr4-/-. Based on the opposite visual phenotypes in an all-cone retina in the Nrl-/-Arr1-/- and Nrl-/-Arr4-/- mice, we conclude that ARR1 and ARR4 perform unique modulatory roles in cone photoreceptors.
Zhao, Jing; Kwok, Rosa K. W.; Liu, Menglian; Liu, Hanlong; Huang, Chen
2017-01-01
Reading fluency is a critical skill that improves the quality of our daily life and our working efficiency. The majority of previous studies focused on oral reading fluency rather than silent reading fluency, which is a much more dominant reading mode used in middle and high school and for leisure reading. It is still unclear whether oral and silent reading fluency involve the same underlying skills. To address this issue, the present study examined the relationship between rapid visual processing and Chinese reading fluency in different modes. Fifty-eight undergraduate students took part in the experiment. The phantom contour paradigm and the visual 1-back task were adopted to measure rapid visual temporal and simultaneous processing, respectively; these two tasks reflect the temporal and spatial dimensions of rapid visual processing. We recorded the temporal threshold in the phantom contour task, as well as reaction time and accuracy in the visual 1-back task. Reading fluency was measured at both the single-character and sentence levels. Fluent reading of single characters was assessed with a paper-and-pencil lexical decision task, and a sentence verification task was developed to examine reading fluency at the sentence level. The reading fluency test at each level was conducted twice (i.e., oral reading and silent reading), and reading speed and accuracy were recorded. The correlation analysis showed that the temporal threshold in the phantom contour task did not correlate with the scores on the reading fluency tests. Although the reaction time in the visual 1-back task correlated with reading speed for both oral and silent reading fluency, the comparison of the correlation coefficients revealed a closer relationship between rapid visual simultaneous processing and silent reading. Furthermore, rapid visual simultaneous processing made a significant contribution to reading fluency in the silent mode but not in the oral mode. These findings suggest that the underlying mechanisms of oral and silent reading fluency differ from the earliest stages of basic visual coding. The current results might also reveal a modulation by the language characteristics of Chinese of the relationship between rapid visual processing and reading fluency. PMID:28119663
Simultaneous selection by object-based attention in visual and frontal cortex
Pooresmaeili, Arezoo; Poort, Jasper; Roelfsema, Pieter R.
2014-01-01
Models of visual attention hold that top-down signals from frontal cortex influence information processing in visual cortex. It is unknown whether situations exist in which visual cortex actively participates in attentional selection. To investigate this question, we simultaneously recorded neuronal activity in the frontal eye fields (FEF) and primary visual cortex (V1) during a curve-tracing task in which attention shifts are object-based. We found that accurate performance was associated with similar latencies of attentional selection in both areas and that the latency in both areas increased if the task was made more difficult. The amplitude of the attentional signals in V1 saturated early during a trial, whereas these selection signals kept increasing for a longer time in FEF, until the moment of an eye movement, as if FEF integrated attentional signals present in early visual cortex. In erroneous trials, we observed an interareal latency difference because FEF selected the wrong curve before V1 and imposed its erroneous decision onto visual cortex. The neuronal activity in visual and frontal cortices was correlated across trials, and this trial-to-trial coupling was strongest for the attended curve. These results imply that selective attention relies on reciprocal interactions within a large network of areas that includes V1 and FEF. PMID:24711379
Visual Spatial Attention to Multiple Locations At Once: The Jury Is Still Out
ERIC Educational Resources Information Center
Jans, Bert; Peters, Judith C.; De Weerd, Peter
2010-01-01
Although in traditional attention research the focus of visual spatial attention has been considered as indivisible, many studies in the last 15 years have claimed the contrary. These studies suggest that humans can direct their attention simultaneously to multiple noncontiguous regions of the visual field upon mere instruction. The notion that…
ERIC Educational Resources Information Center
Chang, Ting-Wen; Kinshuk; Chen, Nian-Shing; Yu, Pao-Ta
2012-01-01
This study investigates the effects of successive and simultaneous information presentation methods on learner's visual search ability and working memory load for different information densities. Since the processing of information in the brain depends on the capacity of visual short-term memory (VSTM), the limited information processing capacity…
Combaz, Adrien; Van Hulle, Marc M
2015-01-01
We study the feasibility of a hybrid Brain-Computer Interface (BCI) combining simultaneous visual oddball and Steady-State Visually Evoked Potential (SSVEP) paradigms, where both types of stimuli are superimposed on a computer screen. Potentially, such a combination could result in a system being able to operate faster than a purely P300-based BCI and encode more targets than a purely SSVEP-based BCI. We analyse the interactions between the brain responses of the two paradigms, and assess the possibility to detect simultaneously the brain activity evoked by both paradigms, in a series of 3 experiments where EEG data are analysed offline. Despite differences in the shape of the P300 response between pure oddball and hybrid condition, we observe that the classification accuracy of this P300 response is not affected by the SSVEP stimulation. We do not observe either any effect of the oddball stimulation on the power of the SSVEP response in the frequency of stimulation. Finally results from the last experiment show the possibility of detecting both types of brain responses simultaneously and suggest not only the feasibility of such hybrid BCI but also a gain over pure oddball- and pure SSVEP-based BCIs in terms of communication rate.
Beautiful Math, Part 5: Colorful Archimedean Tilings from Dynamical Systems.
Ouyang, Peichang; Zhao, Weiguo; Huang, Xuan
2015-01-01
The art of tiling originated very early in the history of civilization. Almost every known human society has made use of tilings in some form or another. In particular, tilings using only regular polygons have great visual appeal. Decorated regular tilings with continuous and symmetrical patterns have been widely used in decorative work such as mosaics, pavements, and brick walls. In science, these tilings provide inspiration for synthetic organic chemistry. Building on previous CG&A “Beautiful Math” articles, the authors propose an invariant mapping method to create colorful patterns on Archimedean tilings (1-uniform tilings). The resulting patterns simultaneously have global crystallographic symmetry and local cyclic or dihedral symmetry.
Cross-flow vortex structure and transition measurements using multi-element hot films
NASA Technical Reports Server (NTRS)
Agarwal, Naval K.; Mangalam, Siva M.; Maddalon, Dal V.; Collier, Fayette S., Jr.
1991-01-01
An experiment on a 45-degree swept wing was conducted to study three-dimensional boundary-layer characteristics using surface-mounted, micro-thin, multi-element hot-film sensors. Cross-flow vortex structure and boundary-layer transition were measured from the simultaneously acquired signals of the hot films. The spanwise variation of the root-mean-square (RMS) hot-film signal shows local minima and maxima. The distance between two minima corresponds to the stationary cross-flow vortex wavelength and agrees with naphthalene flow-visualization results. The chordwise and spanwise variations of amplified traveling (nonstationary) cross-flow disturbance characteristics were measured as the Reynolds number was varied. The frequency of the most amplified cross-flow disturbances agrees with linear stability theory.
2D microwave imaging reflectometer electronics.
Spear, A G; Domier, C W; Hu, X; Muscatello, C M; Ren, X; Tobias, B J; Luhmann, N C
2014-11-01
A 2D microwave imaging reflectometer system has been developed to visualize electron density fluctuations on the DIII-D tokamak. The plasma is simultaneously illuminated at four probe frequencies, and large-aperture optics image the reflections from four density-dependent cutoff surfaces over an extended region of the DIII-D plasma. Localized density fluctuations in the vicinity of the cutoff surfaces modulate the plasma reflections, yielding a 2D image of electron density fluctuations. Details are presented of the receiver down-conversion electronics that generate the in-phase (I) and quadrature (Q) reflectometer signals from which 2D density fluctuation data are obtained. Also presented are details of the control system and backplane used to manage the electronics, as well as an introduction to the computer-based control program.
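The record above mentions down-conversion electronics that generate the in-phase (I) and quadrature (Q) signals from which fluctuation data are obtained. The following NumPy sketch illustrates generic digital quadrature demodulation of a phase-modulated intermediate-frequency signal; the sample rate, frequencies, and filter parameters are arbitrary illustrative values, not the DIII-D hardware settings.

```python
import numpy as np

fs = 1e6           # sample rate (Hz), illustrative
f_if = 50e3        # intermediate frequency after the first mixer (Hz), illustrative
t = np.arange(0, 2e-3, 1 / fs)

# Toy reflectometer return: a slowly varying phase modulation on an IF carrier.
phi = 0.4 * np.sin(2 * np.pi * 1e3 * t)        # stand-in for a fluctuation phase
rf = np.cos(2 * np.pi * f_if * t + phi)

# Quadrature down-conversion: mix with cos/sin of the local oscillator, then low-pass.
lo_i = np.cos(2 * np.pi * f_if * t)
lo_q = -np.sin(2 * np.pi * f_if * t)

def lowpass(x, taps=201, cutoff=5e3):
    # Windowed-sinc FIR low-pass filter with unity DC gain.
    h = np.sinc(2 * cutoff / fs * (np.arange(taps) - (taps - 1) / 2))
    h *= np.hamming(taps)
    h /= h.sum()
    return np.convolve(x, h, mode="same")

i_sig = lowpass(2 * rf * lo_i)                 # ~cos(phi)
q_sig = lowpass(2 * rf * lo_q)                 # ~sin(phi)
phase = np.unwrap(np.arctan2(q_sig, i_sig))    # recovered fluctuation phase
```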
A smoke generator system for aerodynamic flight research
NASA Technical Reports Server (NTRS)
Richwine, David M.; Curry, Robert E.; Tracy, Gene V.
1989-01-01
A smoke generator system was developed for in-flight vortex flow studies on the F-18 high alpha research vehicle (HARV). The development process included conceptual design, a survey of existing systems, component testing, detailed design, fabrication, and functional flight testing. Housed in the forebody of the aircraft, the final system consists of multiple pyrotechnic smoke cartridges which can be fired simultaneously or in sequence. The smoke produced is ducted to desired locations on the aircraft surface. The smoke generator system (SGS) has been used successfully to identify vortex core and core breakdown locations as functions of flight condition. Although developed for a specific vehicle, this concept may be useful for other aerodynamic flight research which requires the visualization of local flows.
Gentle, A R; Smith, G B
2014-10-20
Accurate solar and visual transmittances of materials whose surfaces or internal structures are complex are often not easily measured with standard procedures using laboratory-based spectrophotometers and integrating spheres. Localized "hot spots" of intensity are common in such materials, so data on small samples are unreliable. A novel device and simple protocols have been developed and have undergone validation testing. Simultaneous solar and visible transmittance and reflectance data have been acquired for skylight components and multilayer polycarbonate roof panels. The pyranometer and lux sensor setups also directly yield "light coolness" in lumens/watt. Sample areas must be large; testing has mainly been on sheet materials, although some has been done on curved panels. The instrument, its operation, and the simple calculations used are described. Results on a subset of diffuse and partially diffuse materials with no hot spots have been cross-checked using 150 mm integrating spheres with a spectrophotometer and the Air Mass 1.5 spectrum. Indications are that the results are as good as or better than those from such spheres for transmittance, but the reflectance technique needs refinement for some sample types.
Towards sub-nanometer real-space observation of spin and orbital magnetism at the Fe/MgO interface
Thersleff, Thomas; Muto, Shunsuke; Werwiński, Mirosław; Spiegelberg, Jakob; Kvashnin, Yaroslav; Hjörvarsson, Björgvin; Eriksson, Olle; Rusz, Ján; Leifer, Klaus
2017-01-01
While the performance of magnetic tunnel junctions based on metal/oxide interfaces is determined by hybridization, charge transfer, and magnetic properties at the interface, there are currently only limited experimental techniques with sufficient spatial resolution to directly observe these effects simultaneously in real-space. In this letter, we demonstrate an experimental method based on Electron Magnetic Circular Dichroism (EMCD) that will allow researchers to simultaneously map magnetic transitions and valency in real-space over interfacial cross-sections with sub-nanometer spatial resolution. We apply this method to an Fe/MgO bilayer system, observing a significant enhancement in the orbital to spin moment ratio that is strongly localized to the interfacial region. Through the use of first-principles calculations, multivariate statistical analysis, and Electron Energy-Loss Spectroscopy (EELS), we explore the extent to which this enhancement can be attributed to emergent magnetism due to structural confinement at the interface. We conclude that this method has the potential to directly visualize spin and orbital moments at buried interfaces in magnetic systems with unprecedented spatial resolution. PMID:28338011
Storyline Visualizations of Eye Tracking of Movie Viewing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Balint, John T.; Arendt, Dustin L.; Blaha, Leslie M.
Storyline visualizations offer an approach that promises to capture the spatio-temporal characteristics of individual observers and simultaneously illustrate emerging group behaviors. We develop a visual analytics approach to parsing, aligning, and clustering fixation sequences from eye tracking data. Visualization of the results captures the similarities and differences across a group of observers performing a common task. We apply our storyline approach to visualize gaze patterns of people watching dynamic movie clips. Storylines mitigate some of the shortcomings of existent spatio-temporal visualization techniques and, importantly, continue to highlight individual observer behavioral dynamics.
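The storyline approach above depends on parsing, aligning, and clustering fixation sequences. As a loosely related, hedged illustration (not the authors' pipeline), the Python sketch below compares observers' fixation sequences after each fixation has been reduced to an area-of-interest (AOI) label; the observer names and AOI strings are invented.

```python
from itertools import combinations

def edit_distance(a: str, b: str) -> int:
    # Classic dynamic-programming Levenshtein distance between AOI sequences.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

# Each character is an AOI label (e.g., A = face, B = text, C = background, D = action).
observers = {
    "obs1": "AABBCCD",
    "obs2": "AABBCDD",
    "obs3": "DDCCBBA",
}
# Pairwise distances could then feed any clustering of similar viewing behaviors.
for (n1, s1), (n2, s2) in combinations(observers.items(), 2):
    print(n1, n2, edit_distance(s1, s2))
```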
NASA Astrophysics Data System (ADS)
Jeong, Samuel; Ito, Yoshikazu; Edwards, Gary; Fujita, Jun-ichi
2018-06-01
The visualization of localized electronic charges on nanocatalysts is expected to yield fundamental information about catalytic reaction mechanisms. We have developed a high-sensitivity detection technique for the visualization of localized charges on a catalyst and their corresponding electric field distribution, using a low-energy beam of 1 to 5 keV electrons and a high-sensitivity scanning transmission electron microscope (STEM) detector. The highest sensitivity for visualizing a localized electric field was ∼0.08 V/µm at a distance of ∼17 µm from a localized charge at a primary electron energy of 1 keV, and the weak local electric field produced by 200 electrons accumulated at the apex of a carbon nanotube (CNT) could be visualized. We also observed that Au nanoparticles distributed on a CNT forest tended to accumulate a certain amount of charge, about 150 electrons, at a -2 V bias.
Cicmil, Nela; Bridge, Holly; Parker, Andrew J.; Woolrich, Mark W.; Krug, Kristine
2014-01-01
Magnetoencephalography (MEG) allows the physiological recording of human brain activity at high temporal resolution. However, spatial localization of the source of the MEG signal is an ill-posed problem as the signal alone cannot constrain a unique solution and additional prior assumptions must be enforced. An adequate source reconstruction method for investigating the human visual system should place the sources of early visual activity in known locations in the occipital cortex. We localized sources of retinotopic MEG signals from the human brain with contrasting reconstruction approaches (minimum norm, multiple sparse priors, and beamformer) and compared these to the visual retinotopic map obtained with fMRI in the same individuals. When reconstructing brain responses to visual stimuli that differed by angular position, we found reliable localization to the appropriate retinotopic visual field quadrant by a minimum norm approach and by beamforming. Retinotopic map eccentricity in accordance with the fMRI map could not consistently be localized using an annular stimulus with any reconstruction method, but confining eccentricity stimuli to one visual field quadrant resulted in significant improvement with the minimum norm. These results inform the application of source analysis approaches for future MEG studies of the visual system, and indicate some current limits on localization accuracy of MEG signals. PMID:24904268
NASA Technical Reports Server (NTRS)
Rashidnia, N.; Falco, R. E.
1987-01-01
A specially designed wind tunnel was used to examine the effects of tandemly arranged parallel plate manipulators (TAPPMs) on a turbulent boundary-layer structure and the associated drag. Momentum balances, as well as measurements of the local shear stress from the velocity gradient near the wall, were used to obtain the net drag and local skin friction changes. Two TAPPMs, identical except for the thickness of their plates, were used in the study. Results with the 0.003-inch plates showed a maximum net drag reduction of 10 percent at 58 beta sub o (using a momentum balance). At 20 beta sub o, simultaneous laser sheet flow visualization and hot-wire anemometry data showed that the Reynolds stress in the large eddies was significantly reduced, as were the streamwise and normal velocity components. Using space-time correlations, the reductions were again identified. Furthermore, quantitative flow visualization showed that the outward normal velocity of the inner region was also significantly decreased in the region around 20 beta sub o. However, throughout the first 130 beta sub o, the measured sublayer thickness with the TAPPMs in place was 15 to 20 percent greater. The data showed that the skin friction, as well as the structure of the turbulence, was strongly modified in the first 35 beta sub o, but that both significantly relaxed toward unmanipulated boundary-layer values by 50 beta sub o.
Zhu, Yi; Cai, Zhonghou; Chen, Pice; ...
2016-02-26
Dynamical phase separation during a solid-solid phase transition poses a challenge for understanding the fundamental processes in correlated materials. Critical information underlying a phase transition, such as localized phase competition, is difficult to reveal by measurements that are spatially averaged over many phase-separated regions. The ability to simultaneously track the spatial and temporal evolution of such systems is essential to understanding mesoscopic processes during a phase transition. Using state-of-the-art time-resolved hard x-ray diffraction microscopy, we directly visualize the structural phase progression in a VO2 film upon photoexcitation. Following a homogeneous in-plane optical excitation, the phase transformation is initiated at discrete sites and completed by the growth of one lattice structure into the other, instead of a simultaneous isotropic lattice symmetry change. The time-dependent x-ray diffraction spatial maps show that the in-plane phase progression in laser-superheated VO2 proceeds via a displacive lattice transformation as a result of relaxation from an excited monoclinic phase into a rutile phase. The speed of the phase-front progression is quantitatively measured and is faster than a process driven by in-plane thermal diffusion but slower than the sound speed in VO2. The direct visualization of localized structural changes in the time domain opens a new avenue for studying mesoscopic processes in driven systems.
Zhong, Yan; Xu, Xiao-Quan; Pan, Xiang-Long; Zhang, Wei; Xu, Hai; Yuan, Mei; Kong, Ling-Yan; Pu, Xue-Hui; Chen, Liang; Yu, Tong-Fu
2017-09-01
To evaluate the safety and efficacy of the hook wire system for simultaneous localization of multiple pulmonary nodules (PNs) before video-assisted thoracoscopic surgery (VATS), and to clarify the risk factors for pneumothorax associated with the localization procedure. Between January 2010 and February 2016, 67 patients (147 nodules, Group A) underwent simultaneous localization of multiple PNs using a hook wire system. Demographic and localization procedure-related information and the occurrence rate of pneumothorax were assessed and compared with a control group (349 patients, 349 nodules, Group B). Multivariate logistic regression analyses were used to determine the risk factors for pneumothorax during the localization procedure. All 147 nodules were successfully localized. Four (2.7%) hook wires dislodged before the VATS procedure, but all four of these lesions were successfully resected by following the insertion route of the hook wire. Pathological diagnoses were acquired for all 147 nodules. Compared with Group B, Group A demonstrated a significantly longer procedure time (p < 0.001) and a higher occurrence rate of pneumothorax (p = 0.019). Multivariate logistic regression analysis indicated that position change during the localization procedure (OR 2.675, p = 0.021) and nodules located in the ipsilateral lung (OR 9.404, p < 0.001) were independent risk factors for pneumothorax. Simultaneous localization of multiple PNs using a hook wire system before VATS was safe and effective. Compared with localization of a single PN, simultaneous localization of multiple PNs was more prone to pneumothorax. Position change during the localization procedure and nodules located in the ipsilateral lung were independent risk factors for pneumothorax.
Cellular-level surgery using nano robots.
Song, Bo; Yang, Ruiguo; Xi, Ning; Patterson, Kevin Charles; Qu, Chengeng; Lai, King Wai Chiu
2012-12-01
The atomic force microscope (AFM) is a popular instrument for studying the nano world and is naturally suited to imaging living samples and measuring their mechanical properties. In this article, we propose a new concept of an AFM-based nano robot that can be applied to cellular-level surgery on living samples. The nano robot has multiple functions: imaging, manipulation, characterization of mechanical properties, and tracking. In addition, the technique of tip functionalization allows the nano robot to deliver a drug precisely at a local site. The nano robot can therefore be used for conducting complicated nano surgery on living samples, such as cells and bacteria. Moreover, to provide a user-friendly interface, the software in this nano robot provides "videolized" visual feedback for monitoring dynamic changes on the sample surface, so that the nano surgery and observation of its results can be achieved simultaneously. This nano robot can be easily integrated with additional modules for characterizing other sample properties such as local conductance and capacitance.
Perceptual congruency of audio-visual speech affects ventriloquism with bilateral visual stimuli.
Kanaya, Shoko; Yokosawa, Kazuhiko
2011-02-01
Many studies on multisensory processes have focused on performance in simplified experimental situations, with a single stimulus in each sensory modality. However, these results cannot necessarily be applied to explain our perceptual behavior in natural scenes where various signals exist within one sensory modality. We investigated the role of audio-visual syllable congruency on participants' auditory localization bias or the ventriloquism effect using spoken utterances and two videos of a talking face. Salience of facial movements was also manipulated. Results indicated that more salient visual utterances attracted participants' auditory localization. Congruent pairing of audio-visual utterances elicited greater localization bias than incongruent pairing, while previous studies have reported little dependency on the reality of stimuli in ventriloquism. Moreover, audio-visual illusory congruency, owing to the McGurk effect, caused substantial visual interference on auditory localization. Multisensory performance appears more flexible and adaptive in this complex environment than in previous studies.
Wansard, Murielle; Bartolomeo, Paolo; Bastin, Christine; Segovia, Fermín; Gillet, Sophie; Duret, Christophe; Meulemans, Thierry
2015-01-01
Over the last decade, many studies have demonstrated that visuospatial working memory (VSWM) can be divided into separate subsystems dedicated to the retention of visual patterns and their serial order. Impaired VSWM has been suggested to exacerbate left visual neglect in right-brain-damaged individuals. The aim of this study was to investigate the segregation between spatial-sequential and spatial-simultaneous working memory in individuals with neglect. We demonstrated that patterns of results on these VSWM tasks can be dissociated. Spatial-simultaneous and sequential aspects of VSWM can be selectively impaired in unilateral neglect. Our results support the hypothesis of multiple VSWM subsystems, which should be taken into account to better understand neglect-related deficits.
Delcasso, Sébastien; Huh, Namjung; Byeon, Jung Seop; Lee, Jihyun; Jung, Min Whan; Lee, Inah
2014-11-19
The hippocampus is important for contextual behavior, and the striatum plays key roles in decision making. When studying the functional relationships with the hippocampus, prior studies have focused mostly on the dorsolateral striatum (DLS), emphasizing the antagonistic relationships between the hippocampus and DLS in spatial versus response learning. By contrast, the functional relationships between the dorsomedial striatum (DMS) and hippocampus are relatively unknown. The current study reports that lesions to both the hippocampus and DMS profoundly impaired performance of rats in a visual scene-based memory task in which the animals were required to make a choice response by using visual scenes displayed in the background. Analysis of simultaneous recordings of local field potentials revealed that the gamma oscillatory power was higher in the DMS, but not in CA1, when the rat performed the task using familiar scenes than novel ones. In addition, the CA1-DMS networks increased coherence at γ, but not at θ, rhythm as the rat mastered the task. At the single-unit level, the neuronal populations in CA1 and DMS showed differential firing patterns when responses were made using familiar visual scenes than novel ones. Such learning-dependent firing patterns were observed earlier in the DMS than in CA1 before the rat made choice responses. The present findings suggest that both the hippocampus and DMS process memory representations for visual scenes in parallel with different time courses and that flexible choice action using background visual scenes requires coordinated operations of the hippocampus and DMS at γ frequencies. Copyright © 2014 the authors 0270-6474/14/3415534-14$15.00/0.
Doi, Hirokazu; Shinohara, Kazuyuki
2015-03-01
Cross-modal integration of visual and auditory emotional cues is supposed to be advantageous in the accurate recognition of emotional signals. However, the neural locus of cross-modal integration between affective prosody and unconsciously presented facial expression in the neurologically intact population is still elusive at this point. The present study examined the influences of unconsciously presented facial expressions on the event-related potentials (ERPs) in emotional prosody recognition. In the experiment, fearful, happy, and neutral faces were presented without awareness by continuous flash suppression simultaneously with voices containing laughter and a fearful shout. The conventional peak analysis revealed that the ERPs were modulated interactively by emotional prosody and facial expression at multiple latency ranges, indicating that audio-visual integration of emotional signals takes place automatically without conscious awareness. In addition, the global field power during the late-latency range was larger for shout than for laughter only when a fearful face was presented unconsciously. The neural locus of this effect was localized to the left posterior fusiform gyrus, giving support to the view that the cortical region, traditionally considered to be unisensory region for visual processing, functions as the locus of audiovisual integration of emotional signals. © The Author 2013. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Visuomotor Transformations Underlying Hunting Behavior in Zebrafish
Bianco, Isaac H.; Engert, Florian
2015-01-01
Summary Visuomotor circuits filter visual information and determine whether or not to engage downstream motor modules to produce behavioral outputs. However, the circuit mechanisms that mediate and link perception of salient stimuli to execution of an adaptive response are poorly understood. We combined a virtual hunting assay for tethered larval zebrafish with two-photon functional calcium imaging to simultaneously monitor neuronal activity in the optic tectum during naturalistic behavior. Hunting responses showed mixed selectivity for combinations of visual features, specifically stimulus size, speed, and contrast polarity. We identified a subset of tectal neurons with similar highly selective tuning, which show non-linear mixed selectivity for visual features and are likely to mediate the perceptual recognition of prey. By comparing neural dynamics in the optic tectum during response versus non-response trials, we discovered premotor population activity that specifically preceded initiation of hunting behavior and exhibited anatomical localization that correlated with motor variables. In summary, the optic tectum contains non-linear mixed selectivity neurons that are likely to mediate reliable detection of ethologically relevant sensory stimuli. Recruitment of small tectal assemblies appears to link perception to action by providing the premotor commands that release hunting responses. These findings allow us to propose a model circuit for the visuomotor transformations underlying a natural behavior. PMID:25754638
Ulhassan, Waqar; von Thiele Schwarz, Ulrica; Westerlund, Hugo; Sandahl, Christer; Thor, Johan
2015-01-01
Visual management (VM) tools such as whiteboards, often employed in Lean thinking applications, are intended to be helpful in improving work processes in different industries including health care. It remains unclear, however, how VM is actually applied in health care Lean interventions and how it might influence the clinical staff. We therefore examined how Lean-inspired VM using whiteboards for continuous improvement efforts related to the hospital staff's work and collaboration. Within a case study design, we combined semistructured interviews, nonparticipant observations, and photography on 2 cardiology wards. The fate of VM differed between the 2 wards; in one, it was well received by the staff and enhanced continuous improvement efforts, whereas in the other ward, it was not perceived to fit in the work flow or to make enough sense in order to be sustained. Visual management may enable the staff and managers to allow communication across time and facilitate teamwork by enabling the inclusion of team members who are not present simultaneously; however, its adoption and value seem contingent on finding a good fit with the local context. A combination of continuous improvement and VM may be helpful in keeping the staff engaged in the change process in the long run.
[Occipital neuralgia with visual obscurations: a case report].
Selekler, Hamit Macit; Dündar, Gülmine; Kutlu, Ayşe
2010-07-01
Vertigo, dizziness, and visual blurring have been reported in painful conditions affecting trigeminal innervation zones, such as idiopathic stabbing headache, supraorbital neuralgia, or neuralgia of the ophthalmic branch of the trigeminal nerve. Although not common, pain in occipital neuralgia can spread to the anterior parts of the head. In this article, we present a case in which occipital neuralgiform paroxysms spread to the ipsilateral eye with simultaneous visual obscuration; the mechanisms of propagation and of the visual obscuration are discussed.
A physiologically-based pharmacokinetic (PBPK) model incorporating mixed enzyme inhibition was used to determine the mechanism of metabolic interactions occurring during simultaneous exposures to the organic solvents chloroform and trichloroethylene (TCE). Visualization-based se...
Visualizing Cross-sectional Data in a Real-World Context
NASA Astrophysics Data System (ADS)
Van Noten, K.; Lecocq, T.
2016-12-01
If you could fly around your research results in three dimensions, wouldn't you like to do it? Visualizing research results properly during scientific presentations already does half the job of informing the public about the geographic framework of your research. Many scientists use the Google Earth™ mapping service (V7.1.2.2041) because it's a great interactive mapping tool for assigning geographic coordinates to individual data points, localizing a research area, and draping maps of results over Earth's surface for 3D visualization. However, visualizations of research results in vertical cross-sections are often not shown simultaneously with the maps in Google Earth. A few tutorials and programs to display cross-sectional data in Google Earth do exist, and the workflow is rather simple. By importing a cross-sectional figure into the open software SketchUp Make [Trimble Navigation Limited, 2016], any spatial model can be exported as a vertical figure in Google Earth. In this presentation, a clear workflow/tutorial is presented on how to display cross-sections manually in Google Earth. No software skills or programming are required; the workflow is very easy to use, offers great possibilities for teaching, and allows fast figure manipulation in Google Earth. The full workflow can be found in "Van Noten, K. 2016. Visualizing Cross-Sectional Data in a Real-World Context. EOS, Transactions AGU, 97, 16-19". The video tutorial can be found here: https://www.youtube.com/watch?v=Tr8LwFJ4RYU
Figure: Cross-sectional Research Examples Illustrated in Google Earth
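For readers who prefer a scripted alternative to the manual workflow described above, the Python sketch below writes a minimal KML file that places a COLLADA model (for example, a SketchUp export of a cross-section figure) at a chosen location so that Google Earth can display it upright. The file names and coordinates are placeholders, and this is only one possible automation of that step, not the published tutorial itself.

```python
# Writes a minimal KML file referencing a COLLADA model of a cross-section.
# "cross_section.dae", the coordinates, and the heading are illustrative placeholders.
kml = """<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Placemark>
    <name>Cross-section</name>
    <Model>
      <altitudeMode>relativeToGround</altitudeMode>
      <Location>
        <longitude>4.35</longitude>
        <latitude>50.85</latitude>
        <altitude>0</altitude>
      </Location>
      <Orientation><heading>90</heading><tilt>0</tilt><roll>0</roll></Orientation>
      <Scale><x>1</x><y>1</y><z>1</z></Scale>
      <Link><href>cross_section.dae</href></Link>
    </Model>
  </Placemark>
</kml>
"""
with open("cross_section.kml", "w") as f:
    f.write(kml)
```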
Modulation of early cortical processing during divided attention to non-contiguous locations.
Frey, Hans-Peter; Schmid, Anita M; Murphy, Jeremy W; Molholm, Sophie; Lalor, Edmund C; Foxe, John J
2014-05-01
We often face the challenge of simultaneously attending to multiple non-contiguous regions of space. There is ongoing debate as to how spatial attention is divided under these situations. Whereas, for several years, the predominant view was that humans could divide the attentional spotlight, several recent studies argue in favor of a unitary spotlight that rhythmically samples relevant locations. Here, this issue was addressed by the use of high-density electrophysiology in concert with the multifocal m-sequence technique to examine visual evoked responses to multiple simultaneous streams of stimulation. Concurrently, we assayed the topographic distribution of alpha-band oscillatory mechanisms, a measure of attentional suppression. Participants performed a difficult detection task that required simultaneous attention to two stimuli in contiguous (undivided) or non-contiguous parts of space. In the undivided condition, the classic pattern of attentional modulation was observed, with increased amplitude of the early visual evoked response and increased alpha amplitude ipsilateral to the attended hemifield. For the divided condition, early visual responses to attended stimuli were also enhanced, and the observed multifocal topographic distribution of alpha suppression was in line with the divided attention hypothesis. These results support the existence of divided attentional spotlights, providing evidence that the corresponding modulation occurs during initial sensory processing time-frames in hierarchically early visual regions, and that suppressive mechanisms of visual attention selectively target distracter locations during divided spatial attention. © 2014 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Scribl: an HTML5 Canvas-based graphics library for visualizing genomic data over the web
Miller, Chase A.; Anthony, Jon; Meyer, Michelle M.; Marth, Gabor
2013-01-01
Motivation: High-throughput biological research requires simultaneous visualization as well as analysis of genomic data, e.g. read alignments, variant calls and genomic annotations. Traditionally, such integrative analysis required desktop applications operating on locally stored data. Many current terabyte-size datasets generated by large public consortia projects, however, are already only feasibly stored at specialist genome analysis centers. As even small laboratories can afford very large datasets, local storage and analysis are becoming increasingly limiting, and it is likely that most such datasets will soon be stored remotely, e.g. in the cloud. These developments will require web-based tools that enable users to access, analyze and view vast remotely stored data with a level of sophistication and interactivity that approximates desktop applications. As rapidly dropping cost enables researchers to collect data intended to answer questions in very specialized contexts, developers must also provide software libraries that empower users to implement customized data analyses and data views for their particular application. Such specialized, yet lightweight, applications would empower scientists to better answer specific biological questions than possible with general-purpose genome browsers currently available. Results: Using recent advances in core web technologies (HTML5), we developed Scribl, a flexible genomic visualization library specifically targeting coordinate-based data such as genomic features, DNA sequence and genetic variants. Scribl simplifies the development of sophisticated web-based graphical tools that approach the dynamism and interactivity of desktop applications. Availability and implementation: Software is freely available online at http://chmille4.github.com/Scribl/ and is implemented in JavaScript with all modern browsers supported. Contact: gabor.marth@bc.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:23172864
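Scribl itself is a JavaScript library, so the sketch below is only a language-neutral illustration of the coordinate-based rendering idea it embodies: mapping genomic positions into the pixel range of a drawing surface. The feature names, coordinates, and the ASCII "track" are invented and do not reflect Scribl's actual API.

```python
# Illustrative only: the coordinate-to-pixel mapping at the heart of any
# coordinate-based genome track renderer. Feature data and the ASCII "canvas"
# are made up; this is not Scribl's JavaScript API.
features = [          # (name, start, end) in genomic coordinates
    ("geneA", 1050, 1400),
    ("geneB", 1600, 1900),
]
view_start, view_end, canvas_width = 1000, 2000, 60

def to_px(pos: int) -> int:
    """Scale a genomic coordinate into the pixel range of the track."""
    frac = (pos - view_start) / (view_end - view_start)
    return round(frac * (canvas_width - 1))

track = [" "] * canvas_width
for name, start, end in features:
    x0, x1 = to_px(start), to_px(end)
    print(f"{name}: pixels {x0}-{x1}")
    for px in range(x0, x1 + 1):
        track[px] = "="
print("".join(track))
```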
Regions of mid-level human visual cortex sensitive to the global coherence of local image patches.
Mannion, Damien J; Kersten, Daniel J; Olman, Cheryl A
2014-08-01
The global structural arrangement and spatial layout of the visual environment must be derived from the integration of local signals represented in the lower tiers of the visual system. This interaction between the spatially local and global properties of visual stimulation underlies many of our visual capacities, and how this is achieved in the brain is a central question for visual and cognitive neuroscience. Here, we examine the sensitivity of regions of the posterior human brain to the global coordination of spatially displaced naturalistic image patches. We presented observers with image patches in two circular apertures to the left and right of central fixation, with the patches drawn from either the same (coherent condition) or different (noncoherent condition) extended image. Using fMRI at 7T (n = 5), we find that global coherence affected signal amplitude in regions of dorsal mid-level cortex. Furthermore, we find that extensive regions of mid-level visual cortex contained information in their local activity pattern that could discriminate coherent and noncoherent stimuli. These findings indicate that the global coordination of local naturalistic image information has important consequences for the processing in human mid-level visual cortex.
Treangen, Todd J; Ondov, Brian D; Koren, Sergey; Phillippy, Adam M
2014-01-01
Whole-genome sequences are now available for many microbial species and clades, however existing whole-genome alignment methods are limited in their ability to perform sequence comparisons of multiple sequences simultaneously. Here we present the Harvest suite of core-genome alignment and visualization tools for the rapid and simultaneous analysis of thousands of intraspecific microbial strains. Harvest includes Parsnp, a fast core-genome multi-aligner, and Gingr, a dynamic visual platform. Together they provide interactive core-genome alignments, variant calls, recombination detection, and phylogenetic trees. Using simulated and real data we demonstrate that our approach exhibits unrivaled speed while maintaining the accuracy of existing methods. The Harvest suite is open-source and freely available from: http://github.com/marbl/harvest.
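To make the idea of core-genome variant calls concrete, the toy Python sketch below scans a made-up multiple alignment for columns that are present in every strain and differ between strains. It illustrates the concept only; it is not Parsnp's algorithm or Gingr's output format.

```python
# Toy illustration of calling variant columns from a core-genome alignment.
# The aligned sequences are invented; real tools operate on whole genomes.
alignment = {
    "strain1": "ACGTACGTAC",
    "strain2": "ACGTACGAAC",
    "strain3": "ACGTTCGTAC",
}
length = len(next(iter(alignment.values())))
for col in range(length):
    bases = {seq[col] for seq in alignment.values()}
    if "-" not in bases and len(bases) > 1:   # core column with variation
        calls = {name: seq[col] for name, seq in alignment.items()}
        print(f"SNP at column {col}: {calls}")
```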
States of Awareness I: Subliminal Perception Relationship to Situational Awareness
1993-05-01
one experiment, the visual detection threshold was raised by simultaneous auditory stimulation involving subliminal emotional words. Similar results...an assessment was made of the effects of both subliminal and supraliminal auditory accessory stimulation (white noise) on a visual detection task... stimulation investigation. Both subliminal and supraliminal auditory stimulation were employed to evaluate possible differential effects in visual illusions
Ran, Xiang; Wang, Zhenzhen; Zhang, Zhijun; Pu, Fang; Ren, Jinsong; Qu, Xiaogang
2016-01-11
We present a nucleic acid-controlled silver nanocluster (AgNC) platform for latent fingerprint visualization. The versatile emission of the aptamer-modified AgNCs was regulated by the nearby DNA regions. Multi-color images for the simultaneous visualization of fingerprints and exogenous components were successfully obtained. A quantitative detection strategy for exogenous substances in fingerprints was also established.
Sequential then Interactive Processing of Letters and Words in the Left Fusiform Gyrus
Thesen, Thomas; McDonald, Carrie R.; Carlson, Chad; Doyle, Werner; Cash, Syd; Sherfey, Jason; Felsovalyi, Olga; Girard, Holly; Barr, William; Devinsky, Orrin; Kuzniecky, Ruben; Halgren, Eric
2013-01-01
Despite decades of cognitive, neuropsychological, and neuroimaging studies, it is unclear if letters are identified prior to word-form encoding during reading, or if letters and their combinations are encoded simultaneously and interactively. Here, using functional magnetic resonance imaging, we show that a ‘letter-form’ area (responding more to consonant strings than false fonts) can be distinguished from an immediately anterior ‘visual word-form area’ in ventral occipitotemporal cortex (responding more to words than consonant strings). Letter-selective magnetoencephalographic responses begin in the letter-form area ~60ms earlier than word-selective responses in the word-form area. Local field potentials confirm the latency and location of letter-selective responses. This area shows increased high gamma power for ~400ms, and strong phase-locking with more anterior areas supporting lexico-semantic processing. These findings suggest that during reading, visual stimuli are first encoded as letters before their combinations are encoded as words. Activity then rapidly spreads anteriorly, and the entire network is engaged in sustained integrative processing. PMID:23250414
DSPCP: A Data Scalable Approach for Identifying Relationships in Parallel Coordinates.
Nguyen, Hoa; Rosen, Paul
2018-03-01
Parallel coordinates plots (PCPs) are a well-studied technique for exploring multi-attribute datasets. In many situations, users find them a flexible method to analyze and interact with data. Unfortunately, using PCPs becomes challenging as the number of data items grows large or multiple trends within the data mix in the visualization. The resulting overdraw can obscure important features. A number of modifications to PCPs have been proposed, including using color, opacity, smooth curves, frequency, density, and animation to mitigate this problem. However, these modified PCPs tend to have their own limitations in the kinds of relationships they emphasize. We propose a new data scalable design for representing and exploring data relationships in PCPs. The approach exploits the point/line duality property of PCPs and a local linear assumption of data to extract and represent relationship summarizations. This approach simultaneously shows relationships in the data and the consistency of those relationships. Our approach supports various visualization tasks, including mixed linear and nonlinear pattern identification, noise detection, and outlier detection, all in large data. We demonstrate these tasks on multiple synthetic and real-world datasets.
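The point/line duality exploited above can be made concrete: with adjacent PCP axes at x = 0 and x = 1, the data point (a, b) becomes the segment y(x) = a(1 - x) + bx, and if b = ma + c every such segment passes through the common dual point (1/(1 - m), c/(1 - m)). The slope, intercept, and data values in this Python sketch are arbitrary illustrative choices.

```python
import numpy as np

# Linear relationship b = m*a + c between two adjacent PCP axes (toy values).
m, c = -0.5, 2.0
a = np.linspace(0.0, 5.0, 6)
b = m * a + c

def pcp_line(ai, bi):
    # Slope and intercept of the PCP segment for data point (ai, bi): y = (bi-ai)*x + ai.
    return bi - ai, ai

# Intersect the first segment with every other one; all intersections coincide
# at the dual point, which is the kind of structure a PCP summary can exploit.
s0, i0 = pcp_line(a[0], b[0])
for ai, bi in zip(a[1:], b[1:]):
    s, i = pcp_line(ai, bi)
    x = (i0 - i) / (s - s0)
    y = s * x + i
    print(f"intersection: ({x:.3f}, {y:.3f})")

print("dual point:", (1 / (1 - m), c / (1 - m)))
```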
Intermodal Attention Shifts in Multimodal Working Memory.
Katus, Tobias; Grubert, Anna; Eimer, Martin
2017-04-01
Attention maintains task-relevant information in working memory (WM) in an active state. We investigated whether the attention-based maintenance of stimulus representations that were encoded through different modalities is flexibly controlled by top-down mechanisms that depend on behavioral goals. Distinct components of the ERP reflect the maintenance of tactile and visual information in WM. We concurrently measured tactile (tCDA) and visual contralateral delay activity (CDA) to track the attentional activation of tactile and visual information during multimodal WM. Participants simultaneously received tactile and visual sample stimuli on the left and right sides and memorized all stimuli on one task-relevant side. After 500 msec, an auditory retrocue indicated whether the sample set's tactile or visual content had to be compared with a subsequent test stimulus set. tCDA and CDA components that emerged simultaneously during the encoding phase were consistently reduced after retrocues that marked the corresponding (tactile or visual) modality as task-irrelevant. The absolute size of cue-dependent modulations was similar for the tCDA/CDA components and did not depend on the number of tactile/visual stimuli that were initially encoded into WM. Our results suggest that modality-specific maintenance processes in sensory brain regions are flexibly modulated by top-down influences that optimize multimodal WM representations for behavioral goals.
The Role of Temporal Disparity on Audiovisual Integration in Low-Vision Individuals.
Targher, Stefano; Micciolo, Rocco; Occelli, Valeria; Zampini, Massimiliano
2017-12-01
Recent findings have shown that sounds improve visual detection in low-vision individuals when the audiovisual pairs of stimuli are presented simultaneously and from the same spatial position. The present study aims to investigate the temporal aspects of the audiovisual enhancement effect previously reported. Low-vision participants were asked to detect the presence of a visual stimulus (yes/no task) presented either alone or together with an auditory stimulus at different stimulus onset asynchronies (SOAs). In the first experiment, the sound was presented either simultaneously or before the visual stimulus (i.e., SOAs 0, 100, 250, 400 ms). The results show that the presence of a task-irrelevant auditory stimulus produced a significant visual detection enhancement in all the conditions. In the second experiment, the sound was either synchronized with, or randomly preceded/lagged behind the visual stimulus (i.e., SOAs 0, ± 250, ± 400 ms). The visual detection enhancement was reduced in magnitude and limited only to the synchronous condition and to the condition in which the sound stimulus was presented 250 ms before the visual stimulus. Taken together, the evidence of the present study seems to suggest that audiovisual interaction in low-vision individuals is highly modulated by top-down mechanisms.
Yu, Tianbao; Wang, Zhong; Liu, Wenxing; Wang, Tongbiao; Liu, Nianhua; Liao, Qinghua
2016-04-18
We numerically demonstrate large and complete photonic and phononic band gaps that simultaneously exist in eight-fold phoxonic quasicrystals (PhXQCs). PhXQCs can possess simultaneous photonic and phononic band gaps over a wide range of geometric parameters. Abundant localized modes can be achieved in defect-free PhXQCs for all photonic and phononic polarizations. These defect-free localized modes exhibit multiform spatial distributions and can simultaneously confine electromagnetic and elastic waves in a large area, thereby providing rich selectivity and enlarging the interaction space of optical and elastic waves. Simulation results based on the finite element method show that quasiperiodic structures formed of both solid rods in air and holes in solid materials can simultaneously confine and tailor electromagnetic and elastic waves, and that these structures show advantages over their periodic counterparts.
Complex Functions with GeoGebra
ERIC Educational Resources Information Center
Breda, Ana Maria D'azevedo; Dos Santos, José Manuel Dos Santos
2016-01-01
Complex functions, seen as extensions of real functions, generally feature some interesting peculiarities. The visualization of complex function properties usually requires the simultaneous visualization of two-dimensional spaces. The multiple windows of GeoGebra, combined with its ability to perform algebraic computation with complex numbers, allow the…
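A common way to show both output dimensions of a complex function on a single two-dimensional plot is domain coloring, sketched below in Python/matplotlib purely as an illustration of the visualization problem the article addresses (it is not a GeoGebra construction from the article); the example function is arbitrary.

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import hsv_to_rgb

# Domain coloring: encode arg(f(z)) as hue and |f(z)| as brightness, so both
# output dimensions of a complex function appear on one 2D plot of its domain.
x = np.linspace(-2.0, 2.0, 600)
X, Y = np.meshgrid(x, x)
Z = X + 1j * Y
F = (Z**2 - 1) / (Z**2 + 1)                 # arbitrary example function

hue = (np.angle(F) + np.pi) / (2 * np.pi)   # argument mapped to [0, 1]
val = 1.0 - 1.0 / (1.0 + np.abs(F) ** 0.3)  # modulus compressed to [0, 1)
rgb = hsv_to_rgb(np.dstack([hue, np.ones_like(hue), val]))

plt.imshow(rgb, extent=[-2, 2, -2, 2], origin="lower")
plt.xlabel("Re z")
plt.ylabel("Im z")
plt.title("Domain coloring of f(z) = (z^2 - 1)/(z^2 + 1)")
plt.show()
```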
Visual arts training is linked to flexible attention to local and global levels of visual stimuli.
Chamberlain, Rebecca; Wagemans, Johan
2015-10-01
Observational drawing skill has been shown to be associated with the ability to focus on local visual details. It is unclear whether superior performance in local processing is indicative of the ability to attend to, and flexibly switch between, local and global levels of visual stimuli. It is also unknown whether these attentional enhancements remain specific to observational drawing skill or are a product of a wide range of artistic activities. The current study aimed to address these questions by testing if flexible visual processing predicts artistic group membership and observational drawing skill in a sample of first-year bachelor's degree art students (n=23) and non-art students (n=23). A pattern of local and global visual processing enhancements was found in relation to artistic group membership and drawing skill, with local processing ability found to be specifically related to individual differences in drawing skill. Enhanced global processing and more fluent switching between local and global levels of hierarchical stimuli predicted both drawing skill and artistic group membership, suggesting that these are beneficial attentional mechanisms for art-making in a range of domains. These findings support a top-down attentional model of artistic expertise and shed light on the domain specific and domain-general attentional enhancements induced by proficiency in the visual arts. Copyright © 2015 Elsevier B.V. All rights reserved.
Infants' Visual Localization of Visual and Auditory Targets.
ERIC Educational Resources Information Center
Bechtold, A. Gordon; And Others
This study is an investigation of 2-month-old infants' abilities to visually localize visual and auditory peripheral stimuli. Each subject (N=40) was presented with 50 trials: 25 visual and 25 auditory. The infant was placed in a semi-upright infant seat positioned 122 cm from the center speaker of an arc formed by five loudspeakers. At…
Combination of structured illumination and single molecule localization microscopy in one setup
NASA Astrophysics Data System (ADS)
Rossberger, Sabrina; Best, Gerrit; Baddeley, David; Heintzmann, Rainer; Birk, Udo; Dithmar, Stefan; Cremer, Christoph
2013-09-01
Understanding the positional and structural aspects of biological nanostructures simultaneously is as much a challenge as a desideratum. In recent years, highly accurate (20 nm) positional information of optically isolated targets down to the nanometer range has been obtained using single molecule localization microscopy (SMLM), while highly resolved (100 nm) spatial information has been achieved using structured illumination microscopy (SIM). In this paper, we present a high-resolution fluorescence microscope setup which combines the advantages of SMLM with SIM in order to provide high-precision localization and structural information in a single setup. Furthermore, the combination of the wide-field SIM image with the SMLM data allows us to identify artifacts produced during the visualization process of SMLM data, and potentially also during the reconstruction process of SIM images. We describe the SMLM-SIM combo and software, and apply the instrument in a first proof-of-principle experiment to the same region of H3K293 cells to achieve SIM images with high structural resolution (in the 100 nm range) in overlay with the highly accurate position information of localized single fluorophores. Thus, with its robust control software, efficient switching between the SMLM and SIM modes, and fully automated and user-friendly acquisition and evaluation software, the SMLM-SIM combo is superior to existing solutions.
A New Micro-holder Device for Local Drug Delivery during In Vivo Whole-cell Recordings.
Sáez, María; Ketzef, Maya; Alegre-Cortés, Javier; Reig, Ramón; Silberberg, Gilad
2018-06-15
Focal administration of pharmacological agents during in vivo recordings is a useful technique to study the functional properties of neural microcircuits. However, the lack of visual control makes this task difficult and inaccurate, especially when targeting small and deep regions where spillover to neighboring regions is likely to occur. An additional problem with recording stability arises when combining focal drug administration with in vivo intracellular recordings, which are highly sensitive to mechanical vibrations. To address these technical issues, we designed a micro-holder that enables accurate local application of pharmacological agents during in vivo whole-cell recordings. The holder couples the recording and drug delivery pipettes with adjustable distance between the respective tips adapted to the experimental needs. To test the efficacy of the micro-holder we first performed whole-cell recordings in mouse primary somatosensory cortex (S1) with simultaneous extracellular recordings in S1 and motor cortex (M1), before and after local application of bicuculline methiodide (BMI 200 µM). The blockade of synaptic inhibition resulted in increased amplitudes and rising slopes of "Up states", and shortening of their duration. We then checked the usability of the micro-holder in a deeper brain structure, the striatum. We applied tetrodotoxin (TTX 10 µM) during whole-cell recordings in the striatum, while simultaneously obtaining extracellular recordings in S1 and M1. The focal application of TTX in the striatum blocked Up states in the recorded striatal neurons, without affecting the cortical activity. We also describe two different approaches for precisely releasing the drugs without unwanted leakage along the pipette approach trajectory. Copyright © 2018 IBRO. Published by Elsevier Ltd. All rights reserved.
Attentional Episodes in Visual Perception
ERIC Educational Resources Information Center
Wyble, Brad; Potter, Mary C.; Bowman, Howard; Nieuwenstein, Mark
2011-01-01
Is one's temporal perception of the world truly as seamless as it appears? This article presents a computationally motivated theory suggesting that visual attention samples information from temporal episodes (episodic simultaneous type/serial token model; Wyble, Bowman, & Nieuwenstein, 2009). Breaks between these episodes are punctuated by periods…
Active microrheology and simultaneous visualization of sheared phospholipid monolayers
Choi, S.Q.; Steltenkamp, S.; Zasadzinski, J.A.; Squires, T.M.
2011-01-01
Two-dimensional films of surface-active agents—from phospholipids and proteins to nanoparticles and colloids—stabilize fluid interfaces, which are essential to the science, technology and engineering of everyday life. The 2D nature of interfaces presents unique challenges and opportunities: coupling between the 2D films and the bulk fluids complicates the measurement of surface dynamic properties, but allows the interfacial microstructure to be directly visualized during deformation. Here we present a novel technique that combines active microrheology with fluorescence microscopy to visualize fluid interfaces as they deform under applied stress, allowing structure and rheology to be correlated on the micron scale in monolayer films. We show that even simple, single-component lipid monolayers can exhibit viscoelasticity, history dependence, a yield stress and hours-long time scales for elastic recoil and aging. Simultaneous visualization of the monolayer under stress shows that the rich dynamical response results from the cooperative dynamics and deformation of liquid-crystalline domains and their boundaries. PMID:21587229
Duration estimates within a modality are integrated sub-optimally
Cai, Ming Bo; Eagleman, David M.
2015-01-01
Perceived duration can be influenced by various properties of sensory stimuli. For example, visual stimuli of higher temporal frequency are perceived to last longer than those of lower temporal frequency. How does the brain form a representation of duration when each of two simultaneously presented stimuli influences perceived duration in a different way? To answer this question, we investigated the perceived duration of a pair of dynamic visual stimuli of different temporal frequencies in comparison to that of a single visual stimulus of either low or high temporal frequency. We found that the duration representation of simultaneously occurring visual stimuli is best described by weighting the estimates of duration based on each individual stimulus. However, the weighting performance deviates from the prediction of statistically optimal integration. In addition, we provided a Bayesian account to explain a difference in the apparent sensitivity of the psychometric curves introduced by the order in which the two stimuli are displayed in a two-alternative forced-choice task. PMID:26321965
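The statistically optimal benchmark against which the observed weighting is compared is standard maximum-likelihood cue integration: each single-stimulus duration estimate is weighted by its inverse variance. A minimal sketch follows; the numbers are illustrative, not values from the study.

```python
import numpy as np

def optimal_integration(means, sigmas):
    """Maximum-likelihood (reliability-weighted) fusion of independent
    Gaussian duration estimates: weights are proportional to inverse variances."""
    means = np.asarray(means, float)
    precisions = 1.0 / np.asarray(sigmas, float) ** 2
    weights = precisions / precisions.sum()
    fused_mean = np.sum(weights * means)
    fused_sigma = np.sqrt(1.0 / precisions.sum())   # fused estimate is never less precise
    return fused_mean, fused_sigma, weights

# illustrative numbers (assumed): single-stimulus duration estimates of 480 ms and
# 560 ms for a low- and a high-temporal-frequency stimulus, with different precisions
print(optimal_integration([480.0, 560.0], [60.0, 40.0]))
```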
Buxton, Eric C; De Muth, James E
2013-01-01
Constraints in geography and time require cost efficiencies in professional development for pharmacists. Distance learning, with its growing availability and lower intrinsic costs, will likely become more prevalent. The objective of this nonexperimental, postintervention study was to examine the perceptions of pharmacists attending a continuing education program. One group participated in the live presentation, whereas the second group joined via a simultaneous webcast. After the presentation, both groups were surveyed with identical questions concerning their perceptions of their learning environment, course content, and utility to their work. Comparisons across group responses to the summated scales were conducted through the use of Kruskal-Wallis tests. Analysis of the data showed that both the distance and local groups were demographically similar and that both groups were satisfied with the presentation method, audio and visual quality, and both felt that they would be able to apply what they learned in their practice. However, the local group was significantly more satisfied with the learning experience. Distance learning does provide a viable and more flexible method for pharmacy professional development, but does not yet replace the traditional learning environment in all facets of learner preference. Copyright © 2013 Elsevier Inc. All rights reserved.
Tensor voting for image correction by global and local intensity alignment.
Jia, Jiaya; Tang, Chi-Keung
2005-01-01
This paper presents a voting method to perform image correction by global and local intensity alignment. The key to our modeless approach is the estimation of global and local replacement functions by reducing the complex estimation problem to the robust 2D tensor voting in the corresponding voting spaces. No complicated model for replacement function (curve) is assumed. Subject to the monotonic constraint only, we vote for an optimal replacement function by propagating the curve smoothness constraint using a dense tensor field. Our method effectively infers missing curve segments and rejects image outliers. Applications using our tensor voting approach are proposed and described. The first application consists of image mosaicking of static scenes, where the voted replacement functions are used in our iterative registration algorithm for computing the best warping matrix. In the presence of occlusion, our replacement function can be employed to construct a visually acceptable mosaic by detecting occlusion which has large and piecewise constant color. Furthermore, by the simultaneous consideration of color matches and spatial constraints in the voting space, we perform image intensity compensation and high contrast image correction using our voting framework, when only two defective input images are given.
Visual Search in ASD: Instructed versus Spontaneous Local and Global Processing
ERIC Educational Resources Information Center
Van der Hallen, Ruth; Evers, Kris; Boets, Bart; Steyaert, Jean; Noens, Ilse; Wagemans, Johan
2016-01-01
Visual search has been used extensively to investigate differences in mid-level visual processing between individuals with ASD and TD individuals. The current study employed two visual search paradigms with Gaborized stimuli to assess the impact of task distractors (Experiment 1) and task instruction (Experiment 2) on local-global visual…
Population responses in V1 encode different figures by response amplitude.
Gilad, Ariel; Slovin, Hamutal
2015-04-22
The visual system simultaneously segregates between several objects presented in a visual scene. The neural code for encoding different objects or figures is not well understood. To study this question, we trained two monkeys to discriminate whether two elongated bars are either separate, thus generating two different figures, or connected, thus generating a single figure. Using voltage-sensitive dyes, we imaged at high spatial and temporal resolution V1 population responses evoked by the two bars, while keeping their local attributes similar among the two conditions. In the separate condition, unlike the connected condition, the population response to one bar is enhanced, whereas the response to the other is simultaneously suppressed. The response to the background remained unchanged between the two conditions. This divergent pattern developed ∼200 ms poststimulus onset and could discriminate well between the separate and connected single trials. The stimulus separation saliency and behavioral report were highly correlated with the differential response to the bars. In addition, the proximity and/or the specific location of the connectors seemed to have only a weak effect on this late activity pattern, further supporting the involvement of top-down influences. Additional neural codes were less informative about the separate and connected conditions, with much less consistency and discriminability compared with a response amplitude code. We suggest that V1 is involved in the encoding of each figure by different neuronal response amplitude, which can mediate their segregation and perception. Copyright © 2015 the authors 0270-6474/15/356335-15$15.00/0.
NASA Astrophysics Data System (ADS)
Regmi, Raju; Mohan, Kavya; Mondal, Partha Pratim
2014-09-01
Visualization of intracellular organelles is achieved using a newly developed high-throughput imaging cytometry system. This system interrogates the microfluidic channel using a sheet of light rather than the existing point-based scanning techniques. The advantages of the developed system are many, including single-shot scanning of specimens flowing through the microfluidic channel at flow rates ranging from micro- to nanoliters per minute. Moreover, this opens up in vivo imaging of sub-cellular structures and simultaneous cell counting in an imaging cytometry system. We recorded a maximum count of 2400 cells/min at a flow rate of 700 nl/min, and simultaneous visualization of the fluorescently-labeled mitochondrial network in HeLa cells during flow. The developed imaging cytometry system may find immediate application in biotechnology, fluorescence microscopy and nano-medicine.
Neural Mechanisms Underlying Visual Short-Term Memory Gain for Temporally Distinct Objects.
Ihssen, Niklas; Linden, David E J; Miller, Claire E; Shapiro, Kimron L
2015-08-01
Recent research has shown that visual short-term memory (VSTM) can substantially be improved when the to-be-remembered objects are split in 2 half-arrays (i.e., sequenced) or the entire array is shown twice (i.e., repeated), rather than presented simultaneously. Here we investigate the hypothesis that sequencing and repeating displays overcomes attentional "bottlenecks" during simultaneous encoding. Using functional magnetic resonance imaging, we show that sequencing and repeating displays increased brain activation in extrastriate and primary visual areas, relative to simultaneous displays (Study 1). Passively viewing identical stimuli did not increase visual activation (Study 2), ruling out a physical confound. Importantly, areas of the frontoparietal attention network showed increased activation in repetition but not in sequential trials. This dissociation suggests that repeating a display increases attentional control by allowing attention to be reallocated in a second encoding episode. In contrast, sequencing the array poses fewer demands on control, with competition from nonattended objects being reduced by the half-arrays. This idea was corroborated by a third study in which we found optimal VSTM for sequential displays minimizing attentional demands. Importantly these results provide support within the same experimental paradigm for the role of stimulus-driven and top-down attentional control aspects of biased competition theory in setting constraints on VSTM. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Working memory resources are shared across sensory modalities.
Salmela, V R; Moisala, M; Alho, K
2014-10-01
A common assumption in the working memory literature is that the visual and auditory modalities have separate and independent memory stores. Recent evidence on visual working memory has suggested that resources are shared between representations, and that the precision of representations sets the limit for memory performance. We tested whether memory resources are also shared across sensory modalities. Memory precision for two visual (spatial frequency and orientation) and two auditory (pitch and tone duration) features was measured separately for each feature and for all possible feature combinations. Thus, only the memory load was varied, from one to four features, while keeping the stimuli similar. In Experiment 1, two gratings and two tones, both containing two varying features, were presented simultaneously. In Experiment 2, two gratings and two tones, each containing only one varying feature, were presented sequentially. The memory precision (delayed discrimination threshold) for a single feature was close to the perceptual threshold. However, as the number of features to be remembered was increased, the discrimination thresholds increased more than twofold. Importantly, the decrease in memory precision did not depend on the modality of the other feature(s), or on whether the features were in the same or in separate objects. Hence, simultaneously storing one visual and one auditory feature had an effect on memory precision equal to that of simultaneously storing two visual or two auditory features. The results show that working memory is limited by the precision of the stored representations, and that working memory can be described as a resource pool that is shared across modalities.
Liu, Han-Hsuan; Cline, Hollis T
2016-07-06
Fragile X mental retardation protein (FMRP) is thought to regulate neuronal plasticity by limiting dendritic protein synthesis, but direct demonstration of a requirement for FMRP control of local protein synthesis during behavioral plasticity is lacking. Here we tested whether FMRP knockdown in Xenopus optic tectum affects local protein synthesis in vivo and whether FMRP knockdown affects protein synthesis-dependent visual avoidance behavioral plasticity. We tagged newly synthesized proteins by incorporation of the noncanonical amino acid azidohomoalanine and visualized them with fluorescent noncanonical amino acid tagging (FUNCAT). Visual conditioning and FMRP knockdown produce similar increases in FUNCAT in tectal neuropil. Induction of visual conditioning-dependent behavioral plasticity occurs normally in FMRP knockdown animals, but plasticity degrades over 24 h. These results indicate that FMRP affects visual conditioning-induced local protein synthesis and is required to maintain the visual conditioning-induced behavioral plasticity. Fragile X syndrome (FXS) is the most common form of inherited intellectual disability. Exaggerated dendritic protein synthesis resulting from loss of fragile X mental retardation protein (FMRP) is thought to underlie cognitive deficits in FXS, but no direct evidence has demonstrated that FMRP-regulated dendritic protein synthesis affects behavioral plasticity in intact animals. Xenopus tadpoles exhibit a visual avoidance behavior that improves with visual conditioning in a protein synthesis-dependent manner. We showed that FMRP knockdown and visual conditioning dramatically increase protein synthesis in neuronal processes. Furthermore, induction of visual conditioning-dependent behavioral plasticity occurs normally after FMRP knockdown, but performance rapidly deteriorated in the absence of FMRP. These studies show that FMRP negatively regulates local protein synthesis and is required to maintain visual conditioning-induced behavioral plasticity in vivo. Copyright © 2016 the authors 0270-6474/16/367325-15$15.00/0.
Direct observation of mineral–organic composite formation reveals occlusion mechanism
Cho, Kang Rae; Kim, Yi-Yeoun; Yang, Pengcheng; ...
2016-01-06
Manipulation of inorganic materials with organic macromolecules enables organisms to create biominerals such as bones and seashells, where occlusion of biomacromolecules within individual crystals generates superior mechanical properties. Current understanding of this process largely comes from studying the entrapment of micron-size particles in cooling melts. Here, by investigating micelle incorporation in calcite with atomic force microscopy and micromechanical simulations, we show that different mechanisms govern nanoscale occlusion. By simultaneously visualizing the micelles and propagating step edges, we demonstrate that the micelles experience significant compression during occlusion, which is accompanied by cavity formation. This generates local lattice strain, leading to enhanced mechanical properties. Furthermore, these results give new insight into the formation of occlusions in natural and synthetic crystals, and will facilitate the synthesis of multifunctional nanocomposite crystals.
Crossflow Stability and Transition Experiments in Swept-Wing Flow
NASA Technical Reports Server (NTRS)
Dagenhart, J. Ray; Saric, William S.
1999-01-01
An experimental examination of crossflow instability and transition on a 45deg swept wing was conducted in the Arizona State University Unsteady Wind Tunnel. The stationary-vortex pattern and transition location are visualized by using both sublimating chemical and liquid-crystal coatings. Extensive hot-wire measurements were obtained at several measurement stations across a single vortex track. The mean and travelling wave disturbances were measured simultaneously. Stationary crossflow disturbance profiles were determined by subtracting either a reference or a span-averaged velocity profile from the mean velocity data. Mean, stationary crossflow, and traveling wave velocity data were presented as local boundary layer profiles and contour plots across a single stationary crossflow vortex track. Disturbance mode profiles and growth rates were determined. The experimental data are compared with predictions from linear stability theory.
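The disturbance-profile extraction described here (subtracting a reference or span-averaged velocity profile from the mean velocity data) reduces to a simple array operation. A minimal sketch, assuming mean-velocity data arranged as (spanwise station × wall-normal position); the layout and the synthetic data are assumptions.

```python
import numpy as np

def stationary_disturbance(U_mean, reference="span_average"):
    """Stationary crossflow disturbance profiles from mean-velocity data.

    U_mean: array of shape (n_span, n_y) -- mean streamwise velocity at each
    spanwise station (rows) and wall-normal position (columns); layout assumed.
    """
    if reference == "span_average":
        U_ref = U_mean.mean(axis=0, keepdims=True)   # span-averaged profile
    else:
        U_ref = np.asarray(reference)[None, :]        # user-supplied reference profile
    return U_mean - U_ref                             # disturbance profiles across the vortex track

# toy usage with synthetic data: 32 spanwise stations, 50 wall-normal points
rng = np.random.default_rng(1)
y = np.linspace(0.0, 1.0, 50)
base = np.tanh(3.0 * y)                               # crude boundary-layer-like profile
U = base + 0.05 * np.sin(2 * np.pi * np.arange(32) / 32)[:, None] * y[None, :]
print(stationary_disturbance(U).shape)
```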
Brooks, Joseph L.; Gilaie-Dotan, Sharon; Rees, Geraint; Bentin, Shlomo; Driver, Jon
2012-01-01
Visual perception depends not only on local stimulus features but also on their relationship to the surrounding stimulus context, as evident in both local and contextual influences on figure-ground segmentation. Intermediate visual areas may play a role in such contextual influences, as we tested here by examining LG, a rare case of developmental visual agnosia. LG has no evident abnormality of brain structure and functional neuroimaging showed relatively normal V1 function, but his intermediate visual areas (V2/V3) function abnormally. We found that contextual influences on figure-ground organization were selectively disrupted in LG, while local sources of figure-ground influences were preserved. Effects of object knowledge and familiarity on figure-ground organization were also significantly diminished. Our results suggest that the mechanisms mediating contextual and familiarity influences on figure-ground organization are dissociable from those mediating local influences on figure-ground assignment. The disruption of contextual processing in intermediate visual areas may play a role in the substantial object recognition difficulties experienced by LG. PMID:22947116
Visual Search in ASD: Instructed Versus Spontaneous Local and Global Processing.
Van der Hallen, Ruth; Evers, Kris; Boets, Bart; Steyaert, Jean; Noens, Ilse; Wagemans, Johan
2016-09-01
Visual search has been used extensively to investigate differences in mid-level visual processing between individuals with ASD and TD individuals. The current study employed two visual search paradigms with Gaborized stimuli to assess the impact of task distractors (Experiment 1) and task instruction (Experiment 2) on local-global visual processing in ASD versus TD children. Experiment 1 revealed both groups to be equally sensitive to the absence or presence of a distractor, regardless of the type of target or type of distractor. Experiment 2 revealed a differential effect of task instruction for ASD compared to TD, regardless of the type of target. Taken together, these results stress the importance of task factors in the study of local-global visual processing in ASD.
fMRI evidence for sensorimotor transformations in human cortex during smooth pursuit eye movements.
Kimmig, H; Ohlendorf, S; Speck, O; Sprenger, A; Rutschmann, R M; Haller, S; Greenlee, M W
2008-01-01
Smooth pursuit eye movements (SP) are driven by moving objects. The pursuit system processes the visual input signals and transforms this information into an oculomotor output signal. Despite the object's movement on the retina and the eyes' movement in the head, we are able to locate the object in space implying coordinate transformations from retinal to head and space coordinates. To test for the visual and oculomotor components of SP and the possible transformation sites, we investigated three experimental conditions: (I) fixation of a stationary target with a second target moving across the retina (visual), (II) pursuit of the moving target with the second target moving in phase (oculomotor), (III) pursuit of the moving target with the second target remaining stationary (visuo-oculomotor). Precise eye movement data were simultaneously measured with the fMRI data. Visual components of activation during SP were located in the motion-sensitive, temporo-parieto-occipital region MT+ and the right posterior parietal cortex (PPC). Motor components comprised more widespread activation in these regions and additional activations in the frontal and supplementary eye fields (FEF, SEF), the cingulate gyrus and precuneus. The combined visuo-oculomotor stimulus revealed additional activation in the putamen. Possible transformation sites were found in MT+ and PPC. The MT+ activation evoked by the motion of a single visual dot was very localized, while the activation of the same single dot motion driving the eye was rather extended across MT+. The eye movement information appeared to be dispersed across the visual map of MT+. This could be interpreted as a transfer of the one-dimensional eye movement information into the two-dimensional visual map. Potentially, the dispersed information could be used to remap MT+ to space coordinates rather than retinal coordinates and to provide the basis for a motor output control. A similar interpretation holds for our results in the PPC region.
Multi-Robot FastSLAM for Large Domains
2007-03-01
Derr, D. Fox, A.B. Cremers, Integrating global position estimation and position tracking for mobile robots: The dynamic Markov localization approach...Intelligence (AAAI), 2000. 53. Andrew J. Davison and David W. Murray. Simultaneous Localization and Map-Building Using Active Vision. IEEE...Wyeth, Michael Milford and David Prasser. A Modified Particle Filter for Simultaneous Robot Localization and Landmark Tracking in an Indoor
Multiple foci of spatial attention in multimodal working memory.
Katus, Tobias; Eimer, Martin
2016-11-15
The maintenance of sensory information in working memory (WM) is mediated by the attentional activation of stimulus representations that are stored in perceptual brain regions. Using event-related potentials (ERPs), we measured tactile and visual contralateral delay activity (tCDA/CDA components) in a bimodal WM task to concurrently track the attention-based maintenance of information stored in anatomically segregated (somatosensory and visual) brain areas. Participants received tactile and visual sample stimuli on both sides, and in different blocks, memorized these samples on the same side or on opposite sides. After a retention delay, memory was unpredictably tested for touch or vision. In the same side blocks, tCDA and CDA components simultaneously emerged over the same hemisphere, contralateral to the memorized tactile/visual sample set. In opposite side blocks, these two components emerged over different hemispheres, but had the same sizes and onset latencies as in the same side condition. Our results reveal distinct foci of tactile and visual spatial attention that were concurrently maintained on task-relevant stimulus representations in WM. The independence of spatially-specific biasing mechanisms for tactile and visual WM content suggests that multimodal information is stored in distributed perceptual brain areas that are activated through modality-specific processes that can operate simultaneously and largely independently of each other. Copyright © 2016 Elsevier Inc. All rights reserved.
Valdois, Sylviane; Lassus-Sangosse, Delphine; Lobier, Muriel
2012-05-01
Poor parallel letter-string processing in developmental dyslexia was taken as evidence of a poor visual attention (VA) span, that is, a limitation of visual attentional resources that affects multi-character processing. However, the use of letter stimuli in oral report tasks was challenged on its capacity to highlight a VA span disorder. In particular, reports of poor letter/digit-string processing but preserved symbol-string processing were viewed as evidence of poor visual-to-phonology code mapping, in line with the phonological theory of developmental dyslexia. We assessed here the visual-to-phonological-code mapping disorder hypothesis. In Experiment 1, letter-string, digit-string and colour-string processing was assessed to disentangle a phonological versus visual familiarity account of the letter/digit versus symbol dissociation. Against a visual-to-phonological-code mapping disorder but in support of a familiarity account, results showed poor letter/digit-string processing but preserved colour-string processing in dyslexic children. In Experiment 2, two tasks of letter-string report were used, one of which was performed simultaneously with a highly taxing phonological task. Results show that dyslexic children are similarly impaired in letter-string report whether or not a concurrent phonological task is performed at the same time. Taken together, these results provide strong evidence against a phonological account of poor letter-string processing in developmental dyslexia. Copyright © 2012 John Wiley & Sons, Ltd.
Wilson, A; Fram, D; Sistar, J
1981-06-01
An Imsai 8080 microcomputer is being used to simultaneously generate a color graphics stimulus display and to record visual-evoked cortical potentials. A brief description of the hardware and software developed for this system is presented. Data storage and analysis techniques are also discussed.
Panoramic Night Vision Goggle Testing For Diagnosis and Repair
2000-01-01
Visual Acuity [Marasco & Task, 1999] measures how well a human observer can see high contrast targets at specified light levels through...grid through the PNVG in-board and out-board channels simultaneously and comparing the defects to the size of grid features (Marasco & Task, 1999). The
Bottom-up Attention Orienting in Young Children with Autism
ERIC Educational Resources Information Center
Amso, Dima; Haas, Sara; Tenenbaum, Elena; Markant, Julie; Sheinkopf, Stephen J.
2014-01-01
We examined the impact of simultaneous bottom-up visual influences and meaningful social stimuli on attention orienting in young children with autism spectrum disorders (ASDs). Relative to typically-developing age and sex matched participants, children with ASDs were more influenced by bottom-up visual scene information regardless of whether…
Teacher Vision: Expert and Novice Teachers' Perception of Problematic Classroom Management Scenes
ERIC Educational Resources Information Center
Wolff, Charlotte E.; Jarodzka, Halszka; van den Bogert, Niek; Boshuizen, Henny P. A.
2016-01-01
Visual expertise has been explored in numerous professions, but research on teachers' vision remains limited. Teachers' visual expertise is an important professional skill, particularly the ability to simultaneously perceive and interpret classroom situations for effective classroom management. This skill is complex and relies on an awareness of…
Neuromorphic audio-visual sensor fusion on a sound-localizing robot.
Chan, Vincent Yue-Sek; Jin, Craig T; van Schaik, André
2012-01-01
This paper presents the first robotic system featuring audio-visual (AV) sensor fusion with neuromorphic sensors. We combine a pair of silicon cochleae and a silicon retina on a robotic platform to allow the robot to learn sound localization through self motion and visual feedback, using an adaptive ITD-based sound localization algorithm. After training, the robot can localize sound sources (white or pink noise) in a reverberant environment with an RMS error of 4-5° in azimuth. We also investigate the AV source binding problem and an experiment is conducted to test the effectiveness of matching an audio event with a corresponding visual event based on their onset time. Despite the simplicity of this method and a large number of false visual events in the background, a correct match can be made 75% of the time during the experiment.
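The adaptive, neuromorphic ITD algorithm itself is not given in the abstract; the sketch below only illustrates the underlying idea of ITD-based azimuth estimation, using a conventional cross-correlation and the far-field model ITD ≈ (d/c)·sin(azimuth). The microphone spacing, sampling rate, and function names are assumptions.

```python
import numpy as np

def itd_azimuth(left, right, fs, mic_distance=0.15, c=343.0):
    """Estimate source azimuth from the interaural time difference (ITD).

    The ITD is taken as the lag of the peak of the cross-correlation between
    the two channels, and the far-field model ITD = (d / c) * sin(azimuth)
    is then inverted. Positive azimuth here means the source is toward the left.
    """
    left = left - left.mean()
    right = right - right.mean()
    corr = np.correlate(right, left, mode="full")
    lag = np.argmax(corr) - (len(left) - 1)   # samples; positive => right channel lags
    itd = lag / fs                            # seconds
    s = np.clip(itd * c / mic_distance, -1.0, 1.0)
    return np.degrees(np.arcsin(s)), itd

# toy usage: broadband noise reaching the right ear 3 samples after the left
rng = np.random.default_rng(0)
fs = 48000
noise = rng.normal(size=4096)
left = noise
right = np.roll(noise, 3)
print(itd_azimuth(left, right, fs))           # roughly +8 degrees under these assumptions
```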
SIMPLE: a sequential immunoperoxidase labeling and erasing method.
Glass, George; Papin, Jason A; Mandell, James W
2009-10-01
The ability to simultaneously visualize expression of multiple antigens in cells and tissues can provide powerful insights into cellular and organismal biology. However, standard methods are limited to the use of just two or three simultaneous probes and have not been widely adopted for routine use in paraffin-embedded tissue. We have developed a novel approach called sequential immunoperoxidase labeling and erasing (SIMPLE) that enables the simultaneous visualization of at least five markers within a single tissue section. Utilizing the alcohol-soluble peroxidase substrate 3-amino-9-ethylcarbazole, combined with a rapid non-destructive method for antibody-antigen dissociation, we demonstrate the ability to erase the results of a single immunohistochemical stain while preserving tissue antigenicity for repeated rounds of labeling. SIMPLE is greatly facilitated by the use of a whole-slide scanner, which can capture the results of each sequential stain without any information loss.
Ma, Liyan; Qiu, Bo; Cui, Mingyue; Ding, Jianwei
2017-01-01
Depth image-based rendering (DIBR), which is used to render virtual views from a color image and the corresponding depth map, is one of the key techniques in the 2D to 3D conversion process. Due to the absence of knowledge about the 3D structure of a scene and its corresponding texture, DIBR in the 2D to 3D conversion process inevitably leads to holes in the resulting 3D image as a result of newly-exposed areas. In this paper, we propose a structure-aided depth map preprocessing framework in the transformed domain, which is inspired by the recently proposed domain transform for its low complexity and high efficiency. Firstly, our framework integrates hybrid constraints, including scene structure, edge consistency and visual saliency information, in the transformed domain to improve the performance of depth map preprocessing in an implicit way. Then, adaptive smoothing localization is incorporated into the proposed framework to further reduce over-smoothing and enhance optimization in the non-hole regions. Different from other similar methods, the proposed method can simultaneously achieve the effects of hole filling, edge correction and local smoothing for typical depth maps in a unified framework. Thanks to these advantages, it can yield visually satisfactory results with less computational complexity for high-quality 2D to 3D conversion. Numerical experimental results demonstrate the excellent performance of the proposed method. PMID:28407027
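The domain transform that the preprocessing framework builds on can be illustrated with a minimal one-dimensional sketch: a recursive edge-aware filter applied to a single depth scanline, with a grayscale guidance row so that smoothing is suppressed across strong guidance edges. This is a single-pass simplification in the spirit of Gastal and Oliveira's recursive filter, not the authors' full framework (the structure, edge-consistency and saliency terms are omitted); parameter values are assumptions.

```python
import numpy as np

def domain_transform_smooth_1d(depth_row, guide_row, sigma_s=30.0, sigma_r=0.1):
    """Edge-aware smoothing of one depth scanline via a recursive filter
    in the transformed domain (single forward + backward pass; simplified)."""
    depth = depth_row.astype(float)
    guide = guide_row.astype(float)
    # Derivative of the domain transform: large guidance gradients stretch the
    # domain, which weakens smoothing across depth/color edges.
    dct = 1.0 + (sigma_s / sigma_r) * np.abs(np.diff(guide))
    a = np.exp(-np.sqrt(2.0) / sigma_s)      # feedback coefficient of the recursive filter
    w = a ** dct                              # per-sample weights in the transformed domain
    for i in range(1, depth.size):            # forward pass
        depth[i] += w[i - 1] * (depth[i - 1] - depth[i])
    for i in range(depth.size - 2, -1, -1):   # backward pass
        depth[i] += w[i] * (depth[i + 1] - depth[i])
    return depth

# toy usage: noisy step-shaped depth with a matching edge in the guidance row
rng = np.random.default_rng(3)
guide = np.concatenate([np.zeros(100), np.ones(100)])
depth = guide * 5.0 + rng.normal(0.0, 0.2, 200)
print(domain_transform_smooth_1d(depth, guide)[95:105].round(2))
```

With these settings the noise on either side of the step is smoothed while the step itself is preserved, which is the behavior a depth-map preprocessor needs near object boundaries.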
NASA Technical Reports Server (NTRS)
Uhlemann, H.; Geiser, G.
1975-01-01
Multivariable manual compensatory tracking experiments were carried out to determine typical strategies of the human operator and the conditions under which performance improves when one of the visual displays of the tracking errors is supplemented by auditory feedback. Because the tracking error of the system that is only visually displayed was found to decrease, but not, in general, that of the auditorily supported system, it was concluded that the auditory feedback unloads the operator's visual system, which can then concentrate on the remaining, exclusively visual displays.
Accelerating Demand Paging for Local and Remote Out-of-Core Visualization
NASA Technical Reports Server (NTRS)
Ellsworth, David
2001-01-01
This paper describes a new algorithm that improves the performance of application-controlled demand paging for the out-of-core visualization of data sets that are on either local disks or disks on remote servers. The performance improvements come from better overlapping the computation with the page reading process, and by performing multiple page reads in parallel. The new algorithm can be applied to many different visualization algorithms since application-controlled demand paging is not specific to any visualization algorithm. The paper includes measurements that show that the new multi-threaded paging algorithm decreases the time needed to compute visualizations by one third when using one processor and reading data from local disk. The time needed when using one processor and reading data from remote disk decreased by up to 60%. Visualization runs using data from remote disk ran about as fast as ones using data from local disk because the remote runs were able to make use of the remote server's high performance disk array.
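The abstract does not give the algorithm itself, but its two stated ingredients (overlapping computation with page reads, and issuing multiple reads in parallel) can be sketched with a small thread pool. read_page and process_page are hypothetical placeholders, and the prefetch depth is an assumption.

```python
from concurrent.futures import ThreadPoolExecutor

def read_page(page_id):
    """Hypothetical placeholder: fetch one page from local or remote disk."""

def process_page(page_id, data):
    """Hypothetical placeholder: feed one page to the visualization algorithm."""

def visualize(page_ids, n_io_threads=4, prefetch_depth=8):
    """Overlap computation with I/O: keep several page reads in flight
    while already-loaded pages are being processed."""
    with ThreadPoolExecutor(max_workers=n_io_threads) as pool:
        pending = {}
        it = iter(page_ids)
        for _ in range(prefetch_depth):           # prime the prefetch window
            pid = next(it, None)
            if pid is None:
                break
            pending[pid] = pool.submit(read_page, pid)
        for pid in page_ids:
            data = pending.pop(pid).result()      # blocks only if the read is not done yet
            nxt = next(it, None)                  # immediately request the next page
            if nxt is not None:
                pending[nxt] = pool.submit(read_page, nxt)
            process_page(pid, data)               # CPU work overlaps the outstanding reads

# e.g. visualize(list(range(100)))
```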
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, Michael W. M.; Phillips, Nicholas W.; van Riessen, Grant A.
2016-08-11
Owing to its extreme sensitivity, quantitative mapping of elemental distributions via X-ray fluorescence microscopy (XFM) has become a key microanalytical technique. The recent realisation of scanning X-ray diffraction microscopy (SXDM) meanwhile provides an avenue for quantitative super-resolved ultra-structural visualization. The similarity of their experimental geometries indicates excellent prospects for simultaneous acquisition. Here, in both step- and fly-scanning modes, robust, simultaneous XFM-SXDM is demonstrated.
Accessing and Visualizing scientific spatiotemporal data
NASA Technical Reports Server (NTRS)
Katz, Daniel S.; Bergou, Attila; Berriman, Bruce G.; Block, Gary L.; Collier, Jim; Curkendall, David W.; Good, John; Husman, Laura; Jacob, Joseph C.; Laity, Anastasia;
2004-01-01
This paper discusses work done by JPL's Parallel Applications Technologies Group in helping scientists access and visualize very large data sets through the use of multiple computing resources, such as parallel supercomputers, clusters, and grids. These tools do one or more of the following tasks: visualize local data sets for local users, visualize local data sets for remote users, and access and visualize remote data sets. The tools are used for various types of data, including remotely sensed image data, digital elevation models, astronomical surveys, etc. The paper attempts to pull some common elements out of these tools that may be useful for others who have to work with similarly large data sets.
Areas V1 and V2 show microsaccade-related 3-4-Hz covariation in gamma power and frequency.
Lowet, E; Roberts, M J; Bosman, C A; Fries, P; De Weerd, P
2016-05-01
Neuronal gamma-band synchronization (25-80 Hz) in visual cortex appears sustained and stable during prolonged visual stimulation when investigated with conventional averages across trials. However, recent studies in macaque visual cortex have used single-trial analyses to show that both power and frequency of gamma oscillations exhibit substantial moment-by-moment variation. This has raised the question of whether these apparently random variations might limit the functional role of gamma-band synchronization for neural processing. Here, we studied the moment-by-moment variation in gamma oscillation power and frequency, as well as inter-areal gamma synchronization, by simultaneously recording local field potentials in V1 and V2 of two macaque monkeys. We additionally analyzed electrocorticographic V1 data from a third monkey. Our analyses confirm that gamma-band synchronization is not stationary and sustained but undergoes moment-by-moment variations in power and frequency. However, those variations are neither random nor an obstacle to neural communication. Instead, the gamma power and frequency variations are highly structured, shared between areas and shaped by a microsaccade-related 3-4-Hz theta rhythm. Our findings provide experimental support for the suggestion that cross-frequency coupling might structure and facilitate the information flow between brain regions. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Task-relevant information is prioritized in spatiotemporal contextual cueing.
Higuchi, Yoko; Ueda, Yoshiyuki; Ogawa, Hirokazu; Saiki, Jun
2016-11-01
Implicit learning of visual contexts facilitates search performance, a phenomenon known as contextual cueing; however, little is known about contextual cueing under situations in which multidimensional regularities exist simultaneously. In everyday vision, different information, such as object identity and location, appears simultaneously and interacts with each other. We tested the hypothesis that, in contextual cueing, when multiple regularities are present, the regularities that are most relevant to our behavioral goals would be prioritized. Previous studies of contextual cueing have commonly used the visual search paradigm. However, this paradigm is not suitable for directing participants' attention to a particular regularity. Therefore, we developed a new paradigm, the "spatiotemporal contextual cueing paradigm," and manipulated task-relevant and task-irrelevant regularities. In four experiments, we demonstrated that task-relevant regularities were more responsible for search facilitation than task-irrelevant regularities. This finding suggests our visual behavior is focused on regularities that are relevant to our current goal.
Comprehension of Navigation Directions
NASA Technical Reports Server (NTRS)
Healy, Alice F.; Schneider, Vivian I.
2002-01-01
Subjects were shown navigation instructions varying in length directing them to move in a space represented by grids on a computer screen. They followed the instructions by clicking on the grids in the locations specified. Some subjects repeated back the instructions before following them, some did not, and others repeated back the instructions in reduced form, including only the critical words. The commands in each message were presented simultaneously for half of the subjects and sequentially for the others. For the longest messages, performance was better on the initial commands and worse on the final commands with simultaneous than with sequential presentation. Instruction repetition depressed performance, but reduced repetition removed this disadvantage. Effects of presentation format were attributed to visual scanning strategies. The advantage for reduced repetition was attributable either to enhanced visual scanning or to reduced output interference. A follow-up study with auditory presentation supported the visual scanning explanation.
Chang, Li-Hung; Yotsumoto, Yuko; Salat, David H; Andersen, George J; Watanabe, Takeo; Sasaki, Yuka
2015-01-01
Although normal aging is known to reduce cortical structures globally, the effects of aging on local structures and functions of early visual cortex are less understood. Here, using standard retinotopic mapping and magnetic resonance imaging morphologic analyses, we investigated whether aging affects the areal sizes of early visual cortical areas, which were retinotopically localized, and whether those morphologic measures were associated with individual performance on visual perceptual learning. First, a significant age-associated reduction was found in the areal sizes of V1, V2, and V3. Second, individual ability in visual perceptual learning was significantly correlated with the areal size of V3 in older adults. These results demonstrate that aging changes local structures of the early visual cortex, and that the degree of change may be associated with individual visual plasticity. Copyright © 2015 Elsevier Inc. All rights reserved.
MacNevin, Christopher J; Toutchkine, Alexei; Marston, Daniel J; Hsu, Chia-Wen; Tsygankov, Denis; Li, Li; Liu, Bei; Qi, Timothy; Nguyen, Dan-Vinh; Hahn, Klaus M
2016-03-02
Biosensors that report endogenous protein activity in vivo can be based on environment-sensing fluorescent dyes. The dyes can be attached to reagents that bind selectively to a specific conformation of the targeted protein, such that binding leads to a fluorescence change. Dyes that are sufficiently bright for use at low, nonperturbing intracellular concentrations typically undergo changes in intensity rather than the shifts in excitation or emission maxima that would enable precise quantitation through ratiometric imaging. We report here mero199, an environment-sensing dye that undergoes a 33 nm solvent-dependent shift in excitation. The dye was used to generate a ratiometric biosensor of Cdc42 (CRIB199) without the need for additional fluorophores. CRIB199 was used in the same cell with a FRET sensor of Rac1 activation to simultaneously observe Cdc42 and Rac1 activity in cellular protrusions, indicating that Rac1 but not Cdc42 activity was reduced during tail retraction, and specific protrusions had reduced Cdc42 activity. A novel program (EdgeProps) used to correlate localized activation with cell edge dynamics indicated that Rac1 was specifically reduced during retraction.
Multicontrast photoacoustic in vivo imaging using near-infrared fluorescent proteins
NASA Astrophysics Data System (ADS)
Krumholz, Arie; Shcherbakova, Daria M.; Xia, Jun; Wang, Lihong V.; Verkhusha, Vladislav V.
2014-02-01
Non-invasive imaging of biological processes in vivo is invaluable in advancing biology. Photoacoustic tomography is a scalable imaging technique that provides higher resolution at greater depths in tissue than achievable by purely optical methods. Here we report the application of two spectrally distinct near-infrared fluorescent proteins, iRFP670 and iRFP720, engineered from bacterial phytochromes, as photoacoustic contrast agents. iRFPs provide tissue-specific contrast without the need for delivery of any additional substances. Compared to conventional GFP-like red-shifted fluorescent proteins, iRFP670 and iRFP720 demonstrate stronger photoacoustic signals at longer wavelengths, and can be spectrally resolved from each other and hemoglobin. We simultaneously visualized two differently labeled tumors, one with iRFP670 and the other with iRFP720, as well as blood vessels. We acquired images of a mouse as 2D sections of a whole animal, and as localized 3D volumetric images with high contrast and sub-millimeter resolution at depths up to 8 mm. Our results suggest iRFPs are genetically-encoded probes of choice for simultaneous photoacoustic imaging of several tissues or processes in vivo.
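Separating the two iRFP labels from each other and from hemoglobin in multispectral photoacoustic data is commonly posed as linear spectral unmixing; a minimal per-pixel sketch follows. The wavelength set and the absorption matrix below are illustrative placeholders, not the calibration spectra used in the study.

```python
import numpy as np

# Rows: excitation wavelengths (nm); columns: relative absorption of each absorber.
# The numbers are illustrative placeholders, NOT measured spectra.
wavelengths = np.array([670, 700, 720, 760, 800])
A = np.array([
    # iRFP670  iRFP720  hemoglobin (lumped)
    [1.00,     0.35,    0.30],
    [0.55,     0.80,    0.22],
    [0.25,     1.00,    0.18],
    [0.05,     0.40,    0.15],
    [0.01,     0.05,    0.20],
])

def unmix(signals):
    """Least-squares unmixing of per-wavelength photoacoustic amplitudes
    into relative concentrations of the three absorbers (one pixel)."""
    conc, *_ = np.linalg.lstsq(A, signals, rcond=None)
    return np.clip(conc, 0.0, None)     # clamp small negative solutions

# toy pixel containing mostly iRFP720 plus a blood background
true_conc = np.array([0.1, 0.8, 0.4])
measured = A @ true_conc + np.random.default_rng(4).normal(0.0, 0.01, 5)
print(unmix(measured).round(2))
```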
Imaging Local Ca2+ Signals in Cultured Mammalian Cells
Lock, Jeffrey T.; Ellefsen, Kyle L.; Settle, Bret; Parker, Ian; Smith, Ian F.
2015-01-01
Cytosolic Ca2+ ions regulate numerous aspects of cellular activity in almost all cell types, controlling processes as wide-ranging as gene transcription, electrical excitability and cell proliferation. The diversity and specificity of Ca2+ signaling derives from mechanisms by which Ca2+ signals are generated to act over different time and spatial scales, ranging from cell-wide oscillations and waves occurring over the periods of minutes to local transient Ca2+ microdomains (Ca2+ puffs) lasting milliseconds. Recent advances in electron multiplied CCD (EMCCD) cameras now allow for imaging of local Ca2+ signals with a 128 x 128 pixel spatial resolution at rates of >500 frames sec-1 (fps). This approach is highly parallel and enables the simultaneous monitoring of hundreds of channels or puff sites in a single experiment. However, the vast amounts of data generated (ca. 1 Gb per min) render visual identification and analysis of local Ca2+ events impracticable. Here we describe and demonstrate the procedures for the acquisition, detection, and analysis of local IP3-mediated Ca2+ signals in intact mammalian cells loaded with Ca2+ indicators using both wide-field epi-fluorescence (WF) and total internal reflection fluorescence (TIRF) microscopy. Furthermore, we describe an algorithm developed within the open-source software environment Python that automates the identification and analysis of these local Ca2+ signals. The algorithm localizes sites of Ca2+ release with sub-pixel resolution; allows user review of data; and outputs time sequences of fluorescence ratio signals together with amplitude and kinetic data in an Excel-compatible table. PMID:25867132
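The published Python algorithm is not reproduced here; the sketch below only illustrates the general recipe such tools follow, under assumed parameters: estimate a per-pixel baseline, threshold the fluorescence change against the baseline noise, and localize each connected event region with an intensity-weighted centroid for sub-pixel precision.

```python
import numpy as np
from scipy import ndimage

def detect_local_events(stack, baseline_frames=100, z_thresh=5.0):
    """Detect candidate local Ca2+ events in an image stack (t, y, x) and
    localize each with an intensity-weighted centroid (sub-pixel)."""
    f0 = stack[:baseline_frames].mean(axis=0)             # per-pixel baseline fluorescence
    noise = stack[:baseline_frames].std(axis=0) + 1e-9
    dff = (stack - f0) / (f0 + 1e-9)                       # delta-F/F0 movie
    events = []
    for t, frame in enumerate(dff):
        mask = (stack[t] - f0) > z_thresh * noise          # pixels well above baseline noise
        labels, n = ndimage.label(mask)
        for lbl in range(1, n + 1):
            ys, xs = np.nonzero(labels == lbl)
            w = frame[ys, xs]
            if w.sum() <= 0:
                continue
            cy = np.sum(ys * w) / w.sum()                  # intensity-weighted centroid
            cx = np.sum(xs * w) / w.sum()
            events.append((t, cy, cx, float(w.max())))     # frame, sub-pixel y/x, peak dF/F0
    return events
```

In practice, temporal filtering and grouping of detections across neighboring frames would be added; that bookkeeping is omitted here for brevity.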
Cecere, Roberto; Gross, Joachim; Thut, Gregor
2016-06-01
The ability to integrate auditory and visual information is critical for effective perception and interaction with the environment, and is thought to be abnormal in some clinical populations. Several studies have investigated the time window over which audiovisual events are integrated, also called the temporal binding window, and revealed asymmetries depending on the order of audiovisual input (i.e. the leading sense). When judging audiovisual simultaneity, the binding window appears narrower and non-malleable for auditory-leading stimulus pairs and wider and trainable for visual-leading pairs. Here we specifically examined the level of independence of binding mechanisms when auditory-before-visual vs. visual-before-auditory input is bound. Three groups of healthy participants practiced audiovisual simultaneity detection with feedback, selectively training on auditory-leading stimulus pairs (group 1), visual-leading stimulus pairs (group 2) or both (group 3). Subsequently, we tested for learning transfer (crossover) from trained stimulus pairs to non-trained pairs with opposite audiovisual input. Our data confirmed the known asymmetry in size and trainability for auditory-visual vs. visual-auditory binding windows. More importantly, practicing one type of audiovisual integration (e.g. auditory-visual) did not affect the other type (e.g. visual-auditory), even if trainable by within-condition practice. Together, these results provide crucial evidence that audiovisual temporal binding for auditory-leading vs. visual-leading stimulus pairs are independent, possibly tapping into different circuits for audiovisual integration due to engagement of different multisensory sampling mechanisms depending on leading sense. Our results have implications for informing the study of multisensory interactions in healthy participants and clinical populations with dysfunctional multisensory integration. © 2016 The Authors. European Journal of Neuroscience published by Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Evaluation of Visual Computer Simulator for Computer Architecture Education
ERIC Educational Resources Information Center
Imai, Yoshiro; Imai, Masatoshi; Moritoh, Yoshio
2013-01-01
This paper presents trial evaluation of a visual computer simulator in 2009-2011, which has been developed to play some roles of both instruction facility and learning tool simultaneously. And it illustrates an example of Computer Architecture education for University students and usage of e-Learning tool for Assembly Programming in order to…
Hippocampus, Perirhinal Cortex, and Complex Visual Discriminations in Rats and Humans
ERIC Educational Resources Information Center
Hales, Jena B.; Broadbent, Nicola J.; Velu, Priya D.; Squire, Larry R.; Clark, Robert E.
2015-01-01
Structures in the medial temporal lobe, including the hippocampus and perirhinal cortex, are known to be essential for the formation of long-term memory. Recent animal and human studies have investigated whether perirhinal cortex might also be important for visual perception. In our study, using a simultaneous oddity discrimination task, rats with…
Visual Cues, Student Sex, Material Taught, and the Magnitude of Teacher Expectancy Effects.
ERIC Educational Resources Information Center
Badini, Aldo A.; Rosenthal, Robert
1989-01-01
Conducts an experiment on teacher expectancy effects to investigate the simultaneous effects of student gender, communication channel, and type of material taught (vocabulary and reasoning). Finds that the magnitude of teacher expectation effects was greater when students had access to visual cues, especially when the students were female. (MS)
Parallel Consolidation of Simple Features into Visual Short-Term Memory
ERIC Educational Resources Information Center
Mance, Irida; Becker, Mark W.; Liu, Taosheng
2012-01-01
Although considerable research has examined the storage limits of visual short-term memory (VSTM), little is known about the initial formation (i.e., the consolidation) of VSTM representations. A few previous studies have estimated the capacity of consolidation to be one item at a time. Here we used a sequential-simultaneous manipulation to…
Sensory Mode and "Information Load": Examining the Effects of Timing on Multisensory Processing.
ERIC Educational Resources Information Center
Tiene, Drew
2000-01-01
Discussion of the development of instructional multimedia materials focuses on a study of undergraduates that examined how the use of visual icons affected learning, differences in the instructional effectiveness of visual versus auditory processing of the same information, and timing (whether simultaneous or sequential presentation is more…
Visemic Processing in Audiovisual Discrimination of Natural Speech: A Simultaneous fMRI-EEG Study
ERIC Educational Resources Information Center
Dubois, Cyril; Otzenberger, Helene; Gounot, Daniel; Sock, Rudolph; Metz-Lutz, Marie-Noelle
2012-01-01
In a noisy environment, visual perception of articulatory movements improves natural speech intelligibility. Parallel to phonemic processing based on auditory signal, visemic processing constitutes a counterpart based on "visemes", the distinctive visual units of speech. Aiming at investigating the neural substrates of visemic processing in a…
Loomis, Jack M; Klatzky, Roberta L; McHugh, Brendan; Giudice, Nicholas A
2012-08-01
Spatial working memory can maintain representations from vision, hearing, and touch, representations referred to here as spatial images. The present experiment addressed whether spatial images from vision and hearing that are simultaneously present within working memory retain modality-specific tags or are amodal. Observers were presented with short sequences of targets varying in angular direction, with the targets in a given sequence being all auditory, all visual, or a sequential mixture of the two. On two thirds of the trials, one of the locations was repeated, and observers had to respond as quickly as possible when detecting this repetition. Ancillary detection and localization tasks confirmed that the visual and auditory targets were perceptually comparable. Response latencies in the working memory task showed small but reliable costs in performance on trials involving a sequential mixture of auditory and visual targets, as compared with trials of pure vision or pure audition. These deficits were statistically reliable only for trials on which the modalities of the matching location switched from the penultimate to the final target in the sequence, indicating a switching cost. The switching cost for the pair in immediate succession means that the spatial images representing the target locations retain features of the visual or auditory representations from which they were derived. However, there was no reliable evidence of a performance cost for mixed modalities in the matching pair when the second of the two did not immediately follow the first, suggesting that more enduring spatial images in working memory may be amodal.
Plasma dynamics and structural modifications induced by femtosecond laser pulses in quartz
NASA Astrophysics Data System (ADS)
Hernandez-Rueda, J.; Puerto, D.; Siegel, J.; Galvan-Sosa, M.; Solis, J.
2012-09-01
We have investigated plasma formation and relaxation dynamics induced by single femtosecond laser pulses at the surface of crystalline SiO2 (quartz), along with the corresponding topography modifications. The use of fs-resolved pump-probe microscopy allows combining spatial and temporal resolution with simultaneous access to phenomena occurring in adjacent regions excited with different local fluences. The results show the formation of a transient free-electron plasma ring surrounding the location of the inner ablation crater. Optical microscopy measurements reveal a 30% reflectivity decrease in this region, consistent with local amorphization. The accompanying weak depression of ≈15 nm in this region is explained by gentle material removal via Coulomb explosion. Finally, we discuss the timescales of the plasma dynamics and its role in the modifications produced by comparing the results with previous studies of amorphous SiO2 (fused silica). For this purpose, we have conceived a new way of representing time-resolved microscopy image stacks in a single graph, which allows subtle differences in the overall similar dynamic response of the two materials to be visualized quickly.
Thermal distribution in biological tissue at laser induced fluorescence and photodynamic therapy
NASA Astrophysics Data System (ADS)
Krasnikov, I. V.; Seteikin, A. Yu.; Drakaki, E.; Makropoulou, M.
2012-03-01
Laser-induced fluorescence spectroscopy and photodynamic therapy (PDT) are techniques currently being introduced into clinical applications for the visualization and local destruction of malignant tumours as well as premalignant lesions. During the laser irradiation of tissues for diagnostic and therapeutic purposes, the absorbed optical energy generates heat, although the power density of the treatment light for surface illumination is normally low enough not to cause any significant increase in tissue temperature. In this work we evaluated the utility of Monte Carlo modeling for simulating the temperature fields and the dynamics of heat conduction into the skin tissue under several laser irradiation conditions, with both a pulsed UV laser and a continuous-wave visible laser beam. The analysis of the results showed that heat is not localized at the surface but accumulates inside the tissue. By varying the boundary conditions on the surface and the type of laser radiation (continuous or pulsed), we can reach higher-than-normal temperatures inside the tissue without the simultaneous formation of thermally damaged tissue (e.g., a coagulation or necrosis zone).
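As a toy counterpart to the Monte Carlo simulations described above, the sketch below integrates a one-dimensional heat-diffusion equation with a Beer-Lambert laser source term and a convective surface boundary. It is only an illustrative finite-difference model under assumed tissue and beam parameters, not the authors' simulation.

```python
import numpy as np

# Minimal 1-D heat-diffusion sketch (not the authors' Monte Carlo model):
# absorbed laser power follows Beer-Lambert decay with depth, and the surface
# loses heat by convection. All parameter values below are assumptions.

def simulate_depth_temperature(duration=5.0, dz=1e-4, dt=1e-3, depth=5e-3,
                               mu_a=1e3, irradiance=1e3):
    k, rho, c = 0.5, 1000.0, 3600.0      # W/m/K, kg/m^3, J/kg/K (tissue-like)
    h, t_amb = 10.0, 37.0                # surface convection, ambient temp (C)
    alpha = k / (rho * c)                # thermal diffusivity
    z = np.arange(0, depth, dz)
    temp = np.full_like(z, 37.0)         # start at body temperature
    source = mu_a * irradiance * np.exp(-mu_a * z) / (rho * c)  # K/s

    for _ in range(int(duration / dt)):
        lap = np.zeros_like(temp)
        lap[1:-1] = (temp[2:] - 2 * temp[1:-1] + temp[:-2]) / dz**2
        temp += dt * (alpha * lap + source)
        # convective boundary at the surface, insulated at depth
        temp[0] = temp[1] - dz * h * (temp[1] - t_amb) / k
        temp[-1] = temp[-2]
    return z, temp  # peak temperature can sit just below the surface
```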
Atomic displacements in the charge ice pyrochlore Bi2Ti2O6O' studied by neutron total scattering
NASA Astrophysics Data System (ADS)
Shoemaker, Daniel P.; Seshadri, Ram; Hector, Andrew L.; Llobet, Anna; Proffen, Thomas; Fennie, Craig J.
2010-04-01
The oxide pyrochlore Bi2Ti2O6O' is known to be associated with large displacements of Bi and O' atoms from their ideal crystallographic positions. Neutron total scattering, analyzed in both reciprocal and real space, is employed here to understand the nature of these displacements. Rietveld analysis and maximum entropy methods are used to produce an average picture of the structural nonideality. Local structure is modeled via large-box reverse Monte Carlo simulations constrained simultaneously by the Bragg profile and real-space pair distribution function. Direct visualization and statistical analyses of these models show the precise nature of the static Bi and O' displacements. Correlations between neighboring Bi displacements are analyzed using coordinates from the large-box simulations. The framework of continuous symmetry measures has been applied to distributions of O'Bi4 tetrahedra to examine deviations from ideality. Bi displacements from ideal positions appear correlated over local length scales. The results are consistent with the idea that these nonmagnetic lone-pair containing pyrochlore compounds can be regarded as highly structurally frustrated systems.
Arvanitis, Costas D.; McDannold, Nathan
2013-01-01
Purpose: Ultrasound can be used to noninvasively produce different bioeffects via viscous heating, acoustic cavitation, or their combination, and these effects can be exploited to develop a wide range of therapies for cancer and other disorders. In order to accurately localize and control these different effects, imaging methods are desired that can map both temperature changes and cavitation activity. To address these needs, the authors integrated an ultrasound imaging array into an MRI-guided focused ultrasound (MRgFUS) system to simultaneously visualize thermal and mechanical effects via passive acoustic mapping (PAM) and MR temperature imaging (MRTI), respectively. Methods: The system was tested with an MRgFUS system developed for transcranial sonication for brain tumor ablation in experiments with a tissue-mimicking phantom and a phantom-filled ex vivo macaque skull. In experiments on cavitation-enhanced heating, 10-s continuous-wave sonications were applied at increasing power levels (30–110 W) until broadband acoustic emissions (a signature of inertial cavitation) were evident. The presence or lack of signal in the PAM, as well as its magnitude and location, was compared to the focal heating in the MRTI. Additional experiments compared PAM with standard B-mode ultrasound imaging and tested the feasibility of the system to map cavitation activity produced during low-power (5 W) burst sonications in a channel filled with a microbubble ultrasound contrast agent. Results: When inertial cavitation was evident, localized activity was present in PAM and a marked increase in heating was observed in MRTI. The location of the cavitation activity and heating agreed on average after registration of the two imaging modalities; the distance between the maximum cavitation activity and focal heating was −3.4 ± 2.1 mm and −0.1 ± 3.3 mm in the axial and transverse ultrasound array directions, respectively. Distortions and other MRI issues introduced small uncertainties in the PAM/MRTI registration. Although there was substantial variation, a nonlinear relationship between the average intensity of the cavitation maps, which was relatively constant during sonication, and the peak temperature rise was evident. An exponential fit to the data had a correlation coefficient (R2) of 0.62. The system was also found to be capable of visualizing cavitation activity with B-mode imaging and of passively mapping cavitation activity transcranially during cavitation-enhanced heating and during low-power sonication with an ultrasound contrast agent. Conclusions: The authors have demonstrated the feasibility of integrating an ultrasound imaging array into an MRgFUS system to simultaneously map localized cavitation activity and temperature. The authors anticipate that this integrated approach can be utilized to develop controllers for cavitation-enhanced ablation and facilitate the optimization and development of this and other ultrasound therapies. The integrated system may also provide a useful tool to study the bioeffects of acoustic cavitation. PMID:24320468
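The exponential relationship reported between the average cavitation-map intensity and the peak temperature rise can be fitted with a few lines of SciPy; the sketch below shows the procedure on made-up placeholder values, not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch of the kind of exponential fit described above; the (x, y) values
# here are made-up placeholders, not the paper's measurements.
cavitation_intensity = np.array([0.2, 0.5, 0.9, 1.4, 2.0, 2.7])   # arbitrary units
peak_temp_rise = np.array([1.0, 1.6, 2.9, 5.2, 9.0, 16.0])        # degrees C

def exponential(x, a, b):
    return a * np.exp(b * x)

(a, b), _ = curve_fit(exponential, cavitation_intensity, peak_temp_rise, p0=(1.0, 1.0))
predicted = exponential(cavitation_intensity, a, b)
ss_res = np.sum((peak_temp_rise - predicted) ** 2)
ss_tot = np.sum((peak_temp_rise - peak_temp_rise.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot
print(f"fit: {a:.2f} * exp({b:.2f} * x), R^2 = {r_squared:.2f}")
```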
Shibata, Naoya; Findlay, Scott D; Matsumoto, Takao; Kohno, Yuji; Seki, Takehito; Sánchez-Santolino, Gabriel; Ikuhara, Yuichi
2017-07-18
The functional properties of materials and devices are critically determined by the electromagnetic field structures formed inside them, especially at nanointerface and surface regions, because such structures are strongly associated with the dynamics of electrons, holes and ions. To understand the fundamental origin of many exotic properties in modern materials and devices, it is essential to directly characterize local electromagnetic field structures at such defect regions, even down to atomic dimensions. In recent years, rapid progress in the development of high-speed area detectors for aberration-corrected scanning transmission electron microscopy (STEM) with sub-angstrom spatial resolution has opened new possibilities to directly image such electromagnetic field structures at very high-resolution. In this Account, we give an overview of our recent development of differential phase contrast (DPC) microscopy for aberration-corrected STEM and its application to many materials problems. In recent years, we have developed segmented-type STEM detectors which divide the detector plane into 16 segments and enable simultaneous imaging of 16 STEM images which are sensitive to the positions and angles of transmitted/scattered electrons on the detector plane. These detectors also have atomic-resolution imaging capability. Using these segmented-type STEM detectors, we show DPC STEM imaging to be a very powerful tool for directly imaging local electromagnetic field structures in materials and devices in real space. For example, DPC STEM can clearly visualize the local electric field variation due to the abrupt potential change across a p-n junction in a GaAs semiconductor, which cannot be observed by normal in-focus bright-field or annular type dark-field STEM imaging modes. DPC STEM is also very effective for imaging magnetic field structures in magnetic materials, such as magnetic domains and skyrmions. Moreover, real-time imaging of electromagnetic field structures can now be realized through very fast data acquisition, processing, and reconstruction algorithms. If we use DPC STEM for atomic-resolution imaging using a sub-angstrom size electron probe, it has been shown that we can directly observe the atomic electric field inside atoms within crystals and even inside single atoms, the field between the atomic nucleus and the surrounding electron cloud, which possesses information about the atomic species, local chemical bonding and charge redistribution between bonded atoms. This possibility may open an alternative way for directly visualizing atoms and nanostructures, that is, seeing atoms as an entity of electromagnetic fields that reflect the intra- and interatomic electronic structures. In this Account, the current status of aberration-corrected DPC STEM is highlighted, along with some applications in real material and device studies.
Global processing takes time: A meta-analysis on local-global visual processing in ASD.
Van der Hallen, Ruth; Evers, Kris; Brewaeys, Katrien; Van den Noortgate, Wim; Wagemans, Johan
2015-05-01
What does an individual with autism spectrum disorder (ASD) perceive first: the forest or the trees? In spite of 30 years of research and influential theories like the weak central coherence (WCC) theory and the enhanced perceptual functioning (EPF) account, the interplay of local and global visual processing in ASD remains only partly understood. Research findings vary in indicating a local processing bias or a global processing deficit, and often contradict each other. We applied a formal meta-analytic approach and combined 56 articles that tested about 1,000 ASD participants and used a wide range of stimuli and tasks to investigate local and global visual processing in ASD. Overall, results show neither enhanced local visual processing nor a deficit in global visual processing. Detailed analysis reveals a difference in the temporal pattern of the local-global balance, that is, slow global processing in individuals with ASD. Whereas task-dependent interaction effects are obtained, gender, age, and IQ of either participant group seem to have no direct influence on performance. Based on the overview of the literature, suggestions are made for future research. (c) 2015 APA, all rights reserved.
ERIC Educational Resources Information Center
Chen, Chi-hsin; Gershkoff-Stowe, Lisa; Wu, Chih-Yi; Cheung, Hintat; Yu, Chen
2017-01-01
Two experiments were conducted to examine adult learners' ability to extract multiple statistics in simultaneously presented visual and auditory input. Experiment 1 used a cross-situational learning paradigm to test whether English speakers were able to use co-occurrences to learn word-to-object mappings and concurrently form object categories…
ERIC Educational Resources Information Center
Türk, Emine; Erçetin, Gülcan
2014-01-01
This study examines the effects of interactive versus simultaneous display of visual and verbal multimedia information on incidental vocabulary learning and reading comprehension of learners of English with lower proficiency levels. In the interactive display condition, learners were allowed to select the type of multimedia information whereas the…
The effect of visual context on manual localization of remembered targets
NASA Technical Reports Server (NTRS)
Barry, S. R.; Bloomberg, J. J.; Huebner, W. P.
1997-01-01
This paper examines the contribution of egocentric cues and visual context to manual localization of remembered targets. Subjects pointed in the dark to the remembered position of a target previously viewed without or within a structured visual scene. Without a remembered visual context, subjects pointed to within 2 degrees of the target. The presence of a visual context with cues of straight ahead enhanced pointing performance to the remembered location of central but not off-center targets. Thus, visual context provides strong visual cues of target position and the relationship of body position to target location. Without a visual context, egocentric cues provide sufficient input for accurate pointing to remembered targets.
Intelligent visual localization of wireless capsule endoscopes enhanced by color information.
Dimas, George; Spyrou, Evaggelos; Iakovidis, Dimitris K; Koulaouzidis, Anastasios
2017-10-01
Wireless capsule endoscopy (WCE) is performed with a miniature swallowable endoscope enabling visualization of the whole gastrointestinal (GI) tract. One of the most challenging problems in WCE is the localization of the capsule endoscope (CE) within the GI lumen. Contemporary, radiation-free localization approaches are mainly based on the use of external sensors and transit-time estimation techniques, with low localization accuracy in practice. The latest advances for the solution of this problem include localization approaches based solely on visual information from the CE camera. In this paper we present a novel visual localization approach based on an intelligent artificial neural network architecture, which implements a generic visual odometry (VO) framework capable of estimating the motion of the CE in physical units. Unlike conventional, geometric VO approaches, the proposed one is adaptive to the geometric model of the CE used; therefore, it does not require any prior knowledge about the CE or its intrinsic parameters. Furthermore, it exploits color as a cue to increase localization accuracy and robustness. Experiments were performed using a robotic-assisted setup providing ground-truth information about the actual location of the CE. The lowest average localization error achieved is 2.70 ± 1.62 cm, which is significantly lower than the error obtained with the geometric approach. This result constitutes a promising step towards the in-vivo application of VO, which will open new horizons for accurate local treatment, including drug infusion and surgical interventions. Copyright © 2017 Elsevier Ltd. All rights reserved.
Dzyubachyk, Oleh; Khmelinskii, Artem; Plenge, Esben; Kok, Peter; Snoeks, Thomas J A; Poot, Dirk H J; Löwik, Clemens W G M; Botha, Charl P; Niessen, Wiro J; van der Weerd, Louise; Meijering, Erik; Lelieveldt, Boudewijn P F
2014-01-01
In small animal imaging studies, when the locations of the micro-structures of interest are unknown a priori, there is a simultaneous need for full-body coverage and high resolution. In MRI, additional requirements on image contrast and acquisition time will often make it impossible to acquire such images directly. Recently, a resolution-enhancing post-processing technique called super-resolution reconstruction (SRR) has been demonstrated to improve visualization and localization of micro-structures in small animal MRI by combining multiple low-resolution acquisitions. However, when the field-of-view is large relative to the desired voxel size, solving the SRR problem becomes very expensive, in terms of both memory requirements and computation time. In this paper we introduce a novel local approach to SRR that aims to overcome the computational problems and allow researchers to efficiently explore both global and local characteristics in whole-body small animal MRI. The method integrates state-of-the-art image processing techniques from the areas of articulated atlas-based segmentation, planar reformation, and SRR. A proof-of-concept is provided with two case studies involving CT, BLI, and MRI data of bone and kidney tumors in a mouse model. We show that local SRR-MRI is a computationally efficient complementary imaging modality for the precise characterization of tumor metastases, and that the method provides a feasible high-resolution alternative to conventional MRI.
Acoustic Tactile Representation of Visual Information
NASA Astrophysics Data System (ADS)
Silva, Pubudu Madhawa
Our goal is to explore the use of hearing and touch to convey graphical and pictorial information to visually impaired people. Our focus is on dynamic, interactive display of visual information using existing, widely available devices, such as smart phones and tablets with touch-sensitive screens. We propose a new approach for acoustic-tactile representation of visual signals that can be implemented on a touch screen and allows the user to actively explore a two-dimensional layout consisting of one or more objects with a finger or a stylus while listening to auditory feedback via stereo headphones. The proposed approach is acoustic-tactile because sound is used as the primary source of information for object localization and identification, while touch is used for pointing and kinesthetic feedback. A static overlay of raised-dot tactile patterns can also be added. A key distinguishing feature of the proposed approach is the use of spatial sound (directional and distance cues) to facilitate the active exploration of the layout. We consider a variety of configurations for acoustic-tactile rendering of object size, shape, identity, and location, as well as for the overall perception of simple layouts and scenes. While our primary goal is to explore the fundamental capabilities and limitations of representing visual information in acoustic-tactile form, we also consider a number of relatively simple configurations that can be tied to specific applications. In particular, we consider a simple scene layout consisting of objects in a linear arrangement, each with a distinct tapping sound, which we compare to a "virtual cane." We also present a configuration that can convey a "Venn diagram." We present systematic subjective experiments to evaluate the effectiveness of the proposed display for shape perception, object identification and localization, and 2-D layout perception, as well as the applications. Our experiments were conducted with visually blocked subjects. The results are evaluated in terms of accuracy and speed, and they demonstrate the advantages of spatial sound for guiding the scanning finger or pointer in shape perception, object localization, and layout exploration. We show that these advantages increase with the amount of detail (smaller object size) in the display. Our experimental results show that the proposed system outperforms the state of the art in shape perception, including variable-friction displays. We also demonstrate that, even though they are currently available only as static overlays, raised-dot patterns provide the best shape rendition in terms of both accuracy and speed. Our experiments with layout rendering and perception demonstrate that simultaneous representation of objects, using the most effective approaches for directionality and distance rendering, approaches the optimal performance level provided by visual layout perception. Finally, experiments with the virtual cane and Venn diagram configurations demonstrate that the proposed techniques can be used effectively in simple but nontrivial real-world applications. One of the most important conclusions of our experiments is that there is a clear performance gap between experienced and inexperienced subjects, which indicates that there is a lot of room for improvement with appropriate and extensive training.
Because we explored a wide variety of design alternatives and focused on different aspects of acoustic-tactile interfaces, our results offer many valuable insights and hold great promise for the design of future systematic tests with visually impaired and visually blocked subjects, utilizing the most effective configurations.
Perception of shapes targeting local and global processes in autism spectrum disorders.
Grinter, Emma J; Maybery, Murray T; Pellicano, Elizabeth; Badcock, Johanna C; Badcock, David R
2010-06-01
Several researchers have found evidence for impaired global processing in the dorsal visual stream in individuals with autism spectrum disorders (ASDs). However, support for a similar pattern of visual processing in the ventral visual stream is less consistent. Critical to resolving the inconsistency is the assessment of local and global form processing ability. Within the visual domain, radial frequency (RF) patterns - shapes formed by sinusoidally varying the radius of a circle to add a certain number of 'bumps' to the circle - can be used to examine local and global form perception. Typically developing children and children with an ASD discriminated between circles and RF patterns that are processed either locally (RF24) or globally (RF3). Children with an ASD required greater shape deformation to identify RF3 shapes compared to typically developing children, consistent with difficulty in global processing in the ventral stream. No group difference was observed for RF24 shapes, suggesting intact local ventral-stream processing. These outcomes support the position that a deficit in global visual processing is present in ASDs, consistent with the notion of Weak Central Coherence.
Purdon, Patrick L.; Millan, Hernan; Fuller, Peter L.; Bonmassar, Giorgio
2008-01-01
Simultaneous recording of electrophysiology and functional magnetic resonance imaging (fMRI) is a technique of growing importance in neuroscience. Rapidly evolving clinical and scientific requirements have created a need for hardware and software that can be customized for specific applications. Hardware may require customization to enable a variety of recording types (e.g., electroencephalogram, local field potentials, or multi-unit activity) while meeting the stringent and costly requirements of MRI safety and compatibility. Real-time signal processing tools are an enabling technology for studies of learning, attention, sleep, epilepsy, neurofeedback, and neuropharmacology, yet real-time signal processing tools are difficult to develop. We describe an open-source system for simultaneous electrophysiology and fMRI featuring low noise (< 0.6 uV p-p input noise), electromagnetic compatibility for MRI (tested up to 7 Tesla), and user-programmable real-time signal processing. The hardware distribution provides the complete specifications required to build an MRI-compatible electrophysiological data acquisition system, including circuit schematics, printed circuit board (PCB) layouts, Gerber files for PCB fabrication and robotic assembly, a bill of materials with part numbers, data sheets, and vendor information, and test procedures. The software facilitates rapid implementation of real-time signal processing algorithms. This system has been used in human EEG/fMRI studies at 3 and 7 Tesla examining the auditory system, visual system, sleep physiology, and anesthesia, as well as in intracranial electrophysiological studies of the non-human primate visual system during 3 Tesla fMRI, and in human hyperbaric physiology studies at depths of up to 300 feet below sea level. PMID:18761038
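As a small illustration of the user-programmable real-time processing such a platform supports, the sketch below applies a causal band-pass filter to streamed data blocks while carrying the filter state across blocks. It is a generic example rather than code from the described system; the sampling rate, band edges, and block size are assumptions.

```python
import numpy as np
from scipy import signal

# Generic streaming band-pass filter sketch (not code from the described system).
# Assumed parameters: 1 kHz sampling, 1-40 Hz pass band, 256-sample blocks.
FS = 1000.0
SOS = signal.butter(4, [1.0, 40.0], btype="bandpass", fs=FS, output="sos")

class StreamingFilter:
    """Applies the filter block by block, preserving state across blocks."""
    def __init__(self, sos):
        self.sos = sos
        self.zi = signal.sosfilt_zi(sos)  # initial conditions for one channel

    def process(self, block):
        out, self.zi = signal.sosfilt(self.sos, block, zi=self.zi)
        return out

# Example: filter a stream of simulated 256-sample blocks.
filt = StreamingFilter(SOS)
for _ in range(10):
    raw = np.random.randn(256)          # stand-in for one acquired EEG block
    clean = filt.process(raw)
```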
A Small World of Neuronal Synchrony
Yu, Shan; Huang, Debin; Singer, Wolf
2008-01-01
A small-world network has been suggested to be an efficient solution for achieving both modular and global processing—a property highly desirable for brain computations. Here, we investigated functional networks of cortical neurons using correlation analysis to identify functional connectivity. To reconstruct the interaction network, we applied the Ising model based on the principle of maximum entropy. This allowed us to assess the interactions by measuring pairwise correlations and to assess the strength of coupling from the degree of synchrony. Visual responses were recorded in visual cortex of anesthetized cats, simultaneously from up to 24 neurons. First, pairwise correlations captured most of the patterns in the population's activity and, therefore, provided a reliable basis for the reconstruction of the interaction networks. Second, and most importantly, the resulting networks had small-world properties; the average path lengths were as short as in simulated random networks, but the clustering coefficients were larger. Neurons differed considerably with respect to the number and strength of interactions, suggesting the existence of “hubs” in the network. Notably, there was no evidence for scale-free properties. These results suggest that cortical networks are optimized for the coexistence of local and global computations: feature detection and feature integration or binding. PMID:18400792
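A minimal sketch of the network-construction step described above: build a functional graph by thresholding pairwise correlations between simultaneously recorded spike trains, then compute the clustering coefficient and average path length that define small-world structure. This generic illustration uses networkx and a simple correlation threshold; it is not the authors' Ising/maximum-entropy analysis, and the threshold and synthetic data are assumptions.

```python
import numpy as np
import networkx as nx

# Generic illustration (not the authors' maximum-entropy analysis): build a
# functional graph from pairwise correlations of binned spike counts and
# compute the two quantities that characterize small-world structure.
def functional_network(spike_counts, threshold=0.2):
    corr = np.corrcoef(spike_counts)              # neurons x neurons
    n = corr.shape[0]
    g = nx.Graph()
    g.add_nodes_from(range(n))
    for i in range(n):
        for j in range(i + 1, n):
            if corr[i, j] > threshold:            # assumed coupling criterion
                g.add_edge(i, j, weight=corr[i, j])
    return g

# Example: 24 neurons, 2000 time bins, with a shared component so that
# correlations (and hence edges) actually appear in the synthetic data.
shared = np.random.poisson(1.0, size=2000)
counts = np.random.poisson(1.0, size=(24, 2000)) + shared
g = functional_network(counts)
if nx.is_connected(g):                            # path length needs a connected graph
    print("clustering:", nx.average_clustering(g))
    print("path length:", nx.average_shortest_path_length(g))
```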
Stimulus Load and Oscillatory Activity in Higher Cortex
Kornblith, Simon; Buschman, Timothy J.; Miller, Earl K.
2016-01-01
Exploring and exploiting a rich visual environment requires perceiving, attending, and remembering multiple objects simultaneously. Recent studies have suggested that this mental “juggling” of multiple objects may depend on oscillatory neural dynamics. We recorded local field potentials from the lateral intraparietal area, frontal eye fields, and lateral prefrontal cortex while monkeys maintained variable numbers of visual stimuli in working memory. Behavior suggested independent processing of stimuli in each hemifield. During stimulus presentation, higher-frequency power (50–100 Hz) increased with the number of stimuli (load) in the contralateral hemifield, whereas lower-frequency power (8–50 Hz) decreased with the total number of stimuli in both hemifields. During the memory delay, lower-frequency power increased with contralateral load. Load effects on higher frequencies during stimulus encoding and lower frequencies during the memory delay were stronger when neural activity also signaled the location of the stimuli. Like power, higher-frequency synchrony increased with load, but beta synchrony (16–30 Hz) showed the opposite effect, increasing when power decreased (stimulus presentation) and decreasing when power increased (memory delay). Our results suggest roles for lower-frequency oscillations in top-down processing and higher-frequency oscillations in bottom-up processing. PMID:26286916
A Mapping Method of SLAM Based on Look-Up Table
NASA Astrophysics Data System (ADS)
Wang, Z.; Li, J.; Wang, A.; Wang, J.
2017-09-01
In recent years several V-SLAM (Visual Simultaneous Localization and Mapping) approaches have appeared showing impressive reconstructions of the world. However, these maps are built with far more than the required information, a limitation that comes from processing each key-frame in full. In this paper we present, for the first time, a mapping method for visual SLAM based on a look-up table (LUT) that can improve mapping effectively. Because the method extracts features in each cell into which the image is divided, it can obtain a camera pose that is more representative of the whole key-frame. The tracking direction of a key-frame is obtained by counting the parallax directions of its feature points. The LUT stores, for each tracking direction, the cells needed for mapping, which reduces redundant information in the key-frame and makes mapping more efficient. The results show that a better map with less noise is built using less than one-third of the time. We believe that the capacity of the LUT to build maps efficiently makes it a good choice for the community to investigate in scene-reconstruction problems.
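To make the cell-and-parallax idea concrete, the sketch below divides an image into a grid of cells and estimates a key-frame's dominant tracking direction by voting over the directions of feature parallax vectors. The helper names, grid size, and number of direction bins are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Illustrative sketch only (not the paper's implementation): divide the image
# into a grid of cells and estimate a key-frame's dominant tracking direction
# by voting over the parallax directions of matched feature points.

def cell_index(pt, image_shape, grid=(4, 4)):
    """Map a pixel coordinate (x, y) to the index of the grid cell containing it."""
    h, w = image_shape
    col = min(int(pt[0] / w * grid[1]), grid[1] - 1)
    row = min(int(pt[1] / h * grid[0]), grid[0] - 1)
    return row * grid[1] + col

def dominant_tracking_direction(prev_pts, curr_pts, n_bins=8):
    """Quantize each feature's parallax vector into n_bins directions and vote."""
    flow = np.asarray(curr_pts) - np.asarray(prev_pts)        # parallax vectors
    angles = np.arctan2(flow[:, 1], flow[:, 0])               # range [-pi, pi]
    bins = ((angles + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    return np.bincount(bins, minlength=n_bins).argmax()       # winning direction

# A look-up table could then map each tracking direction to the subset of
# cells kept for mapping, e.g. lut = {0: [0, 1, 4, 5], 1: [1, 2, 5, 6], ...}
```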
CAMBerVis: visualization software to support comparative analysis of multiple bacterial strains.
Woźniak, Michał; Wong, Limsoon; Tiuryn, Jerzy
2011-12-01
A number of inconsistencies in genome annotations are documented among bacterial strains. Visualization of the differences may help biologists to make correct decisions in spurious cases. We have developed a visualization tool, CAMBerVis, to support comparative analysis of multiple bacterial strains. The software manages simultaneous visualization of multiple bacterial genomes, enabling visual analysis focused on genome structure annotations. The CAMBerVis software is freely available at the project website: http://bioputer.mimuw.edu.pl/camber. Input datasets for Mycobacterium tuberculosis and Staphylococcus aureus are integrated with the software as examples. Contact: m.wozniak@mimuw.edu.pl. Supplementary data are available at Bioinformatics online.
Crossflow Stability and Transition Experiments in a Swept-Wing Flow. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Dagenhart, John Ray
1992-01-01
An experimental examination of crossflow instability and transition on a 45 degree swept wing is conducted in the Arizona State University Unsteady Wind Tunnel. The stationary-vortex pattern and transition location are visualized using both sublimating-chemical and liquid-crystal coatings. Extensive hot-wire measurements are conducted at several measurement stations across a single vortex track. The mean and travelling-wave disturbances are measured simultaneously. Stationary-crossflow disturbance profiles are determined by subtracting either a reference or a span-averaged velocity profile from the mean-velocity data. Mean, stationary-crossflow, and travelling-wave velocity data are presented as local boundary-layer profiles and as contour plots across a single stationary-crossflow vortex track. Disturbance-mode profiles and growth rates are determined. The experimental data are compared to predictions from linear stability theory.
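A minimal sketch of the disturbance-profile decomposition mentioned above, assuming the mean-velocity data are arranged as an array of profiles indexed by span station and wall-normal position; the array layout and synthetic values are assumptions for illustration.

```python
import numpy as np

# Sketch of the stationary-crossflow disturbance decomposition described above,
# assuming mean velocity profiles stored as u_mean[span_station, wall_normal_point].
def stationary_disturbance_profiles(u_mean):
    u_span_avg = u_mean.mean(axis=0, keepdims=True)   # span-averaged profile
    return u_mean - u_span_avg                        # stationary-crossflow part

# Example with synthetic data: 16 span stations, 40 wall-normal points.
u_mean = np.random.rand(16, 40)
u_prime = stationary_disturbance_profiles(u_mean)
```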
A Review of Distributed Control Techniques for Power Quality Improvement in Micro-grids
NASA Astrophysics Data System (ADS)
Zeeshan, Hafiz Muhammad Ali; Nisar, Fatima; Hassan, Ahmad
2017-05-01
A micro-grid is typically visualized as a small-scale local power supply network based on distributed energy resources (DERs) that can operate simultaneously with the grid as well as in a standalone manner. The distributed generator of a micro-grid system is usually a converter-inverter topology acting as a non-linear load and injecting harmonics into the distribution feeder. Hence, the negative effects of distributed generation sources and components on power quality are clearly witnessed. In this paper, a review of distributed control approaches for power quality improvement is presented, encompassing harmonic compensation, loss mitigation and optimum power sharing in a multi-source, multi-load distributed power network. The decentralized subsystems for harmonic compensation and active-reactive power sharing accuracy have been analysed in detail. Results have been validated to be consistent with IEEE standards.
Autonomous Deep-Space Optical Navigation Project
NASA Technical Reports Server (NTRS)
D'Souza, Christopher
2014-01-01
This project will advance the Autonomous Deep-space navigation capability applied to the Autonomous Rendezvous and Docking (AR&D) Guidance, Navigation and Control (GNC) system by testing it on hardware, particularly in a flight processor, with a goal of limited testing in the Integrated Power, Avionics and Software (IPAS) with the ARCM (Asteroid Retrieval Crewed Mission) DRO (Distant Retrograde Orbit) Autonomous Rendezvous and Docking (AR&D) scenario. The technology to be harnessed is called 'optical flow', also known as 'visual odometry'. It is being matured in automotive and SLAM (Simultaneous Localization and Mapping) applications but has yet to be applied to spacecraft navigation. In light of the tremendous potential of this technique, we believe that NASA needs to design an optical navigation architecture that will use it. This technique is flexible enough to be applicable to navigating around planetary bodies, such as asteroids.
The mean field theory in EM procedures for blind Markov random field image restoration.
Zhang, J
1993-01-01
A Markov random field (MRF) model-based EM (expectation-maximization) procedure for simultaneously estimating the degradation model and restoring the image is described. The MRF is a coupled one which provides continuity (inside regions of smooth gray tones) and discontinuity (at region boundaries) constraints for the restoration problem which is, in general, ill posed. The computational difficulty associated with the EM procedure for MRFs is resolved by using the mean field theory from statistical mechanics. An orthonormal blur decomposition is used to reduce the chances of undesirable locally optimal estimates. Experimental results on synthetic and real-world images show that this approach provides good blur estimates and restored images. The restored images are comparable to those obtained by a Wiener filter in mean-square error, but are most visually pleasing.
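The mean-field idea invoked above can be illustrated on a much simpler problem than blind restoration: denoising a binary (+/-1) image under an Ising-style MRF prior. The sketch below is a generic mean-field fixed-point update, not the paper's EM algorithm, and the coupling and data-attachment weights are assumed values.

```python
import numpy as np

# Generic mean-field illustration (not the paper's blind-restoration EM):
# denoise a +/-1 image y under an Ising MRF prior by iteratively updating the
# mean value m of each pixel given the current means of its four neighbours.
def mean_field_denoise(y, coupling=1.0, noise_weight=2.0, n_iters=30):
    m = y.astype(np.float64).copy()               # initialize means with the data
    for _ in range(n_iters):
        neigh = np.zeros_like(m)
        neigh[1:, :] += m[:-1, :]
        neigh[:-1, :] += m[1:, :]
        neigh[:, 1:] += m[:, :-1]
        neigh[:, :-1] += m[:, 1:]
        # local field = prior coupling to neighbour means + data attachment
        field = coupling * neigh + noise_weight * y
        m = np.tanh(field)                        # mean-field fixed-point update
    return np.sign(m)                             # MAP-like binary estimate

# Example: a two-region binary image with 10% of pixels flipped by noise.
clean = np.ones((64, 64)); clean[:, 32:] = -1
flips = np.where(np.random.rand(64, 64) < 0.1, -1, 1)
restored = mean_field_denoise(clean * flips)
```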
Shabbott, Britne A; Sainburg, Robert L
2010-05-01
Visuomotor adaptation is mediated by errors between intended and sensory-detected arm positions. However, it is not clear whether visual-based errors that are shown during the course of motion lead to qualitatively different or more efficient adaptation than errors shown after movement. For instance, continuous visual feedback mediates online error corrections, which may facilitate or inhibit the adaptation process. We addressed this question by manipulating the timing of visual error information and task instructions during a visuomotor adaptation task. Subjects were exposed to a visuomotor rotation, during which they received continuous visual feedback (CF) of hand position with instructions to correct or not correct online errors, or knowledge-of-results (KR), provided as a static hand-path at the end of each trial. Our results showed that all groups improved performance with practice, and that online error corrections were inconsequential to the adaptation process. However, in contrast to the CF groups, the KR group showed relatively small reductions in mean error with practice, increased inter-trial variability during rotation exposure, and more limited generalization across target distances and workspace. Further, although the KR group showed improved performance with practice, after-effects were minimal when the rotation was removed. These findings suggest that simultaneous visual and proprioceptive information is critical in altering neural representations of visuomotor maps, although delayed error information may elicit compensatory strategies to offset perturbations.
Tianxiao Jiang; Siddiqui, Hasan; Ray, Shruti; Asman, Priscella; Ozturk, Musa; Ince, Nuri F
2017-07-01
This paper presents a portable platform to collect and review behavioral data simultaneously with neurophysiological signals. The whole system comprises four parts: a sensor data acquisition interface, a socket server for real-time data streaming, a Simulink system for real-time processing, and an offline data review and analysis toolbox. A low-cost microcontroller is used to acquire data from external sensors such as an accelerometer and a hand dynamometer. The microcontroller transfers the data either directly through USB or wirelessly through a Bluetooth module to a data server written in C++ for the MS Windows OS. The data server also interfaces with the digital glove and captures HD video from a webcam. The acquired sensor data are streamed over the User Datagram Protocol (UDP) to other applications such as Simulink/Matlab for real-time analysis and recording. Neurophysiological signals such as electroencephalography (EEG), electrocorticography (ECoG) and local field potential (LFP) recordings can be collected simultaneously in Simulink and fused with the behavioral data. In addition, we developed a customized Matlab graphical user interface (GUI) to review, annotate and analyze the data offline. The software provides a fast, user-friendly data visualization environment with a synchronized video playback feature and is also capable of reviewing long-term neural recordings. Other featured functions such as fast preprocessing with multithreaded filters, annotation, montage selection, power spectral density (PSD) estimation, time-frequency maps and spatial spectral maps are also implemented.
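To illustrate the UDP streaming step in such a pipeline, the sketch below packs accelerometer-style samples into datagrams, sends them to a local port, and reads them back. The packet layout, port number, and field names are assumptions for illustration, not the platform's actual protocol.

```python
import socket
import struct

# Illustrative UDP sample streaming (not the platform's actual protocol).
# Each packet holds a sample counter and three float32 accelerometer axes.
ADDR = ("127.0.0.1", 5005)           # assumed local port
PACKET = struct.Struct("<Ifff")      # uint32 counter + 3 x float32

def send_sample(sock, counter, ax, ay, az):
    sock.sendto(PACKET.pack(counter, ax, ay, az), ADDR)

def receive_sample(sock):
    data, _ = sock.recvfrom(PACKET.size)
    return PACKET.unpack(data)       # (counter, ax, ay, az)

if __name__ == "__main__":
    rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    rx.bind(ADDR)
    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    send_sample(tx, 0, 0.01, -0.02, 0.98)
    print(receive_sample(rx))        # the sample is now available to a Simulink/Matlab-style consumer
```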
Perception of Shapes Targeting Local and Global Processes in Autism Spectrum Disorders
ERIC Educational Resources Information Center
Grinter, Emma J.; Maybery, Murray T.; Pellicano, Elizabeth; Badcock, Johanna C.; Badcock, David R.
2010-01-01
Background: Several researchers have found evidence for impaired global processing in the dorsal visual stream in individuals with autism spectrum disorders (ASDs). However, support for a similar pattern of visual processing in the ventral visual stream is less consistent. Critical to resolving the inconsistency is the assessment of local and…
Campana, Florence; Rebollo, Ignacio; Urai, Anne; Wyart, Valentin; Tallon-Baudry, Catherine
2016-05-11
The reverse hierarchy theory (Hochstein and Ahissar, 2002) makes strong, but so far untested, predictions on conscious vision. In this theory, local details encoded in lower-order visual areas are unconsciously processed before being automatically and rapidly combined into global information in higher-order visual areas, where conscious percepts emerge. Contingent on current goals, local details can afterward be consciously retrieved. This model therefore predicts that (1) global information is perceived faster than local details, (2) global information is computed regardless of task demands during early visual processing, and (3) spontaneous vision is dominated by global percepts. We designed novel textured stimuli that are, as opposed to the classic Navon's letters, truly hierarchical (i.e., where global information is solely defined by local information but where local and global orientations can still be manipulated separately). In line with the predictions, observers were systematically faster reporting global than local properties of those stimuli. Second, global information could be decoded from magneto-encephalographic data during early visual processing regardless of task demands. Last, spontaneous subjective reports were dominated by global information and the frequency and speed of spontaneous global perception correlated with the accuracy and speed in the global task. No such correlation was observed for local information. We therefore show that information at different levels of the visual hierarchy is not equally likely to become conscious; rather, conscious percepts emerge preferentially at a global level. We further show that spontaneous reports can be reliable and are tightly linked to objective performance at the global level. Is information encoded at different levels of the visual system (local details in low-level areas vs global shapes in high-level areas) equally likely to become conscious? We designed new hierarchical stimuli and provide the first empirical evidence based on behavioral and MEG data that global information encoded at high levels of the visual hierarchy dominates perception. This result held both in the presence and in the absence of task demands. The preferential emergence of percepts at high levels can account for two properties of conscious vision, namely, the dominance of global percepts and the feeling of visual richness reported independently of the perception of local details. Copyright © 2016 the authors 0270-6474/16/365200-14$15.00/0.
CellMap visualizes protein-protein interactions and subcellular localization
Dallago, Christian; Goldberg, Tatyana; Andrade-Navarro, Miguel Angel; Alanis-Lobato, Gregorio; Rost, Burkhard
2018-01-01
Many tools visualize protein-protein interaction (PPI) networks. The tool introduced here, CellMap, adds one crucial novelty by visualizing PPI networks in the context of subcellular localization, i.e. the location in the cell or cellular component in which a PPI happens. Users can upload images of cells and define areas of interest against which PPIs for selected proteins are displayed (by default on a cartoon of a cell). Annotations of localization are provided by the user or through our in-house database. The visualizer and server are written in JavaScript, making CellMap easy to customize and to extend by researchers and developers. PMID:29497493
[Ventriloquism and audio-visual integration of voice and face].
Yokosawa, Kazuhiko; Kanaya, Shoko
2012-07-01
Presenting synchronous auditory and visual stimuli in separate locations creates the illusion that the sound originates from the direction of the visual stimulus. Participants' auditory localization bias, called the ventriloquism effect, has revealed factors affecting the perceptual integration of audio-visual stimuli. However, many studies on audio-visual processes have focused on performance in simplified experimental situations, with a single stimulus in each sensory modality. These results cannot necessarily explain our perceptual behavior in natural scenes, where various signals exist within a single sensory modality. In the present study we report the contributions of a cognitive factor, that is, the audio-visual congruency of speech, although this factor has often been underestimated in previous ventriloquism research. Thus, we investigated the contribution of speech congruency on the ventriloquism effect using a spoken utterance and two videos of a talking face. The salience of facial movements was also manipulated. As a result, when bilateral visual stimuli are presented in synchrony with a single voice, cross-modal speech congruency was found to have a significant impact on the ventriloquism effect. This result also indicated that more salient visual utterances attracted participants' auditory localization. The congruent pairing of audio-visual utterances elicited greater localization bias than did incongruent pairing, whereas previous studies have reported little dependency on the reality of stimuli in ventriloquism. Moreover, audio-visual illusory congruency, owing to the McGurk effect, caused substantial visual interference to auditory localization. This suggests that a greater flexibility in responding to multi-sensory environments exists than has been previously considered.
A robust approach for a filter-based monocular simultaneous localization and mapping (SLAM) system.
Munguía, Rodrigo; Castillo-Toledo, Bernardino; Grau, Antoni
2013-07-03
Simultaneous localization and mapping (SLAM) is an important problem to solve in robotics theory in order to build truly autonomous mobile robots. This work presents a novel method for implementing a SLAM system based on a single camera sensor. SLAM with a single camera, or monocular SLAM, is probably one of the most complex SLAM variants. In this case, a single camera, which is freely moving through its environment, represents the sole sensor input to the system. The sensors have a large impact on the algorithm used for SLAM. Cameras are used more frequently because they provide a lot of information and are well adapted for embedded systems: they are light, cheap and power-saving. Nevertheless, and unlike range sensors, which provide range and angular information, a camera is a projective sensor providing only angular measurements of image features. Therefore, depth information (range) cannot be obtained in a single step. In this case, special techniques for feature initialization are needed in order to enable the use of angular sensors (such as cameras) in SLAM systems. The main contribution of this work is to present a novel and robust scheme for incorporating and measuring visual features in filtering-based monocular SLAM systems. The proposed method is based on a two-step technique, which is intended to exploit all the information available in angular measurements. Unlike previous schemes, the values of the parameters used by the initialization technique are derived directly from the sensor characteristics, thus simplifying the tuning of the system. The experimental results show that the proposed method surpasses the performance of previous schemes.
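The abstract does not spell out the initialization equations, so the sketch below shows a related, widely used construction from the filter-based monocular SLAM literature, inverse-depth feature initialization, to illustrate how a bearing-only (angular) measurement can seed a feature state. The camera intrinsics and the inverse-depth prior are assumed values, and this is not necessarily the scheme proposed in the paper.

```python
import numpy as np

# Hedged illustration: classic inverse-depth feature initialization used in
# filter-based monocular SLAM, not necessarily the two-step scheme proposed
# in the paper above. Intrinsics and the inverse-depth prior are assumptions.
def init_inverse_depth_feature(cam_position, cam_rotation, pixel,
                               fx=500.0, fy=500.0, cx=320.0, cy=240.0,
                               rho0=0.1):
    """Return the 6-D feature state [x, y, z, azimuth, elevation, inverse depth]."""
    u, v = pixel
    ray_cam = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])   # bearing in camera frame
    ray_world = cam_rotation @ ray_cam                         # rotate into world frame
    azimuth = np.arctan2(ray_world[0], ray_world[2])
    elevation = np.arctan2(-ray_world[1], np.hypot(ray_world[0], ray_world[2]))
    return np.hstack([cam_position, azimuth, elevation, rho0])

# Example: camera at the origin with identity orientation, feature seen at pixel (400, 260).
state = init_inverse_depth_feature(np.zeros(3), np.eye(3), (400, 260))
```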
Application of polymer sensitive MRI sequence to localization of EEG electrodes.
Butler, Russell; Gilbert, Guillaume; Descoteaux, Maxime; Bernier, Pierre-Michel; Whittingstall, Kevin
2017-02-15
The growing popularity of simultaneous electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) opens up the possibility of imaging EEG electrodes while the subject is in the scanner. Such information could be useful for improving the fusion of EEG-fMRI datasets. Here, we report for the first time how an ultra-short echo time (UTE) MR sequence can image the materials of an MR-compatible EEG cap, finding that electrodes and some parts of the wiring are visible in a high-resolution UTE image. Using these images, we developed a segmentation procedure to obtain electrode coordinates based on voxel intensity from the raw UTE, using hand-labeled coordinates as the starting point. We were able to visualize and segment 95% of EEG electrodes using a short (3.5 min) UTE sequence. We provide scripts and template images so this approach can now be easily implemented to obtain precise, subject-specific EEG electrode positions while adding minimal acquisition time to the simultaneous EEG-fMRI protocol. T1 gel artifacts are not robust enough to localize all electrodes across subjects, the polymers composing Brainvision cap electrodes are not visible on a T1, and adding T1-visible materials to the EEG cap is not always possible. We therefore consider our method superior to existing methods for obtaining electrode positions in the scanner, as it is hardware-free and should work on a wide range of materials (caps). EEG electrode positions are obtained with high precision and no additional hardware. Copyright © 2016 Elsevier B.V. All rights reserved.
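The exact segmentation procedure is not given in the record above, so the sketch below illustrates one generic way to refine hand-labeled seed coordinates into electrode centroids from a UTE volume: threshold the intensity in a small window around each seed and take the intensity-weighted centroid of the component containing the seed. The window size and relative threshold are assumptions, and seeds are assumed to lie away from the volume borders.

```python
import numpy as np
from scipy import ndimage

# Generic illustration, not the paper's exact procedure: refine hand-labeled
# seed voxels into electrode centroids using local intensity thresholding.
def refine_electrode_positions(ute_volume, seeds, window=7, rel_threshold=0.5):
    half = window // 2
    centroids = []
    for seed in seeds:
        z, y, x = (int(round(c)) for c in seed)
        patch = ute_volume[z - half:z + half + 1,
                           y - half:y + half + 1,
                           x - half:x + half + 1]
        mask = patch > rel_threshold * patch.max()        # bright electrode voxels
        labels, _ = ndimage.label(mask)
        centre_label = labels[half, half, half]           # component containing the seed
        if centre_label == 0:
            centroids.append(seed)                        # fall back to the hand label
            continue
        cz, cy, cx = ndimage.center_of_mass(patch, labels, centre_label)
        centroids.append((z - half + cz, y - half + cy, x - half + cx))
    return np.array(centroids)
```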
Specific excitatory connectivity for feature integration in mouse primary visual cortex
Molina-Luna, Patricia; Roth, Morgane M.
2017-01-01
Local excitatory connections in mouse primary visual cortex (V1) are stronger and more prevalent between neurons that share similar functional response features. However, the details of how functional rules for local connectivity shape neuronal responses in V1 remain unknown. We hypothesised that complex responses to visual stimuli may arise as a consequence of rules for selective excitatory connectivity within the local network in the superficial layers of mouse V1. In mouse V1 many neurons respond to overlapping grating stimuli (plaid stimuli) with highly selective and facilitatory responses, which are not simply predicted by responses to single gratings presented alone. This complexity is surprising, since excitatory neurons in V1 are considered to be mainly tuned to single preferred orientations. Here we examined the consequences for visual processing of two alternative connectivity schemes: in the first case, local connections are aligned with visual properties inherited from feedforward input (a ‘like-to-like’ scheme specifically connecting neurons that share similar preferred orientations); in the second case, local connections group neurons into excitatory subnetworks that combine and amplify multiple feedforward visual properties (a ‘feature binding’ scheme). By comparing predictions from large scale computational models with in vivo recordings of visual representations in mouse V1, we found that responses to plaid stimuli were best explained by assuming feature binding connectivity. Unlike under the like-to-like scheme, selective amplification within feature-binding excitatory subnetworks replicated experimentally observed facilitatory responses to plaid stimuli; explained selective plaid responses not predicted by grating selectivity; and was consistent with broad anatomical selectivity observed in mouse V1. Our results show that visual feature binding can occur through local recurrent mechanisms without requiring feedforward convergence, and that such a mechanism is consistent with visual responses and cortical anatomy in mouse V1. PMID:29240769
The Role of Global and Local Visual Information during Gaze-Cued Orienting of Attention.
Munsters, Nicolette M; van den Boomen, Carlijn; Hooge, Ignace T C; Kemner, Chantal
2016-01-01
Gaze direction is an important social communication tool. Global and local visual information are known to play specific roles in processing socially relevant information from a face. The current study investigated whether global visual information has a primary role during gaze-cued orienting of attention and, as such, may influence quality of interaction. Adults performed a gaze-cueing task in which a centrally presented face cued (validly or invalidly) the location of a peripheral target through a gaze shift. We measured brain activity (electroencephalography) towards the cue and target and behavioral responses (manual and saccadic reaction times) towards the target. The faces contained global (i.e. lower spatial frequencies), local (i.e. higher spatial frequencies), or a selection of both global and local (i.e. mid-band spatial frequencies) visual information. We found a gaze cue-validity effect (i.e. valid versus invalid), but no interaction effects with spatial frequency content. Furthermore, behavioral responses towards the target were slower in all cue conditions when lower spatial frequencies were absent from the gaze cue. These results suggest that whereas gaze-cued orienting of attention can be driven by both global and local visual information, global visual information determines the speed of behavioral responses towards other entities appearing in the surroundings of gaze-cue stimuli.
Intuitive representation of surface properties of biomolecules using BioBlender.
Andrei, Raluca Mihaela; Callieri, Marco; Zini, Maria Francesca; Loni, Tiziana; Maraziti, Giuseppe; Pan, Mike Chen; Zoppè, Monica
2012-03-28
In living cells, proteins are in continuous motion and interaction with the surrounding medium and/or other proteins and ligands. These interactions are mediated by protein features such as electrostatic and lipophilic potentials. The availability of protein structures enables the study of their surfaces and surface characteristics, based on atomic contribution. Traditionally, these properties are calculated by physico-chemical programs and visualized as a range of colors that varies according to the tool used, imposing the need for a legend to decode it. The use of color to encode both characteristics makes simultaneous visualization almost impossible, requiring these features to be shown in separate images. In this work, we describe a novel and intuitive code for the simultaneous visualization of these properties. Recent advances in 3D animation and rendering software have not yet been exploited for the representation of biomolecules in an intuitive, animated form. For our purpose we use Blender, an open-source, free, cross-platform application used professionally for 3D work. On the basis of Blender, we developed BioBlender, dedicated to biological work: elaboration of protein motion with simultaneous visualization of chemical and physical features. Electrostatic and lipophilic potentials are calculated using physico-chemical software and scripts, organized and accessed through the BioBlender interface. A new visual code is introduced for molecular lipophilic potential: a range of optical features going from smooth-shiny for hydrophobic regions to rough-dull for hydrophilic ones. Electrostatic potential is represented as animated line particles that flow along field lines, proportional to the total charge of the protein. Our system permits visualization of molecular features and, in the case of moving proteins, their continuous perception, calculated for each conformation during motion. Using real-world tactile and visual sensations, the nanoscale world of proteins becomes more understandable and familiar to our everyday life, making it easier to introduce "un-seen" phenomena (concepts) such as hydropathy or charges. Moreover, this representation helps to gain insight into molecular functions by drawing the viewer's attention to the most active regions of the protein. The program, available for Windows, Linux and MacOS, can be downloaded freely from the dedicated website http://www.bioblender.eu.
Intuitive representation of surface properties of biomolecules using BioBlender
2012-01-01
Background In living cells, proteins are in continuous motion and interaction with the surrounding medium and/or other proteins and ligands. These interactions are mediated by protein features such as electrostatic and lipophilic potentials. The availability of protein structures enables the study of their surfaces and surface characteristics, based on atomic contribution. Traditionally, these properties are calculated by physico-chemical programs and visualized as a range of colors that varies according to the tool used, imposing the need for a legend to decode it. The use of color to encode both characteristics makes simultaneous visualization almost impossible, requiring these features to be shown in separate images. In this work, we describe a novel and intuitive code for the simultaneous visualization of these properties. Methods Recent advances in 3D animation and rendering software have not yet been exploited for the representation of biomolecules in an intuitive, animated form. For our purpose we use Blender, an open-source, free, cross-platform application used professionally for 3D work. On the basis of Blender, we developed BioBlender, dedicated to biological work: elaboration of protein motion with simultaneous visualization of chemical and physical features. Electrostatic and lipophilic potentials are calculated using physico-chemical software and scripts, organized and accessed through the BioBlender interface. Results A new visual code is introduced for molecular lipophilic potential: a range of optical features going from smooth-shiny for hydrophobic regions to rough-dull for hydrophilic ones. Electrostatic potential is represented as animated line particles that flow along field lines, proportional to the total charge of the protein. Conclusions Our system permits visualization of molecular features and, in the case of moving proteins, their continuous perception, calculated for each conformation during motion. Using real-world tactile and visual sensations, the nanoscale world of proteins becomes more understandable and familiar to our everyday life, making it easier to introduce "un-seen" phenomena (concepts) such as hydropathy or charges. Moreover, this representation helps to gain insight into molecular functions by drawing the viewer's attention to the most active regions of the protein. The program, available for Windows, Linux and MacOS, can be downloaded freely from the dedicated website http://www.bioblender.eu PMID:22536962
Behavioral and Physiological Findings of Gender Differences in Global-Local Visual Processing
ERIC Educational Resources Information Center
Roalf, David; Lowery, Natasha; Turetsky, Bruce I.
2006-01-01
Hemispheric asymmetries in global-local visual processing are well-established, as are gender differences in cognition. Although hemispheric asymmetry presumably underlies gender differences in cognition, the literature on gender differences in global-local processing is sparse. We employed event related brain potential (ERP) recordings during…
ERIC Educational Resources Information Center
Nosofsky, Robert M.; Donkin, Chris
2016-01-01
We report an experiment designed to provide a qualitative contrast between knowledge-limited versions of mixed-state and variable-resources (VR) models of visual change detection. The key data pattern is that observers often respond "same" on big-change trials, while simultaneously being able to discriminate between same and small-change…
A Case Study of Diverse Multimodal Influences on Music Improvisation Using Visual Methodology
ERIC Educational Resources Information Center
Tomlinson, Michelle M.
2016-01-01
This case study employed multimodal methods and visual analysis to explore how a young multilingual student used music improvisation to form a speech rap. This student, recently arrived in Australia from Ethiopia, created piano music that was central to his music identity and that simultaneously, through dialogue with his mother, enhanced his…
Spatiotemporal Proximity Effects in Visual Short-Term Memory Examined by Target-Nontarget Analysis
ERIC Educational Resources Information Center
Sapkota, Raju P.; Pardhan, Shahina; van der Linde, Ian
2016-01-01
Visual short-term memory (VSTM) is a limited-capacity system that holds a small number of objects online simultaneously, implying that competition for limited storage resources occurs (Phillips, 1974). How the spatial and temporal proximity of stimuli affects this competition is unclear. In this 2-experiment study, we examined the effect of the…
Mechanisms of Sediment Entrainment and Transport in Rotorcraft Brownout
2009-01-01
understanding of the temporal evolution of the rotor wake in ground effect simultaneously with the processes of sediment entrainment and transport by the rotor. [Excerpt from list of figures: 1.8 Schematic and smoke flow visualization of a rotor flow during out-of-ground-effect operations; 1.9 Schematic and smoke flow visualization of a rotor flow during in-ground-effect operations.]
ERIC Educational Resources Information Center
Morey, Candice C.; Miron, Monica D.
2016-01-01
Among models of working memory, there is not yet a consensus about how to describe functions specific to storing verbal or visual-spatial memories. We presented aural-verbal and visual-spatial lists simultaneously and sometimes cued one type of information after presentation, comparing accuracy in conditions with and without informative…
ERIC Educational Resources Information Center
Considine, David M.; Haley, Gail E.
This book argues that people live simultaneously in two different cultures. Values of the first culture are imparted to children through curriculum in the nation's public school classrooms. The second culture is the world of mass communication that promotes consumption, instant gratification, and impulse. The clash between these cultures confronts…
Effects of Audio-Visual Integration on the Detection of Masked Speech and Non-Speech Sounds
ERIC Educational Resources Information Center
Eramudugolla, Ranmalee; Henderson, Rachel; Mattingley, Jason B.
2011-01-01
Integration of simultaneous auditory and visual information about an event can enhance our ability to detect that event. This is particularly evident in the perception of speech, where the articulatory gestures of the speaker's lips and face can significantly improve the listener's detection and identification of the message, especially when that…
An integrative view of storage of low- and high-level visual dimensions in visual short-term memory.
Magen, Hagit
2017-03-01
Efficient performance in an environment filled with complex objects is often achieved through the temporal maintenance of conjunctions of features from multiple dimensions. The most striking finding in the study of binding in visual short-term memory (VSTM) is equal memory performance for single features and for integrated multi-feature objects, a finding that has been central to several theories of VSTM. Nevertheless, research on binding in VSTM focused almost exclusively on low-level features, and little is known about how items from low- and high-level visual dimensions (e.g., colored manmade objects) are maintained simultaneously in VSTM. The present study tested memory for combinations of low-level features and high-level representations. In agreement with previous findings, Experiments 1 and 2 showed decrements in memory performance when non-integrated low- and high-level stimuli were maintained simultaneously compared to maintaining each dimension in isolation. However, contrary to previous findings the results of Experiments 3 and 4 showed decrements in memory performance even when integrated objects of low- and high-level stimuli were maintained in memory, compared to maintaining single-dimension objects. Overall, the results demonstrate that low- and high-level visual dimensions compete for the same limited memory capacity, and offer a more comprehensive view of VSTM.
Rojas-Líbano, Daniel; Wimmer Del Solar, Jonathan; Aguilar-Rivera, Marcelo; Montefusco-Siegmund, Rodrigo; Maldonado, Pedro Esteban
2018-05-16
An important unresolved question about neural processing is the mechanism by which distant brain areas coordinate their activities and relate their local processing to global neural events. A potential candidate for this local-global integration is a slow rhythm such as respiration. In this article, we asked whether there are modulations of local cortical processing that are phase-locked to (peripheral) sensory-motor exploratory rhythms. We studied rats on an elevated platform where they would spontaneously display exploratory and rest behaviors. Concurrent with behavior, we monitored whisking through EMG and the respiratory rhythm from the olfactory bulb (OB) local field potential (LFP). We also recorded LFPs from dorsal hippocampus, primary motor cortex, primary somatosensory cortex and primary visual cortex. We defined exploration as simultaneous whisking and sniffing above 5 Hz and found that this activity peaked at about 8 Hz. We considered rest as the absence of whisking and sniffing, in which case respiration occurred at about 3 Hz. We found a consistent shift across all areas toward these rhythm peaks accompanying behavioral changes. We also found, across areas, that LFP gamma (70-100 Hz) amplitude could phase-lock to the animal's OB respiratory rhythm, a finding indicative of respiration-locked changes in local processing. In a subset of animals, we also recorded hippocampal theta activity and found that it occurred at frequencies overlapping with respiration but was not spectrally coherent with it, suggesting a different oscillator. Our results are consistent with the notion of respiration as a binder or integrator of activity between brain regions.
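A minimal sketch of the kind of phase-locking analysis described here, as an illustration of the idea rather than the authors' pipeline: gamma-band amplitude from a cortical LFP is binned by the phase of the olfactory bulb respiratory rhythm. The sampling rate, filter bands and variable names are assumptions.

    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    fs = 1000.0  # assumed sampling rate (Hz)

    def bandpass(x, lo, hi, fs, order=4):
        b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        return filtfilt(b, a, x)

    def gamma_by_resp_phase(ob_lfp, ctx_lfp, n_bins=18):
        # Respiratory phase from the olfactory bulb LFP (assumed 1-12 Hz band).
        resp_phase = np.angle(hilbert(bandpass(ob_lfp, 1, 12, fs)))
        # Gamma amplitude envelope from a cortical LFP (70-100 Hz).
        gamma_amp = np.abs(hilbert(bandpass(ctx_lfp, 70, 100, fs)))
        # Mean gamma amplitude in each respiratory phase bin.
        bins = np.linspace(-np.pi, np.pi, n_bins + 1)
        idx = np.digitize(resp_phase, bins) - 1
        return np.array([gamma_amp[idx == k].mean() for k in range(n_bins)])

    # A strongly non-uniform profile across phase bins would indicate
    # respiration-locked modulation of local gamma activity.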
Meyer, Georg F; Harrison, Neil R; Wuerger, Sophie M
2013-08-01
An extensive network of cortical areas is involved in multisensory object and action recognition. This network draws on inferior frontal, posterior temporal, and parietal areas; activity is modulated by familiarity and the semantic congruency of auditory and visual component signals, even if semantic incongruences are created by combining visual and auditory signals representing very different signal categories, such as speech and whole-body actions. Here we present results from a high-density ERP study designed to examine the time course and source location of responses to semantically congruent and incongruent audiovisual speech and body actions, to explore whether the network involved in action recognition consists of a hierarchy of sequentially activated processing modules or a network of simultaneously active processing sites. We report two main results: 1) There are no significant early differences in the processing of congruent and incongruent audiovisual action sequences. The earliest difference between congruent and incongruent audiovisual stimuli occurs between 240 and 280 ms after stimulus onset in the left temporal region. Between 340 and 420 ms, semantic congruence modulates responses in central and right frontal areas. Late differences (after 460 ms) occur bilaterally in frontal areas. 2) Source localisation (dipole modelling and LORETA) reveals that an extended network encompassing inferior frontal, temporal, parasagittal, and superior parietal sites is simultaneously active between 180 and 420 ms to process auditory-visual action sequences. Early activation (before 120 ms) can be explained by activity in mainly sensory cortices. The simultaneous activation of an extended network between 180 and 420 ms is consistent with models that posit parallel processing of complex action sequences in frontal, temporal and parietal areas, rather than models that postulate hierarchical processing in a sequence of brain regions. Copyright © 2013 Elsevier Ltd. All rights reserved.
The primary visual cortex in the neural circuit for visual orienting
NASA Astrophysics Data System (ADS)
Zhaoping, Li
The primary visual cortex (V1) is traditionally viewed as remote from influencing the brain's motor outputs. However, V1 provides the most abundant cortical inputs directly to the sensory layers of the superior colliculus (SC), a midbrain structure that commands visual orienting such as gaze shifts and head turns. I will show physiological, anatomical, and behavioral data suggesting that V1 transforms visual input into a saliency map to guide a class of visual orienting that is reflexive or involuntary. In particular, V1 receives a retinotopic map of visual features, such as the orientation, color, and motion direction of local visual inputs; local interactions between V1 neurons perform a local-to-global computation to arrive at a saliency map that highlights conspicuous visual locations through higher V1 responses. The conspicuous locations are usually, but not always, where visual input statistics change. The population of V1 outputs to SC, which is also retinotopic, enables SC to locate, by lateral inhibition between SC neurons, the most salient location as the saccadic target. Experimental tests of this hypothesis will be shown. Variations of the neural circuit for visual orienting across animal species, with more or less V1 involvement, will be discussed. Supported by the Gatsby Charitable Foundation.
Experimenter's Laboratory for Visualized Interactive Science
NASA Technical Reports Server (NTRS)
Hansen, Elaine R.; Rodier, Daniel R.; Klemp, Marjorie K.
1994-01-01
ELVIS (Experimenter's Laboratory for Visualized Interactive Science) is an interactive visualization environment that enables scientists, students, and educators to visualize and analyze large, complex, and diverse sets of scientific data. It accomplishes this by presenting the data sets as 2-D, 3-D, color, stereo, and graphic images with movable and multiple light sources combined with displays of solid-surface, contours, wire-frame, and transparency. By simultaneously rendering diverse data sets acquired from multiple sources, formats, and resolutions and by interacting with the data through an intuitive, direct-manipulation interface, ELVIS provides an interactive and responsive environment for exploratory data analysis.
Davis, Matthew A Cody; Spriggs, Amy; Rodgers, Alexis; Campbell, Jonathan
2018-06-01
Deficits in social skills are often exhibited in individuals with comorbid Down syndrome (DS) and autism spectrum disorder (ASD), and there is a paucity of research to help guide intervention for this population. In the present study, a multiple probe study across behaviors, replicated across participants, assessed the effectiveness of peer-delivered simultaneous prompting in teaching social skills to adults with DS-ASD, using visual analysis techniques and Tau-U statistics to measure effect. Peer mediators with DS and intellectual disability (ID) delivered simultaneous prompting sessions reliably (i.e., > 80% reliability) to teach social skills to adults with ID and a dual diagnosis of DS-ASD, with small (Tau Weighted = .55, 90% CI [.29, .82]) to medium effects (Tau Weighted = .75, 90% CI [.44, 1]). Statistical and visual analysis findings suggest a promising social skills intervention for individuals with DS-ASD as well as reliable delivery of simultaneous prompting procedures by individuals with DS.
The Role of Visual Processing Speed in Reading Speed Development
Lobier, Muriel; Dubois, Matthieu; Valdois, Sylviane
2013-01-01
A steady increase in reading speed is the hallmark of normal reading acquisition. However, little is known of the influence of visual attention capacity on children's reading speed. The number of distinct visual elements that can be simultaneously processed at a glance (dubbed the visual attention span), predicts single-word reading speed in both normal reading and dyslexic children. However, the exact processes that account for the relationship between the visual attention span and reading speed remain to be specified. We used the Theory of Visual Attention to estimate visual processing speed and visual short-term memory capacity from a multiple letter report task in eight and nine year old children. The visual attention span and text reading speed were also assessed. Results showed that visual processing speed and visual short term memory capacity predicted the visual attention span. Furthermore, visual processing speed predicted reading speed, but visual short term memory capacity did not. Finally, the visual attention span mediated the effect of visual processing speed on reading speed. These results suggest that visual attention capacity could constrain reading speed in elementary school children. PMID:23593117
The role of visual processing speed in reading speed development.
Lobier, Muriel; Dubois, Matthieu; Valdois, Sylviane
2013-01-01
A steady increase in reading speed is the hallmark of normal reading acquisition. However, little is known of the influence of visual attention capacity on children's reading speed. The number of distinct visual elements that can be simultaneously processed at a glance (dubbed the visual attention span), predicts single-word reading speed in both normal reading and dyslexic children. However, the exact processes that account for the relationship between the visual attention span and reading speed remain to be specified. We used the Theory of Visual Attention to estimate visual processing speed and visual short-term memory capacity from a multiple letter report task in eight and nine year old children. The visual attention span and text reading speed were also assessed. Results showed that visual processing speed and visual short term memory capacity predicted the visual attention span. Furthermore, visual processing speed predicted reading speed, but visual short term memory capacity did not. Finally, the visual attention span mediated the effect of visual processing speed on reading speed. These results suggest that visual attention capacity could constrain reading speed in elementary school children.
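The Theory of Visual Attention parameters mentioned in these two entries (visual processing speed and visual short-term memory capacity) are usually estimated from whole-report letter data. The following simulation is a hedged sketch, not the authors' fitting code, of how a processing rate C, a capacity K and a perceptual threshold t0 jointly determine the expected number of letters reported per trial; all default values are illustrative.

    import numpy as np

    rng = np.random.default_rng(0)

    def simulate_whole_report(C=40.0, K=3.5, t0=0.02, exposure=0.1,
                              n_letters=6, n_trials=10000):
        """Expected number of letters reported under a simplified TVA-style race.

        C        total processing rate (items/s), shared equally here
        K        VSTM capacity (a non-integer K mixes floor/ceil capacities)
        t0       perceptual threshold (s) before encoding can start
        exposure stimulus exposure duration (s)
        """
        v = C / n_letters                 # rate per letter (equal weights assumed)
        eff = max(exposure - t0, 0.0)     # effective exposure duration
        reported = np.zeros(n_trials)
        for i in range(n_trials):
            k_cap = int(np.floor(K)) + (rng.random() < K - np.floor(K))
            finish = rng.exponential(1.0 / v, n_letters)
            reported[i] = min(np.sum(finish <= eff), k_cap)
        return reported.mean()

    print(simulate_whole_report())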
Duncum, A J F; Atkins, K J; Beilharz, F L; Mundy, M E
2016-01-01
Individuals with body dysmorphic disorder (BDD) and clinically concerning body-image concern (BIC) appear to possess abnormalities in the way they perceive visual information in the form of a bias towards local visual processing. As inversion interrupts normal global processing, forcing individuals to process locally, an upright-inverted stimulus discrimination task was used to investigate this phenomenon. We examined whether individuals with nonclinical, yet high levels of BIC would show signs of this bias, in the form of reduced inversion effects (i.e., increased local processing). Furthermore, we assessed whether this bias appeared for general visual stimuli or specifically for appearance-related stimuli, such as faces and bodies. Participants with high-BIC (n = 25) and low-BIC (n = 30) performed a stimulus discrimination task with upright and inverted faces, scenes, objects, and bodies. Unexpectedly, the high-BIC group showed an increased inversion effect compared to the low-BIC group, indicating perceptual abnormalities may not be present as local processing biases, as originally thought. There was no significant difference in performance across stimulus types, signifying that any visual processing abnormalities may be general rather than appearance-based. This has important implications for whether visual processing abnormalities are predisposing factors for BDD or develop throughout the disorder.
Tuwairqi, Waleed S; Sinjab, Mazen M
2012-05-01
To evaluate 1-year visual and topographic outcomes and safety and efficacy of corneal collagen cross-linking (CXL) combined with topography-guided photorefractive keratectomy (TG-PRK) to achieve near emmetropia in eyes with low-grade keratoconus. Twenty-two eyes from 15 patients (11 women, 4 men) were included in a prospective, nonrandomized, noncontrolled clinical study. Mean patient age was 26.6±6.07 years (range: 19 to 40 years). Inclusion criteria were low-grade keratoconus with evidence of progression, transparent cornea, corrected distance visual acuity (CDVA) 0.8 (decimal) or better, corneal thickness >440 μm, and maximum keratometry readings (K-max) <51.00 diopters (D). All patients underwent simultaneous TG-PRK with CXL. Study parameters were uncorrected distance visual acuity, CDVA, manifest refractive error, manifest and topographic (corneal) astigmatism, patient satisfaction, and efficacy and safety of the treatment. Follow-up was 1 year. After 1 year, statistically significant improvement was noted in all study parameters (P<.01). The safety and efficacy indices were 1.6 and 0.4, respectively. Patient satisfaction questionnaire showed that 91% were satisfied, 9% were not completely satisfied but believed they improved, and none were dissatisfied. Corneal topography demonstrated significant improvement in 55%, improvement in 36%, and minor improvement in 9% of cases. No cases progressed as evidenced by keratometry readings. Simultaneous TG-PRK with CXL is an effective and safe treatment with remarkable visual and topographic outcomes in patients with low-grade keratoconus who meet the recommended inclusion criteria. Copyright 2012, SLACK Incorporated.
An exploratory study of the potential of LIBS for visualizing gunshot residue patterns.
López-López, María; Alvarez-Llamas, César; Pisonero, Jorge; García-Ruiz, Carmen; Bordel, Nerea
2017-04-01
The study of gunshot residue (GSR) patterns can assist in the reconstruction of shooting incidents. Currently, there is a real need for methods capable of furnishing simultaneous elemental analysis with higher specificity for GSR pattern visualization. Laser-Induced Breakdown Spectroscopy (LIBS) provides a multi-elemental analysis of the sample, requiring very small amounts of material and no sample preparation. Given these advantages, this study aims at exploring the potential of LIBS imaging for the visualization of GSR patterns. After the spectral characterization of individual GSR particles, the distributions of Pb, Sb and Ba over clothing targets, shot from different distances, were measured in laser raster mode. In particular, an array of spots evenly spaced at 800 μm was employed, using a stage displacement velocity of 4 mm/s and a laser repetition rate of 5 Hz (e.g., an area of 130×165 mm² was measured in less than 3 h). A LIBS set-up based on the simultaneous use of two spectrographs with iCCD cameras and a motorized stage was used. This set-up allows obtaining information from two different wavelength regions (258-289 and 446-463 nm) from the same laser-induced plasma, enabling the simultaneous detection of the three characteristic elements (Pb, Sb, and Ba) of GSR particles from conventional ammunition. The ability to visualize the 2D distribution of the GSR pattern by LIBS may have an important application in the forensic field, especially in the ballistics area. Copyright © 2017 Elsevier B.V. All rights reserved.
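As a rough check on the acquisition-time figure quoted above, the short sketch below estimates the number of laser spots and the scan duration for a 130 × 165 mm² target at 800 µm spacing and 5 Hz. It ignores stage turnaround time, so it is only an approximation.

    # Back-of-the-envelope estimate of LIBS raster scan time.
    width_mm, height_mm = 130.0, 165.0
    spacing_mm = 0.8          # 800 µm spot spacing
    rep_rate_hz = 5.0         # laser repetition rate

    cols = int(width_mm / spacing_mm) + 1
    rows = int(height_mm / spacing_mm) + 1
    n_spots = cols * rows

    scan_time_h = n_spots / rep_rate_hz / 3600.0
    print(f"{n_spots} spots, ~{scan_time_h:.1f} h")   # roughly 2 h, under the 3 h reported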
Multi-focused geospatial analysis using probes.
Butkiewicz, Thomas; Dou, Wenwen; Wartell, Zachary; Ribarsky, William; Chang, Remco
2008-01-01
Traditional geospatial information visualizations often present views that restrict the user to a single perspective. When zoomed out, local trends and anomalies become suppressed and lost; when zoomed in for local inspection, spatial awareness and comparison between regions become limited. In our model, coordinated visualizations are integrated within individual probe interfaces, which depict the local data in user-defined regions of interest. Our probe concept can be incorporated into a variety of geospatial visualizations to empower users with the ability to observe, coordinate, and compare data across multiple local regions. It is especially useful when dealing with complex simulations or analyses where behavior in various localities differs from other localities and from the system as a whole. We illustrate the effectiveness of our technique over traditional interfaces by incorporating it within three existing geospatial visualization systems: an agent-based social simulation, a census data exploration tool, and a 3D GIS environment for analyzing urban change over time. In each case, the probe-based interaction enhances spatial awareness, improves inspection and comparison capabilities, expands the range of scopes, and facilitates collaboration among multiple users.
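A minimal sketch of the probe idea as a data structure: a user-defined region of interest that carries its own local view of the data, so several probes can be compared side by side. The class and field names are illustrative and are not taken from the systems described above.

    from dataclasses import dataclass
    from typing import Dict, List, Tuple

    @dataclass
    class Probe:
        """A movable region of interest with its own coordinated local view."""
        name: str
        center: Tuple[float, float]   # (lon, lat) of the probe
        radius: float                 # region-of-interest radius (same units as the points)

        def local_summary(self, points: List[Tuple[float, float, float]]) -> Dict[str, float]:
            # points: (lon, lat, value) records from the global dataset
            inside = [v for (x, y, v) in points
                      if (x - self.center[0]) ** 2 + (y - self.center[1]) ** 2 <= self.radius ** 2]
            return {"n": len(inside),
                    "mean": sum(inside) / len(inside) if inside else float("nan")}

    # Two probes over the same dataset support the kind of side-by-side
    # regional comparison that a single zoom level cannot provide.
    probes = [Probe("downtown", (-80.84, 35.23), 0.05),
              Probe("suburb", (-80.95, 35.10), 0.05)]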
Intercepting a sound without vision
Vercillo, Tiziana; Tonelli, Alessia; Gori, Monica
2017-01-01
Visual information is extremely important for generating internal spatial representations. In the auditory modality, the absence of visual cues during early infancy does not preclude the development of some spatial strategies. However, specific spatial abilities might be impaired. In the current study, we investigated the effect of early visual deprivation on the ability to localize static and moving auditory stimuli by comparing sighted and early blind individuals' performance in different spatial tasks. We also examined perceptual stability in the two groups of participants by matching localization accuracy in a static and a dynamic head condition that involved rotational head movements. Sighted participants accurately localized static and moving sounds. Their localization ability remained unchanged after rotational movements of the head. Conversely, blind participants showed a leftward bias during the localization of static sounds and a small bias for moving sounds. Moreover, head movements induced a significant bias in the direction of head motion during the localization of moving sounds. These results suggest that internal spatial representations might be body-centered in blind individuals and that, in sighted people, the availability of visual cues during early infancy may affect sensory-motor interactions. PMID:28481939
Paulk, Angelique C.; Zhou, Yanqiong; Stratton, Peter; Liu, Li
2013-01-01
Neural networks in vertebrates exhibit endogenous oscillations that have been associated with functions ranging from sensory processing to locomotion. It remains unclear whether oscillations may play a similar role in the insect brain. We describe a novel “whole brain” readout for Drosophila melanogaster using a simple multichannel recording preparation to study electrical activity across the brain of flies exposed to different sensory stimuli. We recorded local field potential (LFP) activity from >2,000 registered recording sites across the fly brain in >200 wild-type and transgenic animals to uncover specific LFP frequency bands that correlate with: 1) brain region; 2) sensory modality (olfactory, visual, or mechanosensory); and 3) activity in specific neural circuits. We found endogenous and stimulus-specific oscillations throughout the fly brain. Central (higher-order) brain regions exhibited sensory modality-specific increases in power within narrow frequency bands. Conversely, in sensory brain regions such as the optic or antennal lobes, LFP coherence, rather than power, best defined sensory responses across modalities. By transiently activating specific circuits via expression of TrpA1, we found that several circuits in the fly brain modulate LFP power and coherence across brain regions and frequency domains. However, activation of a neuromodulatory octopaminergic circuit specifically increased neuronal coherence in the optic lobes during visual stimulation while decreasing coherence in central brain regions. Our multichannel recording and brain registration approach provides an effective way to track activity simultaneously across the fly brain in vivo, allowing investigation of functional roles for oscillations in processing sensory stimuli and modulating behavior. PMID:23864378
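As a sketch of the distinction drawn above between power and coherence as response measures, the snippet below computes band-limited power for one LFP channel and spectral coherence between two channels using SciPy. The sampling rate, segment length and the comparison suggested in the comment are assumptions for illustration.

    import numpy as np
    from scipy.signal import coherence, welch

    fs = 1000.0  # assumed sampling rate (Hz)

    def band_power(lfp, lo, hi):
        f, pxx = welch(lfp, fs=fs, nperseg=1024)
        sel = (f >= lo) & (f <= hi)
        return np.trapz(pxx[sel], f[sel])

    def band_coherence(lfp_a, lfp_b, lo, hi):
        f, cxy = coherence(lfp_a, lfp_b, fs=fs, nperseg=1024)
        sel = (f >= lo) & (f <= hi)
        return cxy[sel].mean()

    # e.g. compare a central-brain channel (power change) with a pair of
    # optic-lobe channels (coherence change) during stimulation vs. baseline.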
MacIntosh, Bradley J.; Baker, S. Nicole; Mraz, Richard; Ives, John R.; Martel, Anne L.; McIlroy, William E.; Graham, Simon J.
2016-01-01
Specially designed optoelectronic and data postprocessing methods are described that permit electromyography (EMG) of muscle activity simultaneous with functional MRI (fMRI). Hardware characterization and validation included simultaneous EMG and event-related fMRI in 17 healthy participants during either ankle (n = 12), index finger (n = 3), or wrist (n = 2) contractions cued by visual stimuli. Principal component analysis (PCA) and independent component analysis (ICA) were evaluated for their ability to remove residual fMRI gradient-induced signal contamination in EMG data. Contractions of ankle tibialis anterior and index finger abductor were clearly distinguishable, although observing contractions from the wrist flexors proved more challenging. To demonstrate the potential utility of simultaneous EMG and fMRI, data from the ankle experiments were analyzed using two approaches: 1) assuming contractions coincided precisely with visual cues, and 2) using EMG to time the onset and offset of muscle contraction precisely for each participant. Both methods produced complementary activation maps, although the EMG-guided approach recovered more active brain voxels and revealed activity better in the basal ganglia and cerebellum. Furthermore, numerical simulations confirmed that precise knowledge of behavioral responses, such as those provided by EMG, are much more important for event-related experimental designs compared to block designs. This simultaneous EMG and fMRI methodology has important applications where the amplitude or timing of motor output is impaired, such as after stroke. PMID:17133382
MacIntosh, Bradley J; Baker, S Nicole; Mraz, Richard; Ives, John R; Martel, Anne L; McIlroy, William E; Graham, Simon J
2007-09-01
Specially designed optoelectronic and data postprocessing methods are described that permit electromyography (EMG) of muscle activity simultaneous with functional MRI (fMRI). Hardware characterization and validation included simultaneous EMG and event-related fMRI in 17 healthy participants during either ankle (n = 12), index finger (n = 3), or wrist (n = 2) contractions cued by visual stimuli. Principal component analysis (PCA) and independent component analysis (ICA) were evaluated for their ability to remove residual fMRI gradient-induced signal contamination in EMG data. Contractions of ankle tibialis anterior and index finger abductor were clearly distinguishable, although observing contractions from the wrist flexors proved more challenging. To demonstrate the potential utility of simultaneous EMG and fMRI, data from the ankle experiments were analyzed using two approaches: 1) assuming contractions coincided precisely with visual cues, and 2) using EMG to time the onset and offset of muscle contraction precisely for each participant. Both methods produced complementary activation maps, although the EMG-guided approach recovered more active brain voxels and revealed activity better in the basal ganglia and cerebellum. Furthermore, numerical simulations confirmed that precise knowledge of behavioral responses, such as those provided by EMG, are much more important for event-related experimental designs compared to block designs. This simultaneous EMG and fMRI methodology has important applications where the amplitude or timing of motor output is impaired, such as after stroke. (c) 2006 Wiley-Liss, Inc.
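The study above evaluates PCA and ICA for removing residual fMRI gradient-induced contamination from the EMG; a hedged sketch of the ICA route using scikit-learn is shown below. The component-rejection rule (correlation with a gradient-timing reference signal) and the default parameters are assumptions for illustration, not the authors' criterion.

    import numpy as np
    from sklearn.decomposition import FastICA

    def clean_emg_ica(emg, gradient_ref, n_components=8, corr_thresh=0.4):
        """emg: (n_samples, n_channels); gradient_ref: (n_samples,) artifact reference."""
        ica = FastICA(n_components=n_components, random_state=0)
        sources = ica.fit_transform(emg)          # (n_samples, n_components)

        # Flag components that track the gradient reference (assumed rule).
        bad = [k for k in range(sources.shape[1])
               if abs(np.corrcoef(sources[:, k], gradient_ref)[0, 1]) > corr_thresh]
        sources[:, bad] = 0.0

        # Reconstruct the EMG without the flagged components.
        return ica.inverse_transform(sources)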
Web-based hybrid-dimensional Visualization and Exploration of Cytological Localization Scenarios.
Kovanci, Gökhan; Ghaffar, Mehmood; Sommer, Björn
2016-12-21
The CELLmicrocosmos 4.2 PathwayIntegration (CmPI) is a tool which provides hybrid-dimensional visualization and analysis of intracellular protein and gene localizations in the context of a virtual 3D environment. This tool is developed based on Java/Java3D/JOGL and provides a standalone application compatible to all relevant operating systems. However, it requires Java and the local installation of the software. Here we present the prototype of an alternative web-based visualization approach, using Three.js and D3.js. In this way it is possible to visualize and explore CmPI-generated localization scenarios including networks mapped to 3D cell components by just providing a URL to a collaboration partner. This publication describes the integration of the different technologies – Three.js, D3.js and PHP – as well as an application case: a localization scenario of the citrate cycle. The CmPI web viewer is available at: http://CmPIweb.CELLmicrocosmos.org.
Web-based hybrid-dimensional Visualization and Exploration of Cytological Localization Scenarios.
Kovanci, Gökhan; Ghaffar, Mehmood; Sommer, Björn
2016-10-01
The CELLmicrocosmos 4.2 PathwayIntegration (CmPI) is a tool which provides hybrid-dimensional visualization and analysis of intracellular protein and gene localizations in the context of a virtual 3D environment. This tool is developed based on Java/Java3D/JOGL and provides a standalone application compatible to all relevant operating systems. However, it requires Java and the local installation of the software. Here we present the prototype of an alternative web-based visualization approach, using Three.js and D3.js. In this way it is possible to visualize and explore CmPI-generated localization scenarios including networks mapped to 3D cell components by just providing a URL to a collaboration partner. This publication describes the integration of the different technologies - Three.js, D3.js and PHP - as well as an application case: a localization scenario of the citrate cycle. The CmPI web viewer is available at: http://CmPIweb.CELLmicrocosmos.org.
Pellegrino, J W; Siegel, A W; Dhawan, M
1976-01-01
Picture and word triads were tested in a Brown-Peterson short-term retention task at varying delay intervals (3, 10, or 30 sec) and under acoustic and simultaneous acoustic and visual distraction. Pictures were superior to words at all delay intervals under single acoustic distraction. Dual distraction consistently reduced picture retention while simultaneously facilitating word retention. The results were interpreted in terms of the dual coding hypothesis with modality-specific interference effects in the visual and acoustic processing systems. The differential effects of dual distraction were related to the introduction of visual interference and differential levels of functional acoustic interference across dual and single distraction tasks. The latter was supported by a constant 2/1 ratio in the backward counting rates of the acoustic vs. dual distraction tasks. The results further suggest that retention may not depend on total processing load of the distraction task, per se, but rather that processing load operates within modalities.
Fujimura, Yoshinori; Miura, Daisuke; Tachibana, Hirofumi
2017-09-27
Low-molecular-weight phytochemicals have health benefits and reduce the risk of diseases, but the mechanisms underlying their activities have remained elusive because of the lack of a methodology that can easily visualize the exact behavior of such small molecules. Recently, we developed an in situ label-free imaging technique, called mass spectrometry imaging, for visualizing spatially-resolved biotransformations based on simultaneous mapping of the major bioactive green tea polyphenol and its phase II metabolites. In addition, we established a mass spectrometry-based metabolic profiling technique capable of evaluating the bioactivities of diverse green tea extracts, which contain multiple phytochemicals, by focusing on their compositional balances. This methodology allowed us to simultaneously evaluate the relative contributions of the multiple compounds present in a multicomponent system to its bioactivity. This review highlights small molecule-sensing techniques for visualizing the complex behaviors of herbal components and linking such information to an enhanced understanding of the functionalities of multicomponent medicinal herbs.
Oscillatory frontal theta responses are increased upon bisensory stimulation.
Sakowitz, O W; Schürmann, M; Başar, E
2000-05-01
To investigate the functional correlation of oscillatory EEG components with the interaction of sensory modalities following simultaneous audio-visual stimulation. In an experimental study (15 subjects) we compared auditory evoked potentials (AEPs) and visual evoked potentials (VEPs) to bimodal evoked potentials (BEPs; simultaneous auditory and visual stimulation). BEPs were assumed to be brain responses to complex stimuli and a marker of intermodal associative functioning. Frequency-domain analysis of these EPs showed marked theta-range components in response to bimodal stimulation. These theta components could not be explained by linear addition of the unimodal responses in the time domain. Regarding topography, the increased theta response showed a remarkable frontality, in proximity to multimodal association cortices. Regarding methodology, we try to demonstrate that, even if various behavioral correlates of brain oscillations exist, common patterns can be extracted by means of a systems-theoretical approach. Serving as an example of functionally relevant brain oscillations, theta responses could be interpreted as an indicator of associative information processing.
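A minimal sketch of the frequency-domain comparison implied here: theta-band power of the bimodal response is compared against the linear sum of the unimodal responses (summed in the time domain first). The sampling rate, epoch handling and the exact theta limits are assumptions.

    import numpy as np
    from scipy.signal import welch

    fs = 500.0  # assumed EEG sampling rate (Hz)

    def theta_power(ep, lo=4.0, hi=7.0):
        f, pxx = welch(ep, fs=fs, nperseg=min(len(ep), 256))
        sel = (f >= lo) & (f <= hi)
        return np.trapz(pxx[sel], f[sel])

    def superadditivity(aep, vep, bep):
        """Ratio > 1 means the bimodal theta response exceeds the linear
        sum of the unimodal responses."""
        return theta_power(bep) / theta_power(aep + vep)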
Parametric estimation for reinforced concrete relief shelter for Aceh cases
NASA Astrophysics Data System (ADS)
Atthaillah; Saputra, Eri; Iqbal, Muhammad
2018-05-01
This paper describes work in progress (WIP) toward a rapid parametric framework for estimating construction materials for post-disaster permanent shelters. The intended shelters are of reinforced concrete construction with brick walls. In post-disaster cases, design variations are inevitably needed to suit the conditions of the victims, and it is nearly impossible to satisfy each beneficiary with a satisfactory design using the conventional method. This study offers a parametric framework to overcome the slow construction-material estimation against design variations. Further, this work integrates the parametric tool Grasshopper to establish algorithms that simultaneously model, visualize, calculate and write the calculated data to a spreadsheet in real time. Some customized Grasshopper components were created using GHPython scripting for a more optimized algorithm. The result of this study is a partial framework that successfully performs modeling, visualization, calculation and writing of the calculated data simultaneously. This means design alterations do not escalate the time needed for modeling, visualization, and material estimation. Further development will make the parametric framework open source.
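A plain-Python sketch (outside Grasshopper, so it omits the geometry and the GHPython component wiring) of the estimation-and-write step such a framework automates: given a few shelter parameters, compute concrete, rebar and brick quantities and append them to a spreadsheet-readable CSV. All coefficients and the file name are illustrative placeholders, not the study's values.

    import csv

    def estimate_materials(n_columns=6, col_section_m=(0.20, 0.20), col_height_m=3.0,
                           wall_area_m2=60.0, bricks_per_m2=70, rebar_kg_per_m3=110.0):
        # Column concrete volume; assumed unit rates for rebar and bricks.
        col_volume = n_columns * col_section_m[0] * col_section_m[1] * col_height_m
        return {
            "concrete_m3": round(col_volume, 3),
            "rebar_kg": round(col_volume * rebar_kg_per_m3, 1),
            "bricks": int(wall_area_m2 * bricks_per_m2),
        }

    def write_estimate(path, design_name, estimate):
        # Append one row per design variation, so alternatives can be compared.
        with open(path, "a", newline="") as f:
            writer = csv.writer(f)
            writer.writerow([design_name, estimate["concrete_m3"],
                             estimate["rebar_kg"], estimate["bricks"]])

    write_estimate("shelter_estimates.csv", "variant_A", estimate_materials())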
Development of Visual Preference for Own- versus Other-Race Faces in Infancy
ERIC Educational Resources Information Center
Liu, Shaoying; Xiao, Wen Sara; Xiao, Naiqi G.; Quinn, Paul C.; Zhang, Yueyan; Chen, Hui; Ge, Liezhong; Pascalis, Olivier; Lee, Kang
2015-01-01
Previous research has shown that 3-month-olds prefer own- over other-race faces. The current study used eye-tracking methodology to examine how this visual preference develops with age beyond 3 months and how infants differentially scan between own- and other-race faces when presented simultaneously. We showed own- versus other-race face pairs to…
The effect of compression and attention allocation on speech intelligibility. II
NASA Astrophysics Data System (ADS)
Choi, Sangsook; Carrell, Thomas
2004-05-01
Previous investigations of the effects of amplitude compression on measures of speech intelligibility have shown inconsistent results. Recently, a novel paradigm was used to investigate the possibility of more consistent findings with a measure of speech perception that is not based entirely on intelligibility (Choi and Carrell, 2003). That study exploited a dual-task paradigm using a pursuit rotor online visual-motor tracking task (Dlhopolsky, 2000) along with a word repetition task. Intensity-compressed words caused reduced performance on the tracking task as compared to uncompressed words when subjects engaged in a simultaneous word repetition task. This suggested an increased cognitive load when listeners processed compressed words. A stronger result might be obtained if a single resource (linguistic) is required rather than two (linguistic and visual-motor) resources. In the present experiment a visual lexical decision task and an auditory word repetition task were used. The visual stimuli for the lexical decision task were blurred and presented in a noise background. The compressed and uncompressed words for repetition were placed in speech-shaped noise. Participants with normal hearing and vision conducted word repetition and lexical decision tasks both independently and simultaneously. The pattern of results is discussed and compared to the previous study.
NASA Technical Reports Server (NTRS)
Brady, Rachel A.; Batson, Crystal D.; Peters, Brian T.; Mulavara, Ajitkumar P.; Bloomberg, Jacob J.
2010-01-01
We designed a gait training study that presented combinations of visual flow and support surface manipulations to investigate the response of healthy adults to novel discordant sensorimotor conditions. We aimed to determine whether a relationship existed between subjects' visual dependence and their scores on a collective measure of anxiety, cognition, and postural stability in a new discordant environment presented at the conclusion of training (Transfer Test). A treadmill was mounted to a motion base platform positioned 2 m behind a large visual screen. Training consisted of three walking sessions, each within a week of the previous visit, that presented four 5-minute exposures to various combinations of support surface and visual scene manipulations, all lateral sinusoids. The conditions were scene translation only, support surface translation only, simultaneous scene and support surface translations in phase, and simultaneous scene and support surface translations 180° out of phase. During the Transfer Test, the trained participants received a 2-minute novel exposure: a visual sinusoidal roll perturbation, with twice the original flow rate, was superimposed on a sinusoidal support surface roll perturbation that was 90° out of phase with the scene. A high correlation existed between normalized torso translation, measured in the scene-only condition at the first visit, and a combined measure of normalized heart rate, stride frequency, and reaction time at the Transfer Test. Results suggest that visually dependent participants experience decreased postural stability, increased anxiety, and increased reaction times compared to their less visually dependent counterparts when negotiating novel discordant conditions.
Wheeler, David C.; Hickson, DeMarc A.; Waller, Lance A.
2010-01-01
Many diagnostic tools and goodness-of-fit measures, such as the Akaike information criterion (AIC) and the Bayesian deviance information criterion (DIC), are available to evaluate the overall adequacy of linear regression models. In addition, visually assessing adequacy in models has become an essential part of any regression analysis. In this paper, we focus on a spatial consideration of the local DIC measure for model selection and goodness-of-fit evaluation. We use a partitioning of the DIC into the local DIC, leverage, and deviance residuals to assess local model fit and influence for both individual observations and groups of observations in a Bayesian framework. We use visualization of the local DIC and differences in local DIC between models to assist in model selection and to visualize the global and local impacts of adding covariates or model parameters. We demonstrate the utility of the local DIC in assessing model adequacy using HIV prevalence data from pregnant women in the Butare province of Rwanda during 1989-1993 using a range of linear model specifications, from global effects only to spatially varying coefficient models, and a set of covariates related to sexual behavior. Results of applying the diagnostic visualization approach include more refined model selection and greater understanding of the models as applied to the data. PMID:21243121
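For reference, the standard decomposition that the local DIC builds on can be written as follows (this is the textbook form; the paper's precise definitions of leverage and deviance residuals may differ in detail):

    D(\theta) = -2 \log p(y \mid \theta), \qquad
    \bar{D} = \mathrm{E}_{\theta \mid y}\left[ D(\theta) \right], \qquad
    p_D = \bar{D} - D(\bar{\theta}), \qquad
    \mathrm{DIC} = \bar{D} + p_D .

Splitting the deviance over observations gives an observation-level version,

    \mathrm{DIC}_i = \bar{D}_i + p_{D_i}, \qquad
    \sum_i \mathrm{DIC}_i = \mathrm{DIC},

so mapping \mathrm{DIC}_i, or differences in \mathrm{DIC}_i between candidate models, shows where in the study region a model fits well or poorly.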
Bosen, Adam K.; Fleming, Justin T.; Brown, Sarah E.; Allen, Paul D.; O'Neill, William E.; Paige, Gary D.
2016-01-01
Vision typically has better spatial accuracy and precision than audition, and as a result often captures auditory spatial perception when visual and auditory cues are presented together. One determinant of visual capture is the amount of spatial disparity between auditory and visual cues: when disparity is small visual capture is likely to occur, and when disparity is large visual capture is unlikely. Previous experiments have used two methods to probe how visual capture varies with spatial disparity. First, congruence judgment assesses perceived unity between cues by having subjects report whether or not auditory and visual targets came from the same location. Second, auditory localization assesses the graded influence of vision on auditory spatial perception by having subjects point to the remembered location of an auditory target presented with a visual target. Previous research has shown that when both tasks are performed concurrently they produce similar measures of visual capture, but this may not hold when tasks are performed independently. Here, subjects alternated between tasks independently across three sessions. A Bayesian inference model of visual capture was used to estimate perceptual parameters for each session, which were compared across tasks. Results demonstrated that the range of audio-visual disparities over which visual capture was likely to occur were narrower in auditory localization than in congruence judgment, which the model indicates was caused by subjects adjusting their prior expectation that targets originated from the same location in a task-dependent manner. PMID:27815630
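One common way to formalize a "Bayesian inference model of visual capture" is the causal-inference form, in which the observer weighs the probability that auditory and visual cues share a source against their spatial disparity. The sketch below follows that general form with Gaussian noise and a zero-mean prior; it is an illustration of the idea, not the specific model fitted in the study, and all parameter values are assumed.

    import numpy as np

    def prob_common_cause(x_a, x_v, sigma_a=8.0, sigma_v=2.0,
                          sigma_p=15.0, p_common=0.5):
        """Posterior probability that auditory sample x_a and visual sample x_v
        (in degrees) arose from a single source; all parameters are assumed."""
        va, vv, vp = sigma_a**2, sigma_v**2, sigma_p**2
        # Likelihood of (x_a, x_v) under one shared source, with a zero-mean
        # Gaussian prior of variance vp over source location.
        denom1 = va * vv + vp * (va + vv)
        like_c1 = np.exp(-0.5 * ((x_a - x_v)**2 * vp + x_a**2 * vv + x_v**2 * va) / denom1) \
                  / (2 * np.pi * np.sqrt(denom1))
        # Likelihood under two independent sources.
        like_c2 = np.exp(-0.5 * (x_a**2 / (va + vp) + x_v**2 / (vv + vp))) \
                  / (2 * np.pi * np.sqrt((va + vp) * (vv + vp)))
        return p_common * like_c1 / (p_common * like_c1 + (1 - p_common) * like_c2)

    # Small disparity -> visual capture likely; large disparity -> unlikely.
    print(prob_common_cause(5.0, 3.0), prob_common_cause(25.0, 3.0))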
Visual attention spreads broadly but selects information locally.
Shioiri, Satoshi; Honjyo, Hajime; Kashiwase, Yoshiyuki; Matsumiya, Kazumichi; Kuriki, Ichiro
2016-10-19
Visual attention spreads over a range around the focus, as the spotlight metaphor describes. The spatial spread of attentional enhancement and local selection/inhibition are crucial factors determining the profile of spatial attention. Enhancement and ignorance/suppression are opposite effects of attention and appear to be mutually exclusive. Yet no unified view of these factors has been provided, despite their necessity for understanding the functions of spatial attention. This report provides electroencephalographic and behavioral evidence for attentional spread at an early stage and selection/inhibition at a later stage of visual processing. The steady-state visual evoked potential showed broad spatial tuning, whereas the P3 component of the event-related potential showed local selection or inhibition of the adjacent areas. Based on these results, we propose a two-stage model of spatial attention with broad spread at an early stage and local selection at a later stage.
Role of early visual cortex in trans-saccadic memory of object features.
Malik, Pankhuri; Dessing, Joost C; Crawford, J Douglas
2015-08-01
Early visual cortex (EVC) participates in visual feature memory and the updating of remembered locations across saccades, but its role in the trans-saccadic integration of object features is unknown. We hypothesized that if EVC is involved in updating object features relative to gaze, feature memory should be disrupted when saccades remap an object representation into a simultaneously perturbed EVC site. To test this, we applied transcranial magnetic stimulation (TMS) over functional magnetic resonance imaging-localized EVC clusters corresponding to the bottom left/right visual quadrants (VQs). During experiments, these VQs were probed psychophysically by briefly presenting a central object (Gabor patch) while subjects fixated gaze to the right or left (and above). After a short memory interval, participants were required to detect the relative change in orientation of a re-presented test object at the same spatial location. Participants either sustained fixation during the memory interval (fixation task) or made a horizontal saccade that either maintained or reversed the VQ of the object (saccade task). Three TMS pulses (coinciding with the pre-, peri-, and postsaccade intervals) were applied to the left or right EVC. This had no effect when (a) fixation was maintained, (b) saccades kept the object in the same VQ, or (c) the EVC quadrant corresponding to the first object was stimulated. However, as predicted, TMS reduced performance when saccades (especially larger saccades) crossed the remembered object location and brought it into the VQ corresponding to the TMS site. This suppression effect was statistically significant for leftward saccades and followed a weaker trend for rightward saccades. These causal results are consistent with the idea that EVC is involved in the gaze-centered updating of object features for trans-saccadic memory and perception.
Robot Evolutionary Localization Based on Attentive Visual Short-Term Memory
Vega, Julio; Perdices, Eduardo; Cañas, José M.
2013-01-01
Cameras are one of the most relevant sensors in autonomous robots. However, two of their challenges are to extract useful information from captured images, and to manage the small field of view of regular cameras. This paper proposes implementing a dynamic visual memory to store the information gathered from a moving camera on board a robot, followed by an attention system to choose where to look with this mobile camera, and a visual localization algorithm that incorporates this visual memory. The visual memory is a collection of relevant task-oriented objects and 3D segments, and its scope is wider than the current camera field of view. The attention module takes into account the need to reobserve objects in the visual memory and the need to explore new areas. The visual memory is useful also in localization tasks, as it provides more information about robot surroundings than the current instantaneous image. This visual system is intended as underlying technology for service robot applications in real people's homes. Several experiments have been carried out, both with simulated and real Pioneer and Nao robots, to validate the system and each of its components in office scenarios. PMID:23337333
Discrepant visual speech facilitates covert selective listening in "cocktail party" conditions.
Williams, Jason A
2012-06-01
The presence of congruent visual speech information facilitates the identification of auditory speech, while the addition of incongruent visual speech information often impairs accuracy. This latter arrangement occurs naturally when one is being directly addressed in conversation but listens to a different speaker. Under these conditions, performance may diminish since: (a) one is bereft of the facilitative effects of the corresponding lip motion and (b) one becomes subject to visual distortion by incongruent visual speech; by contrast, speech intelligibility may be improved due to (c) bimodal localization of the central unattended stimulus. Participants were exposed to centrally presented visual and auditory speech while attending to a peripheral speech stream. In some trials, the lip movements of the central visual stimulus matched the unattended speech stream; in others, the lip movements matched the attended peripheral speech. Accuracy for the peripheral stimulus was nearly one standard deviation greater with incongruent visual information, compared to the congruent condition which provided bimodal pattern recognition cues. Likely, the bimodal localization of the central stimulus further differentiated the stimuli and thus facilitated intelligibility. Results are discussed with regard to similar findings in an investigation of the ventriloquist effect, and the relative strength of localization and speech cues in covert listening.
Arrestin 1 and Cone Arrestin 4 Have Unique Roles in Visual Function in an All-Cone Mouse Retina
Deming, Janise D.; Pak, Joseph S.; Shin, Jung-a; Brown, Bruce M.; Kim, Moon K.; Aung, Moe H.; Lee, Eun-Jin; Pardue, Machelle T.; Craft, Cheryl Mae
2015-01-01
Purpose Previous studies discovered cone phototransduction shutoff occurs normally for Arr1−/− and Arr4−/−; however, it is defective when both visual arrestins are simultaneously not expressed (Arr1−/−Arr4−/−). We investigated the roles of visual arrestins in an all-cone retina (Nrl−/−) since each arrestin has differential effects on visual function, including ARR1 for normal light adaptation, and ARR4 for normal contrast sensitivity and visual acuity. Methods We examined Nrl−/−, Nrl−/−Arr1−/−, Nrl−/−Arr4−/−, and Nrl−/−Arr1−/−Arr4−/− mice with photopic electroretinography (ERG) to assess light adaptation and retinal responses, immunoblot and immunohistochemical localization analysis to measure retinal expression levels of M- and S-opsin, and optokinetic tracking (OKT) to measure the visual acuity and contrast sensitivity. Results Study results indicated that Nrl−/− and Nrl−/−Arr4−/− mice light adapted normally, while Nrl−/−Arr1−/− and Nrl−/−Arr1−/−Arr4−/− mice did not. Photopic ERG a-wave, b-wave, and flicker amplitudes followed a general pattern in which Nrl−/−Arr4−/− amplitudes were higher than the amplitudes of Nrl−/−, while the amplitudes of Nrl−/−Arr1−/− and Nrl−/−Arr1−/−Arr4−/− were lower. All three visual arrestin knockouts had faster implicit times than Nrl−/− mice. M-opsin expression is lower when ARR1 is not expressed, while S-opsin expression is lower when ARR4 is not expressed. Although M-opsin expression is mislocalized throughout the photoreceptor cells, S-opsin is confined to the outer segments in all genotypes. Contrast sensitivity is decreased when ARR4 is not expressed, while visual acuity was normal except in Nrl−/−Arr1−/−Arr4−/−. Conclusions Based on the opposite visual phenotypes in an all-cone retina in the Nrl−/−Arr1−/− and Nrl−/−Arr4−/− mice, we conclude that ARR1 and ARR4 perform unique modulatory roles in cone photoreceptors. PMID:26624493
Pietersen, Alexander N.J.; Cheong, Soon Keen; Munn, Brandon; Gong, Pulin; Solomon, Samuel G.
2017-01-01
Key points: How parallel are the primate visual pathways? In the present study, we demonstrate that parallel visual pathways in the dorsal lateral geniculate nucleus (LGN) show distinct patterns of interaction with rhythmic activity in the primary visual cortex (V1). In the V1 of anaesthetized marmosets, the EEG frequency spectrum undergoes transient changes that are characterized by fluctuations in delta-band EEG power. We show that, on multisecond timescales, spiking activity in an evolutionarily primitive (koniocellular) LGN pathway is specifically linked to these slow EEG spectrum changes. By contrast, on subsecond (delta frequency) timescales, cortical oscillations can entrain spiking activity throughout the entire LGN. Our results are consistent with the hypothesis that, in waking animals, the koniocellular pathway selectively participates in brain circuits controlling vigilance and attention. Abstract: The major afferent cortical pathway in the visual system passes through the dorsal lateral geniculate nucleus (LGN), where nerve signals originating in the eye can first interact with brain circuits regulating visual processing, vigilance and attention. In the present study, we investigated how ongoing and visually driven activity in the magnocellular (M), parvocellular (P) and koniocellular (K) layers of the LGN is related to cortical state. We recorded extracellular spiking activity in the LGN simultaneously with local field potentials (LFP) in primary visual cortex, in sufentanil-anaesthetized marmoset monkeys. We found that asynchronous cortical states (marked by low power in delta-band LFPs) are linked to high spike rates in K cells (but not P cells or M cells), on multisecond timescales. Cortical asynchrony precedes the increases in K cell spike rates by 1–3 s, implying causality. At subsecond timescales, the spiking activity in many cells of all (M, P and K) classes is phase-locked to delta waves in the cortical LFP, and more cells are phase-locked during synchronous cortical states than during asynchronous cortical states. The switch from low-to-high spike rates in K cells does not degrade their visual signalling capacity. By contrast, during asynchronous cortical states, the fidelity of visual signals transmitted by K cells is improved, probably because K cell responses become less rectified. Overall, the data show that slow fluctuations in cortical state are selectively linked to K pathway spiking activity, whereas delta-frequency cortical oscillations entrain spiking activity throughout the entire LGN, in anaesthetized marmosets. PMID:28116750
Localized direction selective responses in the dendrites of visual interneurons of the fly
2010-01-01
Background: The various tasks of visual systems, including course control, collision avoidance and the detection of small objects, require at the neuronal level the dendritic integration and subsequent processing of many spatially distributed visual motion inputs. While much is known about the pooled output in these systems, as in the medial superior temporal cortex of monkeys or in the lobula plate of the insect visual system, the motion tuning of the elements that provide the input has as yet received little attention. In order to visualize the motion tuning of these inputs, we examined the dendritic activation patterns of neurons that are selective for the characteristic patterns of wide-field motion, the lobula-plate tangential cells (LPTCs) of the blowfly. These neurons are known to sample direction-selective motion information from large parts of the visual field and combine these signals into axonal and dendro-dendritic outputs. Results: Fluorescence imaging of intracellular calcium concentration allowed us to take a direct look at the local dendritic activity and the resulting local preferred directions in LPTC dendrites during activation by wide-field motion in different directions. These 'calcium response fields' resembled a retinotopic dendritic map of local preferred directions in the receptive field, the layout of which is a distinguishing feature of different LPTCs. Conclusions: Our study reveals how neurons acquire selectivity for distinct visual motion patterns by dendritic integration of the local inputs with different preferred directions. With their spatial layout of directional responses, the dendrites of the LPTCs we investigated thus served as matched filters for wide-field motion patterns. PMID:20384983
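The "local preferred direction" at each imaged dendritic location can be summarized by a vector sum of the responses measured across motion directions. The sketch below illustrates that standard computation on a hypothetical response array; it is not the authors' analysis pipeline, and the array shapes and function names are illustrative only.

```python
import numpy as np

def local_preferred_directions(responses, directions_deg):
    """Estimate a local preferred direction per pixel by vector summation.

    responses      : array (n_directions, H, W) of local calcium responses
                     (e.g. dF/F) to wide-field motion in each direction.
    directions_deg : array (n_directions,) of stimulus motion directions.

    Returns (pref_dir_deg, tuning_strength), each of shape (H, W).
    """
    theta = np.deg2rad(np.asarray(directions_deg))[:, None, None]
    r = np.clip(np.asarray(responses, dtype=float), 0.0, None)   # ignore negative deflections
    # Complex-valued vector sum across directions at every pixel
    z = (r * np.exp(1j * theta)).sum(axis=0)
    pref_dir_deg = np.rad2deg(np.angle(z)) % 360.0
    tuning_strength = np.abs(z) / (r.sum(axis=0) + 1e-12)        # 0 = untuned, 1 = perfectly tuned
    return pref_dir_deg, tuning_strength

# Example with synthetic data: 8 motion directions, a 64 x 64 imaging frame
rng = np.random.default_rng(0)
dirs = np.arange(0, 360, 45)
resp = rng.random((len(dirs), 64, 64))
pd_map, strength = local_preferred_directions(resp, dirs)
```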
3D visualization of additive occlusion and tunable full-spectrum fluorescence in calcite
Green, David C.; Ihli, Johannes; Thornton, Paul D.; Holden, Mark A.; Marzec, Bartosz; Kim, Yi-Yeoun; Kulak, Alex N.; Levenstein, Mark A.; Tang, Chiu; Lynch, Christophe; Webb, Stephen E. D.; Tynan, Christopher J.; Meldrum, Fiona C.
2016-01-01
From biomineralization to synthesis, organic additives provide an effective means of controlling crystallization processes. There is growing evidence that these additives are often occluded within the crystal lattice. This promises an elegant means of creating nanocomposites and tuning physical properties. Here we use the incorporation of sulfonated fluorescent dyes to gain new understanding of additive occlusion in calcite (CaCO3), and to link morphological changes to occlusion mechanisms. We demonstrate that these additives are incorporated within specific zones, as defined by the growth conditions, and show how occlusion can govern changes in crystal shape. Fluorescence spectroscopy and lifetime imaging microscopy also show that the dyes experience unique local environments within different zones. Our strategy is then extended to simultaneously incorporate mixtures of dyes, whose fluorescence cascade creates calcite nanoparticles that fluoresce white. This offers a simple strategy for generating biocompatible and stable fluorescent nanoparticles whose output can be tuned as required. PMID:27857076
NASA Technical Reports Server (NTRS)
Lourenco, L. M. M.; Krothapalli, A.
1987-01-01
One of the difficult problems in experimental fluid dynamics remains the determination of the vorticity field in fluid flows. Recently, a novel velocity measurement technique, commonly known as Laser Speckle or Particle Image Displacement Velocimetry, became available. This technique permits the simultaneous visualization of the two-dimensional streamline pattern in unsteady flows and the quantification of the velocity field. The main advantage of this new technique is that the whole two-dimensional velocity field can be recorded with great accuracy and spatial resolution, from which the instantaneous vorticity field can be easily obtained. An apparatus used for taking particle displacement images is described. Local coherent illumination by the probe laser beam yielded Young's fringes of good quality at almost every location of the flow field. These fringes were analyzed and the velocity and vorticity fields were derived. Several conclusions drawn from these measurements are discussed.
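Once the two-dimensional velocity field has been sampled on a regular grid, the out-of-plane vorticity follows from finite differences of the velocity components. The sketch below is a generic illustration of that step (not the apparatus software described above); the grid spacing and the test flow are assumed for the example.

```python
import numpy as np

def vorticity_2d(u, v, dx, dy):
    """Out-of-plane vorticity w_z = dv/dx - du/dy on a regular grid.

    u, v   : 2-D arrays of the velocity components, indexed as [row(y), col(x)].
    dx, dy : grid spacing in x and y (same length units as the velocities).
    """
    dv_dx = np.gradient(v, dx, axis=1)   # central differences in x
    du_dy = np.gradient(u, dy, axis=0)   # central differences in y
    return dv_dx - du_dy

# Example: solid-body rotation u = -omega*y, v = omega*x has vorticity 2*omega
omega = 1.5
y, x = np.mgrid[-1:1:101j, -1:1:101j]
w = vorticity_2d(-omega * y, omega * x, dx=0.02, dy=0.02)
print(w.mean())   # ~ 3.0
```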
Exciton-controlled fluorescence: application to hybridization-sensitive fluorescent DNA probe.
Okamoto, Akimitsu; Ikeda, Shuji; Kubota, Takeshi; Yuki, Mizue; Yanagisawa, Hiroyuki
2009-01-01
A hybridization-sensitive fluorescent probe has been designed for nucleic acid detection, using the concept of fluorescence quenching caused by the intramolecular excitonic interaction of fluorescence dyes. We synthesized a doubly thiazole orange-labeled nucleotide showing high fluorescence intensity for a hybrid with the target nucleic acid and effective quenching for the single-stranded state. This exciton-controlled fluorescent probe was applied to living HeLa cells using microinjection to visualize intracellular mRNA localization. Immediately after injection of the probe into the cell, fluorescence was observed from the probe hybridizing with the target RNA. This fluorescence rapidly decreased upon addition of a competitor DNA. Multicoloring of this probe resulted in the simple simultaneous detection of plural target nucleic acid sequences. This probe realized a large, rapid, reversible change in fluorescence intensity in sensitive response to the amount of target nucleic acid, and facilitated spatiotemporal monitoring of the behavior of intracellular RNA.
Zooming In on Plant Hormone Analysis: Tissue- and Cell-Specific Approaches.
Novák, Ondřej; Napier, Richard; Ljung, Karin
2017-04-28
Plant hormones are a group of naturally occurring, low-abundance organic compounds that influence physiological processes in plants. Our knowledge of the distribution profiles of phytohormones in plant organs, tissues, and cells is still incomplete, but advances in mass spectrometry have enabled significant progress in tissue- and cell-type-specific analyses of phytohormones over the last decade. Mass spectrometry is able to simultaneously identify and quantify hormones and their related substances. Biosensors, on the other hand, offer continuous monitoring; can visualize local distributions and real-time quantification; and, in the case of genetically encoded biosensors, are noninvasive. Thus, biosensors offer additional, complementary technologies for determining temporal and spatial changes in phytohormone concentrations. In this review, we focus on recent advances in mass spectrometry-based quantification, describe monitoring systems based on biosensors, and discuss validations of the various methods before looking ahead at future developments for both approaches.
Algorithmic Approaches for Place Recognition in Featureless, Walled Environments
2015-01-01
[glossary fragment:] inertial measurement unit; LIDAR, light detection and ranging; RANSAC, random sample consensus; SLAM, simultaneous localization and mapping; SUSAN, smallest... [figure list fragment:] typical input image for general junction-based algorithm; short-exposure image of hallway junction taken by LIDAR... The discipline of simultaneous localization and mapping (SLAM) has been studied intensively over the past several years. Many technical approaches
Effects of total sleep deprivation on divided attention performance.
Chua, Eric Chern-Pin; Fang, Eric; Gooley, Joshua J
2017-01-01
Dividing attention across two tasks performed simultaneously usually results in impaired performance on one or both tasks. Most studies have found no difference in the dual-task cost of dividing attention in rested and sleep-deprived states. We hypothesized that, for a divided attention task that is highly cognitively-demanding, performance would show greater impairment during exposure to sleep deprivation. A group of 30 healthy males aged 21-30 years was exposed to 40 h of continuous wakefulness in a laboratory setting. Every 2 h, subjects completed a divided attention task comprising 3 blocks in which an auditory Go/No-Go task was 1) performed alone (single task); 2) performed simultaneously with a visual Go/No-Go task (dual task); and 3) performed simultaneously with both a visual Go/No-Go task and a visually-guided motor tracking task (triple task). Performance on all tasks showed substantial deterioration during exposure to sleep deprivation. A significant interaction was observed between task load and time since wake on auditory Go/No-Go task performance, with greater impairment in response times and accuracy during extended wakefulness. Our results suggest that the ability to divide attention between multiple tasks is impaired during exposure to sleep deprivation. These findings have potential implications for occupations that require multi-tasking combined with long work hours and exposure to sleep loss.
Stobbe, Nina; Westphal-Fitch, Gesche; Aust, Ulrike; Fitch, W. Tecumseh
2012-01-01
Artificial grammar learning (AGL) provides a useful tool for exploring rule learning strategies linked to general purpose pattern perception. To be able to directly compare performance of humans with other species with different memory capacities, we developed an AGL task in the visual domain. Presenting entire visual patterns simultaneously instead of sequentially minimizes the amount of required working memory. This approach allowed us to evaluate performance levels of two bird species, kea (Nestor notabilis) and pigeons (Columba livia), in direct comparison to human participants. After being trained to discriminate between two types of visual patterns generated by rules at different levels of computational complexity and presented on a computer screen, birds and humans received further training with a series of novel stimuli that followed the same rules, but differed in various visual features from the training stimuli. Most avian and all human subjects continued to perform well above chance during this initial generalization phase, suggesting that they were able to generalize learned rules to novel stimuli. However, detailed testing with stimuli that violated the intended rules regarding the exact number of stimulus elements indicates that neither bird species was able to successfully acquire the intended pattern rule. Our data suggest that, in contrast to humans, these birds were unable to master a simple rule above the finite-state level, even with simultaneous item presentation and despite intensive training. PMID:22688635
Grouping and Segregation of Sensory Events by Actions in Temporal Audio-Visual Recalibration.
Ikumi, Nara; Soto-Faraco, Salvador
2016-01-01
Perception in multi-sensory environments involves both grouping and segregation of events across sensory modalities. Temporal coincidence between events is considered a strong cue to resolve multisensory perception. However, differences in physical transmission and neural processing times amongst modalities complicate this picture. This is illustrated by cross-modal recalibration, whereby adaptation to audio-visual asynchrony produces shifts in perceived simultaneity. Here, we examined whether voluntary actions might serve as a temporal anchor to cross-modal recalibration in time. Participants were tested on an audio-visual simultaneity judgment task after an adaptation phase where they had to synchronize voluntary actions with audio-visual pairs presented at a fixed asynchrony (vision leading or vision lagging). Our analysis focused on the magnitude of cross-modal recalibration to the adapted audio-visual asynchrony as a function of the nature of the actions during adaptation, putatively fostering cross-modal grouping or segregation. We found larger temporal adjustments when actions promoted grouping than segregation of sensory events. However, a control experiment suggested that additional factors, such as attention to planning/execution of actions, could have an impact on recalibration effects. Contrary to the view that cross-modal temporal organization is mainly driven by external factors related to the stimulus or environment, our findings add supporting evidence for the idea that perceptual adjustments strongly depend on the observer's inner states induced by motor and cognitive demands.
Aiello, Marilena; Merola, Sheila; Lasaponara, Stefano; Pinto, Mario; Tomaiuolo, Francesco; Doricchi, Fabrizio
2018-01-31
The possibility of allocating attentional resources to the "global" shape or to the "local" details of pictorial stimuli helps visual processing. Investigations with hierarchical Navon letters, which are large "global" letters made up of small "local" ones, consistently demonstrate a right hemisphere advantage for global processing and a left hemisphere advantage for local processing. Here we investigated how the visual and phonological features of the global and local components of Navon letters influence these hemispheric advantages. In a first study in healthy participants, we contrasted the hemispheric processing of hierarchical letters with global and local items competing for response selection, to the processing of hierarchical letters in which a letter, a false-letter conveying no phonological information or a geometrical shape presented at the unattended level did not compete for response selection. In a second study, we investigated the hemispheric processing of hierarchical stimuli in which global and local letters were both visually and phonologically congruent (e.g. a large uppercase G made of smaller uppercase Gs), visually incongruent and phonologically congruent (e.g. a large uppercase G made of small lowercase gs) or visually incongruent and phonologically incongruent (e.g. a large uppercase G made of small lowercase or uppercase Ms). In a third study, we administered the same tasks to a right brain damaged patient with a lesion involving pre-striate areas engaged by global processing. The results of the first two experiments showed that the global abilities of the left hemisphere are limited because of its strong susceptibility to interference from local letters, even when these are irrelevant to the task. Phonological features played a crucial role in this interference, because the interference was fully maintained even when letters at the global and local levels were presented in different uppercase vs. lowercase formats. In contrast, when local features conveyed no phonological information, the left hemisphere showed preserved global processing abilities. These findings were supported by the study of the right brain damaged patient. These results offer a new look at the hemispheric dominance in the attentional processing of the global and local levels of hierarchical stimuli. Copyright © 2017 Elsevier Ltd. All rights reserved.
Rapid Processing of a Global Feature in the ON Visual Pathways of Behaving Monkeys.
Huang, Jun; Yang, Yan; Zhou, Ke; Zhao, Xudong; Zhou, Quan; Zhu, Hong; Yang, Yingshan; Zhang, Chunming; Zhou, Yifeng; Zhou, Wu
2017-01-01
Visual objects are recognized by their features. Whereas some features are based on simple components (i.e., local features, such as orientation of line segments), some features are based on the whole object (i.e., global features, such as an object having a hole in it). Over the past five decades, behavioral, physiological, anatomical, and computational studies have established a general model of vision, which starts by extracting local features in the lower visual pathways, followed by a feature integration process that extracts global features in the higher visual pathways. This local-to-global model is successful in providing a unified account for a vast set of perception experiments, but it fails to account for a set of experiments showing human visual systems' superior sensitivity to global features. Understanding the neural mechanisms underlying the "global-first" process will offer critical insights into new models of vision. The goal of the present study was to establish a non-human primate model of rapid processing of global features for elucidating the neural mechanisms underlying differential processing of global and local features. Monkeys were trained to make a saccade to a target on a black background, which was different from the distractors (white circle) in color (e.g., red circle target), local features (e.g., white square target), a global feature (e.g., white ring with a hole target) or their combinations (e.g., red square target). Contrary to the predictions of the prevailing local-to-global model, we found that (1) detecting a distinction or a change in the global feature was faster than detecting a distinction or a change in color or local features; (2) detecting a distinction in color was facilitated by a distinction in the global feature, but not in the local features; and (3) detecting the hole was interfered with by the local features of the hole (e.g., white ring with a squared hole). These results suggest that monkey ON visual systems have a subsystem that is more sensitive to distinctions in the global feature than in local features. They also provide the behavioral constraints for identifying the underlying neural substrates.
Cook, Stephanie; Kokmotou, Katerina; Soto, Vicente; Wright, Hazel; Fallon, Nicholas; Thomas, Anna; Giesbrecht, Timo; Field, Matt; Stancak, Andrej
2018-04-13
Odours alter evaluations of concurrently presented visual stimuli, such as faces. Stimulus onset asynchrony (SOA) is known to affect evaluative priming in various sensory modalities. However, effects of SOA on odour priming of visual stimuli are not known. The present study aimed to analyse whether subjective and cortical activation changes during odour priming would vary as a function of SOA between odours and faces. Twenty-eight participants rated faces under pleasant, unpleasant, and no-odour conditions using visual analogue scales. In half of the trials, faces appeared one second after odour offset (SOA 1); in the other half, faces appeared during the odour pulse (SOA 2). EEG was recorded continuously using a 128-channel system, and event-related potentials (ERPs) to face stimuli were evaluated using statistical parametric mapping (SPM). Faces presented during unpleasant-odour stimulation were rated significantly less pleasant than the same faces presented one second after offset of the unpleasant odour. Scalp-time clusters in the late-positive-potential (LPP) time-range showed an interaction between odour and SOA effects, whereby activation was stronger for faces presented simultaneously with the unpleasant odour, compared to the same faces presented after odour offset. Our results highlight stronger unpleasant odour priming with simultaneous, compared to delayed, odour-face presentation. Such effects were represented in both behavioural and neural data. A greater cortical and subjective response during simultaneous presentation of faces and unpleasant odour may have an adaptive role, allowing for a prompt and focused behavioural reaction to a concurrent stimulus if an aversive odour signals danger or unwanted social interaction. Copyright © 2018 Elsevier B.V. All rights reserved.
Audiovisual Temporal Processing and Synchrony Perception in the Rat.
Schormans, Ashley L; Scott, Kaela E; Vo, Albert M Q; Tyker, Anna; Typlt, Marei; Stolzberg, Daniel; Allman, Brian L
2016-01-01
Extensive research on humans has improved our understanding of how the brain integrates information from our different senses, and has begun to uncover the brain regions and large-scale neural activity that contributes to an observer's ability to perceive the relative timing of auditory and visual stimuli. In the present study, we developed the first behavioral tasks to assess the perception of audiovisual temporal synchrony in rats. Modeled after the parameters used in human studies, separate groups of rats were trained to perform: (1) a simultaneity judgment task in which they reported whether audiovisual stimuli at various stimulus onset asynchronies (SOAs) were presented simultaneously or not; and (2) a temporal order judgment task in which they reported whether they perceived the auditory or visual stimulus to have been presented first. Furthermore, using in vivo electrophysiological recordings in the lateral extrastriate visual (V2L) cortex of anesthetized rats, we performed the first investigation of how neurons in the rat multisensory cortex integrate audiovisual stimuli presented at different SOAs. As predicted, rats ( n = 7) trained to perform the simultaneity judgment task could accurately (~80%) identify synchronous vs. asynchronous (200 ms SOA) trials. Moreover, the rats judged trials at 10 ms SOA to be synchronous, whereas the majority (~70%) of trials at 100 ms SOA were perceived to be asynchronous. During the temporal order judgment task, rats ( n = 7) perceived the synchronous audiovisual stimuli to be "visual first" for ~52% of the trials, and calculation of the smallest timing interval between the auditory and visual stimuli that could be detected in each rat (i.e., the just noticeable difference (JND)) ranged from 77 ms to 122 ms. Neurons in the rat V2L cortex were sensitive to the timing of audiovisual stimuli, such that spiking activity was greatest during trials when the visual stimulus preceded the auditory by 20-40 ms. Ultimately, given that our behavioral and electrophysiological results were consistent with studies conducted on human participants and previous recordings made in multisensory brain regions of different species, we suggest that the rat represents an effective model for studying audiovisual temporal synchrony at both the neuronal and perceptual level.
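The just noticeable difference quoted above is conventionally obtained by fitting a cumulative Gaussian psychometric function to the proportion of "visual first" reports as a function of SOA. The sketch below shows one common way to do this; the data values, the JND convention (0.6745 sigma) and the function names are assumptions for illustration, not the authors' exact fitting procedure.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def cum_gauss(soa, pss, sigma):
    """Psychometric function: P('visual first') vs. SOA (visual-leading positive)."""
    return norm.cdf(soa, loc=pss, scale=sigma)

# Hypothetical data: SOAs in ms and the proportion of 'visual first' responses
soas = np.array([-200, -100, -40, -10, 10, 40, 100, 200], dtype=float)
p_vis_first = np.array([0.05, 0.15, 0.35, 0.45, 0.55, 0.70, 0.90, 0.97])

(pss, sigma), _ = curve_fit(cum_gauss, soas, p_vis_first, p0=[0.0, 50.0])

# One common convention: JND = half the 25%-75% interval = 0.6745 * sigma
jnd = norm.ppf(0.75) * sigma
print(f"PSS = {pss:.1f} ms, JND = {jnd:.1f} ms")
```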
Hybrid Vision-Fusion system for whole-body scintigraphy.
Barjaktarović, Marko; Janković, Milica M; Jeremić, Marija; Matović, Milovan
2018-05-01
Radioiodine therapy in the treatment of differentiated thyroid carcinoma (DTC) is used in clinical practice for the ablation of thyroid residues and/or destruction of tumour tissue. Whole-body scintigraphy for visualization of the spatial 131I distribution performed by a gamma camera (GC) is a standard procedure in DTC patients after application of radioiodine therapy. A common problem is the precise topographic localization of regions where radioiodine is accumulated, even in SPECT imaging. SPECT/CT can provide precise topographic localization of regions where radioiodine is accumulated, but it is often unavailable, especially in developing countries, because of the high price of the equipment. In this paper, we present a Vision-Fusion system as an affordable solution for 1) acquiring an optical whole-body image during routine whole-body scintigraphy and 2) fusing gamma and optical images (also available for the auto-contour mode of the GC). The estimated prediction error for image registration is 1.84 mm. The validity of the fusion was tested by performing simultaneous optical and scintigraphic image acquisition of a bar phantom. The result shows that the fusion process introduces only a slight distortion, smaller than the spatial resolution of the GC (mean ± standard deviation: 1.24 ± 0.22 mm). The Vision-Fusion system was used for radioiodine post-therapeutic treatment, and 17 patients were followed (11 women and 6 men, with an average age of 48.18 ± 13.27 years). Visual inspection showed no misregistration. Based on our first clinical experience, we noticed that the Vision-Fusion system could be very useful for improving the diagnostic value of whole-body scintigraphy after radioiodine therapy. Additionally, the proposed Vision-Fusion software can be used as an upgrade for any GC to improve localization of thyroid/tumour tissue. Copyright © 2018 Elsevier Ltd. All rights reserved.
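Registration quality of the kind quoted above (a 1.84 mm prediction error) is typically characterized by estimating an affine transform from paired control points and reporting the residual error. The sketch below illustrates that idea with hypothetical point sets and an assumed pixel size; it is not the Vision-Fusion implementation.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2-D affine transform mapping src points onto dst points.

    src, dst : arrays of shape (N, 2) with corresponding control points.
    Returns the 2x3 affine matrix A such that dst ~ [x, y, 1] @ A.T
    """
    src_h = np.hstack([src, np.ones((len(src), 1))])        # homogeneous coordinates
    A, *_ = np.linalg.lstsq(src_h, dst, rcond=None)         # (3, 2) solution
    return A.T                                              # (2, 3)

def registration_error_mm(src, dst, A, mm_per_pixel=1.0):
    """RMS distance between transformed source points and destination points."""
    src_h = np.hstack([src, np.ones((len(src), 1))])
    pred = src_h @ A.T
    return mm_per_pixel * np.sqrt(((pred - dst) ** 2).sum(axis=1).mean())

# Hypothetical control points picked on the optical and gamma images (pixels)
optical_pts = np.array([[10, 12], [200, 15], [20, 240], [210, 250], [110, 130]], float)
gamma_pts = optical_pts * 0.5 + np.array([3.0, -2.0])       # toy ground-truth mapping
A = fit_affine(optical_pts, gamma_pts)
print(registration_error_mm(optical_pts, gamma_pts, A, mm_per_pixel=2.4))
```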
Local spatio-temporal analysis in vision systems
NASA Astrophysics Data System (ADS)
Geisler, Wilson S.; Bovik, Alan; Cormack, Lawrence; Ghosh, Joydeep; Gildeen, David
1994-07-01
The aims of this project are the following: (1) develop a physiologically and psychophysically based model of low-level human visual processing (a key component of which are local frequency coding mechanisms); (2) develop image models and image-processing methods based upon local frequency coding; (3) develop algorithms for performing certain complex visual tasks based upon local frequency representations; (4) develop models of human performance in certain complex tasks based upon our understanding of low-level processing; and (5) develop a computational testbed for implementing, evaluating and visualizing the proposed models and algorithms, using a massively parallel computer. Progress has been substantial on all aims. The highlights include the following: (1) completion of a number of psychophysical and physiological experiments revealing new, systematic and exciting properties of the primate (human and monkey) visual system; (2) further development of image models that can accurately represent the local frequency structure in complex images; (3) near-completion of the construction of the Texas Active Vision Testbed; (4) development and testing of several new computer vision algorithms dealing with shape-from-texture, shape-from-stereo, and depth-from-focus; (5) implementation and evaluation of several new models of human visual performance; and (6) evaluation, purchase and installation of a MasPar parallel computer.
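Local frequency coding of the kind referred to in aims (1) and (2) is commonly modeled with Gabor filters, each selective for a local spatial frequency and orientation. The following sketch builds a small Gabor bank and applies it to an image; the parameter values are assumptions for illustration and not the project's actual model.

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(size, wavelength, theta, sigma, phase=0.0):
    """Real-valued Gabor kernel: a sinusoidal carrier under a Gaussian envelope.

    size       : kernel width/height in pixels (odd is convenient)
    wavelength : wavelength of the carrier in pixels (1 / spatial frequency)
    theta      : orientation of the carrier in radians
    sigma      : standard deviation of the Gaussian envelope in pixels
    """
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    x_t = x * np.cos(theta) + y * np.sin(theta)          # rotated coordinate
    envelope = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2))
    carrier = np.cos(2.0 * np.pi * x_t / wavelength + phase)
    kernel = envelope * carrier
    return kernel - kernel.mean()                        # zero-mean (no DC response)

# A small bank spanning four orientations at one spatial frequency
bank = [gabor_kernel(31, wavelength=8.0, theta=t, sigma=5.0)
        for t in np.deg2rad([0, 45, 90, 135])]

# Filtering an image with the bank yields a local frequency/orientation code
image = np.random.rand(128, 128)
responses = np.stack([fftconvolve(image, k, mode='same') for k in bank])
```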
6D Visualization of Multidimensional Data by Means of Cognitive Technology
NASA Astrophysics Data System (ADS)
Vitkovskiy, V.; Gorohov, V.; Komarinskiy, S.
2010-12-01
On the basis of the cognitive graphics concept, we developed a software system for visualization and analysis. It helps to train and sharpen the researcher's intuition, to raise interest and motivation for creative scientific enquiry, and to support an interactive dialogue with the problem itself. The Space Hedgehog system is the next step in cognitive tools for multidimensional data analysis. The technique and technology of cognitive 6D visualization of multidimensional data were developed on the basis of research and development in cognitive visualization. The Space Hedgehog system allows direct, dynamic visualization of 6D objects. It builds on experience gained in creating the Space Walker program and its applications.
Mobile device geo-localization and object visualization in sensor networks
NASA Astrophysics Data System (ADS)
Lemaire, Simon; Bodensteiner, Christoph; Arens, Michael
2014-10-01
In this paper we present a method to visualize geo-referenced objects on modern smartphones using a multi-functional application design. The application combines different localization and visualization methods, including use of the smartphone camera image. The presented application copes well with different scenarios. A generic application work flow and augmented reality visualization techniques are described. The feasibility of the approach is experimentally validated using an online desktop selection application in a network with a modern off-the-shelf smartphone. Applications are widespread and include, for instance, crisis and disaster management or military applications.
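At the core of this kind of augmented-reality visualization is projecting a geo-referenced object, expressed relative to the device, through a pinhole camera model into pixel coordinates. The sketch below illustrates that projection under strong simplifying assumptions (flat-earth ENU conversion, no lens distortion, hypothetical intrinsics and orientation); it is not the application described above.

```python
import numpy as np

EARTH_R = 6_371_000.0  # mean Earth radius, metres

def geodetic_to_enu(lat, lon, alt, lat0, lon0, alt0):
    """Approximate local East-North-Up offset of a point relative to the device.
    Small-area flat-earth approximation; adequate for nearby objects only."""
    d_lat, d_lon = np.deg2rad(lat - lat0), np.deg2rad(lon - lon0)
    east = d_lon * EARTH_R * np.cos(np.deg2rad(lat0))
    north = d_lat * EARTH_R
    up = alt - alt0
    return np.array([east, north, up])

def project_to_pixel(p_enu, R_cam_from_enu, K):
    """Project an ENU point into the image. R maps ENU into the camera frame
    (x right, y down, z forward); K is the 3x3 intrinsic matrix."""
    p_cam = R_cam_from_enu @ p_enu
    if p_cam[2] <= 0:                  # behind the camera: not visible
        return None
    uvw = K @ (p_cam / p_cam[2])
    return uvw[:2]

# Hypothetical example: object ~100 m north of the device, camera facing north
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0,    0.0,   1.0]])
R = np.array([[1.0, 0.0,  0.0],        # east  -> camera x
              [0.0, 0.0, -1.0],        # up    -> camera -y (image y points down)
              [0.0, 1.0,  0.0]])       # north -> camera z (forward)
p = geodetic_to_enu(49.0009, 8.0, 110.0, 49.0, 8.0, 108.0)
print(project_to_pixel(p, R, K))       # roughly image centre, slightly above
```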
Abou Samra, Waleed Ali; El Emam, Dalia Sabry; Farag, Rania Kamel; Abouelkheir, Hossam Youssef
2016-01-01
Aim. To compare objective and subjective outcomes after simultaneous wave front guided (WFG) PRK and accelerated corneal cross-linking (CXL) in patients with progressive keratoconus versus sequential WFG PRK 6 months after CXL. Methods. 62 eyes with progressive keratoconus were divided into two groups: the first, including 30 eyes, underwent simultaneous WFG PRK with accelerated CXL; the second, including 32 eyes, underwent WFG PRK performed 6 months after accelerated CXL. Visual, refractive, topographic, and aberrometric data were determined preoperatively and during a 1-year follow-up period, and the results were compared between the two groups. Results. All evaluated visual, refractive, and aberrometric parameters demonstrated highly significant improvement in both groups (all P < 0.001). A significant improvement was observed in keratometric and Q values. The improvement in all parameters was stable until the end of follow-up. Likewise, no significant difference was found between the two groups in any of the recorded parameters. Subjective data revealed similarly significant improvement in both groups. Conclusions. WFG PRK with accelerated CXL is an effective and safe option to improve vision in mild to moderate keratoconus. At one-year follow-up, there is no statistically significant difference between the simultaneous and sequential procedures.
NASA Technical Reports Server (NTRS)
Johnson, Marcus; Jung, Jaewoo; Rios, Joseph; Mercer, Joey; Homola, Jeffrey; Prevot, Thomas; Mulfinger, Daniel; Kopardekar, Parimal
2017-01-01
This study evaluates a traffic management concept designed to enable simultaneous operations of multiple small unmanned aircraft systems (UAS) in the national airspace system (NAS). A five-day flight-test activity is described that examined the feasibility of operating multiple UAS beyond visual line of sight (BVLOS) of their respective operators in the same airspace. Over the five-day campaign, three groups of five flight crews operated a total of eleven different aircraft. Each group participated in four flight scenarios involving five simultaneous missions. Each vehicle was operated BVLOS up to 1.5 miles from the pilot in command. Findings and recommendations are presented to support the feasibility and safety of routine BVLOS operations for small UAS.
Russo, N; Mottron, L; Burack, J A; Jemel, B
2012-07-01
Individuals with autism spectrum disorders (ASD) report difficulty integrating simultaneously presented visual and auditory stimuli (Iarocci & McDonald, 2006), albeit showing enhanced perceptual processing of unisensory stimuli, as well as an enhanced role of perception in higher-order cognitive tasks (Enhanced Perceptual Functioning (EPF) model; Mottron, Dawson, Soulières, Hubert, & Burack, 2006). Individuals with an ASD also integrate auditory-visual inputs over longer periods of time than matched typically developing (TD) peers (Kwakye, Foss-Feig, Cascio, Stone & Wallace, 2011). To tease apart the dichotomy of both extended multisensory processing and enhanced perceptual processing, we used behavioral and electrophysiological measurements of audio-visual integration among persons with ASD. 13 TD and 14 autistics matched on IQ completed a forced choice multisensory semantic congruence task requiring speeded responses regarding the congruence or incongruence of animal sounds and pictures. Stimuli were presented simultaneously or sequentially at various stimulus onset asynchronies in both auditory first and visual first presentations. No group differences were noted in reaction time (RT) or accuracy. The latency at which congruent and incongruent waveforms diverged was the component of interest. In simultaneous presentations, congruent and incongruent waveforms diverged earlier (circa 150 ms) among persons with ASD than among TD individuals (around 350 ms). In sequential presentations, asymmetries in the timing of neuronal processing were noted in ASD which depended on stimulus order, but these were consistent with the nature of specific perceptual strengths in this group. These findings extend the Enhanced Perceptual Functioning Model to the multisensory domain, and provide a more nuanced context for interpreting ERP findings of impaired semantic processing in ASD. Copyright © 2012 Elsevier Ltd. All rights reserved.
Rao, Veena S; Christenbury, Joseph; Lee, Paul; Allingham, Rand; Herndon, Leon; Challa, Pratap
2017-02-01
To evaluate efficacy and safety of a novel technique, simultaneous implantation of Ahmed and Baerveldt shunts, for improved control of intraocular pressure (IOP) in advanced glaucoma with visual field defects threatening central fixation. Retrospective case series; all patients receiving simultaneous Ahmed and Baerveldt implantation at a single institution between October 2004 and October 2009 were included. Records were reviewed preoperatively and at postoperative day 1, week 1, month 1, month 3, month 6, year 1, and yearly until year 5. Outcome measures included IOP, best-corrected visual acuity, visual field mean deviation, cup to disc ratio, number of glaucoma medications, and complications. Fifty-nine eyes were identified; mean (±SD) follow-up was 26±23 months. Primary open-angle glaucoma was most common (n=37, 63%). Forty-six eyes (78%) had prior incisional surgery. Mean preoperative IOP was 25.5±9.8 mm Hg. IOP was reduced by 50% on day 1 (P<0.001; mean 12.7±7.0 mm Hg), and this reduction persisted throughout follow-up. At year 1, cup to disc ratio and mean deviation were stable, while best-corrected visual acuity decreased from logMAR 0.72±0.72 (20/100) to 1.06±1.13 (20/200) (P=0.007). The Kaplan-Meier survival analysis showed median and mean survival of 1205 and 829±91 days, respectively. The complication rate was 47%. IOP is markedly reduced on postoperative day 1 following double glaucoma tube implantation, with effects persisting through postoperative year 1 and up to year 5. Complications were higher than those seen in reports of single-shunt implantation, which may be explained by patient complexity in this cohort. This technique may prove to be a promising novel approach for management of uncontrolled IOP in advanced glaucoma.
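The Kaplan-Meier figures quoted above come from the product-limit estimator. A minimal sketch of that estimator is shown below with hypothetical follow-up data (not the study data); real analyses would normally use an established survival-analysis package.

```python
import numpy as np

def kaplan_meier(time, event):
    """Product-limit (Kaplan-Meier) estimate of the survival function.

    time  : array of follow-up times (e.g. days until failure or censoring)
    event : array of 1 (failure observed) or 0 (censored at that time)
    Returns (times, survival) evaluated at each distinct event time.
    """
    time = np.asarray(time, float)
    event = np.asarray(event, int)
    order = np.argsort(time)
    time, event = time[order], event[order]
    surv, out_t, out_s = 1.0, [], []
    for t in np.unique(time[event == 1]):
        at_risk = np.sum(time >= t)                 # subjects still under observation
        failures = np.sum((time == t) & (event == 1))
        surv *= 1.0 - failures / at_risk
        out_t.append(t)
        out_s.append(surv)
    return np.array(out_t), np.array(out_s)

# Hypothetical follow-up data (days); event 0 = censored (e.g. lost to follow-up)
t = [120, 200, 200, 365, 400, 500, 730, 900, 1100, 1205]
e = [1,   1,   0,   1,   0,   1,   1,   0,   1,    1]
times, survival = kaplan_meier(t, e)
median_survival = times[np.argmax(survival <= 0.5)] if np.any(survival <= 0.5) else None
print(times, survival, median_survival)
```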
Moodley, Kuven K; Perani, Daniela; Minati, Ludovico; Della Rosa, Pasquale Anthony; Pennycook, Frank; Dickson, John C; Barnes, Anna; Contarino, Valeria Elisa; Michopoulou, Sofia; D'Incerti, Ludovico; Good, Catriona; Fallanca, Federico; Vanoli, Emilia Giovanna; Ell, Peter J; Chan, Dennis
2015-01-01
Simultaneous PET-MRI is used to compare patterns of cerebral hypometabolism and atrophy in six different dementia syndromes. The primary objective was to conduct an initial exploratory study regarding the concordance of atrophy and hypometabolism in syndromic variants of Alzheimer's disease (AD) and frontotemporal dementia (FTD). The secondary objective was to determine the effect of image analysis methods on determination of atrophy and hypometabolism. PET and MRI data were acquired simultaneously on 24 subjects with six variants of AD and FTD (n = 4 per group). Atrophy was rated visually and also quantified with measures of cortical thickness. Hypometabolism was rated visually and also quantified using atlas- and SPM-based approaches. Concordance was measured using weighted Cohen's kappa. Atrophy-hypometabolism concordance differed markedly between patient groups; kappa scores ranged from 0.13 (nonfluent/agrammatic variant of primary progressive aphasia, nfvPPA) to 0.49 (posterior cortical variant of AD, PCA). Heterogeneity was also observed within groups, with the confidence intervals of kappa scores ranging from 0-0.25 for PCA to 0.29-0.61 for nfvPPA. More widespread MRI and PET changes were identified using quantitative methods than on visual rating. The marked differences in concordance identified in this initial study may reflect differences in the molecular pathologies underlying AD and FTD syndromic variants but also operational differences in the methods used to diagnose these syndromes. The superior ability of quantitative methodologies to detect changes on PET and MRI, if confirmed on larger cohorts, may favor their usage over qualitative visual inspection in future clinical diagnostic practice.
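Weighted Cohen's kappa, used above to measure atrophy-hypometabolism concordance, can be computed directly from the cross-tabulation of two ordinal ratings. The sketch below implements the linear-weighted version on hypothetical 0-3 regional grades; it is not the study's analysis code.

```python
import numpy as np

def weighted_kappa(r1, r2, n_levels, weights="linear"):
    """Weighted Cohen's kappa between two ordinal raters/modalities.

    r1, r2   : integer ratings in {0, ..., n_levels-1} for the same regions
    n_levels : number of ordinal categories (e.g. 4 for a 0-3 rating scale)
    """
    r1, r2 = np.asarray(r1), np.asarray(r2)
    O = np.zeros((n_levels, n_levels))            # observed agreement matrix
    for a, b in zip(r1, r2):
        O[a, b] += 1
    O /= O.sum()
    E = np.outer(O.sum(axis=1), O.sum(axis=0))    # expected under independence
    i, j = np.indices((n_levels, n_levels))
    W = np.abs(i - j) if weights == "linear" else (i - j) ** 2
    return 1.0 - (W * O).sum() / (W * E).sum()

# Hypothetical 0-3 atrophy vs. hypometabolism grades for a set of regions
atrophy        = [0, 1, 1, 2, 3, 2, 0, 1, 3, 2]
hypometabolism = [0, 1, 2, 2, 3, 1, 0, 0, 3, 3]
print(round(weighted_kappa(atrophy, hypometabolism, n_levels=4), 3))
```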
Yang, Zhiyong; Heeger, David J.; Blake, Randolph
2014-01-01
Traveling waves of cortical activity, in which local stimulation triggers lateral spread of activity to distal locations, have been hypothesized to play an important role in cortical function. However, there is conflicting physiological evidence for the existence of spreading traveling waves of neural activity triggered locally. Dichoptic stimulation, in which the two eyes view dissimilar monocular patterns, can lead to dynamic wave-like fluctuations in visual perception and therefore, provides a promising means for identifying and studying cortical traveling waves. Here, we used voltage-sensitive dye imaging to test for the existence of traveling waves of activity in the primary visual cortex of awake, fixating monkeys viewing dichoptic stimuli. We find clear traveling waves that are initiated by brief, localized contrast increments in one of the monocular patterns and then, propagate at speeds of ∼30 mm/s. These results demonstrate that under an appropriate visual context, circuitry in visual cortex in alert animals is capable of supporting long-range traveling waves triggered by local stimulation. PMID:25343785
A Novel Locally Linear KNN Method With Applications to Visual Recognition.
Liu, Qingfeng; Liu, Chengjun
2017-09-01
A locally linear K Nearest Neighbor (LLK) method is presented in this paper with applications to robust visual recognition. Specifically, the concept of an ideal representation is first presented, which improves upon the traditional sparse representation in many ways. The objective function based on a host of criteria for sparsity, locality, and reconstruction is then optimized to derive a novel representation, which is an approximation to the ideal representation. The novel representation is further processed by two classifiers, namely, an LLK-based classifier and a locally linear nearest mean-based classifier, for visual recognition. The proposed classifiers are shown to connect to the Bayes decision rule for minimum error. Additional new theoretical analysis is presented, such as the nonnegative constraint, the group regularization, and the computational efficiency of the proposed LLK method. New methods such as a shifted power transformation for improving reliability, a coefficients' truncating method for enhancing generalization, and an improved marginal Fisher analysis method for feature extraction are proposed to further improve visual recognition performance. Extensive experiments are implemented to evaluate the proposed LLK method for robust visual recognition. In particular, eight representative data sets are applied for assessing the performance of the LLK method for various visual recognition applications, such as action recognition, scene recognition, object recognition, and face recognition.
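A much-simplified sketch of the locally linear idea is shown below: a query is reconstructed as a least-squares combination of its k nearest neighbours within each class, and the class with the smallest reconstruction residual wins. This omits the sparsity/locality regularization, the non-negative constraint and the feature-extraction steps described above, so it is only an illustration of the general principle, not the LLK method itself.

```python
import numpy as np

def llk_classify(X_train, y_train, x_query, k=5):
    """Simplified locally linear classification by per-class reconstruction.

    For each class, the query is reconstructed as a least-squares linear
    combination of its k nearest neighbours from that class; the class with
    the smallest reconstruction residual is returned.
    """
    best_class, best_err = None, np.inf
    for c in np.unique(y_train):
        Xc = X_train[y_train == c]
        d = np.linalg.norm(Xc - x_query, axis=1)
        nbrs = Xc[np.argsort(d)[:k]]                          # k nearest in class c
        w, *_ = np.linalg.lstsq(nbrs.T, x_query, rcond=None)  # reconstruction weights
        err = np.linalg.norm(nbrs.T @ w - x_query)
        if err < best_err:
            best_class, best_err = c, err
    return best_class

# Toy example: two Gaussian classes in a 10-dimensional feature space
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (50, 10)), rng.normal(2.0, 1.0, (50, 10))])
y = np.array([0] * 50 + [1] * 50)
query = np.full(10, 1.8)                  # a point near the class-1 mean
print(llk_classify(X, y, query, k=5))
```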
Miconi, Thomas; Groomes, Laura; Kreiman, Gabriel
2016-01-01
When searching for an object in a scene, how does the brain decide where to look next? Visual search theories suggest the existence of a global “priority map” that integrates bottom-up visual information with top-down, target-specific signals. We propose a mechanistic model of visual search that is consistent with recent neurophysiological evidence, can localize targets in cluttered images, and predicts single-trial behavior in a search task. This model posits that a high-level retinotopic area selective for shape features receives global, target-specific modulation and implements local normalization through divisive inhibition. The normalization step is critical to prevent highly salient bottom-up features from monopolizing attention. The resulting activity pattern constitutes a priority map that tracks the correlation between local input and target features. The maximum of this priority map is selected as the locus of attention. The visual input is then spatially enhanced around the selected location, allowing object-selective visual areas to determine whether the target is present at this location. This model can localize objects both in array images and when objects are pasted in natural scenes. The model can also predict single-trial human fixations, including those in error and target-absent trials, in a search task involving complex objects. PMID:26092221
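The priority-map computation described above (target-specific modulation followed by divisive normalization, with the maximum selected as the locus of attention) can be illustrated with a few lines of array code. The sketch below is schematic; the array shapes, weights and pooling choice are assumptions, not the published model.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def priority_map(feature_maps, target_weights, sigma=1e-3, pool_size=7):
    """Target-modulated, divisively normalized priority map (illustrative sketch).

    feature_maps   : array (n_features, H, W) of bottom-up feature activations
    target_weights : array (n_features,) of top-down, target-specific gains
    pool_size      : size of the local pool used for divisive normalization
    """
    fm = np.asarray(feature_maps, float)
    modulated = fm * np.asarray(target_weights, float)[:, None, None]
    # Divisive normalization by locally pooled activity across all features
    pooled = uniform_filter(fm.sum(axis=0), size=pool_size)
    return modulated.sum(axis=0) / (sigma + pooled)

def select_attention_locus(pmap):
    """Return the (row, col) of the maximum of the priority map."""
    return np.unravel_index(np.argmax(pmap), pmap.shape)

# Toy example: 3 feature channels on a 100 x 120 'retinotopic' grid
rng = np.random.default_rng(0)
fmaps = rng.random((3, 100, 120))
fmaps[1, 40:45, 60:65] += 3.0             # a patch rich in target-like features
locus = select_attention_locus(priority_map(fmaps, target_weights=[0.2, 1.0, 0.1]))
print(locus)                              # inside the injected patch (rows 40-44, cols 60-64)
```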
Doesburg, Sam M; Herdman, Anthony T; Ribary, Urs; Cheung, Teresa; Moiseev, Alexander; Weinberg, Hal; Liotti, Mario; Weeks, Daniel; Grunau, Ruth E
2010-04-01
Local alpha-band synchronization has been associated with both cortical idling and active inhibition. Recent evidence, however, suggests that long-range alpha synchronization increases functional coupling between cortical regions. We demonstrate increased long-range alpha and beta band phase synchronization during short-term memory retention in children 6-10 years of age. Furthermore, whereas alpha-band synchronization between posterior cortex and other regions is increased during retention, local alpha-band synchronization over posterior cortex is reduced. This constitutes a functional dissociation for alpha synchronization across local and long-range cortical scales. We interpret long-range synchronization as reflecting functional integration within a network of frontal and visual cortical regions. Local desynchronization of alpha rhythms over posterior cortex, conversely, likely arises because of increased engagement of visual cortex during retention.
Bayesian random local clocks, or one rate to rule them all
2010-01-01
Background: Relaxed molecular clock models allow divergence time dating and "relaxed phylogenetic" inference, in which a time tree is estimated in the face of unequal rates across lineages. We present a new method for relaxing the assumption of a strict molecular clock using Markov chain Monte Carlo to implement Bayesian model averaging over random local molecular clocks. The new method approaches the problem of rate variation among lineages by proposing a series of local molecular clocks, each extending over a subregion of the full phylogeny. Each branch in a phylogeny (subtending a clade) is a possible location for a change of rate from one local clock to a new one. Thus, including both the global molecular clock and the unconstrained model results, there are a total of 2^(2n-2) possible rate models available for averaging with 1, 2, ..., 2n - 2 different rate categories. Results: We propose an efficient method to sample this model space while simultaneously estimating the phylogeny. The new method conveniently allows a direct test of the strict molecular clock, in which one rate rules them all, against a large array of alternative local molecular clock models. We illustrate the method's utility on three example data sets involving mammal, primate and influenza evolution. Finally, we explore methods to visualize the complex posterior distribution that results from inference under such models. Conclusions: The examples suggest that large sequence datasets may only require a small number of local molecular clocks to reconcile their branch lengths with a time scale. All of the analyses described here are implemented in the open access software package BEAST 1.5.4 (http://beast-mcmc.googlecode.com/). PMID:20807414
Magnetoencephalography recording and analysis.
Velmurugan, Jayabal; Sinha, Sanjib; Satishchandra, Parthasarathy
2014-03-01
Magnetoencephalography (MEG) non-invasively measures the magnetic field generated by the excitatory postsynaptic electrical activity of the apical dendritic pyramidal cells. Such a tiny magnetic field is measured with the help of biomagnetometer sensors coupled with the Superconducting Quantum Interference Device (SQUID) inside the magnetically shielded room (MSR). The subjects are usually screened for the presence of ferromagnetic materials, and then the head position indicator coils, electroencephalography (EEG) electrodes (if measured simultaneously), and fiducials are digitized using a 3D digitizer, which aids in movement correction and also in transferring the MEG data from the head coordinates to the device and voxel coordinates, thereby enabling more accurate co-registration and localization. MEG data pre-processing involves filtering the data for environmental and subject interferences, artefact identification, and rejection. Magnetic resonance imaging (MRI) is processed for correction and for identifying fiducials. After choosing and computing the appropriate head model (spherical or realistic; boundary/finite element model), the interictal/ictal epileptiform discharges are selected and modeled by an appropriate source modeling technique (clinically, the most commonly used is the single equivalent current dipole, or ECD, model). The equivalent current dipole (ECD) source localization of the modeled interictal epileptiform discharge (IED) is considered physiologically valid or acceptable based on waveform morphology, isofield pattern, and dipole parameters (localization, dipole moment, confidence volume, goodness of fit). Thus, MEG source localization can aid clinicians in sublobar localization, lateralization, and grid placement by delineating the irritative/seizure onset zone. It also accurately localizes eloquent cortex, such as visual and language areas. MEG also aids in diagnosing and delineating multiple novel findings in other neuropsychiatric disorders, including Alzheimer's disease, Parkinsonism, traumatic brain injury, autistic disorders, and so on.
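Among the ECD acceptance parameters listed above, goodness of fit is usually defined as the proportion of measured field variance explained by the modeled dipole field. The sketch below computes that quantity for hypothetical sensor data; it is not part of any MEG analysis package.

```python
import numpy as np

def dipole_goodness_of_fit(b_measured, b_modelled):
    """Goodness of fit (explained field variance) of a fitted equivalent current dipole.

    b_measured : array of measured magnetic field values across sensors (tesla)
    b_modelled : array of the field predicted by the fitted dipole (tesla)
    GOF = 1 - ||residual||^2 / ||measured||^2, usually reported as a percentage.
    """
    b_measured = np.asarray(b_measured, float)
    b_modelled = np.asarray(b_modelled, float)
    resid = b_measured - b_modelled
    return 100.0 * (1.0 - resid @ resid / (b_measured @ b_measured))

# Hypothetical sensor data: a modelled field explaining most of the measurement
rng = np.random.default_rng(0)
b_model = rng.normal(0.0, 50e-15, 275)              # ~275-channel system, fields in tesla
b_meas = b_model + rng.normal(0.0, 10e-15, 275)     # measurement = model + noise
print(f"GOF = {dipole_goodness_of_fit(b_meas, b_model):.1f}%")
```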
Behavioural benefits of multisensory processing in ferrets.
Hammond-Kenny, Amy; Bajo, Victoria M; King, Andrew J; Nodal, Fernando R
2017-01-01
Enhanced detection and discrimination, along with faster reaction times, are the most typical behavioural manifestations of the brain's capacity to integrate multisensory signals arising from the same object. In this study, we examined whether multisensory behavioural gains are observable across different components of the localization response that are potentially under the command of distinct brain regions. We measured the ability of ferrets to localize unisensory (auditory or visual) and spatiotemporally coincident auditory-visual stimuli of different durations that were presented from one of seven locations spanning the frontal hemifield. During the localization task, we recorded the head movements made following stimulus presentation, as a metric for assessing the initial orienting response of the ferrets, as well as the subsequent choice of which target location to approach to receive a reward. Head-orienting responses to auditory-visual stimuli were more accurate and faster than those made to visual but not auditory targets, suggesting that these movements were guided principally by sound alone. In contrast, approach-to-target localization responses were more accurate and faster to spatially congruent auditory-visual stimuli throughout the frontal hemifield than to either visual or auditory stimuli alone. Race model inequality analysis of head-orienting reaction times and approach-to-target response times indicates that different processes, probability summation and neural integration, respectively, are likely to be responsible for the effects of multisensory stimulation on these two measures of localization behaviour. © 2016 The Authors. European Journal of Neuroscience published by Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
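The race model inequality analysis mentioned above compares the cumulative distribution of multisensory reaction times against the sum of the unisensory distributions (Miller's bound); violations indicate neural integration rather than probability summation. The sketch below tests the inequality on hypothetical reaction times, not the ferret data reported in the study.

```python
import numpy as np

def ecdf(sample, t):
    """Empirical cumulative distribution of `sample` evaluated at times `t`."""
    sample = np.sort(np.asarray(sample, float))
    return np.searchsorted(sample, t, side="right") / sample.size

def race_model_violation(rt_a, rt_v, rt_av, quantiles=np.arange(0.05, 1.0, 0.05)):
    """Miller's race model inequality: G_AV(t) <= G_A(t) + G_V(t).

    Evaluates the inequality at the quantiles of the multisensory RT
    distribution and returns the (positive) violation at each point.
    """
    t = np.quantile(rt_av, quantiles)
    g_av = ecdf(rt_av, t)
    bound = np.minimum(ecdf(rt_a, t) + ecdf(rt_v, t), 1.0)
    return t, np.clip(g_av - bound, 0.0, None)     # > 0 means the race model fails

# Hypothetical reaction times (ms) for auditory, visual and audio-visual trials
rng = np.random.default_rng(0)
rt_a = rng.normal(320, 40, 200)
rt_v = rng.normal(350, 45, 200)
rt_av = rng.normal(280, 35, 200)                   # faster than either unisensory RT
t, viol = race_model_violation(rt_a, rt_v, rt_av)
print(np.any(viol > 0))                            # violation at the fast quantiles?
```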
A Robust Approach for a Filter-Based Monocular Simultaneous Localization and Mapping (SLAM) System
Munguía, Rodrigo; Castillo-Toledo, Bernardino; Grau, Antoni
2013-01-01
Simultaneous localization and mapping (SLAM) is an important problem to solve in robotics theory in order to build truly autonomous mobile robots. This work presents a novel method for implementing a SLAM system based on a single camera sensor. SLAM with a single camera, or monocular SLAM, is probably one of the most complex SLAM variants. In this case, a single camera, which is freely moving through its environment, represents the sole sensor input to the system. The sensors have a large impact on the algorithm used for SLAM. Cameras are used more frequently because they provide a lot of information and are well adapted for embedded systems: they are light, cheap and power-saving. Nevertheless, and unlike range sensors, which provide range and angular information, a camera is a projective sensor providing only angular measurements of image features. Therefore, depth information (range) cannot be obtained in a single step. In this case, special techniques for feature initialization are needed in order to enable the use of angular sensors (such as cameras) in SLAM systems. The main contribution of this work is to present a novel and robust scheme for incorporating and measuring visual features in filtering-based monocular SLAM systems. The proposed method is based on a two-step technique, which is intended to exploit all the information available in angular measurements. Unlike previous schemes, the values of the parameters used by the initialization technique are derived directly from the sensor characteristics, thus simplifying the tuning of the system. The experimental results show that the proposed method surpasses the performance of previous schemes. PMID:23823972
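The abstract describes initializing features from angular-only measurements. As one common way to do this (not the paper's specific two-step technique), a single bearing measurement can be converted into an inverse-depth parameterized map entry with a depth prior; the camera intrinsics and prior values below are illustrative.

```python
# Sketch of undelayed feature initialization for bearing-only (monocular) SLAM
# using an inverse-depth parameterization. This illustrates the general idea of
# turning one angular measurement into a usable map entry with a depth prior;
# it is not the paper's specific two-step method.
import numpy as np

def init_inverse_depth(cam_pos, R_wc, pixel, fx, fy, cx, cy, rho0=0.1, sigma_rho=0.5):
    """Return [x, y, z, azimuth, elevation, inverse_depth] and its prior standard deviations."""
    # Back-project the pixel to a unit ray in the camera frame, rotate to the world frame.
    ray_c = np.array([(pixel[0] - cx) / fx, (pixel[1] - cy) / fy, 1.0])
    ray_w = R_wc @ ray_c
    azimuth = np.arctan2(ray_w[0], ray_w[2])
    elevation = np.arctan2(-ray_w[1], np.hypot(ray_w[0], ray_w[2]))
    state = np.array([*cam_pos, azimuth, elevation, rho0])    # rho0: inverse-depth prior
    std = np.array([0, 0, 0, 1e-3, 1e-3, sigma_rho])          # prior uncertainties
    return state, std

# Example: camera at the origin, identity orientation, feature seen at pixel (400, 260).
state, std = init_inverse_depth(np.zeros(3), np.eye(3), (400.0, 260.0),
                                fx=525.0, fy=525.0, cx=320.0, cy=240.0)
print("initialized feature:", np.round(state, 3))
```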
Cognitive Mapping Based on Conjunctive Representations of Space and Movement
Zeng, Taiping; Si, Bailu
2017-01-01
It is a challenge to build a robust simultaneous localization and mapping (SLAM) system in dynamic large-scale environments. Inspired by recent findings in the entorhinal–hippocampal neuronal circuits, we propose a cognitive mapping model that includes continuous attractor networks of head-direction cells and conjunctive grid cells to integrate velocity information by conjunctive encodings of space and movement. Visual inputs from the local view cells in the model provide feedback cues to correct drifting errors of the attractors caused by the noisy velocity inputs. We demonstrate the mapping performance of the proposed cognitive mapping model on an open-source dataset of a 66 km car journey in a 3 km × 1.6 km urban area. Experimental results show that the proposed model is robust in building a coherent semi-metric topological map of the entire urban area using a monocular camera, even though the image inputs contain various changes caused by different light conditions and terrains. The results in this study could inspire both neuroscience and robotic research to better understand the neural computational mechanisms of spatial cognition and to build robust robotic navigation systems in large-scale environments. PMID:29213234
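A deliberately simplified stand-in for the model's core loop is sketched below: a head-direction estimate integrates noisy angular velocity and is occasionally corrected by recognized local views. The continuous attractor and conjunctive grid-cell dynamics of the actual model are not reproduced, and all parameters are invented for illustration.

```python
# Highly simplified stand-in for the paper's core loop: a head-direction estimate
# integrates noisy angular velocity and is periodically corrected by recognized
# "local view" cues. The attractor-network machinery of the real model is omitted.
import numpy as np

def wrap(a):
    return (a + np.pi) % (2 * np.pi) - np.pi

rng = np.random.default_rng(2)
dt, n_steps = 0.1, 600
true_heading, est_heading = 0.0, 0.0
view_cue_steps = {100, 300, 500}                  # time steps with a recognized local view

for t in range(n_steps):
    omega = 0.3 * np.sin(0.02 * t)                # true angular velocity (rad/s)
    true_heading = wrap(true_heading + omega * dt)
    noisy_omega = omega + rng.normal(0, 0.05)     # vestibular/odometric noise
    est_heading = wrap(est_heading + noisy_omega * dt)
    if t in view_cue_steps:                       # visual feedback corrects accumulated drift
        est_heading = wrap(est_heading + 0.8 * wrap(true_heading - est_heading))

print("residual heading error after visual corrections (rad):",
      round(abs(wrap(true_heading - est_heading)), 3))
```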
Customizing a rangefinder for community-based wildlife conservation initiatives
Ransom, Jason I.
2011-01-01
Population size of many threatened and endangered species is relatively unknown because estimating animal abundance in remote parts of the world, without access to aircraft for surveying vast areas, is a scientific challenge with few proposed solutions. One option is to enlist local community members and train them in data collection for large line transect or point count surveys, but financial and sometimes technological constraints prevent access to the necessary equipment and training for accurately quantifying distance measurements. Such measurements are paramount for generating reliable estimates of animal density. This problem was overcome in a survey of Asiatic wild ass (Equus hemionus) in the Great Gobi B Strictly Protected Area, Mongolia, by converting an inexpensive optical sporting rangefinder into a species-specific rangefinder with visual-based categorical labels. Accuracy trials showed that 96.86% of 350 distance measures matched those from a laser rangefinder. This simple customized optic subsequently allowed a large group of minimally-trained observers to simultaneously record quantitative measures of distance, despite language, education, and skill differences among the diverse group. The large community-based effort actively engaged local residents in species conservation by including them as the foundation for collecting scientific data.
Zhu, Tianyu; de Silva, Piotr; Van Voorhis, Troy
2018-01-09
Chemical bonding plays a central role in the description and understanding of chemistry. Many methods have been proposed to extract information about bonding from quantum chemical calculations, the majority of them resorting to molecular orbitals as basic descriptors. Here, we present a method called self-attractive Hartree (SAH) decomposition to unravel pairs of electrons directly from the electron density, which, unlike molecular orbitals, is a well-defined observable that can be accessed experimentally. The key idea is to partition the density into a sum of one-electron fragments that simultaneously maximize the self-repulsion and maintain regular shapes. This leads to a set of rather unusual equations in which every electron experiences a self-attractive Hartree potential in addition to an external potential common to all the electrons. The resulting symmetry breaking and localization are surprisingly consistent with chemical intuition. SAH decomposition is also shown to be effective in the visualization of single/multiple bonds, lone pairs, and unusual bonds due to the smooth nature of the fragment densities. Furthermore, we demonstrate that it can be used to identify specific chemical bonds in molecular complexes and provides a simple and accurate electrostatic model of hydrogen bonding.
Wang, Ningshan; Gibbons, Christopher H.; Freeman, Roy
2011-01-01
Confocal imaging uses immunohistochemical binding of specific antibodies to visualize tissues, but technical obstacles limit more widespread use of this technique in the imaging of peripheral nerve tissue. These obstacles include same-species antibody cross-reactivity and weak fluorescent signals of individual and co-localized antigens. The aims of this study were to develop new immunohistochemical techniques for imaging of peripheral nerve fibers. Three-millimeter punch skin biopsies of healthy individuals were fixed, frozen, and cut into 50-µm sections. Tissues were stained with a variety of antibody combinations with two signal amplification systems, streptavidin-biotin-fluorochrome (sABC) and tyramide-horseradish peroxidase-fluorochrome (TSA), used simultaneously to augment immunohistochemical signals. The combination of the TSA and sABC amplification systems provided the first successful co-localization of sympathetic adrenergic and sympathetic cholinergic nerve fibers in cutaneous human sweat glands and vasomotor and pilomotor systems. Primary antibodies from the same species were amplified individually without cross-reactivity or elevated background interference. The confocal fluorescent signal-to-noise ratio increased, and image clarity improved. These modifications to signal amplification systems have the potential for widespread use in the study of human neural tissues. PMID:21411809
Fiber-optic control and thermometry of single-cell thermosensation logic.
Fedotov, I V; Safronov, N A; Ermakova, Yu G; Matlashov, M E; Sidorov-Biryukov, D A; Fedotov, A B; Belousov, V V; Zheltikov, A M
2015-11-13
Thermal activation of transient receptor potential (TRP) cation channels is one of the most striking examples of temperature-controlled processes in cell biology. As the evidence indicating the fundamental role of such processes in thermosensation builds at a fast pace, adequately accurate tools that would allow the heat receptor logic behind thermosensation to be examined on a single-cell level are in great demand. Here, we demonstrate a specifically designed fiber-optic probe that enables thermal activation with simultaneous online thermometry of individual cells expressing genetically encoded TRP channels. This probe integrates a fiber-optic tract for the delivery of laser light with a two-wire microwave transmission line. A diamond microcrystal fixed on the fiber tip is heated by laser radiation transmitted through the fiber, providing local heating of a cell culture and enabling well-controlled TRP-assisted thermal activation of cells. Online local temperature measurements are performed by using the temperature-dependent frequency shift of optically detected magnetic resonance, induced by coupling the microwave field, delivered by the microwave transmission line, to nitrogen-vacancy centers in the diamond microcrystal. Activation of TRP channels is verified by using genetically encoded fluorescence indicators, visualizing an increase in the calcium flow through activated TRP channels.
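The thermometry step can be sketched as a simple conversion from the measured ODMR frequency shift to a temperature change, assuming an approximate room-temperature slope of about -74 kHz/K for the NV zero-field splitting; the numbers below are illustrative and not taken from the cited experiment.

```python
# Sketch: converting a measured shift of the NV center's optically detected
# magnetic resonance (ODMR) zero-field splitting into a local temperature change.
# dD/dT ~ -74 kHz/K is an approximate room-temperature literature value; the
# example numbers are illustrative, not data from the cited experiment.
D_REF_HZ = 2.870e9          # zero-field splitting at the reference temperature (Hz)
DD_DT_HZ_PER_K = -74e3      # approximate temperature coefficient (Hz/K)

def temperature_change(d_measured_hz, d_reference_hz=D_REF_HZ):
    """Temperature change inferred from the ODMR zero-field splitting shift."""
    return (d_measured_hz - d_reference_hz) / DD_DT_HZ_PER_K

print(round(temperature_change(2.86963e9), 2), "K of laser-induced local heating")
```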
NASA Astrophysics Data System (ADS)
Hue, V.; Roth, L.; Grodent, D. C.; Gladstone, R.; Saur, J.; Bonfond, B.
2017-12-01
The interaction of the co-rotating magnetospheric plasma with Jupiter's Galilean moons generates local perturbations and auroral emissions in the moons' tenuous atmospheres. Alfvén waves are launched by this local interaction and travel along Jupiter's field lines triggering various effects that finally lead to the auroral moon footprints far away in Jupiter's polar regions. Within the large Hubble Space Telescope aurora program in support of the NASA Juno mission (HST GO-14634, PI D. Grodent), HST observed the local aurora at the moons Io and Ganymede on three occasions in 2017 while the Juno Ultraviolet Spectrograph simultaneously observed Jupiter's aurora and the moon footprints. In this presentation, we will provide first results from the first-ever simultaneous moon and footprint observations for the case of Io. We compare the temporal variability of the local moon aurora and the Io footprint, addressing the question how much of the footprint variability originates from changes at the moon source and how much originates from processes in the regions that lie in between the moon and Jupiter's poles.
Localization Performance of Multiple Vibrotactile Cues on Both Arms.
Wang, Dangxiao; Peng, Cong; Afzal, Naqash; Li, Weiang; Wu, Dong; Zhang, Yuru
2018-01-01
To present information using vibrotactile stimuli in wearable devices, it is fundamental to understand human performance of localizing vibrotactile cues across the skin surface. In this paper, we studied human ability to identify locations of multiple vibrotactile cues activated simultaneously on both arms. Two haptic bands were mounted in proximity to the elbow and shoulder joints on each arm, and two vibrotactile motors were mounted on each band to provide vibration cues to the dorsal and palmar side of the arm. The localization performance under four conditions was compared, with the number of simultaneously activated cues varying from one to four in each condition. Experimental results illustrate that the rate of correct localization decreases linearly with the increase in the number of activated cues. It was 27.8 percent for three activated cues, and became even lower for four activated cues. An analysis of the correct rate and error patterns shows that the layout of vibrotactile cues can have significant effects on the localization performance of multiple vibrotactile cues. These findings might provide guidelines for using vibrotactile cues to guide the simultaneous motion of multiple joints on both arms.
NASA Astrophysics Data System (ADS)
Marshall, Jonathan A.
1992-12-01
A simple self-organizing neural network model, called an EXIN network, that learns to process sensory information in a context-sensitive manner, is described. EXIN networks develop efficient representation structures for higher-level visual tasks such as segmentation, grouping, transparency, depth perception, and size perception. Exposure to a perceptual environment during a developmental period serves to configure the network to perform appropriate organization of sensory data. A new anti-Hebbian inhibitory learning rule permits superposition of multiple simultaneous neural activations (multiple winners), while maintaining contextual consistency constraints, instead of forcing winner-take-all pattern classifications. The activations can represent multiple patterns simultaneously and can represent uncertainty. The network performs parallel parsing, credit attribution, and simultaneous constraint satisfaction. EXIN networks can learn to represent multiple oriented edges even where they intersect and can learn to represent multiple transparently overlaid surfaces defined by stereo or motion cues. In the case of stereo transparency, the inhibitory learning implements a uniqueness constraint while permitting coactivation of cells representing multiple disparities at the same image location. Thus two or more disparities can be active simultaneously without interference. This behavior is analogous to that of Prazdny's stereo vision algorithm, with the bonus that each binocular point is assigned a unique disparity. In a large implementation, such an NN would also be able to effectively represent the disparities of a cloud of points at random depths, like human observers, and unlike Prazdny's method.
Locality and simultaneous elements of reality
NASA Astrophysics Data System (ADS)
Nisticò, G.; Sestito, A.
2012-12-01
We show that the extension of quantum correlations stemming from a "strict" interpretation of the criterion of reality leads to the failure of Hardy's non-locality theorem. Then, by suggesting an ideal experiment, we prove that such an extension, though strictly smaller than the one derived by Einstein, Podolsky and Rosen and usually adopted, allows for the assignment of simultaneous objective values to two non-commuting observables.
Kim, Sungryul; Yoo, Younghwan
2018-01-26
Medium Access Control (MAC) delay, which occurs between the anchor nodes' transmissions, is one of the error sources in underwater localization. In particular, in AUV localization, the MAC delay significantly degrades the ranging accuracy. A Cramer-Rao Lower Bound (CRLB) analysis theoretically shows that the MAC delay significantly degrades the localization performance. This paper proposes underwater localization combined with multiple access technology to decouple the localization performance from the MAC delay. Towards this goal, we adopt a hyperbolic frequency modulation (HFM) signal that provides multiplexing based on its good property of high temporal correlation. Owing to the multiplexing ability of the HFM signal, the anchor nodes can transmit packets without MAC delay, i.e., simultaneous transmission is possible. In addition, the simulation results show that simultaneous transmission is not an optional communication scheme, but essential for the localization of a mobile object underwater.
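A rough sketch of the idea, assuming illustrative bands, delays, and sample rate: two HFM pings overlap in time, and matched filtering separates them so each arrival can still be timed without any MAC-imposed spacing.

```python
# Sketch: hyperbolic frequency modulated (HFM) pings from two anchors overlap in
# time, and matched filtering separates them so both arrivals can be estimated
# without MAC delay. Sample rate, bands, and delays are illustrative.
import numpy as np

def hfm(f0, f1, duration, fs):
    """HFM chirp sweeping from f0 to f1 over `duration` seconds."""
    t = np.arange(int(duration * fs)) / fs
    k = (1.0 / f1 - 1.0 / f0) / duration
    phase = (2 * np.pi / k) * np.log(1.0 + k * f0 * t)
    return np.cos(phase)

fs = 96_000
s1 = hfm(20_000, 28_000, 0.02, fs)   # anchor 1: up-sweep
s2 = hfm(28_000, 20_000, 0.02, fs)   # anchor 2: down-sweep (quasi-orthogonal to the up-sweep)

# Both anchors transmit "simultaneously"; their pings arrive with different delays.
rx = np.zeros(int(0.1 * fs))
for sig, delay_s in ((s1, 0.031), (s2, 0.047)):
    i = int(delay_s * fs)
    rx[i:i + len(sig)] += sig
rx += 0.2 * np.random.default_rng(3).normal(size=rx.size)

for name, ref in (("anchor 1", s1), ("anchor 2", s2)):
    corr = np.correlate(rx, ref, mode="valid")            # matched filter
    print(name, "estimated arrival:", round(np.argmax(np.abs(corr)) / fs, 4), "s")
```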
VisGets: coordinated visualizations for web-based information exploration and discovery.
Dörk, Marian; Carpendale, Sheelagh; Collins, Christopher; Williamson, Carey
2008-01-01
In common Web-based search interfaces, it can be difficult to formulate queries that simultaneously combine temporal, spatial, and topical data filters. We investigate how coordinated visualizations can enhance search and exploration of information on the World Wide Web by easing the formulation of these types of queries. Drawing from visual information seeking and exploratory search, we introduce VisGets: interactive query visualizations of Web-based information that operate with online information within a Web browser. VisGets provide the information seeker with visual overviews of Web resources and offer a way to visually filter the data. Our goal is to facilitate the construction of dynamic search queries that combine filters from more than one data dimension. We present a prototype information exploration system featuring three linked VisGets (temporal, spatial, and topical), and use it to visually explore news items from online RSS feeds.
Visual perceptual load induces inattentional deafness.
Macdonald, James S P; Lavie, Nilli
2011-08-01
In this article, we establish a new phenomenon of "inattentional deafness" and highlight the level of load on visual attention as a critical determinant of this phenomenon. In three experiments, we modified an inattentional blindness paradigm to assess inattentional deafness. Participants made either a low- or high-load visual discrimination concerning a cross shape (respectively, a discrimination of line color or of line length with a subtle length difference). A brief pure tone was presented simultaneously with the visual task display on a final trial. Failures to notice the presence of this tone (i.e., inattentional deafness) reached a rate of 79% in the high-visual-load condition, significantly more than in the low-load condition. These findings establish the phenomenon of inattentional deafness under visual load, thereby extending the load theory of attention (e.g., Lavie, Journal of Experimental Psychology. Human Perception and Performance, 25, 596-616, 1995) to address the cross-modal effects of visual perceptual load.
Cortical visual dysfunction in children: a clinical study.
Dutton, G; Ballantyne, J; Boyd, G; Bradnam, M; Day, R; McCulloch, D; Mackie, R; Phillips, S; Saunders, K
1996-01-01
Damage to the cerebral cortex was responsible for impairment in vision in 90 of 130 consecutive children referred to the Vision Assessment Clinic in Glasgow. Cortical blindness was seen in 16 children. Only 2 were mobile, but both showed evidence of navigational blind-sight. Cortical visual impairment, in which it was possible to estimate visual acuity but generalised severe brain damage precluded estimation of cognitive visual function, was observed in 9 children. Complex disorders of cognitive vision were seen in 20 children. These could be divided into five categories and involved impairment of: (1) recognition, (2) orientation, (3) depth perception, (4) perception of movement and (5) simultaneous perception. These disorders were observed in a variety of combinations. The remaining children showed evidence of reduced visual acuity and/or visual field loss, but without detectable disorders of cognitive visual function. Early recognition of disorders of cognitive vision is required if active training and remediation are to be implemented.
Hiwatashi, Akio; Yoshiura, Takashi; Yamashita, Koji; Kamano, Hironori; Honda, Hiroshi
2012-09-01
Preoperative evaluation of small vessels without contrast material is sometimes difficult in patients with neurovascular compression disease. The purpose of this retrospective study was to evaluate whether 3D STIR MRI could simultaneously depict the lower cranial nerves (fifth through twelfth) and the blood vessels in the posterior fossa. The posterior fossae of 47 adults (26 women, 21 men) without gross pathologic changes were imaged with 3D STIR and turbo spin-echo heavily T2-weighted MRI sequences and with contrast-enhanced turbo field-echo MR angiography (MRA). Visualization of the cranial nerves on STIR images was graded on a 4-point scale and compared with visualization on T2-weighted images. Visualization of the arteries on STIR images was evaluated according to the segments in each artery and compared with that on MRA images. Visualization of the veins on STIR images was also compared with that on MRA images. Statistical analysis was performed with the Mann-Whitney U test. There were no significant differences between STIR and T2-weighted images with respect to visualization of the cranial nerves (p > 0.05). Identified on STIR and MRA images were 94 superior cerebellar arteries, 81 anteroinferior cerebellar arteries, and 79 posteroinferior cerebellar arteries. All veins evaluated were seen on STIR and MRA images. There were no significant differences between STIR and MRA images with respect to visualization of arteries and veins (p > 0.05). High-resolution STIR is a feasible method for simultaneous evaluation of the lower cranial nerves and the vessels in the posterior fossa without the use of contrast material.
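The statistical comparison reported above is a standard nonparametric test; a sketch with made-up 4-point grades and SciPy's implementation is shown below.

```python
# Sketch of the nonparametric comparison reported above: a Mann-Whitney U test on
# 4-point visualization grades. The grade values here are invented for illustration.
from scipy.stats import mannwhitneyu

stir_grades = [4, 4, 3, 4, 3, 4, 4, 3, 4, 4]   # illustrative STIR nerve-visualization grades
t2_grades   = [4, 3, 4, 4, 3, 4, 3, 4, 4, 3]   # illustrative T2-weighted grades

stat, p = mannwhitneyu(stir_grades, t2_grades, alternative="two-sided")
print(f"U = {stat}, p = {p:.3f}")   # p > 0.05 -> no significant difference in visualization
```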
Coding Local and Global Binary Visual Features Extracted From Video Sequences.
Baroffio, Luca; Canclini, Antonio; Cesana, Matteo; Redondi, Alessandro; Tagliasacchi, Marco; Tubaro, Stefano
2015-11-01
Binary local features represent an effective alternative to real-valued descriptors, leading to comparable results for many visual analysis tasks while being characterized by significantly lower computational complexity and memory requirements. When dealing with large collections, a more compact representation based on global features is often preferred, which can be obtained from local features by means of, e.g., the bag-of-visual word model. Several applications, including, for example, visual sensor networks and mobile augmented reality, require visual features to be transmitted over a bandwidth-limited network, thus calling for coding techniques that aim at reducing the required bit budget while attaining a target level of efficiency. In this paper, we investigate a coding scheme tailored to both local and global binary features, which aims at exploiting both spatial and temporal redundancy by means of intra- and inter-frame coding. In this respect, the proposed coding scheme can conveniently be adopted to support the analyze-then-compress (ATC) paradigm. That is, visual features are extracted from the acquired content, encoded at remote nodes, and finally transmitted to a central controller that performs the visual analysis. This is in contrast with the traditional approach, in which visual content is acquired at a node, compressed and then sent to a central unit for further processing, according to the compress-then-analyze (CTA) paradigm. In this paper, we experimentally compare the ATC and the CTA by means of rate-efficiency curves in the context of two different visual analysis tasks: 1) homography estimation and 2) content-based retrieval. Our results show that the novel ATC paradigm based on the proposed coding primitives can be competitive with the CTA, especially in bandwidth limited scenarios.
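A toy sketch of the temporal-redundancy idea follows, using invented descriptor statistics and a crude bit-cost proxy instead of the paper's actual entropy coder.

```python
# Toy illustration of intra- vs inter-frame coding of binary descriptors: the XOR
# residual between a descriptor and its match in the previous frame is sparse, so
# coding the residual costs fewer bits than coding the raw descriptor. This is a
# simplification of the paper's scheme (no real entropy coder here).
import numpy as np

rng = np.random.default_rng(4)
d_prev = rng.integers(0, 2, size=256).astype(np.uint8)    # 256-bit descriptor, frame t-1
flips = rng.random(256) < 0.05                            # ~5% of bits change between frames
d_curr = d_prev ^ flips.astype(np.uint8)                  # matched descriptor, frame t

residual = d_prev ^ d_curr                                # inter-frame prediction residual
intra_cost = d_curr.size                                  # raw bits (intra coding)
inter_cost = int(residual.sum()) * 8                      # crude bound: ~8 bits per flipped position
print(f"intra: {intra_cost} bits, inter (approx): {inter_cost} bits")
```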
Interactions between motion and form processing in the human visual system.
Mather, George; Pavan, Andrea; Bellacosa Marotti, Rosilari; Campana, Gianluca; Casco, Clara
2013-01-01
The predominant view of motion and form processing in the human visual system assumes that these two attributes are handled by separate and independent modules. Motion processing involves filtering by direction-selective sensors, followed by integration to solve the aperture problem. Form processing involves filtering by orientation-selective and size-selective receptive fields, followed by integration to encode object shape. It has long been known that motion signals can influence form processing in the well-known Gestalt principle of common fate; texture elements which share a common motion property are grouped into a single contour or texture region. However, recent research in psychophysics and neuroscience indicates that the influence of form signals on motion processing is more extensive than previously thought. First, the salience and apparent direction of moving lines depends on how the local orientation and direction of motion combine to match the receptive field properties of motion-selective neurons. Second, orientation signals generated by "motion-streaks" influence motion processing; motion sensitivity, apparent direction and adaptation are affected by simultaneously present orientation signals. Third, form signals generated by human body shape influence biological motion processing, as revealed by studies using point-light motion stimuli. Thus, form-motion integration seems to occur at several different levels of cortical processing, from V1 to STS.
Piao, Jin-Chun; Kim, Shin-Dug
2017-01-01
Simultaneous localization and mapping (SLAM) is emerging as a prominent issue in computer vision and next-generation core technology for robots, autonomous navigation and augmented reality. In augmented reality applications, fast camera pose estimation and true scale are important. In this paper, we present an adaptive monocular visual–inertial SLAM method for real-time augmented reality applications in mobile devices. First, the SLAM system is implemented based on the visual–inertial odometry method that combines data from a mobile device camera and inertial measurement unit sensor. Second, we present an optical-flow-based fast visual odometry method for real-time camera pose estimation. Finally, an adaptive monocular visual–inertial SLAM is implemented by presenting an adaptive execution module that dynamically selects visual–inertial odometry or optical-flow-based fast visual odometry. Experimental results show that the average translation root-mean-square error of keyframe trajectory is approximately 0.0617 m with the EuRoC dataset. The average tracking time is reduced by 7.8%, 12.9%, and 18.8% when different level-set adaptive policies are applied. Moreover, we conducted experiments with real mobile device sensors, and the results demonstrate the effectiveness of performance improvement using the proposed method. PMID:29112143
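The adaptive execution module can be pictured as a simple selection policy; the inputs and thresholds below are assumptions for illustration, not the criteria used in the paper.

```python
# Sketch of an adaptive execution policy in the spirit described above: run the
# cheaper optical-flow visual odometry when tracking is easy, and fall back to
# full visual-inertial odometry otherwise. Inputs and thresholds are assumptions,
# not the paper's criteria.
def choose_frontend(num_tracked_features, mean_optical_flow_px, imu_accel_norm):
    if num_tracked_features < 50:        # tracking is fragile -> use the robust estimator
        return "visual_inertial_odometry"
    if mean_optical_flow_px > 30.0:      # fast camera motion -> optical flow unreliable
        return "visual_inertial_odometry"
    if imu_accel_norm > 3.0:             # strong dynamics benefit from inertial data
        return "visual_inertial_odometry"
    return "fast_optical_flow_vo"        # easy case: save CPU on the mobile device

print(choose_frontend(num_tracked_features=120, mean_optical_flow_px=8.0, imu_accel_norm=0.6))
```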
Group cohesion in foraging meerkats: follow the moving 'vocal hot spot'.
Gall, Gabriella E C; Manser, Marta B
2017-04-01
Group coordination, when 'on the move' or when visibility is low, is a challenge faced by many social living animals. While some animals manage to maintain cohesion solely through visual contact, the mechanism of group cohesion through other modes of communication, a necessity when visual contact is reduced, is not yet understood. Meerkats ( Suricata suricatta ), a small, social carnivore, forage as a cohesive group while moving continuously. While foraging, they frequently emit 'close calls', soft close-range contact calls. Variations in their call rates based on their local environment, coupled with individual movement, produce a dynamic acoustic landscape with a moving 'vocal hotspot' of the highest calling activity. We investigated whether meerkats follow such a vocal hotspot by playing back close calls of multiple individuals to foraging meerkats from the front and back edge of the group simultaneously. These two artificially induced vocal hotspots caused the group to spatially elongate and split into two subgroups. We conclude that meerkats use the emergent dynamic call pattern of the group to adjust their movement direction and maintain cohesion. Our study describes a highly flexible mechanism for the maintenance of group cohesion through vocal communication, for mobile species in habitats with low visibility and where movement decisions need to be adjusted continuously to changing environmental conditions.
NASA Astrophysics Data System (ADS)
McClatchy, D. M.; Rizzo, E. J.; Krishnaswamy, V.; Kanick, S. C.; Wells, W. A.; Paulsen, K. D.; Pogue, B. W.
2017-02-01
There is a dire clinical need for surgical margin guidance in breast conserving therapy (BCT). We present a multispectral spatial frequency domain imaging (SFDI) system, spanning the visible and near-infrared (NIR) wavelengths, combined with a shielded X-ray computed tomography (CT) system, designed for intraoperative breast tumor margin assessment. While the CT can provide a volumetric visualization of the tumor core and its spiculations, the co-registered SFDI can provide superficial and quantitative information about localized changes in tissue morphology from light scattering parameters. These light scattering parameters include both model-based parameters of sub-diffusive light scattering related to the particle size scale distribution and also textural information of the high spatial frequency reflectance. Because the SFDI and CT components are rigidly fixed, a simple transformation can be used to simultaneously display the SFDI and CT data in the same coordinate system. This is achieved through the Visualization Toolkit (vtk) file format in the open-source Slicer medical imaging software package. In this manuscript, the instrumentation, data processing, and preliminary human specimen data will be presented. The ultimate goal of this work is to evaluate this technology in a prospective clinical trial, and the current limitations and engineering solutions to meet this goal will also be discussed.
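SFDI processing commonly begins with three-phase demodulation of the projected patterns; a sketch with synthetic frames is shown below, where the recovered per-pixel amplitude is the quantity from which scattering parameters like those mentioned above are derived.

```python
# Sketch of standard three-phase demodulation used in spatial frequency domain
# imaging (SFDI) to recover the modulated (AC) and planar (DC) reflectance
# amplitudes per pixel; synthetic images stand in for camera frames.
import numpy as np

def demodulate(i1, i2, i3):
    """AC and DC amplitude images from three phase-shifted (0, 2pi/3, 4pi/3) frames."""
    ac = (np.sqrt(2.0) / 3.0) * np.sqrt((i1 - i2) ** 2 + (i2 - i3) ** 2 + (i3 - i1) ** 2)
    dc = (i1 + i2 + i3) / 3.0
    return ac, dc

# Synthetic frames: a spatially modulated pattern at three phase offsets.
x = np.linspace(0, 1, 128)
xx = np.tile(x, (128, 1))
frames = [0.5 + 0.2 * np.cos(2 * np.pi * 5 * xx + p) for p in (0, 2 * np.pi / 3, 4 * np.pi / 3)]
ac, dc = demodulate(*frames)
print("mean AC amplitude:", round(float(ac.mean()), 3), " mean DC:", round(float(dc.mean()), 3))
```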
Visualizing multiple inter-organelle contact sites using the organelle-targeted split-GFP system.
Kakimoto, Yuriko; Tashiro, Shinya; Kojima, Rieko; Morozumi, Yuki; Endo, Toshiya; Tamura, Yasushi
2018-04-18
Functional integrity of eukaryotic organelles relies on direct physical contacts between distinct organelles. However, the entity of organelle-tethering factors is not well understood due to the lack of means to analyze inter-organelle interactions in living cells. Here we evaluate the split-GFP system for visualizing organelle contact sites in vivo and show its advantages and disadvantages. We observed punctate GFP signals from the split-GFP fragments targeted to any pairs of organelles among the ER, mitochondria, peroxisomes, vacuole and lipid droplets in yeast cells, which suggests that these organelles form contact sites with multiple organelles simultaneously, although it is difficult to rule out the possibility that these organelle contact sites are artificially formed by the irreversible associations of the split-GFP probes. Importantly, split-GFP signals in the overlapped regions of the ER and mitochondria were mainly co-localized with ERMES, an authentic ER-mitochondria tethering structure, suggesting that split-GFP assembly depends on the preexisting inter-organelle contact sites. We also confirmed that the split-GFP system can be applied to detection of the ER-mitochondria contact sites in HeLa cells. We thus propose that the split-GFP system is a potential tool to observe and analyze inter-organelle contact sites in living yeast and mammalian cells.
Tracking the visual focus of attention for a varying number of wandering people.
Smith, Kevin; Ba, Sileye O; Odobez, Jean-Marc; Gatica-Perez, Daniel
2008-07-01
We define and address the problem of finding the visual focus of attention for a varying number of wandering people (VFOA-W), in settings where people's movement is unconstrained. VFOA-W estimation is a new and important problem with implications for behavior understanding and cognitive science, as well as real-world applications. One such application, which we present in this article, monitors the attention passers-by pay to an outdoor advertisement. Our approach to the VFOA-W problem proposes a multi-person tracking solution based on a dynamic Bayesian network that simultaneously infers the (variable) number of people in a scene, their body locations, their head locations, and their head pose. For efficient inference in the resulting large variable-dimensional state-space, we propose a Reversible Jump Markov Chain Monte Carlo (RJMCMC) sampling scheme, as well as a novel global observation model which determines the number of people in the scene and localizes them. We propose a Gaussian Mixture Model (GMM) and Hidden Markov Model (HMM)-based VFOA-W model which uses head pose and location information to determine people's focus state. Our models are evaluated for tracking performance and ability to recognize people looking at an outdoor advertisement, with results indicating good performance on sequences where a moderate number of people pass in front of an advertisement.
Joint image restoration and location in visual navigation system
NASA Astrophysics Data System (ADS)
Wu, Yuefeng; Sang, Nong; Lin, Wei; Shao, Yuanjie
2018-02-01
Image location methods are the key technologies of visual navigation; most previous image location methods simply assume ideal inputs without taking into account real-world degradations (e.g. low resolution and blur). In view of such degradations, conventional image location methods first perform image restoration and then match the restored image against the reference image. However, when restoration and localization are handled separately, the defective output of the image restoration can degrade the localization result. In this paper, we present a joint image restoration and location (JRL) method, which utilizes a sparse representation prior to handle the challenging problem of low-quality image location. The sparse representation prior states that the degraded input image, if correctly restored, will have a good sparse representation in terms of the dictionary constructed from the reference image. By iteratively solving the image restoration in pursuit of the sparsest representation, our method can achieve simultaneous restoration and location. Based on such a sparse representation prior, we demonstrate that the image restoration task and the location task can benefit greatly from each other. Extensive experiments on real scene images with Gaussian blur are carried out and our joint model outperforms the conventional methods of treating the two tasks independently.
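The sparse representation prior can be sketched with a plain ISTA solver over a dictionary of reference-image patches, reading the location off the strongest coefficients; this stands in for, and greatly simplifies, the paper's joint restoration-and-location iteration.

```python
# Minimal sketch of the sparse-representation idea: the degraded observation is
# coded over a dictionary built from reference-image patches, and the location is
# read off the strongest coefficients. A plain ISTA solver stands in for the
# paper's full joint restoration-and-location algorithm.
import numpy as np

def ista(D, y, lam=0.1, n_iter=200):
    """Solve min_x 0.5*||y - D x||^2 + lam*||x||_1 by iterative soft thresholding."""
    L = np.linalg.norm(D, 2) ** 2             # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ x - y)
        z = x - grad / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
    return x

rng = np.random.default_rng(5)
D = rng.normal(size=(64, 200))
D /= np.linalg.norm(D, axis=0)                # dictionary of reference patches (columns)
true_atom = 37                                # the reference location that generated y
y = D[:, true_atom] + 0.05 * rng.normal(size=64)   # degraded observation of that patch

x = ista(D, y)
print("estimated location (strongest atom):", int(np.argmax(np.abs(x))))
```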
SLAMM: Visual monocular SLAM with continuous mapping using multiple maps
Md. Sabri, Aznul Qalid; Loo, Chu Kiong; Mansoor, Ali Mohammed
2018-01-01
This paper presents the concept of Simultaneous Localization and Multi-Mapping (SLAMM). It is a system that ensures continuous mapping and information preservation despite failures in tracking due to corrupted frames or sensor malfunction, making it suitable for real-world applications. It works with single or multiple robots. In a single-robot scenario the algorithm generates a new map at the time of tracking failure, and later it merges maps at the event of loop closure. Similarly, maps generated from multiple robots are merged without prior knowledge of their relative poses, which makes this algorithm flexible. The system works in real time at frame-rate speed. The proposed approach was tested on the KITTI and TUM RGB-D public datasets and it showed superior results compared to state-of-the-art calibrated visual monocular keyframe-based SLAM. The mean tracking time is around 22 milliseconds. The initialization is twice as fast as it is in ORB-SLAM, and the retrieved map can reach up to 90 percent more in terms of information preservation, depending on tracking loss and loop closure events. For the benefit of the community, the source code along with a framework to be run with the Bebop drone are made available at https://github.com/hdaoud/ORBSLAMM. PMID:29702697
Alphabetic letter identification: Effects of perceivability, similarity, and bias
Mueller, Shane T.; Weidemann, Christoph T.
2012-01-01
The legibility of the letters in the Latin alphabet has been measured numerous times since the beginning of experimental psychology. To identify the theoretical mechanisms attributed to letter identification, we report a comprehensive review of the literature, spanning more than a century. This review revealed that identification accuracy has frequently been attributed to a subset of three common sources: perceivability, bias, and similarity. However, simultaneous estimates of these values have rarely (if ever) been performed. We present the results of two new experiments which allow for the simultaneous estimation of these factors, and examine how the shape of a visual mask impacts each of them, as inferred through a new statistical model. Results showed that the shape and identity of the mask impacted the inferred perceivability, bias, and similarity space of a letter set, but that there were aspects of similarity that were robust to the choice of mask. The results illustrate how the psychological concepts of perceivability, bias, and similarity can be estimated simultaneously, and how each makes powerful contributions to visual letter identification. PMID:22036587
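One standard way to express identification probabilities in terms of pairwise similarity and response bias is Luce's similarity-choice rule; the toy example below uses an invented 3-letter similarity matrix and is not the authors' specific statistical model, which additionally estimates perceivability.

```python
# Toy version of a similarity-choice rule (Luce): identification probabilities
# expressed through pairwise similarity and response bias. The 3-letter similarity
# matrix and biases are invented for illustration; the paper's model is richer.
import numpy as np

letters = ["C", "G", "O"]
similarity = np.array([[1.00, 0.60, 0.45],    # s[i, j]: similarity of stimulus i to response j
                       [0.60, 1.00, 0.40],
                       [0.45, 0.40, 1.00]])
bias = np.array([0.30, 0.30, 0.40])           # response bias toward each letter

def identification_probabilities(similarity, bias):
    weighted = similarity * bias               # b_j * s_ij
    return weighted / weighted.sum(axis=1, keepdims=True)

p = identification_probabilities(similarity, bias)
for i, row in enumerate(p):
    print(f"stimulus {letters[i]}:", np.round(row, 3))
```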
Richards, Michael D; Goltz, Herbert C; Wong, Agnes M F
2018-01-01
Classically understood as a deficit in spatial vision, amblyopia is increasingly recognized to also impair audiovisual multisensory processing. Studies to date, however, have not determined whether the audiovisual abnormalities reflect a failure of multisensory integration, or an optimal strategy in the face of unisensory impairment. We use the ventriloquism effect and the maximum-likelihood estimation (MLE) model of optimal integration to investigate integration of audiovisual spatial information in amblyopia. Participants with unilateral amblyopia (n = 14; mean age 28.8 years; 7 anisometropic, 3 strabismic, 4 mixed mechanism) and visually normal controls (n = 16, mean age 29.2 years) localized brief unimodal auditory, unimodal visual, and bimodal (audiovisual) stimuli during binocular viewing using a location discrimination task. A subset of bimodal trials involved the ventriloquism effect, an illusion in which auditory and visual stimuli originating from different locations are perceived as originating from a single location. Localization precision and bias were determined by psychometric curve fitting, and the observed parameters were compared with predictions from the MLE model. Spatial localization precision was significantly reduced in the amblyopia group compared with the control group for unimodal visual, unimodal auditory, and bimodal stimuli. Analyses of localization precision and bias for bimodal stimuli showed no significant deviations from the MLE model in either the amblyopia group or the control group. Despite pervasive deficits in localization precision for visual, auditory, and audiovisual stimuli, audiovisual integration remains intact and optimal in unilateral amblyopia.
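The MLE predictions tested above follow from reliability-weighted averaging; a sketch with illustrative unisensory localization noise values:

```python
# The maximum-likelihood estimation (MLE) prediction tested above: the optimal
# bimodal estimate weights each cue by its reliability, and its variance is smaller
# than either unisensory variance. The sigma values below are illustrative only.
def mle_integration(sigma_v, sigma_a):
    w_v = sigma_a**2 / (sigma_v**2 + sigma_a**2)     # weight on the visual estimate
    w_a = 1.0 - w_v                                  # weight on the auditory estimate
    sigma_av = (sigma_v**2 * sigma_a**2 / (sigma_v**2 + sigma_a**2)) ** 0.5
    return w_v, w_a, sigma_av

for label, sv, sa in (("control", 1.0, 6.0), ("amblyopia", 3.0, 8.0)):
    w_v, w_a, s_av = mle_integration(sv, sa)
    print(f"{label}: visual weight={w_v:.2f}, auditory weight={w_a:.2f}, "
          f"predicted bimodal sigma={s_av:.2f} deg")
```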
Local protein dynamics during microvesicle exocytosis in neuroendocrine cells.
Somasundaram, Agila; Taraska, Justin
2018-06-06
Calcium triggered exocytosis is key to many physiological processes, including neurotransmitter and hormone release by neurons and endocrine cells. Dozens of proteins regulate exocytosis, yet the temporal and spatial dynamics of these factors during vesicle fusion remain unclear. Here we use total internal reflection fluorescence microscopy to visualize local protein dynamics at single sites of exocytosis of small synaptic-like microvesicles in live cultured neuroendocrine PC12 cells. We employ two-color imaging to simultaneously observe membrane fusion (using vesicular acetylcholine transporter (VAChT) tagged to pHluorin) and the dynamics of associated proteins at the moments surrounding exocytosis. Our experiments show that many proteins, including the SNAREs syntaxin1 and VAMP2, the SNARE modulator tomosyn, and Rab proteins, are pre-clustered at fusion sites and rapidly lost at fusion. The ATPase NSF is locally recruited at fusion. Interestingly, the endocytic BAR domain-containing proteins amphiphysin1, syndapin2, and endophilins are dynamically recruited to fusion sites, and slow the loss of vesicle membrane-bound cargo from fusion sites. A similar effect on vesicle membrane protein dynamics was seen with the over-expression of the GTPases dynamin1 and dynamin2. These results suggest that proteins involved in classical clathrin-mediated endocytosis can regulate exocytosis of synaptic-like microvesicles. Our findings provide insights into the dynamics, assembly, and mechanistic roles of many key factors of exocytosis and endocytosis at single sites of microvesicle fusion in live cells.
Liu, Baolin; Meng, Xianyao; Wang, Zhongning; Wu, Guangning
2011-11-14
In the present study, we used event-related potentials (ERPs) to examine whether semantic integration occurs for ecologically unrelated audio-visual information. Videos with synchronous audio-visual information were used as stimuli, where the auditory stimuli were sine wave sounds with different sound levels, and the visual stimuli were simple geometric figures with different areas. In the experiment, participants were shown an initial display containing a single shape (drawn from a set of 6 shapes) with a fixed size (14 cm²) simultaneously with a 3500 Hz tone of a fixed intensity (80 dB). Following a short delay, another shape/tone pair was presented and the relationship between the size of the shape and the intensity of the tone varied across trials: in the V+A- condition, a large shape was paired with a soft tone; in the V+A+ condition, a large shape was paired with a loud tone, and so forth. The ERP results revealed that an N400 effect was elicited under the VA- conditions (V+A- and V-A+) as compared to the VA+ conditions (V+A+ and V-A-). This shows that semantic integration can occur when simultaneous, ecologically unrelated auditory and visual stimuli enter the human brain. We considered that this semantic integration was based on the semantic constraints of audio-visual information, which might come from long-term learned associations stored in the human brain and short-term experience of incoming information. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
Crossmodal semantic priming by naturalistic sounds and spoken words enhances visual sensitivity.
Chen, Yi-Chuan; Spence, Charles
2011-10-01
We propose a multisensory framework based on Glaser and Glaser's (1989) general reading-naming interference model to account for the semantic priming effect by naturalistic sounds and spoken words on visual picture sensitivity. Four experiments were designed to investigate two key issues: First, can auditory stimuli enhance visual sensitivity when the sound leads the picture as well as when they are presented simultaneously? And, second, do naturalistic sounds (e.g., a dog's "woofing") and spoken words (e.g., /dɔg/) elicit similar semantic priming effects? Here, we estimated participants' sensitivity and response criterion using signal detection theory in a picture detection task. The results demonstrate that naturalistic sounds enhanced visual sensitivity when the onset of the sounds led that of the picture by 346 ms (but not when the sounds led the pictures by 173 ms, nor when they were presented simultaneously, Experiments 1-3A). At the same SOA, however, spoken words did not induce semantic priming effects on visual detection sensitivity (Experiments 3B and 4A). When using a dual picture detection/identification task, both kinds of auditory stimulus induced a similar semantic priming effect (Experiment 4B). Therefore, we suggest that there needs to be sufficient processing time for the auditory stimulus to access its associated meaning to modulate visual perception. Besides, the interactions between pictures and the two types of sounds depend not only on their processing route to access semantic representations, but also on the response to be made to fulfill the requirements of the task.
Bölte, S; Hubl, D; Dierks, T; Holtmann, M; Poustka, F
2008-01-01
Autism has been associated with enhanced local processing on visual tasks. Originally, this was based on findings that individuals with autism exhibited peak performance on the block design test (BDT) from the Wechsler Intelligence Scales. In autism, the neurofunctional correlates of local bias on this test have not yet been established, although there is evidence of alterations in the early visual cortex. Functional MRI was used to analyze hemodynamic responses in the striate and extrastriate visual cortex during BDT performance and a color counting control task in subjects with autism compared to healthy controls. In autism, BDT processing was accompanied by low blood oxygenation level-dependent signal changes in the right ventral quadrant of V2. Findings indicate that, in autism, locally oriented processing of the BDT is associated with altered responses of angle- and grating-selective neurons that contribute to shape representation, figure-ground, and gestalt organization. The findings favor a low-level explanation of BDT performance in autism.
Linking crowding, visual span, and reading.
He, Yingchen; Legge, Gordon E
2017-09-01
The visual span is hypothesized to be a sensory bottleneck on reading speed with crowding thought to be the major sensory factor limiting the size of the visual span. This proposed linkage between crowding, visual span, and reading speed is challenged by the finding that training to read crowded letters reduced crowding but did not improve reading speed (Chung, 2007). Here, we examined two properties of letter-recognition training that may influence the transfer to improved reading: the spatial arrangement of training stimuli and the presence of flankers. Three groups of nine young adults were trained with different configurations of letter stimuli at 10° in the lower visual field: a flanked-local group (flanked letters localized at one position), a flanked-distributed group (flanked letters distributed across different horizontal locations), and an isolated-distributed group (isolated and distributed letters). We found that distributed training, but not the presence of flankers, appears to be necessary for the training benefit to transfer to increased reading speed. Localized training may have biased attention to one specific, small area in the visual field, thereby failing to improve reading. We conclude that the visual span represents a sensory bottleneck on reading, but there may also be an attentional bottleneck. Reducing the impact of crowding can enlarge the visual span and can potentially facilitate reading, but not when adverse attentional bias is present. Our results clarify the association between crowding, visual span, and reading.
Charbonneau, Geneviève; Véronneau, Marie; Boudrias-Fournier, Colin; Lepore, Franco; Collignon, Olivier
2013-10-28
The relative reliability of separate sensory estimates influences the way they are merged into a unified percept. We investigated how eccentricity-related changes in reliability of auditory and visual stimuli influence their integration across the entire frontal space. First, we surprisingly found that despite a strong decrease in auditory and visual unisensory localization abilities in periphery, the redundancy gain resulting from the congruent presentation of audio-visual targets was not affected by stimuli eccentricity. This result therefore contrasts with the common prediction that a reduction in sensory reliability necessarily induces an enhanced integrative gain. Second, we demonstrate that the visual capture of sounds observed with spatially incongruent audio-visual targets (ventriloquist effect) steadily decreases with eccentricity, paralleling a lowering of the relative reliability of unimodal visual over unimodal auditory stimuli in periphery. Moreover, at all eccentricities, the ventriloquist effect positively correlated with a weighted combination of the spatial resolution obtained in unisensory conditions. These findings support and extend the view that the localization of audio-visual stimuli relies on an optimal combination of auditory and visual information according to their respective spatial reliability. All together, these results evidence that the external spatial coordinates of multisensory events relative to an observer's body (e.g., eyes' or head's position) influence how this information is merged, and therefore determine the perceptual outcome.
ERIC Educational Resources Information Center
Chudasama, Yogita; Dalley, Jeffrey W.; Nathwani, Falgyni; Bouger, Pascale; Robbins, Trevor W.
2004-01-01
Two experiments examined the effects of reductions in cortical cholinergic function on performance of a novel task that allowed for the simultaneous assessment of attention to a visual stimulus and memory for that stimulus over a variable delay within the same test session. In the first experiment, infusions of the muscarinic receptor antagonist…
Bringing "Scientific Expeditions" Into the Schools
NASA Technical Reports Server (NTRS)
Watson, Val; Lasinski, T. A. (Technical Monitor)
1995-01-01
Two new technologies, the FASTexpedition and Remote FAST, have been developed that provide remote, 3D, high resolution, dynamic, interactive viewing of scientific data (such as simulations or measurements of fluid dynamics). The FASTexpedition permits one to access scientific data from the World Wide Web, take guided expeditions through the data, and continue with self controlled expeditions through the data. Remote FAST permits collaborators at remote sites to simultaneously view an analysis of scientific data being controlled by one of the collaborators. Control can be transferred between sites. These technologies are now being used for remote collaboration in joint university, industry, and NASA projects in computational fluid dynamics (CFD) and wind tunnel testing. Also, NASA Ames Research Center has initiated a project to make scientific data and guided expeditions through the data available as FASTexpeditions on the World Wide Web for educational purposes. Previously, remote visualization of dynamic data was done using video format (transmitting pixel information) such as video conferencing or MPEG movies on the Internet. The concept for this new technology is to send the raw data (e.g., grids, vectors, and scalars) along with viewing scripts over the Internet and have the pixels generated by a visualization tool running on the viewer's local workstation. The visualization tool that is currently used is FAST (Flow Analysis Software Toolkit). The advantages of this new technology over using video format are: 1. The visual is much higher in resolution (1280x1024 pixels with 24 bits of color) than typical video format transmitted over the network. 2. The form of the visualization can be controlled interactively (because the viewer is interactively controlling the visualization tool running on his workstation). 3. A rich variety of guided expeditions through the data can be included easily. 4. A capability is provided for other sites to see a visual analysis of one site as the analysis is interactively performed. Control of the analysis can be passed from site to site. 5. The scenes can be viewed in 3D using stereo vision. 6. The network bandwidth used for the visualization using this new technology is much smaller than when using video format. (The measured peak bandwidth used was 1 Kbit/sec whereas the measured bandwidth for a small video picture was 500 Kbits/sec.)
Fast 3D Net Expeditions: Tools for Effective Scientific Collaboration on the World Wide Web
NASA Technical Reports Server (NTRS)
Watson, Val; Chancellor, Marisa K. (Technical Monitor)
1996-01-01
Two new technologies, the FASTexpedition and Remote FAST, have been developed that provide remote, 3D (three dimensional), high resolution, dynamic, interactive viewing of scientific data. The FASTexpedition permits one to access scientific data from the World Wide Web, take guided expeditions through the data, and continue with self controlled expeditions through the data. Remote FAST permits collaborators at remote sites to simultaneously view an analysis of scientific data being controlled by one of the collaborators. Control can be transferred between sites. These technologies are now being used for remote collaboration in joint university, industry, and NASA projects. Also, NASA Ames Research Center has initiated a project to make scientific data and guided expeditions through the data available as FASTexpeditions on the World Wide Web for educational purposes. Previously, remote visualization of dynamic data was done using video format (transmitting pixel information) such as video conferencing or MPEG (Motion Picture Expert Group) movies on the Internet. The concept for this new technology is to send the raw data (e.g., grids, vectors, and scalars) along with viewing scripts over the Internet and have the pixels generated by a visualization tool running on the viewer's local workstation. The visualization tool that is currently used is FAST (Flow Analysis Software Toolkit). The advantages of this new technology over using video format are: (1) The visual is much higher in resolution (1280x1024 pixels with 24 bits of color) than typical video format transmitted over the network. (2) The form of the visualization can be controlled interactively (because the viewer is interactively controlling the visualization tool running on his workstation). (3) A rich variety of guided expeditions through the data can be included easily. (4) A capability is provided for other sites to see a visual analysis of one site as the analysis is interactively performed. Control of the analysis can be passed from site to site. (5) The scenes can be viewed in 3D using stereo vision. (6) The network bandwidth for the visualization using this new technology is much smaller than when using video format. (The measured peak bandwidth used was 1 Kbit/sec whereas the measured bandwidth for a small video picture was 500 Kbits/sec.) This talk will illustrate the use of these new technologies and present a proposal for using these technologies to improve science education.
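The bandwidth argument in the two abstracts above can be made concrete with a rough back-of-the-envelope calculation comparing a pixel stream against the script-plus-data approach. The sketch below is only illustrative: the frame rate is an assumption, and only the 1 Kbit/sec and 500 Kbits/sec figures come from the text.

def pixel_stream_bandwidth(width, height, bits_per_pixel, fps, compression=1):
    # Approximate bandwidth (bits/s) needed to ship rendered frames over the network.
    return width * height * bits_per_pixel * fps / compression

raw = pixel_stream_bandwidth(1280, 1024, 24, 10)   # uncompressed full-resolution frames (assumed 10 fps)
small_video = 500e3                                # measured small video picture (from the text)
fast_session = 1e3                                 # measured peak for a scripted FAST session (from the text)
print(f"uncompressed 1280x1024 stream: {raw/1e6:.0f} Mbit/s")
print(f"small video picture:           {small_video/1e3:.0f} Kbit/s")
print(f"FASTexpedition (data+scripts): {fast_session/1e3:.0f} Kbit/s")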
Four types of ensemble coding in data visualizations.
Szafir, Danielle Albers; Haroz, Steve; Gleicher, Michael; Franconeri, Steven
2016-01-01
Ensemble coding supports rapid extraction of visual statistics about distributed visual information. Researchers typically study this ability with the goal of drawing conclusions about how such coding extracts information from natural scenes. Here we argue that a second domain can serve as another strong inspiration for understanding ensemble coding: graphs, maps, and other visual presentations of data. Data visualizations allow observers to leverage their ability to perform visual ensemble statistics on distributions of spatial or featural visual information to estimate actual statistics on data. We survey the types of visual statistical tasks that occur within data visualizations across everyday examples, such as scatterplots, and more specialized images, such as weather maps or depictions of patterns in text. We divide these tasks into four categories: identification of sets of values, summarization across those values, segmentation of collections, and estimation of structure. We point to unanswered questions for each category and give examples of such cross-pollination in the current literature. Increased collaboration between the data visualization and perceptual psychology research communities can inspire new solutions to challenges in visualization while simultaneously exposing unsolved problems in perception research.
Sentinel lymph node biopsy under fluorescent indocyanine green guidance: Initial experience.
Aydoğan, Fatih; Arıkan, Akif Enes; Aytaç, Erman; Velidedeoğlu, Mehmet; Yılmaz, Mehmet Halit; Sager, Muhammet Sait; Çelik, Varol; Uras, Cihan
2016-01-01
Sentinel lymph node biopsy can be applied by using either blue dye or radionuclide method or both in breast cancer. Fluorescent imaging with indocyanine green is a newly defined method. This study evaluates the applicability of sentinel lymph node biopsy via fluorescent indocyanine green. IC-VIEW (Pulsion Medical Systems AG, Munich, Germany) infrared visualization system was used for imaging. Two mL of indocyanine green was injected to visualize sentinel lymph nodes. After injection, subcutaneous lymphatics were traced and sentinel lymph nodes were found with simultaneous imaging. Sentinel lymph nodes were excised under fluorescent light guidance, and excised lymph nodes were examined histopathologically. Patients with sentinel lymph node metastases underwent axillary dissection. Four patients with sentinel lymph node biopsy due to breast cancer were included in the study. Sentinel lymph nodes were visualized with indocyanine green in all patients. The median number of excised sentinel lymph nodes was 2 (2-3). Two patients with lymph node metastasis underwent axillary dissection. No metastasis was detected in lymph nodes other than the sentinel nodes in patients with axillary dissection. There were no complications during or after the operation related to the method. According to our limited experience, sentinel lymph node biopsy under fluorescent indocyanine green guidance, which has an advantage of simultaneous visualization, is technically feasible.
Attention Increases Spike Count Correlations between Visual Cortical Areas.
Ruff, Douglas A; Cohen, Marlene R
2016-07-13
Visual attention, which improves perception of attended locations or objects, has long been known to affect many aspects of the responses of neuronal populations in visual cortex. There are two nonmutually exclusive hypotheses concerning the neuronal mechanisms that underlie these perceptual improvements. The first hypothesis, that attention improves the information encoded by a population of neurons in a particular cortical area, has considerable physiological support. The second hypothesis is that attention improves perception by selectively communicating relevant visual information. This idea has been tested primarily by measuring interactions between neurons on very short timescales, which are mathematically nearly independent of neuronal interactions on longer timescales. We tested the hypothesis that attention changes the way visual information is communicated between cortical areas on longer timescales by recording simultaneously from neurons in primary visual cortex (V1) and the middle temporal area (MT) in rhesus monkeys. We used two independent and complementary approaches. Our correlative experiment showed that attention increases the trial-to-trial response variability that is shared between the two areas. In our causal experiment, we electrically microstimulated V1 and found that attention increased the effect of stimulation on MT responses. Together, our results suggest that attention affects both the way visual stimuli are encoded within a cortical area and the extent to which visual information is communicated between areas on behaviorally relevant timescales. Visual attention dramatically improves the perception of attended stimuli. Attention has long been thought to act by selecting relevant visual information for further processing. It has been hypothesized that this selection is accomplished by increasing communication between neurons that encode attended information in different cortical areas. We recorded simultaneously from neurons in primary visual cortex and the middle temporal area while rhesus monkeys performed an attention task. We found that attention increased shared variability between neurons in the two areas and that attention increased the effect of microstimulation in V1 on the firing rates of MT neurons. Our results provide support for the hypothesis that attention increases communication between neurons in different brain areas on behaviorally relevant timescales. Copyright © 2016 the authors 0270-6474/16/367523-12$15.00/0.
Attention Increases Spike Count Correlations between Visual Cortical Areas
Cohen, Marlene R.
2016-01-01
Visual attention, which improves perception of attended locations or objects, has long been known to affect many aspects of the responses of neuronal populations in visual cortex. There are two nonmutually exclusive hypotheses concerning the neuronal mechanisms that underlie these perceptual improvements. The first hypothesis, that attention improves the information encoded by a population of neurons in a particular cortical area, has considerable physiological support. The second hypothesis is that attention improves perception by selectively communicating relevant visual information. This idea has been tested primarily by measuring interactions between neurons on very short timescales, which are mathematically nearly independent of neuronal interactions on longer timescales. We tested the hypothesis that attention changes the way visual information is communicated between cortical areas on longer timescales by recording simultaneously from neurons in primary visual cortex (V1) and the middle temporal area (MT) in rhesus monkeys. We used two independent and complementary approaches. Our correlative experiment showed that attention increases the trial-to-trial response variability that is shared between the two areas. In our causal experiment, we electrically microstimulated V1 and found that attention increased the effect of stimulation on MT responses. Together, our results suggest that attention affects both the way visual stimuli are encoded within a cortical area and the extent to which visual information is communicated between areas on behaviorally relevant timescales. SIGNIFICANCE STATEMENT Visual attention dramatically improves the perception of attended stimuli. Attention has long been thought to act by selecting relevant visual information for further processing. It has been hypothesized that this selection is accomplished by increasing communication between neurons that encode attended information in different cortical areas. We recorded simultaneously from neurons in primary visual cortex and the middle temporal area while rhesus monkeys performed an attention task. We found that attention increased shared variability between neurons in the two areas and that attention increased the effect of microstimulation in V1 on the firing rates of MT neurons. Our results provide support for the hypothesis that attention increases communication between neurons in different brain areas on behaviorally relevant timescales. PMID:27413161
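The shared trial-to-trial variability described in the two records above is conventionally quantified as a spike-count ("noise") correlation between simultaneously recorded units. The sketch below shows that computation on synthetic data; it is not the authors' analysis pipeline, and the firing rates and the shared fluctuation are assumptions for illustration.

import numpy as np

def spike_count_correlation(counts_a, counts_b):
    # Pearson correlation of trial-by-trial spike counts for two simultaneously
    # recorded units under identical stimuli ("noise correlation").
    return np.corrcoef(counts_a, counts_b)[0, 1]

rng = np.random.default_rng(0)
shared = rng.normal(size=200)                         # common across-trial fluctuation
v1 = rng.poisson(np.maximum(10 + 2 * shared, 0))      # illustrative V1 spike counts
mt = rng.poisson(np.maximum(15 + 2 * shared, 0))      # illustrative MT spike counts
print(f"spike-count correlation r_sc = {spike_count_correlation(v1, mt):.2f}")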
Cocchi, Luca; Sale, Martin V; L Gollo, Leonardo; Bell, Peter T; Nguyen, Vinh T; Zalesky, Andrew; Breakspear, Michael; Mattingley, Jason B
2016-01-01
Within the primate visual system, areas at lower levels of the cortical hierarchy process basic visual features, whereas those at higher levels, such as the frontal eye fields (FEF), are thought to modulate sensory processes via feedback connections. Despite these functional exchanges during perception, there is little shared activity between early and late visual regions at rest. How interactions emerge between regions encompassing distinct levels of the visual hierarchy remains unknown. Here we combined neuroimaging, non-invasive cortical stimulation and computational modelling to characterize changes in functional interactions across widespread neural networks before and after local inhibition of primary visual cortex or FEF. We found that stimulation of early visual cortex selectively increased feedforward interactions with FEF and extrastriate visual areas, whereas identical stimulation of the FEF decreased feedback interactions with early visual areas. Computational modelling suggests that these opposing effects reflect a fast-slow timescale hierarchy from sensory to association areas. DOI: http://dx.doi.org/10.7554/eLife.15252.001 PMID:27596931
Cocchi, Luca; Sale, Martin V; L Gollo, Leonardo; Bell, Peter T; Nguyen, Vinh T; Zalesky, Andrew; Breakspear, Michael; Mattingley, Jason B
2016-09-06
Within the primate visual system, areas at lower levels of the cortical hierarchy process basic visual features, whereas those at higher levels, such as the frontal eye fields (FEF), are thought to modulate sensory processes via feedback connections. Despite these functional exchanges during perception, there is little shared activity between early and late visual regions at rest. How interactions emerge between regions encompassing distinct levels of the visual hierarchy remains unknown. Here we combined neuroimaging, non-invasive cortical stimulation and computational modelling to characterize changes in functional interactions across widespread neural networks before and after local inhibition of primary visual cortex or FEF. We found that stimulation of early visual cortex selectively increased feedforward interactions with FEF and extrastriate visual areas, whereas identical stimulation of the FEF decreased feedback interactions with early visual areas. Computational modelling suggests that these opposing effects reflect a fast-slow timescale hierarchy from sensory to association areas.
Robotic assisted andrological surgery
Parekattil, Sijo J; Gudeloglu, Ahmet
2013-01-01
The introduction of the operative microscope for andrological surgery in the 1970s provided enhanced magnification and accuracy, unparalleled by any previous visual loupe or magnification techniques. This technology revolutionized techniques for microsurgery in andrology. Today, we may be on the verge of a second such revolution by the incorporation of robotic assisted platforms for microsurgery in andrology. Robotic assisted microsurgery is being utilized to a greater degree in andrology and a number of other microsurgical fields, such as ophthalmology, hand surgery, plastics and reconstructive surgery. The potential advantages of robotic assisted platforms include elimination of tremor, improved stability, surgeon ergonomics, scalability of motion, multi-input visual interfaces with up to three simultaneous visual views, enhanced magnification, and the ability to manipulate three surgical instruments and cameras simultaneously. This review paper begins with the historical development of robotic microsurgery. It then provides an in-depth presentation of the technique and outcomes of common robotic microsurgical andrological procedures, such as vasectomy reversal, subinguinal varicocelectomy, targeted spermatic cord denervation (for chronic orchialgia) and robotic assisted microsurgical testicular sperm extraction (microTESE). PMID:23241637
Leivo, Tiina; Sarikkola, Anna-Ulrika; Uusitalo, Risto J; Hellstedt, Timo; Ess, Sirje-Linda; Kivelä, Tero
2011-06-01
To present an economic-analysis comparison of simultaneous and sequential bilateral cataract surgery. Helsinki University Eye Hospital, Helsinki, Finland. Economic analysis. Effects were estimated from data in a study in which patients were randomized to have bilateral cataract surgery on the same day (study group) or sequentially (control group). The main clinical outcomes were corrected distance visual acuity, refraction, complications, Visual Function Index-7 (VF-7) scores, and patient-rated satisfaction with vision. Health-care costs of surgeries and preoperative and postoperative visits were estimated, including the cost of staff, equipment, material, floor space, overhead, and complications. The data were obtained from staff measurements, questionnaires, internal hospital records, and accountancy. Non-health-care costs of travel, home care, and time were estimated based on questionnaires from a random subset of patients. The main economic outcome measures were cost per VF-7 score unit change and cost per patient in simultaneous versus sequential surgery. The study comprised 520 patients (241 patients included non-health-care and time cost analyses). Surgical outcomes and patient satisfaction were similar in both groups. Simultaneous cataract surgery saved 449 Euros (€) per patient in health-care costs and €739 when travel and paid home-care costs were included. The savings added up to €849 per patient when the cost of lost working time was included. Compared with sequential bilateral cataract surgery, simultaneous bilateral cataract surgery provided comparable clinical outcomes with substantial savings in health-care and non-health-care-related costs. No author has a financial or proprietary interest in any material or method mentioned. Copyright © 2011 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.
Mental workload while driving: effects on visual search, discrimination, and decision making.
Recarte, Miguel A; Nunes, Luis M
2003-06-01
The effects of mental workload on visual search and decision making were studied in real traffic conditions with 12 participants who drove an instrumented car. Mental workload was manipulated by having participants perform several mental tasks while driving. A simultaneous visual-detection and discrimination test was used as performance criteria. Mental tasks produced spatial gaze concentration and visual-detection impairment, although no tunnel vision occurred. According to ocular behavior analysis, this impairment was due to late detection and poor identification more than to response selection. Verbal acquisition tasks were innocuous compared with production tasks, and complex conversations, whether by phone or with a passenger, are dangerous for road safety.
Cullington, H E; Bele, D; Brinton, J C; Cooper, S; Daft, M; Harding, J; Hatton, N; Humphries, J; Lutman, M E; Maddocks, J; Maggs, J; Millward, K; O'Donoghue, G; Patel, S; Rajput, K; Salmon, V; Sear, T; Speers, A; Wheeler, A; Wilson, K
2017-01-01
To assess longitudinal outcomes in a large and varied population of children receiving bilateral cochlear implants both simultaneously and sequentially. This observational non-randomized service evaluation collected localization and speech recognition in noise data from simultaneously and sequentially implanted children at four time points: before bilateral cochlear implants or before the sequential implant, 1 year, 2 years, and 3 years after bilateral implants. No inclusion criteria were applied, so children with additional difficulties, cochleovestibular anomalies, varying educational placements, 23 different home languages, a full range of outcomes and varying device use were included. 1001 children were included: 465 implanted simultaneously and 536 sequentially, representing just over 50% of children receiving bilateral implants in the UK in this period. In simultaneously implanted children the median age at implant was 2.1 years; 7% were implanted at less than 1 year of age. In sequentially implanted children the interval between implants ranged from 0.1 to 14.5 years. Children with simultaneous bilateral implants localized better than those with one implant. On average children receiving a second (sequential) cochlear implant showed improvement in localization and listening in background noise after 1 year of bilateral listening. The interval between sequential implants had no effect on localization improvement although a smaller interval gave more improvement in speech recognition in noise. Children with sequential implants on average were able to use their second device to obtain spatial release from masking after 2 years of bilateral listening. Although ranges were large, bilateral cochlear implants on average offered an improvement in localization and speech perception in noise over unilateral implants. These data represent the diverse population of children with bilateral cochlear implants in the UK from 2010 to 2012. Predictions of outcomes for individual patients are not possible from these data. However, there are no indications to preclude children with long inter-implant interval having the chance of a second cochlear implant.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berger, Andrew J., E-mail: berger.156@osu.edu; Page, Michael R.; Bhallamudi, Vidya P.
2015-10-05
Using simultaneous magnetic force microscopy and transport measurements of a graphene spin valve, we correlate the non-local spin signal with the magnetization of the device electrodes. The imaged magnetization states corroborate the influence of each electrode within a one-dimensional spin transport model and provide evidence linking domain wall pinning to additional features in the transport signal.
Covariance Recovery from a Square Root Information Matrix for Data Association
2009-07-02
association is one of the core problems of simultaneous localization and mapping (SLAM), and it requires knowledge about the uncertainties of the...back-substitution as well as efficient access to marginal covariances, which is described next. 2.2. Recovering Marginal Covariances Knowledge of the
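The snippet above concerns recovering covariances from the square-root information matrix maintained by smoothing-based SLAM back ends. A minimal numpy/scipy sketch of the underlying identity, Sigma = (R^T R)^{-1} obtained by triangular back-substitution, is given below. The paper recovers selected marginal covariances far more efficiently than this dense version, so treat the code only as an illustration of the relationship, not as the paper's algorithm.

import numpy as np
from scipy.linalg import solve_triangular

def covariance_from_sqrt_information(R):
    # R is the upper-triangular square-root information factor (Lambda = R^T R),
    # e.g. from QR factorization of the measurement Jacobian. The covariance is
    # Sigma = R^{-1} R^{-T}, computed here with a triangular back-substitution.
    R_inv = solve_triangular(R, np.eye(R.shape[0]), lower=False)
    return R_inv @ R_inv.T

def marginal_covariance(R, idx):
    # Marginal covariance of the variables listed in idx (a block of Sigma),
    # as used for gating candidate matches in data association.
    Sigma = covariance_from_sqrt_information(R)
    return Sigma[np.ix_(idx, idx)]

rng = np.random.default_rng(1)
A = rng.normal(size=(10, 6))              # stand-in measurement Jacobian
R = np.linalg.qr(A, mode="r")             # upper-triangular factor
print(marginal_covariance(R, [0, 1]))     # 2x2 marginal for the first two variables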
Langley, Keith; Anderson, Stephen J
2010-08-06
To represent the local orientation and energy of a 1-D image signal, many models of early visual processing employ bandpass quadrature filters, formed by combining the original signal with its Hilbert transform. However, representations capable of estimating an image signal's 2-D phase have been largely ignored. Here, we consider 2-D phase representations using a method based upon the Riesz transform. For spatial images there exist two Riesz transformed signals and one original signal from which orientation, phase and energy may be represented as a vector in 3-D signal space. We show that these image properties may be represented by a Singular Value Decomposition (SVD) of the higher-order derivatives of the original and the Riesz transformed signals. We further show that the expected responses of even and odd symmetric filters from the Riesz transform may be represented by a single signal autocorrelation function, which is beneficial in simplifying Bayesian computations for spatial orientation. Importantly, the Riesz transform allows one to weight linearly across orientation using both symmetric and asymmetric filters to account for some perceptual phase distortions observed in image signals - notably one's perception of edge structure within plaid patterns whose component gratings are either equal or unequal in contrast. Finally, exploiting the benefits that arise from the Riesz definition of local energy as a scalar quantity, we demonstrate the utility of Riesz signal representations in estimating the spatial orientation of second-order image signals. We conclude that the Riesz transform may be employed as a general tool for 2-D visual pattern recognition by its virtue of representing phase, orientation and energy as orthogonal signal quantities.
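For readers unfamiliar with the Riesz transform used in the abstract above, the following sketch shows the standard FFT-domain construction of the two Riesz components and the resulting local energy, orientation and phase (the monogenic signal). It is generic background rather than the authors' SVD-based estimator, and in practice the input would first be bandpass filtered.

import numpy as np

def riesz_transform(img):
    # Frequency-domain Riesz transfer functions are -i*wx/|w| and -i*wy/|w|.
    h, w = img.shape
    wy = np.fft.fftfreq(h)[:, None]
    wx = np.fft.fftfreq(w)[None, :]
    norm = np.sqrt(wx**2 + wy**2)
    norm[0, 0] = 1.0                                  # avoid division by zero at DC
    F = np.fft.fft2(img)
    r1 = np.real(np.fft.ifft2(-1j * wx / norm * F))
    r2 = np.real(np.fft.ifft2(-1j * wy / norm * F))
    return r1, r2

def monogenic_features(img):
    # The original signal plus its two Riesz components span a 3-D signal space
    # from which scalar local energy, orientation and phase are read out.
    r1, r2 = riesz_transform(img)
    energy = np.sqrt(img**2 + r1**2 + r2**2)
    orientation = np.arctan2(r2, r1)
    phase = np.arctan2(np.sqrt(r1**2 + r2**2), img)
    return energy, orientation, phase

y, x = np.mgrid[0:128, 0:128]
grating = np.cos(2 * np.pi * (0.05 * x + 0.02 * y))   # synthetic oriented grating
energy, orientation, phase = monogenic_features(grating)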
NASA Astrophysics Data System (ADS)
Sánchez-Lavega, A.; Chen-Chen, H.; Ordoñez-Etxeberria, I.; Hueso, R.; del Río-Gaztelurrutia, T.; Garro, A.; Cardesín-Moinelo, A.; Titov, D.; Wood, S.
2018-01-01
The Visual Monitoring Camera (VMC) onboard the Mars Express (MEx) spacecraft is a simple camera aimed to monitor the release of the Beagle-2 lander on Mars Express and later used for public outreach. Here, we employ VMC as a scientific instrument to study and characterize high altitude aerosols events (dust and condensates) observed at the Martian limb. More than 21,000 images taken between 2007 and 2016 have been examined to detect and characterize elevated layers of dust in the limb, dust storms and clouds. We report a total of 18 events for which we give their main properties (areographic location, maximum altitude, limb projected size, Martian solar longitude and local time of occurrence). The top altitudes of these phenomena ranged from 40 to 85 km and their horizontal extent at the limb ranged from 120 to 2000 km. They mostly occurred at Equatorial and Tropical latitudes (between ∼30°N and 30°S) at morning and afternoon local times in the southern fall and northern winter seasons. None of them are related to the orographic clouds that typically form around volcanoes. Three of these events have been studied in detail using simultaneous images taken by the MARCI instrument onboard Mars Reconnaissance Orbiter (MRO) and studying the properties of the atmosphere using the predictions from the Mars Climate Database (MCD) General Circulation Model. This has allowed us to determine the three-dimensional structure and nature of these events, with one of them being a regional dust storm and the two others water ice clouds. Analyses based on MCD and/or MARCI images for the other cases studied indicate that the rest of the events correspond most probably to water ice clouds.
Rapid dynamic R1/R2*/temperature assessment: a method with potential for monitoring drug delivery.
Lorenzato, Cyril; Oerlemans, Chris; Cernicanu, Alexandru; Ries, Mario; Denis de Senneville, Baudouin; Moonen, Chrit; Bos, Clemens
2014-11-01
Local drug delivery by hyperthermia-induced drug release from thermosensitive liposomes (TSLs) may reduce the systemic toxicity of chemotherapy, whilst maintaining or increasing its efficacy. Relaxivity contrast agents can be co-encapsulated with the drug to allow the visualization of the presence of liposomes, by means of R2*, as well as the co-release of the contrast agent and the drug, by means of R1, on heating. Here, the mathematical method used to extract both R2* and R1 from a fast dynamic multi-echo spoiled gradient echo (ME-SPGR) is presented and analyzed. Finally, this method is used to monitor such release events. R2* was obtained from a fit to the ME-SPGR data. Absolute R1 was calculated from the signal magnitude changes corrected for the apparent proton density changes and a baseline Look-Locker R1 map. The method was used to monitor nearly homogeneous water bath heating and local focused ultrasound heating of muscle tissue, and to visualize the release of a gadolinium chelate from TSLs in vitro. R2*, R1 and temperature maps were measured with a 5-s temporal resolution. Both R2* and R1 measured were found to change with temperature. The dynamic R1 measurements after heating agreed with the Look-Locker R1 values if changes in equilibrium magnetization with temperature were considered. Release of gadolinium from TSLs was detected by an R1 increase near the phase transition temperature, as well as a shallow R2* increase. Simultaneous temperature, R2* and R1 mapping is feasible in real time and has the potential for use in image-guided drug delivery studies. Copyright © 2014 John Wiley & Sons, Ltd.
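The R2* part of the method above reduces, per voxel, to fitting the multi-echo magnitudes to S(TE) = S0 exp(-R2* x TE). The sketch below does this with a log-linear least-squares fit on synthetic data; the echo times are assumptions, and the dynamic R1 step (which additionally needs the baseline Look-Locker map and the proton-density correction) is not reproduced here.

import numpy as np

def fit_r2star(echo_times_s, signals):
    # Log-linear fit of the mono-exponential decay: log S = log S0 - R2* * TE.
    te = np.asarray(echo_times_s, dtype=float)
    log_s = np.log(np.asarray(signals, dtype=float))
    slope, intercept = np.polyfit(te, log_s, 1)
    return -slope, np.exp(intercept)              # (R2* in 1/s, S0)

te = np.array([2, 5, 8, 11, 14]) * 1e-3           # assumed echo times (s)
signal = 1000 * np.exp(-40.0 * te)                # synthetic voxel with R2* = 40 1/s
print(fit_r2star(te, signal))                     # approximately (40.0, 1000.0)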
Functional MRI during Hippocampal Deep Brain Stimulation in the Healthy Rat Brain.
Van Den Berge, Nathalie; Vanhove, Christian; Descamps, Benedicte; Dauwe, Ine; van Mierlo, Pieter; Vonck, Kristl; Keereman, Vincent; Raedt, Robrecht; Boon, Paul; Van Holen, Roel
2015-01-01
Deep Brain Stimulation (DBS) is a promising treatment for neurological and psychiatric disorders. The mechanism of action and the effects of electrical fields administered to the brain by means of an electrode remain to be elucidated. The effects of DBS have been investigated primarily by electrophysiological and neurochemical studies, which lack the ability to investigate DBS-related responses on a whole-brain scale. Visualization of whole-brain effects of DBS requires functional imaging techniques such as functional Magnetic Resonance Imaging (fMRI), which reflects changes in blood oxygen level dependent (BOLD) responses throughout the entire brain volume. In order to visualize BOLD responses induced by DBS, we have developed an MRI-compatible electrode and an acquisition protocol to perform DBS during BOLD fMRI. In this study, we investigate whether DBS during fMRI is valuable to study local and whole-brain effects of hippocampal DBS and to investigate the changes induced by different stimulation intensities. Seven rats were stereotactically implanted with a custom-made MRI-compatible DBS-electrode in the right hippocampus. High frequency Poisson distributed stimulation was applied using a block-design paradigm. Data were processed by means of Independent Component Analysis. Clusters were considered significant when p-values were <0.05 after correction for multiple comparisons. Our data indicate that real-time hippocampal DBS evokes a bilateral BOLD response in hippocampal and other mesolimbic structures, depending on the applied stimulation intensity. We conclude that simultaneous DBS and fMRI can be used to detect local and whole-brain responses to circuit activation with different stimulation intensities, making this technique potentially powerful for exploration of cerebral changes in response to DBS for both preclinical and clinical DBS.
Functional MRI during Hippocampal Deep Brain Stimulation in the Healthy Rat Brain
Van Den Berge, Nathalie; Vanhove, Christian; Descamps, Benedicte; Dauwe, Ine; van Mierlo, Pieter; Vonck, Kristl; Keereman, Vincent; Raedt, Robrecht; Boon, Paul; Van Holen, Roel
2015-01-01
Deep Brain Stimulation (DBS) is a promising treatment for neurological and psychiatric disorders. The mechanism of action and the effects of electrical fields administered to the brain by means of an electrode remain to be elucidated. The effects of DBS have been investigated primarily by electrophysiological and neurochemical studies, which lack the ability to investigate DBS-related responses on a whole-brain scale. Visualization of whole-brain effects of DBS requires functional imaging techniques such as functional Magnetic Resonance Imaging (fMRI), which reflects changes in blood oxygen level dependent (BOLD) responses throughout the entire brain volume. In order to visualize BOLD responses induced by DBS, we have developed an MRI-compatible electrode and an acquisition protocol to perform DBS during BOLD fMRI. In this study, we investigate whether DBS during fMRI is valuable to study local and whole-brain effects of hippocampal DBS and to investigate the changes induced by different stimulation intensities. Seven rats were stereotactically implanted with a custom-made MRI-compatible DBS-electrode in the right hippocampus. High frequency Poisson distributed stimulation was applied using a block-design paradigm. Data were processed by means of Independent Component Analysis. Clusters were considered significant when p-values were <0.05 after correction for multiple comparisons. Our data indicate that real-time hippocampal DBS evokes a bilateral BOLD response in hippocampal and other mesolimbic structures, depending on the applied stimulation intensity. We conclude that simultaneous DBS and fMRI can be used to detect local and whole-brain responses to circuit activation with different stimulation intensities, making this technique potentially powerful for exploration of cerebral changes in response to DBS for both preclinical and clinical DBS. PMID:26193653
Immunocytochemistry and neurobiology.
Cuello, A C; Priestley, J V; Sofroniew, M V
1983-10-01
Immunocytochemistry enables the localization of transmitter-related antigens in tissue sections at either the light microscopic or the electron microscopic level. In the case of neuropeptides and certain transmitters (e.g. serotonin) it has been possible to produce antibodies directed against the putative transmitter itself. In other cases it has not been possible to produce useful antibodies against transmitters but antibodies have been raised against enzymes involved in transmitter metabolism (e.g. tyrosine hydroxylase, glutamic acid decarboxylase) and these antibodies are suitable markers for transmitter systems. Successful immunostaining with an antibody depends on a number of factors, two of the most important being the fixation of the antigen in the tissue and the visualization of the primary antibody once it has bound to the antigen. Techniques available for the visualization of bound primary antibody include the indirect-labelled immunofluorescence procedure and the unlabelled peroxidase-antiperoxidase (PAP) procedure. Direct-labelled immunocytochemistry is not now widely used but is likely to become increasingly important with the introduction of monoclonal antibodies and the development of techniques for the simultaneous localization of multiple antigens. Monoclonal antibody procedures also allow the production of antibodies against antigens which are difficult to purify such as certain transmitter markers (e.g. choline acetyltransferase) and constituents of neuronal membranes. Immunocytochemistry allows the production of detailed maps of the distribution of putative transmitters in the nervous system and in combination with tract tracing procedures is being used increasingly to identify transmitters in neuronal circuits. It has also been important in establishing the transmitter status of various neuroactive compounds in single neurones. Immunocytochemistry can be carried out on post-mortem samples and is providing information on transmitter distribution in normal and abnormal human brain.
Higher levels of depression are associated with reduced global bias in visual processing.
de Fockert, Jan W; Cooper, Andrew
2014-04-01
Negative moods have been associated with a tendency to prioritise local details in visual processing. The current study investigated the relation between depression and visual processing using the Navon task, a standard task of local and global processing. In the Navon task, global stimuli are presented that are made up of many local parts, and the participants are instructed to report the identity of either a global or a local target shape. Participants with a low self-reported level of depression showed evidence of the expected global processing bias, and were significantly faster at responding to the global, compared with the local level. By contrast, no such difference was observed in participants with high levels of depression. The reduction of the global bias associated with high levels of depression was only observed in the overall speed of responses to global (versus local) targets, and not in the level of interference produced by the global (versus local) distractors. These results are in line with recent findings of a dissociation between local/global processing bias and interference from local/global distractors, and support the claim that depression is associated with a reduction in the tendency to prioritise global-level processing.
Violation of local realism with freedom of choice
Scheidl, Thomas; Ursin, Rupert; Kofler, Johannes; Ramelow, Sven; Ma, Xiao-Song; Herbst, Thomas; Ratschbacher, Lothar; Fedrizzi, Alessandro; Langford, Nathan K.; Jennewein, Thomas; Zeilinger, Anton
2010-01-01
Bell’s theorem shows that local realistic theories place strong restrictions on observable correlations between different systems, giving rise to Bell’s inequality which can be violated in experiments using entangled quantum states. Bell’s theorem is based on the assumptions of realism, locality, and the freedom to choose between measurement settings. In experimental tests, “loopholes” arise which allow observed violations to still be explained by local realistic theories. Violating Bell’s inequality while simultaneously closing all such loopholes is one of the most significant still open challenges in fundamental physics today. In this paper, we present an experiment that violates Bell’s inequality while simultaneously closing the locality loophole and addressing the freedom-of-choice loophole, also closing the latter within a reasonable set of assumptions. We also explain that the locality and freedom-of-choice loopholes can be closed only within nondeterminism, i.e., in the context of stochastic local realism. PMID:21041665
Violation of local realism with freedom of choice.
Scheidl, Thomas; Ursin, Rupert; Kofler, Johannes; Ramelow, Sven; Ma, Xiao-Song; Herbst, Thomas; Ratschbacher, Lothar; Fedrizzi, Alessandro; Langford, Nathan K; Jennewein, Thomas; Zeilinger, Anton
2010-11-16
Bell's theorem shows that local realistic theories place strong restrictions on observable correlations between different systems, giving rise to Bell's inequality which can be violated in experiments using entangled quantum states. Bell's theorem is based on the assumptions of realism, locality, and the freedom to choose between measurement settings. In experimental tests, "loopholes" arise which allow observed violations to still be explained by local realistic theories. Violating Bell's inequality while simultaneously closing all such loopholes is one of the most significant still open challenges in fundamental physics today. In this paper, we present an experiment that violates Bell's inequality while simultaneously closing the locality loophole and addressing the freedom-of-choice loophole, also closing the latter within a reasonable set of assumptions. We also explain that the locality and freedom-of-choice loopholes can be closed only within nondeterminism, i.e., in the context of stochastic local realism.
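As background to the two records above: photonic Bell tests of this kind typically use the CHSH form of Bell's inequality, shown below. The abstracts do not state which variant was violated, so the expression is included only as a reminder of what "violating Bell's inequality" means quantitatively.

S = \left| E(a,b) - E(a,b') + E(a',b) + E(a',b') \right| \le 2 \quad \text{(any local realistic theory)}, \qquad S_{\mathrm{QM}}^{\max} = 2\sqrt{2},

where E(a,b) is the correlation of measurement outcomes for analyzer settings a and b; suitably entangled photon pairs can reach the quantum bound and thereby violate the inequality.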
Epicenters of dynamic connectivity in the adaptation of the ventral visual system.
Prčkovska, Vesna; Huijbers, Willem; Schultz, Aaron; Ortiz-Teran, Laura; Peña-Gomez, Cleofe; Villoslada, Pablo; Johnson, Keith; Sperling, Reisa; Sepulcre, Jorge
2017-04-01
Neuronal responses adapt to familiar and repeated sensory stimuli. Enhanced synchrony across wide brain systems has been postulated as a potential mechanism for this adaptation phenomenon. Here, we used recently developed graph theory methods to investigate hidden connectivity features of dynamic synchrony changes during a visual repetition paradigm. Particularly, we focused on strength connectivity changes occurring at local and distant brain neighborhoods. We found that connectivity reorganization in visual modal cortex, such as local suppressed connectivity in primary visual areas and distant suppressed connectivity in fusiform areas, is accompanied by enhanced local and distant connectivity in higher cognitive processing areas in multimodal and association cortex. Moreover, we found a shift of the dynamic functional connections from primary-visual-fusiform to primary-multimodal/association cortex. These findings suggest that repetition-suppression is made possible by reorganization of functional connectivity that enables communication between low- and high-order areas. Hum Brain Mapp 38:1965-1976, 2017. © 2017 Wiley Periodicals, Inc.
Visualizing nD Point Clouds as Topological Landscape Profiles to Guide Local Data Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oesterling, Patrick; Heine, Christian; Weber, Gunther H.
2012-05-04
Analyzing high-dimensional point clouds is a classical challenge in visual analytics. Traditional techniques, such as projections or axis-based techniques, suffer from projection artifacts, occlusion, and visual complexity. We propose to split data analysis into two parts to address these shortcomings. First, a structural overview phase abstracts data by its density distribution. This phase performs topological analysis to support accurate and non-overlapping presentation of the high-dimensional cluster structure as a topological landscape profile. Utilizing a landscape metaphor, it presents clusters and their nesting as hills whose height, width, and shape reflect cluster coherence, size, and stability, respectively. A second local analysis phase utilizes this global structural knowledge to select individual clusters or point sets for further, localized data analysis. Focusing on structural entities significantly reduces visual clutter in established geometric visualizations and permits a clearer, more thorough data analysis. In conclusion, this analysis complements the global topological perspective and enables the user to study subspaces or geometric properties, such as shape.
On the Visual Input Driving Human Smooth-Pursuit Eye Movements
NASA Technical Reports Server (NTRS)
Stone, Leland S.; Beutter, Brent R.; Lorenceau, Jean
1996-01-01
Current computational models of smooth-pursuit eye movements assume that the primary visual input is local retinal-image motion (often referred to as retinal slip). However, we show that humans can pursue object motion with considerable accuracy, even in the presence of conflicting local image motion. This finding indicates that the visual cortical area(s) controlling pursuit must be able to perform a spatio-temporal integration of local image motion into a signal related to object motion. We also provide evidence that the object-motion signal that drives pursuit is related to the signal that supports perception. We conclude that current models of pursuit should be modified to include a visual input that encodes perceived object motion and not merely retinal image motion. Finally, our findings suggest that the measurement of eye movements can be used to monitor visual perception, with particular value in applied settings as this non-intrusive approach would not require interrupting ongoing work or training.
Effects of auditory and visual modalities in recall of words.
Gadzella, B M; Whitehead, D A
1975-02-01
Ten experimental conditions were used to study the effects of auditory and visual (printed words, uncolored and colored pictures) modalities and their various combinations with college students. A recall paradigm was employed in which subjects responded in a written test. Analysis of data showed the auditory modality was superior to visual (pictures) ones but was not significantly different from visual (printed words) modality. In visual modalities, printed words were superior to colored pictures. Generally, conditions with multiple modes of representation of stimuli were significantly higher than for conditions with single modes. Multiple modalities, consisting of two or three modes, did not differ significantly from each other. It was concluded that any two modalities of the stimuli presented simultaneously were just as effective as three in recall of stimulus words.
Marschark, Marc; Pelz, Jeff B.; Convertino, Carol; Sapere, Patricia; Arndt, Mary Ellen; Seewagen, Rosemarie
2006-01-01
This study examined visual information processing and learning in classrooms including both deaf and hearing students. Of particular interest were the effects on deaf students’ learning of live (three-dimensional) versus video-recorded (two-dimensional) sign language interpreting and the visual attention strategies of more and less experienced deaf signers exposed to simultaneous, multiple sources of visual information. Results from three experiments consistently indicated no differences in learning between three-dimensional and two-dimensional presentations among hearing or deaf students. Analyses of students’ allocation of visual attention and the influence of various demographic and experimental variables suggested considerable flexibility in deaf students’ receptive communication skills. Nevertheless, the findings also revealed a robust advantage in learning in favor of hearing students. PMID:16628250
NASA Technical Reports Server (NTRS)
Zacharias, G. L.; Young, L. R.
1981-01-01
Measurements are made of manual control performance in the closed-loop task of nulling perceived self-rotation velocity about an earth-vertical axis. Self-velocity estimation is modeled as a function of the simultaneous presentation of vestibular and peripheral visual field motion cues. Based on measured low-frequency operator behavior in three visual field environments, a parallel channel linear model is proposed which has separate visual and vestibular pathways summing in a complementary manner. A dual-input describing function analysis supports the complementary model; vestibular cues dominate sensation at higher frequencies. The describing function model is extended by the proposal of a nonlinear cue conflict model, in which cue weighting depends on the level of agreement between visual and vestibular cues.
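The parallel-channel model described above is essentially a complementary filter: the visual (peripheral field) channel is low-pass filtered, the vestibular channel high-pass filtered at the same cutoff, and the two sum to an estimate that follows vision at low frequencies and the vestibular cue at high frequencies. The sketch below illustrates that idea; the first-order filters and the 0.1 Hz cutoff are assumptions for illustration, not the fitted parameters of the paper.

import numpy as np
from scipy.signal import butter, lfilter

def complementary_velocity_estimate(visual_cue, vestibular_cue, fs, f_c=0.1):
    # Low-pass the visual-field motion cue, high-pass the vestibular cue at the
    # same cutoff; their sum passes all frequencies, with vestibular information
    # dominating at high frequency and visual information at low frequency.
    b_lo, a_lo = butter(1, f_c, btype="low", fs=fs)
    b_hi, a_hi = butter(1, f_c, btype="high", fs=fs)
    return lfilter(b_lo, a_lo, visual_cue) + lfilter(b_hi, a_hi, vestibular_cue)

fs = 100.0                                        # assumed sampling rate (Hz)
t = np.arange(0.0, 60.0, 1.0 / fs)
true_velocity = np.sin(2 * np.pi * 0.05 * t)      # slow earth-vertical yaw oscillation
estimate = complementary_velocity_estimate(true_velocity, true_velocity, fs)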
Hiding the Disk and Network Latency of Out-of-Core Visualization
NASA Technical Reports Server (NTRS)
Ellsworth, David
2001-01-01
This paper describes an algorithm that improves the performance of application-controlled demand paging for out-of-core visualization by hiding the latency of reading data from both local disks or disks on remote servers. The performance improvements come from better overlapping the computation with the page reading process, and by performing multiple page reads in parallel. The paper includes measurements that show that the new multithreaded paging algorithm decreases the time needed to compute visualizations by one third when using one processor and reading data from local disk. The time needed when using one processor and reading data from remote disk decreased by two thirds. Visualization runs using data from remote disk actually ran faster than ones using data from local disk because the remote runs were able to make use of the remote server's high performance disk array.
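The latency-hiding strategy described above, overlapping computation with reads and issuing several reads in parallel, can be sketched in a few lines with a thread pool. This is not the paper's paging code: the page size, file layout and lookahead depth are assumptions, and the real system pages data on demand rather than strictly in order.

from concurrent.futures import ThreadPoolExecutor

PAGE_SIZE = 1 << 20  # assumed 1 MiB pages

def read_page(path, page_index):
    # One blocking read of a fixed-size page from a local or remote-mounted file.
    with open(path, "rb") as f:
        f.seek(page_index * PAGE_SIZE)
        return f.read(PAGE_SIZE)

def visualize_out_of_core(path, n_pages, process_page, lookahead=4):
    # Keep up to `lookahead` page reads in flight so that computation on the
    # current page overlaps the I/O for upcoming pages.
    with ThreadPoolExecutor(max_workers=lookahead) as pool:
        pending = {i: pool.submit(read_page, path, i) for i in range(min(lookahead, n_pages))}
        for i in range(n_pages):
            data = pending.pop(i).result()         # blocks only if the read is still running
            nxt = i + lookahead
            if nxt < n_pages:
                pending[nxt] = pool.submit(read_page, path, nxt)
            process_page(i, data)                  # computation overlaps the queued reads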
CerebralWeb: a Cytoscape.js plug-in to visualize networks stratified by subcellular localization.
Frias, Silvia; Bryan, Kenneth; Brinkman, Fiona S L; Lynn, David J
2015-01-01
CerebralWeb is a light-weight JavaScript plug-in that extends Cytoscape.js to enable fast and interactive visualization of molecular interaction networks stratified based on subcellular localization or other user-supplied annotation. The application is designed to be easily integrated into any website and is configurable to support customized network visualization. CerebralWeb also supports the automatic retrieval of Cerebral-compatible localizations for human, mouse and bovine genes via a web service and enables the automated parsing of Cytoscape compatible XGMML network files. CerebralWeb currently supports embedded network visualization on the InnateDB (www.innatedb.com) and Allergy and Asthma Portal (allergen.innatedb.com) database and analysis resources. Database tool URL: http://www.innatedb.com/CerebralWeb © The Author(s) 2015. Published by Oxford University Press.
Draht, Fabian; Zhang, Sijie; Rayan, Abdelrahman; Schönfeld, Fabian; Wiskott, Laurenz; Manahan-Vaughan, Denise
2017-01-01
Spatial encoding in the hippocampus is based on a range of different input sources. To generate spatial representations, reliable sensory cues from the external environment are integrated with idiothetic cues, derived from self-movement, that enable path integration and directional perception. In this study, we examined to what extent idiothetic cues significantly contribute to spatial representations and navigation: we recorded place cells while rodents navigated towards two visually identical chambers in 180° orientation via two different paths in darkness and in the absence of reliable auditory or olfactory cues. Our goal was to generate a conflict between local visual and direction-specific information, and then to assess which strategy was prioritized in different learning phases. We observed that, in the absence of distal cues, place fields are initially controlled by local visual cues that override idiothetic cues, but that with multiple exposures to the paradigm, spaced at intervals of days, idiothetic cues become increasingly implemented in generating an accurate spatial representation. Taken together, these data support that, in the absence of distal cues, local visual cues are prioritized in the generation of context-specific spatial representations through place cells, whereby idiothetic cues are deemed unreliable. With cumulative exposures to the environments, the animal learns to attend to subtle idiothetic cues to resolve the conflict between visual and direction-specific information.
Draht, Fabian; Zhang, Sijie; Rayan, Abdelrahman; Schönfeld, Fabian; Wiskott, Laurenz; Manahan-Vaughan, Denise
2017-01-01
Spatial encoding in the hippocampus is based on a range of different input sources. To generate spatial representations, reliable sensory cues from the external environment are integrated with idiothetic cues, derived from self-movement, that enable path integration and directional perception. In this study, we examined to what extent idiothetic cues significantly contribute to spatial representations and navigation: we recorded place cells while rodents navigated towards two visually identical chambers in 180° orientation via two different paths in darkness and in the absence of reliable auditory or olfactory cues. Our goal was to generate a conflict between local visual and direction-specific information, and then to assess which strategy was prioritized in different learning phases. We observed that, in the absence of distal cues, place fields are initially controlled by local visual cues that override idiothetic cues, but that with multiple exposures to the paradigm, spaced at intervals of days, idiothetic cues become increasingly implemented in generating an accurate spatial representation. Taken together, these data support that, in the absence of distal cues, local visual cues are prioritized in the generation of context-specific spatial representations through place cells, whereby idiothetic cues are deemed unreliable. With cumulative exposures to the environments, the animal learns to attend to subtle idiothetic cues to resolve the conflict between visual and direction-specific information. PMID:28634444
Opposing effects of attention and consciousness on afterimages
van Boxtel, Jeroen J. A.; Tsuchiya, Naotsugu; Koch, Christof
2010-01-01
The brain's ability to handle sensory information is influenced by both selective attention and consciousness. There is no consensus on the exact relationship between these two processes and whether they are distinct. So far, no experiment has simultaneously manipulated both. We carried out a full factorial 2 × 2 study of the simultaneous influences of attention and consciousness (as assayed by visibility) on perception, correcting for possible concurrent changes in attention and consciousness. We investigated the duration of afterimages for all four combinations of high versus low attention and visible versus invisible. We show that selective attention and visual consciousness have opposite effects: paying attention to the grating decreases the duration of its afterimage, whereas consciously seeing the grating increases the afterimage duration. These findings provide clear evidence for distinctive influences of selective attention and consciousness on visual perception. PMID:20424112
Hybrid label-free multiphoton and optoacoustic microscopy (MPOM)
NASA Astrophysics Data System (ADS)
Soliman, Dominik; Tserevelakis, George J.; Omar, Murad; Ntziachristos, Vasilis
2015-07-01
Many biological applications require a simultaneous observation of different anatomical features. However, unless potentially harmful staining of the specimens is employed, individual microscopy techniques generally do not provide multi-contrast capabilities. We present a hybrid microscope integrating optoacoustic microscopy and multiphoton microscopy, including second-harmonic generation, into a single device. This combined multiphoton and optoacoustic microscope (MPOM) offers visualization of a broad range of structures by employing different contrast mechanisms and at the same time enables purely label-free imaging of biological systems. We investigate the relative performance of the two microscopy modalities and demonstrate their multi-contrast abilities through the label-free imaging of a zebrafish larva ex vivo, simultaneously visualizing muscles and pigments. This hybrid microscopy application bears great potential for developmental biology studies, enabling more comprehensive information to be obtained from biological specimens without the necessity of staining.
Simultaneous Bilateral Cataract Surgery in Outreach Surgical Camps
Giles, Kagmeni; Robert, Ebana Steve; Come, Ebana Mvogo; Wiedemann, Peter
2017-01-01
OBJECTIVES The aim of this study was to evaluate the safety and visual outcomes of simultaneous bilateral cataract surgery (SBCS) with intraocular lens implantation performed in outreach surgical eye camps. METHODS The medical records of 47 consecutive patients who underwent simultaneous bilateral small-incision cataract surgery between January 2010 and December 2015 in outreach surgical camps in rural Cameroon were reviewed. The measures included postoperative visual outcomes and intraoperative and postoperative complications. RESULTS Data from 94 eyes of 47 participants (30 men, 17 women; mean age: 60.93 ± 13.58 years, range: 45–80 years) were included in this study. The presented best visual acuity (VA) was less than 3/60 in 100% of the eyes. At the 4-week follow-up, 84.04% of the eyes showed increased VA of 1 line or more (P = .001). Of these, 71 (75.53%) achieved good VA (greater than 6/18). Intraoperative or postoperative complications occurred in 19 (20.21%) eyes. The most serious intraoperative complication was a posterior capsule rupture and vitreous loss (2 patients, 2 eyes). The postoperative complications included a transient elevation in the intraocular pressure (6 eyes), chronic corneal oedema (5 eyes), iris capture (3 eyes), lens decentration (2 eyes), and hyphema (1 eye). No cases of postoperative endophthalmitis were recorded. CONCLUSIONS Under the strict observation of endophthalmitis prophylaxis, SBCS is an option to reduce the cataract blindness backlog in rural areas of developing countries. PMID:28469481
Is that disgust I see? Political ideology and biased visual attention.
Oosterhoff, Benjamin; Shook, Natalie J; Ford, Cameron
2018-01-15
Considerable evidence suggests that political liberals and conservatives vary in the way they process and respond to valenced (i.e., negative versus positive) information, with conservatives generally displaying greater negativity biases than liberals. Less is known about whether liberals and conservatives differentially prioritize certain forms of negative information over others. Across two studies using eye-tracking methodology, we examined differences in visual attention to negative scenes and facial expressions based on self-reported political ideology. In Study 1, scenes rated high in fear, disgust, sadness, and neutrality were presented simultaneously. Greater endorsement of socially conservative political attitudes was associated with less attentional engagement (i.e., lower dwell time) of disgust scenes and more attentional engagement toward neutral scenes. Socially conservative political attitudes were not significantly associated with visual attention to fear or sad scenes. In Study 2, images depicting facial expressions of fear, disgust, sadness, and neutrality were presented simultaneously. Greater endorsement of socially conservative political attitudes was associated with greater attentional engagement with facial expressions depicting disgust and less attentional engagement toward neutral faces. Visual attention to fearful or sad faces was not related to social conservatism. Endorsement of economically conservative political attitudes was not consistently associated with biases in visual attention across both studies. These findings support disease-avoidance models and suggest that social conservatism may be rooted within a greater sensitivity to disgust-related information. Copyright © 2017 Elsevier B.V. All rights reserved.
Nebula: reconstruction and visualization of scattering data in reciprocal space.
Reiten, Andreas; Chernyshov, Dmitry; Mathiesen, Ragnvald H
2015-04-01
Two-dimensional solid-state X-ray detectors can now operate at considerable data throughput rates that allow full three-dimensional sampling of scattering data from extended volumes of reciprocal space within second to minute time-scales. For such experiments, simultaneous analysis and visualization allows for remeasurements and a more dynamic measurement strategy. A new software, Nebula, is presented. It efficiently reconstructs X-ray scattering data, generates three-dimensional reciprocal space data sets that can be visualized interactively, and aims to enable real-time processing in high-throughput measurements by employing parallel computing on commodity hardware.
Eiber, Calvin D; Morley, John W; Lovell, Nigel H; Suaning, Gregg J
2014-01-01
We present a computational model of the optic pathway that has been adapted to simulate cortical responses to visual-prosthetic stimulation. This model reproduces the statistically observed distributions of spikes in cortical recordings of sham and maximum-intensity stimuli, while simultaneously generating cellular receptive fields consistent with those observed using traditional visual neuroscience methods. By inverting this model to generate candidate phosphenes that could produce the responses observed under novel stimulation strategies, we hope to aid the development of such strategies in vivo before they are deployed in clinical settings.
Westendorff, Stephanie; Kuang, Shenbing; Taghizadeh, Bahareh; Donchin, Opher; Gail, Alexander
2015-04-01
Different error signals can induce sensorimotor adaptation during visually guided reaching, possibly evoking different neural adaptation mechanisms. Here we investigate reach adaptation induced by visual target errors without perturbing the actual or sensed hand position. We analyzed the spatial generalization of adaptation to target error to compare it with other known generalization patterns and simulated our results with a neural network model trained to minimize target error independent of prediction errors. Subjects reached to different peripheral visual targets and had to adapt to a sudden fixed-amplitude displacement ("jump") consistently occurring for only one of the reach targets. Subjects simultaneously had to perform contralateral unperturbed saccades, which rendered the reach target jump unnoticeable. As a result, subjects adapted by gradually decreasing reach errors and showed negative aftereffects for the perturbed reach target. Reach errors generalized to unperturbed targets according to a translational rather than rotational generalization pattern, but locally, not globally. More importantly, reach errors generalized asymmetrically, with a generalization function skewed in the direction of the target jump. Our neural network model reproduced the skewed generalization after adaptation to the target jump without having been explicitly trained to produce a specific generalization pattern. Our combined psychophysical and simulation results suggest that target jump adaptation in reaching can be explained by gradual updating of spatial motor goal representations in sensorimotor association networks, independent of learning induced by a prediction error about hand position. The simulations make testable predictions about the underlying changes in the tuning of sensorimotor neurons during target jump adaptation. Copyright © 2015 the American Physiological Society. PMID:25609106
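The modelling idea described here, a network trained only to reduce target (endpoint) error and then probed at untrained target directions, can be illustrated with Gaussian-tuned target units feeding a linear readout. The sketch below is a toy version in Python (tuning width, learning rate, and target spacing are illustrative assumptions, not the paper's values), intended only to show how error-driven updates at one target generalize locally to neighbouring targets:

import numpy as np

targets = np.linspace(-60.0, 60.0, 7)      # reach target directions (deg), illustrative
centers = np.linspace(-90.0, 90.0, 37)     # preferred directions of tuned units
sigma = 15.0                               # tuning width (illustrative)

def encode(t):
    """Population activity of Gaussian-tuned target units for direction t."""
    return np.exp(-0.5 * ((t - centers) / sigma) ** 2)

# Baseline readout weights that reproduce each target direction.
A = np.stack([encode(t) for t in targets])
W = np.linalg.lstsq(A, targets, rcond=None)[0]

# Adapt to a +10 deg target jump that occurs only at the 0 deg target.
trained, jump, lr = 0.0, 10.0, 1e-3
for _ in range(300):
    x = encode(trained)
    err = (trained + jump) - W @ x         # target error only; no hand prediction error
    W += lr * err * x                      # gradient step that reduces target error

aftereffect = np.array([W @ encode(t) - t for t in targets])
print(np.round(aftereffect, 2))            # largest shift at the trained target, decaying with distance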
Electromagnetic Evidence of Altered Visual Processing in Autism
ERIC Educational Resources Information Center
Neumann, Nicola; Dubischar-Krivec, Anna M.; Poustka, Fritz; Birbaumer, Niels; Bolte, Sven; Braun, Christoph
2011-01-01
Individuals with autism spectrum disorder (ASD) demonstrate intact or superior local processing of visual-spatial tasks. We investigated the hypothesis that in a disembedding task, autistic individuals exhibit a more local processing style than controls, which is reflected by altered electromagnetic brain activity in response to embedded stimuli…
47 CFR 74.783 - Station identification.
Code of Federal Regulations, 2011 CFR
2011-10-01
... originating local programming as defined by § 74.701(h) operating over 0.001 kw peak visual power (0.002 kw... visual presentation or a clearly understandable aural presentation of the translator station's call... identification procedures given in § 73.1201 when locally originating programming, as defined by § 74.701(h). The...
Visual acuity in adults with Asperger's syndrome: no evidence for "eagle-eyed" vision.
Falkmer, Marita; Stuart, Geoffrey W; Danielsson, Henrik; Bram, Staffan; Lönebrink, Mikael; Falkmer, Torbjörn
2011-11-01
Autism spectrum conditions (ASC) are defined by criteria comprising impairments in social interaction and communication. Altered visual perception is one possible and often discussed cause of difficulties in social interaction and social communication. Recently, Ashwin et al. suggested that enhanced ability in local visual processing in ASC was due to superior visual acuity, but that study has been the subject of methodological criticism, placing the findings in doubt. The present study investigated visual acuity thresholds in 24 adults with Asperger's syndrome and compared their results with those of 25 control subjects, using the 2 Meter 2000 Series Revised ETDRS Chart. The distribution of visual acuities within the two groups was highly similar, and none of the participants had superior visual acuity. Superior visual acuity in individuals with Asperger's syndrome could not be established, suggesting that differences in visual perception in ASC are not explained by this factor. A continued search for explanations of superior ability in local visual processing in persons with ASC is therefore warranted. Copyright © 2011 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.
Qin, Pengmin; Duncan, Niall W; Wiebking, Christine; Gravel, Paul; Lyttelton, Oliver; Hayes, Dave J; Verhaeghe, Jeroen; Kostikov, Alexey; Schirrmacher, Ralf; Reader, Andrew J; Northoff, Georg
2012-01-01
Recent imaging studies have demonstrated that levels of resting γ-aminobutyric acid (GABA) in the visual cortex predict the degree of stimulus-induced activity in the same region. These studies have used the presentation of discrete visual stimuli; the change from closed to open eyes, however, also represents a simple visual stimulus and has been shown to induce changes in local brain activity and in functional connectivity between regions. We thus aimed to investigate the role of the GABA system, specifically GABA(A) receptors, in the changes in brain activity between the eyes closed (EC) and eyes open (EO) state, in order to provide detail at the receptor level to complement previous studies of GABA concentrations. We conducted an fMRI study involving two different modes of the change from EC to EO: an EO and EC block design, allowing the modeling of the haemodynamic response, followed by longer periods of EC and EO to allow the measurement of functional connectivity. The same subjects also underwent [(18)F]Flumazenil PET to measure GABA(A) receptor binding potentials. It was demonstrated that the local-to-global ratio of GABA(A) receptor binding potential in the visual cortex predicted the degree of change in neural activity from EC to EO. The same relationship was also shown in the auditory cortex. Furthermore, the local-to-global ratio of GABA(A) receptor binding potential in the visual cortex also predicted the change in functional connectivity between the visual and auditory cortex from EC to EO. These findings contribute to our understanding of the role of GABA(A) receptors in stimulus-induced neural activity in local regions and in inter-regional functional connectivity.
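The receptor measure at the centre of this analysis, a local-to-global ratio, is just the regional binding potential divided by a global (e.g. whole-cortex) mean, which is then related across subjects to the EC-to-EO change in activity. A minimal sketch in Python with synthetic numbers (the array contents and the use of a plain Pearson correlation are assumptions for illustration only):

import numpy as np

rng = np.random.default_rng(1)
n_subjects = 20
bp_visual = rng.normal(1.2, 0.1, n_subjects)    # regional GABA-A binding potential (synthetic)
bp_global = rng.normal(1.0, 0.05, n_subjects)   # global mean binding potential (synthetic)
ratio = bp_visual / bp_global                   # local-to-global ratio per subject

# Synthetic EC->EO activity change constructed to covary with the ratio, purely for illustration.
bold_change = 0.8 * ratio + rng.normal(0.0, 0.05, n_subjects)

r = np.corrcoef(ratio, bold_change)[0, 1]       # across-subject association
print(f"Pearson r = {r:.2f}")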
Super-Resolution Imaging of Molecular Emission Spectra and Single Molecule Spectral Fluctuations
Mlodzianoski, Michael J.; Curthoys, Nikki M.; Gunewardene, Mudalige S.; Carter, Sean; Hess, Samuel T.
2016-01-01
Localization microscopy can image nanoscale cellular details. To address biological questions, the ability to distinguish multiple molecular species simultaneously is invaluable. Here, we present a new version of fluorescence photoactivation localization microscopy (FPALM) which detects the emission spectrum of each localized molecule, and can quantify changes in emission spectrum of individual molecules over time. This information can allow for a dramatic increase in the number of different species simultaneously imaged in a sample, and can create super-resolution maps showing how single molecule emission spectra vary with position and time in a sample. PMID:27002724
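Once each localized molecule carries an emission spectrum, one simple way to separate species is to summarize every spectrum by its intensity-weighted mean wavelength (spectral centroid) and split on that value. A minimal sketch in Python (the wavelength grid, the two-species cut-off, and the record layout are hypothetical, not the published FPALM implementation):

import numpy as np

wavelengths = np.linspace(550.0, 750.0, 41)      # nm, hypothetical spectral axis

def spectral_centroid(spectrum):
    """Intensity-weighted mean emission wavelength of one localized molecule."""
    spectrum = np.asarray(spectrum, dtype=float)
    return np.sum(wavelengths * spectrum) / np.sum(spectrum)

def assign_species(spectra, cutoff_nm=650.0):
    """Split localizations into two species by spectral centroid (illustrative cut-off)."""
    centroids = np.array([spectral_centroid(s) for s in spectra])
    labels = np.where(centroids < cutoff_nm, "species_A", "species_B")
    return labels, centroids

# Two toy spectra: one peaked near 600 nm, one near 690 nm.
toy = [np.exp(-0.5 * ((wavelengths - 600.0) / 15.0) ** 2),
       np.exp(-0.5 * ((wavelengths - 690.0) / 15.0) ** 2)]
labels, centroids = assign_species(toy)
print(labels, np.round(centroids, 1))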
Tremblay, Emmanuel; Vannasing, Phetsamone; Roy, Marie-Sylvie; Lefebvre, Francine; Kombate, Damelan; Lassonde, Maryse; Lepore, Franco; McKerral, Michelle; Gallagher, Anne
2014-01-01
In the past decades, multiple studies have examined developmental patterns of the visual system in healthy infants. During the first year of life, differential maturational changes have been observed between the magnocellular (M) and parvocellular (P) visual pathways. However, few studies have investigated P and M system development in infants born prematurely. The aim of the present study was to characterize P and M system maturational differences between healthy preterm and full-term infants across a critical period of visual maturation: the first year of life. Using a cross-sectional design, high-density electroencephalography (EEG) was recorded in 31 healthy preterm and 41 full-term infants at 3, 6, or 12 months (corrected age for premature babies). Three visual stimulations varying in contrast and spatial frequency were presented during EEG recordings to stimulate preferentially the M pathway, the P pathway, or both systems simultaneously. Early visual evoked potentials in response to the stimulation that activates both systems simultaneously revealed longer N1 latencies and smaller P1 amplitudes in preterm infants compared with full-terms. Moreover, preterms showed longer N1 and P1 latencies in response to stimuli assessing the M pathway at 3 months. No differences between preterms and full-terms were found when using the preferential P system stimulation. In order to identify the cerebral generators of each visual response, distributed source analyses were computed in 12-month-old infants using LORETA. Source analysis demonstrated an activation of the parietal dorsal region in full-term infants in response to the preferential M pathway stimulation, which was not seen in the preterms. Overall, these findings suggest that magnocellular pathway development is affected in premature infants. Although our VEP results suggest that premature children overcome, at least partially, the visual developmental delay with time, source analyses reveal abnormal brain activation of the magnocellular pathway at 12 months of age. PMID:25268226
Nishikawa, Mari; Suzuki, Mariko; Sprague, David S
2014-07-01
Understanding cohesion among individuals within a group is necessary to reveal the social system of group-living primates. Japanese macaques (Macaca fuscata) are female-philopatric primates that reside in social groups. We investigated whether individual activity and social factors affect spatio-temporal cohesion in wild female Japanese macaques. We conducted behavioral observation of a group that contained 38 individuals and ranged over ca. 60 ha during the study period. Two observers carried out simultaneous focal-animal sampling of adult female pairs during full-day follows using the Global Positioning System (GPS), which enabled us to quantify interindividual distances (IIDs), the group members within visual range (i.e., the visual unit), and the duration of separation beyond visual range as indicators of cohesion among individuals. We found considerable variation in spatio-temporal group cohesion. The overall mean IID was 99.9 m (range = 0-618.2 m). The percentage of IIDs within visual range was 23.1%, within auditory range 59.8%, and beyond auditory range 17.1%. IIDs varied with activity; they were shorter during grooming and resting, and longer during foraging and traveling. Low-ranking females showed less cohesion than high-ranking ones. Kin females stayed almost always within auditory range. The macaques were weakly cohesive, with a small mean visual unit size (3.15 counting only adults, 5.99 counting all individuals). Both-sex units were the most frequently observed visual unit type when the macaques were grooming or resting, whereas female-only units were the most frequently observed type when they were foraging. The overall mean duration of visual separation was 25.7 min (range = 3-513 min). Separation duration was associated with dominance rank. These results suggest that Japanese macaques regulate cohesion among individuals depending on their activity and on social relationships; they separated to adapt to food distribution and aggregated to maintain social interactions. © 2014 Wiley Periodicals, Inc.
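An interindividual distance from two simultaneous GPS fixes reduces to a great-circle distance between latitude/longitude pairs, which can then be binned into the visual, auditory, and beyond-auditory categories used above. A minimal sketch in Python (the haversine formula is standard; the 20 m and 100 m cut-offs are placeholders, not the thresholds used in the study):

import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS fixes."""
    r = 6371000.0
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def classify_iid(distance_m, visual_m=20.0, auditory_m=100.0):
    """Bin an interindividual distance into three cohesion categories (placeholder cut-offs)."""
    if distance_m <= visual_m:
        return "within visual range"
    if distance_m <= auditory_m:
        return "within auditory range"
    return "beyond auditory range"

d = haversine_m(36.0000, 138.0000, 36.0005, 138.0004)  # two simultaneous focal-animal fixes
print(round(d, 1), classify_iid(d))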
Simultaneous Bilateral Anterior and Posterior Lenticonus in Alport Syndrome.
Bamotra, Ravi Kant; Meenakshi; Kesarwani, Prem Chandra; Qayum, Shazia
2017-08-01
Alport syndrome is an inherited disease characterized by progressive renal failure, hearing loss, and ocular abnormalities such as anterior lenticonus, corneal opacities, cataract, central perimacular and peripheral coalescing fleck retinopathies, and temporal retinal thinning. Although anterior lenticonus is common in Alport syndrome, simultaneous anterior and posterior lenticonus is a rare presentation. We report the case of a 22-year-old female with simultaneous anterior and posterior lenticonus in whom ocular examination led to the detection of Alport syndrome. The patient had sensorineural deafness as well as microscopic haematuria. Clear lens extraction was performed in both eyes to eliminate lenticular irregular astigmatism and achieve visual rehabilitation.
Multi-modal information processing for visual workload relief
NASA Technical Reports Server (NTRS)
Burke, M. W.; Gilson, R. D.; Jagacinski, R. J.
1980-01-01
The simultaneous performance of two single-dimensional compensatory tracking tasks, one with the left hand and one with the right hand, is discussed. The tracking performed with the left hand was considered the primary task and was performed with a visual display or a quickened kinesthetic-tactual (KT) display. The right-handed tracking was considered the secondary task and was carried out only with a visual display. Although the two primary task displays had afforded equivalent performance in a critical tracking task performed alone, in the dual-task situation the quickened KT primary display resulted in superior secondary visual task performance. Comparisons of various combinations of primary and secondary visual displays in integrated or separated formats indicate that the superiority of the quickened KT display is not simply due to the elimination of visual scanning. Additional testing indicated that quickening per se also is not the immediate cause of the observed KT superiority.
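"Quickening" in this tracking literature means the operator is shown (or, for a kinesthetic-tactual display, made to feel) not the raw tracking error but a weighted combination of the error and its derivative(s), so the signal anticipates where the error is heading. A minimal sketch of a first-order quickened signal in Python (the gain and sample rate are illustrative; the study's quickening parameters are not given in the abstract):

def quickened_signal(errors, dt, gain=0.5):
    """Return e(t) + gain * de/dt for a sampled tracking-error trace (first-order quickening)."""
    quickened = []
    prev = errors[0]
    for e in errors:
        de_dt = (e - prev) / dt
        quickened.append(e + gain * de_dt)
        prev = e
    return quickened

# A short error trace sampled at 10 Hz: the quickened values lead the raw error.
trace = [0.00, 0.05, 0.15, 0.30, 0.40, 0.42, 0.40]
print([round(q, 2) for q in quickened_signal(trace, dt=0.1)])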